How AI could change threat detection

Early threat detection practices mostly involved identifying “something bad on a device by detecting that it matched a known signature,” explained Kayne McGladrey, a senior member of IEEE, a nonprofit professional association, and field CISO at Hyperproof. Signature-based detection was, and still is, a key part of threat detection, but rules-based practices, in which computer activity is analyzed to determine whether it follows set rules, have become foundational over the years, too.
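The distinction McGladrey draws can be illustrated with a minimal sketch. Everything below is hypothetical and for illustration only: the hash value is a placeholder, and the process names in the rule are an invented example of a policy, not a real detection product’s logic.

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known-malicious files.
# The entry below is a placeholder; real threat feeds supply actual sample hashes.
KNOWN_BAD_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def matches_signature(file_bytes: bytes) -> bool:
    """Signature-based detection: flag a file whose hash matches a known sample."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

def violates_rule(event: dict) -> bool:
    """Rules-based detection: flag activity that breaks a set rule.

    Example rule (hypothetical): an office application should never
    spawn a command shell or PowerShell process.
    """
    return (
        event.get("parent_process") in {"winword.exe", "excel.exe"}
        and event.get("process") in {"cmd.exe", "powershell.exe"}
    )
```

Signature matching catches only files already seen and cataloged, while the rule fires on a suspicious *behavior* regardless of whether the file involved has ever been seen before, which is why the two approaches complement each other.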

AI in Cybersecurity: The Good and the Bad

“[AI] allows a threat actor to scale a lot faster and across multiple channels,” Kayne McGladrey, chief information security officer at compliance management company Hyperproof, told Built In. “And the defensive tools haven’t quite caught up. Unfortunately, none of this stuff is going away. This has now become a fixture of the landscape. It’s part of our new, modern cybersecurity hellscape that we inhabit continuously.”

How Safe and Secure Is GenAI Really?

“After all, AI serves as both a force accelerator, as it will allow those threat actors to operate at large scale without having to increase the size of their workforce. At the same time, the ability of AI to generate convincing-enough speech in another language will serve to open new markets to threat actors who might have previously employed linguists,” says Kayne McGladrey, IEEE Senior Member.

The GRC Maturity Model

Companies with mature GRC programs have an advantage over their competitors. However, something has been missing in the GRC world: a way to truly understand an organization’s GRC maturity and the steps needed to build the business case for change. That’s where the GRC Maturity Model comes in.

Hyperproof’s GRC Maturity Model is a practical roadmap organizations can use to improve the maturity of their GRC business processes, enter new markets, and successfully navigate a rapidly changing regulatory and legal landscape. By providing a vendor-agnostic roadmap for how companies can improve key business operations, we can help level the playing field for everyone in GRC.

This extensive, peer-reviewed model written by Kayne McGladrey includes:

  • An overview and definition of Governance, Risk, and Compliance (GRC)

  • A summary of the four maturity levels defined in the model: Traditional, Initial, Advanced, and Optimal

  • An overview of the most common business practices associated with governance, risk, and compliance

  • A simplified maturity chart listing the attributes associated with each maturity level

  • A list of observable behaviors or characteristics associated with the maturity level to help you assess where your organization falls

  • A set of high-level recommendations for how to move from a lower level to a higher level

Compliance as a Critical Business Enabler (podcast)

Kayne McGladrey, the Field CISO at Hyperproof, is a renowned cybersecurity expert with an extensive background in enhancing security landscapes across various industries. His career is marked by significant contributions to developing robust security frameworks, managing complex risk scenarios, and driving comprehensive compliance initiatives. With a deep commitment to transforming the cybersecurity field, Kayne’s insights and strategies continue to influence how organizations approach security and regulatory compliance, making him a sought-after voice in the industry.

InfoSec Pros: Carmen Marsh and Confidence Staveley

During this Hyperproof live stream series, leaders in information security shed light on crucial topics that shape the modern cybersecurity landscape. This month’s episode features Carmen Marsh, President and CEO at United Cybersecurity Alliance, Confidence Staveley, Founder & Executive Director at CyberSafe Foundation, and our host, Kayne McGladrey, Field CISO at Hyperproof. Guided by Kayne and audience questions, Carmen and Confidence will share insights into their current work and past experiences in the field.

Twelve Essential Soft Skills for Early-Career Cybersecurity Professionals

In the realm of cybersecurity, early-career professionals often prioritize the development and demonstration of technical prowess. However, as someone with nearly three decades of experience in cybersecurity leadership roles, I firmly assert that interpersonal skills wield a profound influence over one’s career trajectory. Unlike certifications and degrees, which may lose relevance over time, interpersonal skills persist and can be cultivated through deliberate practice. This article sheds light on these often-overlooked attributes, providing a holistic perspective on what it takes to excel in cybersecurity beyond technical acumen.

AI system poisoning is a growing threat — is your security regime ready?

Although motivations like that mean any organization using AI could be a victim, Kayne McGladrey, a senior member of the Institute of Electrical and Electronics Engineers (IEEE), a nonprofit professional association, and field CISO at Hyperproof, says he expects hackers will be more likely to target the tech companies making and training AI systems.

But CISOs shouldn’t breathe a sigh of relief, McGladrey says, as their organizations could still be affected by those attacks if they rely on a vendor’s corrupted AI systems.

How to Operationalize Your Risk Assessments at Data Connectors Dallas

Thursday, May 16, 2024

Risk assessments have moved beyond a check-the-box approach, especially with the SEC’s new disclosure requirements. Join us for our session, How to Operationalize Your Risk Assessment Process, to get practical guidance on navigating the complexities of risk assessments to drive tangible business outcomes. Kayne McGladrey, Field CISO at Hyperproof, will walk through the essential steps required to operationalize risk assessments effectively across diverse organizational structures. From conceptualization to execution, participants will gain actionable insights into crafting and implementing risk assessment strategies tailored to their unique organizational contexts.

AI models inch closer to hacking on their own

The big picture: AI model operators don’t have a good way of reining in these malicious use cases, Kayne McGladrey, a senior member of the Institute of Electrical and Electronics Engineers (IEEE), told Axios. Allowing LLMs to digest and train on CVE data can help defenders synthesize the wave of threat alerts coming their way each day. Operators have only two real choices in this situation: allow the models to train on security vulnerability data or completely block them from accessing vulnerability lists, he added. “It’s going to be a feature of the landscape because it is a dual-use technology at the end of the day,” McGladrey said.