How Safe and Secure Is GenAI Really?

“After all, AI serves as both a force accelerator, as it will allow those threat actors to operate at large scale without having to increase the size of their workforce. At the same time, the ability of AI to generate convincing-enough speech in another language will serve to open new markets to threat actors who might have previously employed linguists,” says Kayne McGladrey, IEEE Senior Member.

The GRC Maturity Model

Companies with mature GRC programs have an advantage over their competitors. However, something has been missing in the GRC world: the ability to truly understand an organization’s GRC maturity and the steps it would take to build the business case for change. That’s where the GRC Maturity Model comes in.

Hyperproof’s GRC Maturity Model is a practical roadmap for organizations to improve the maturity of their GRC business processes so they can enter new markets and successfully navigate a rapidly changing regulatory and legal landscape. By providing a vendor-agnostic roadmap for how companies can improve key business operations, we can help level the playing field for everyone in GRC.

This extensive, peer-reviewed model, written by Kayne McGladrey, includes:

  • An overview and definition of Governance, Risk, and Compliance (GRC)

  • A summary of the four maturity levels defined in the model: Traditional, Initial, Advanced, and Optimal

  • An overview of the most common business practices associated with governance, risk, and compliance

  • A simplified maturity chart listing the attributes associated with each maturity level

  • A list of observable behaviors or characteristics associated with each maturity level to help you assess where your organization falls

  • A set of high-level recommendations for how to move from a lower level to a higher level

Compliance as a Critical Business Enabler (podcast)

Kayne McGladrey, the Field CISO at Hyperproof, is a renowned cybersecurity expert with an extensive background in enhancing security landscapes across various industries. His career is marked by significant contributions to developing robust security frameworks, managing complex risk scenarios, and driving comprehensive compliance initiatives. Deeply committed to transforming the cybersecurity field, Kayne offers insights and strategies that continue to influence how organizations approach security and regulatory compliance, making him a sought-after voice in the industry.

InfoSec Pros: Carmen Marsh and Confidence Staveley

During this Hyperproof live stream series, leaders in information security shed light on crucial topics that shape the modern cybersecurity landscape. This month’s episode features Carmen Marsh, President and CEO at United Cybersecurity Alliance, Confidence Staveley, Founder & Executive Director at CyberSafe Foundation, and our host, Kayne McGladrey, Field CISO at Hyperproof. Guided by Kayne and audience questions, Carmen and Confidence will share insights into their current work and past experiences in the field.

Twelve Essential Soft Skills for Early-Career Cybersecurity Professionals

In the realm of cybersecurity, early-career professionals often prioritize the development and demonstration of technical prowess. However, as someone with nearly three decades of experience in cybersecurity leadership roles, I firmly assert that interpersonal skills wield a profound influence over one’s career trajectory. Unlike certifications and degrees, which may lose relevance over time, interpersonal skills persist and can be cultivated through deliberate practice. This article sheds light on these often-overlooked attributes, providing a holistic perspective on what it takes to excel in cybersecurity beyond technical acumen.

AI system poisoning is a growing threat — is your security regime ready?

Although motivations like that mean any organization using AI could be a victim, Kayne McGladrey, a senior member of the Institute of Electrical and Electronics Engineers (IEEE), a nonprofit professional association, and field CISO at Hyperproof, says he expects hackers will be more likely to target the tech companies making and training AI systems.

But CISOs shouldn’t breathe a sigh of relief, McGladrey says, as their organizations could still be affected by those attacks if they are using corrupted, vendor-supplied AI systems.

How to Operationalize Your Risk Assessments at Data Connectors Dallas

Thursday, May 16, 2024

Risk assessments have moved beyond a check-the-box approach, especially with the SEC’s new disclosure requirements. Join us for our session, How to Operationalize Your Risk Assessment Process, to get practical guidance on navigating the complexities of risk assessments to drive tangible business outcomes. Kayne McGladrey, Field CISO at Hyperproof, will walk through the essential steps required to operationalize risk assessments effectively within diverse organizational structures. From conceptualization to execution, participants will gain actionable insights into crafting and implementing risk assessment strategies tailored to their unique organizational contexts.

AI models inch closer to hacking on their own

The big picture: AI model operators don’t have a good way of reining in these malicious use cases, Kayne McGladrey, a senior member of the Institute of Electrical and Electronics Engineers (IEEE), told Axios. Allowing LLMs to digest and train on CVE data can help defenders synthesize the wave of threat alerts coming their way each day. Operators have only two real choices in this situation: allow the models to train on security vulnerability data or completely block them from accessing vulnerability lists, he added. “It’s going to be a feature of the landscape because it is a dual-use technology at the end of the day,” McGladrey said.

The Jobs of Tomorrow: Insights on AI and the Future of Work

Kayne McGladrey, IEEE Senior Member, noted that the use of generative AI models in business hinges on their ability to provide accurate information. He cited studies of AI models’ ability to extract information from documents used in financial sector regulation, documents that are frequently relied on to make investment decisions. “Right now, the best AI models get 80 percent of the questions right,” McGladrey said. “They hallucinate the other 20 percent of the time. That’s not a good sign if you think you are making investment decisions based on artificial intelligence telling you this is a great strategy four out of five times.”

What are the biggest ethical considerations of security technology?

Algorithmic bias is one of the primary risks associated with emerging physical surveillance technologies. While the risks of facial recognition software are well known and documented, efforts are underway to adapt computer vision to new and novel use cases. For example, one of the more deeply flawed failures was an attempt to detect aggressive behaviour or body language, which was unfeasible because there was not enough training data available. Other physical security systems will face a similar challenge of not discriminating against individuals based on protected factors, whether due to a lack of training data or, more likely, a lack of training data that is unbiased with respect to gender or race. Companies considering purchasing advanced or emerging physical security systems should enquire about the training data used to develop those systems so they are not subject to civil penalties resulting from discrimination caused by using those systems.