InfoSec Pros On the Road: Brenda Bernal, VP, Product Security and Compliance at DigiCert

In this episode of InfoSec Pros On the Road at RSA 2024, I had the pleasure of interviewing Brenda Bernal, VP of Product Security and Compliance at DigiCert. It was a great opportunity to finally meet Brenda in person after numerous Zoom calls. We discussed various topics, starting with advancements in AI governance and the key risks organizations should focus on, including data privacy, security, and third-party risk management.

Brenda shared her insights on integrating AI into existing control frameworks and the importance of sustainability and adaptability in AI governance. She emphasized the need for transparency in AI implementations and how it parallels the evolution of ESG reporting.

We also explored the benefits of automation in GRC processes, drawing from Brenda’s experience as an external auditor and her current work with platforms like Hyperproof. The discussion highlighted the significant time savings and improved risk management that automation brings to compliance efforts.

An Analysis of Section 1C Disclosures in Q1 of 2024

In 2023, the U.S. Securities and Exchange Commission (SEC) adopted Regulation S-K Item 106, which requires public companies to describe their processes for assessing, identifying, and managing material risks from cybersecurity threats. Historically, companies were not required to disclose these processes to investors or market regulators, and there were no established guidelines for what a “good” disclosure would look like. Hyperproof reviewed disclosures from nearly 3,000 companies across more than three hundred industries and has identified trends for what goes into a robust, meaningful disclosure.

SEC Cyber Risk Disclosures: What Companies Need to Know

In this video interview with Information Security Media Group at the Cybersecurity Implications of AI Summit, McGladrey also discussed:

  • Why companies should use tools and software to automatically collect evidence of compliance;

  • The consequences of false cyber risk disclosures;

  • The impact that SEC requirements have on private companies and supply chains.

Twelve Essential Soft Skills for Early-Career Cybersecurity Professionals

In the realm of cybersecurity, early-career professionals often prioritize the development and demonstration of technical prowess. However, as someone with nearly three decades of experience in cybersecurity leadership roles, I firmly assert that interpersonal skills wield a profound influence over one’s career trajectory. Unlike certifications and degrees, which may lose relevance over time, interpersonal skills persist and can be cultivated through deliberate practice. This article sheds light on these often-overlooked attributes, providing a holistic perspective on what it takes to excel in cybersecurity beyond technical acumen.

AI system poisoning is a growing threat — is your security regime ready?

Although motivations like that mean any organization using AI could be a victim, Kayne McGladrey, a senior member of the Institute of Electrical and Electronics Engineers (IEEE), a nonprofit professional association, and field CISO at Hyperproof, says he expects hackers will be more likely to target the tech companies making and training AI systems.

But CISOs shouldn’t breathe a sigh of relief, McGladrey says, as their organizations could still be affected by those attacks if they are using corrupted, vendor-supplied AI systems.

How to Operationalize Your Risk Assessments at Data Connectors Dallas

Thursday, May 16, 2024

Risk assessments have moved beyond a check-the-box approach, especially with the SEC’s new disclosure requirements. Join us for our session, How to Operationalize Your Risk Assessment Process, to get practical guidance on navigating the complexities of risk assessments to drive tangible business outcomes. Kayne McGladrey, Field CISO at Hyperproof, will walk through the essential steps required to operationalize risk assessments effectively within diverse organizational structures. From conceptualization to execution, participants will gain actionable insights into crafting and implementing risk assessment strategies tailored to their unique organizational contexts.

AI models inch closer to hacking on their own

The big picture: AI model operators don’t have a good way of reining in these malicious use cases, Kayne McGladrey, a senior member of the Institute of Electrical and Electronics Engineers (IEEE), told Axios. Allowing LLMs to digest and train on CVE data can help defenders synthesize the wave of threat alerts coming their way each day. Operators have only two real choices in this type of situation: allow the models to train on security vulnerability data or completely block them from accessing vulnerability lists, he added. “It’s going to be a feature of the landscape because it is a dual-use technology at the end of the day,” McGladrey said.

The Jobs of Tomorrow: Insights on AI and the Future of Work

Kayne McGladrey, IEEE Senior Member, noted that the use of generative AI models in business hinges on their ability to provide accurate information. He cited studies of AI models’ ability to extract information from financial regulatory documents that are frequently relied on to make investment decisions. “Right now, the best AI models get 80 percent of the questions right,” McGladrey said. “They hallucinate the other 20 percent of the time. That’s not a good sign if you think you are making investment decisions based on artificial intelligence telling you this is a great strategy four out of five times.”

What are the biggest ethical considerations of security technology?

Algorithmic bias is one of the primary risks associated with emerging physical surveillance technologies. While the risks of facial recognition software are well known and documented, efforts are underway to adapt computer vision to new and novel use cases. For example, one of the more deeply flawed failures was an attempt to detect aggressive behaviour or body language, which proved unfeasible because there was not enough training data available. Other physical security systems will face a similar challenge of not discriminating against individuals based on protected factors, whether due to a lack of training data or, more likely, a lack of training data that is unbiased with respect to gender or race. Companies considering purchasing advanced or emerging physical security systems should enquire about the training data used in the development of those systems so that they are not exposed to civil penalties resulting from discrimination caused by using those systems.

Boards need to brush up on cybersecurity governance, survey finds

CISOs now face substantial personal risks, as seen in cases like Uber and SolarWinds, where regulators and prosecutors have taken legal action against security chiefs. The primary risk is both personal and professional liability for the CISO, according to Kayne McGladrey, field CISO at Hyperproof. The problem, however, is that boards unaware of the business risks from poor cybersecurity are unlikely to include the CISO in the Directors & Officers insurance policy. “This exposes CISOs to substantial risk,” McGladrey told Cybersecurity Dive.

Podcast: Art of Cyber Defense: Insights from a Theatrical Minded CISO with Kayne McGladrey

Prepare to laugh until your stomach hurts with our most hilarious episode yet, featuring the one and only theater kid turned cybersecurity guru, Kayne McGladrey, Field CISO at Hyperproof. Join us for a rollercoaster of emotions as we dive into the absurdity of security info in 10-K filings, engage in heated debates over the polarizing cinnamon sticky bun ale, and champion the cause for more singing and dancing in cybersecurity. Think of it as the “Cybersecurity’s Got Talent” episode you never knew you needed! Kayne’s journey is packed with invaluable insights and captivating stories that are as unique as they are engaging.