Radio interview on KXL-FM (Portland)
Live radio interview today at 1 PM Pacific on KXL-FM (Portland) discussing robotics, AI, and why cybersecurity matters in the classroom.
Kayne McGladrey, Field CISO at Hyperproof and IEEE Senior Member, noted that the use of generative AI models in business hinges on their ability to provide accurate information. As an example, he cited studies of AI models’ ability to extract information from financial regulatory documents that are frequently relied on to make investment decisions.
“Right now, the best AI models get 80 percent of the questions right,” McGladrey said. “They hallucinate the other 20 percent of the time. That’s not a good sign if you think you are making investment decisions based on artificial intelligence telling you this is a great strategy four out of five times.”
Algorithmic bias is one of the primary risks associated with emerging physical surveillance technologies. While the risks of facial recognition software are well known and documented, efforts are under way to adapt computer vision to new and novel use cases. For example, one of the more deeply flawed attempts sought to detect aggressive behaviour or body language, which proved infeasible because not enough training data was available. Other physical security systems will face a similar challenge: avoiding discrimination against individuals based on protected characteristics despite a lack of training data, or, more likely, a lack of training data that is unbiased with respect to gender or race. Companies considering purchasing advanced or emerging physical security systems should enquire about the training data used in developing those systems so that they are not exposed to civil penalties resulting from discrimination caused by using them.
IEEE Senior Member Kayne McGladrey said, “These threats are not merely theoretical, although at the moment, they are still relatively limited in their application. It is reasonable to expect that threat actors will continue to find innovative new uses of generative AI, extending beyond business email compromise, deepfakes and the generation of attack code.”
Despite this guidance mandating only four disclosures (identifying and managing risks, disclosing material breaches, board oversight, and management’s role), over 40% of the 2,100+ 10-K filings I reviewed between January 1 and March 11, 2024, disclosed eleven distinct topics.
Companies are disclosing more information than required in their 10-K filings for various reasons. One is that there is no broad consensus on how much detail to disclose in Section 1C. The recent civil litigation of SEC v. Tim Brown and SolarWinds (case 1:23-cv-09518 in the Southern District of New York) is also significantly influencing disclosure practices.
In this video interview with Information Security Media Group at the Cybersecurity Implications of AI Summit, McGladrey also discussed:
Why companies should use tools and software to collect and automatically gather evidence of compliance;
The consequences of false cyber risk disclosures;
The impact that SEC requirements have on private companies and supply chains.
When CISOs work with go-to-market teams, cybersecurity transforms from a mere cost center into a valuable business function. This change is crucial in B2B interactions where robust cybersecurity controls offer a competitive advantage. A centralized inventory of cybersecurity controls, grounded in current and past contracts, helps businesses gauge the financial impact of these partnerships. This inventory also identifies unnecessary or redundant controls, offering an opportunity for cost reduction and operational streamlining. By updating this centralized list after the termination of contracts, the business can further optimize both its security posture and operational costs. This integrated strategy empowers the business to make well-informed, data-driven decisions that enhance profitability while maintaining robust security controls.
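The centralized inventory described above can be thought of as a mapping from each control to the set of contracts that require it. As a minimal sketch of that idea, the following Python example (all control names and contract IDs are hypothetical, introduced here purely for illustration) shows how terminating a contract surfaces controls that no remaining contract requires, i.e. candidates for cost reduction:

```python
# Hypothetical sketch of a centralized cybersecurity-control inventory.
# Control names and contract IDs below are illustrative only.
from collections import defaultdict


class ControlInventory:
    """Maps each cybersecurity control to the contracts that require it."""

    def __init__(self):
        self._contracts_by_control = defaultdict(set)

    def add_requirement(self, control: str, contract_id: str) -> None:
        """Record that a contract requires a given control."""
        self._contracts_by_control[control].add(contract_id)

    def terminate_contract(self, contract_id: str) -> None:
        """Remove a terminated contract from every control's requirement set."""
        for contracts in self._contracts_by_control.values():
            contracts.discard(contract_id)

    def orphaned_controls(self) -> list:
        """Controls no active contract requires: candidates for streamlining."""
        return sorted(
            control
            for control, contracts in self._contracts_by_control.items()
            if not contracts
        )


inventory = ControlInventory()
inventory.add_requirement("SOC 2 annual audit", "contract-A")
inventory.add_requirement("quarterly penetration test", "contract-A")
inventory.add_requirement("quarterly penetration test", "contract-B")

inventory.terminate_contract("contract-A")
print(inventory.orphaned_controls())  # ['SOC 2 annual audit']
```

Keeping the inventory keyed by control (rather than by contract) makes the "which controls can we retire?" question a simple lookup after each contract termination, which is the data-driven decision the passage describes.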