Meaningful Momentum or Running in Place: Kayne McGladrey
AI integration is one of the most significant breakthroughs changing cybersecurity in 2025. What are some of the advantages and challenges?
“Agentic AI requires comprehensive data integration that’s fundamentally different from today’s siloed approach, meaning the risk multiplies instead of simply adding up,” IEEE Senior Member Kayne McGladrey said.
The current crop of consumer algorithms processes data for specific purposes and usually asks for permission.
“Agentic AI proactively gathers information across multiple domains and makes autonomous decisions about how to use it,” McGladrey said. “Today’s systems typically require user approval for actions, but agentic AI is designed to operate independently with minimal human oversight, creating new categories of liability exposure.”
“Because the attack blends in with just normal, legitimate activity, it’s quite hard to detect what’s unusual and what’s atypical,” Kayne McGladrey, a senior member of the Institute of Electrical and Electronics Engineers, told Axios.
This week’s special guest, Kayne McGladrey (blog: kaynemcgladrey.com), CISO-in-Residence at Hyperproof, outlines the business challenges that CISOs face as we discuss new types of risk in daily threat management.
“If a manufacturing strategy can be exfiltrated from even one part of the supply chain, it gives enemies an inside look at how equipment works. If they leverage that knowledge, warfighter lives are at risk.”
Kayne McGladrey
Named one of the Top 50+ Cybersecurity Influencers to Follow in 2025
Since the 1990s, security convergence has evolved from merging physical and network security into integrating physical, digital, and operational security. Initially, organizations combined controls to address the risks created by siloed measures. In the 2000s, connections between physical systems and IT security led to unified governance frameworks. By the 2010s, convergence became holistic, driven by cloud computing and mobile devices. Today, a unified framework aligns all security domains, integrating controls for cloud services, IoT, and industrial systems. Looking ahead, convergence will leverage AI, machine learning, and predictive analytics to enhance threat detection and response, while privacy regulations like GDPR and CCPA shape measures to protect user privacy.
Integrating AI and cloud technology is reshaping auditing processes, requiring GRC and cybersecurity professionals to adapt to new tools that centralize risk and compliance activities. This shift improves efficiency and accuracy in audits, allowing for real-time monitoring and streamlined workflows. Companies increasingly use AI-driven solutions to automate routine tasks, such as data analysis and cybersecurity anomaly detection, freeing up professionals to focus on more complex issues. Globally, auditors are expected to implement AI tools for tasks like sampling, risk identification, and data analysis. While this may increase audit efficiencies, audit clients are likely to ask for cost concessions.
Kayne McGladrey, Field CISO at Hyperproof and Senior IEEE Member, says cybersecurity is also fertile ground for AI. “CISOs are looking at AI and automation solutions that handle common cybersecurity tasks. These include collecting evidence of control operations for the internal audit team, testing that evidence automatically, and producing regular reports on such things as false-positive cybersecurity events. These tasks help overworked cybersecurity analysts and engineers to focus on the parts of the job that they love without burdening them with excessive paperwork.”
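The automated evidence testing described above can be illustrated with a short sketch. This is not Hyperproof's actual API or any specific vendor's tooling; the record format, control ID, and the `max_gap_days` cadence are illustrative assumptions. The idea is simply that once evidence of a control's operation is collected centrally, a machine can flag lapses in the expected cadence instead of an analyst eyeballing spreadsheets:

```python
from datetime import datetime, timedelta

# Hypothetical evidence records: (control_id, collected_at) pairs.
# In practice these would come from integrations with ticketing,
# logging, or identity systems; the values here are made up.
evidence = [
    ("AC-2", datetime(2025, 1, 6)),
    ("AC-2", datetime(2025, 1, 13)),
    ("AC-2", datetime(2025, 1, 27)),  # two weeks after the prior record
]

def find_evidence_gaps(records, max_gap_days=7):
    """Return pairs of timestamps where collection lapsed past the cadence."""
    dates = sorted(ts for _, ts in records)
    gaps = []
    for earlier, later in zip(dates, dates[1:]):
        if later - earlier > timedelta(days=max_gap_days):
            gaps.append((earlier, later))
    return gaps

gaps = find_evidence_gaps(evidence)
for earlier, later in gaps:
    print(f"Gap: no evidence between {earlier.date()} and {later.date()}")
```

A report like this, generated on a schedule, is one concrete form the "testing that evidence automatically" task can take, leaving analysts to investigate the exceptions rather than produce the paperwork.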
It’s time we heard from people who live and breathe cybersecurity. Join me as we discuss the highs and lows of working in this industry, the topics that need clarifying, and those that need the B.S. removed. Kayne is active in the community and has shared GRC maturity models that anyone can use.
“Overwhelmed employees may become discouraged, leading to security nihilism, where they feel that breaches are inevitable and give up on maintaining security measures,” McGladrey said. “This can result in a lack of communication about potential threats, making it harder for security teams to respond effectively.”
“Companies should conduct thorough risk assessments to identify and mitigate potential harms associated with AI products, understanding their limitations and potential misuse,” McGladrey said. “Maintaining clear documentation of AI system metrics and methodologies, along with disclosing any known risks or limitations to customers, is essential for transparency.”