AI models inch closer to hacking on their own
The big picture: AI model operators don’t have a good way of reining in these malicious use cases, Kayne McGladrey, a senior member of the Institute of Electrical and Electronics Engineers (IEEE), told Axios. Operators face only two real choices: allow the models to train on security vulnerability data, or block them from accessing vulnerability lists entirely, he added. Allowing LLMs to digest and train on CVE data can help defenders synthesize the wave of threat alerts coming their way each day. “It’s going to be a feature of the landscape because it is a dual-use technology at the end of the day,” McGladrey said.