Data loss prevention vendors tackle gen AI data risks
“Employees across industries are finding new and innovative ways to perform their tasks at work faster,” says Kayne McGladrey, IEEE senior member and field CISO at Hyperproof. “However, this can lead to the sharing of confidential or regulated information unintentionally. For instance, if a physician sends personal health information to an AI tool to assist in drafting an insurance letter, they may be in violation of HIPAA regulations.”

The problem is that many public AI platforms are continually trained on their interactions with users. This means that if a user uploads company secrets to the AI, the AI will then know those secrets — and may spill them to the next person who asks about them.

It’s not just public AIs that have this problem. An internal large language model that ingested sensitive company data might then provide that data to employees who shouldn’t be allowed to see it.
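To make the DLP approach concrete, here is a minimal, hypothetical sketch of an outbound prompt filter that redacts common sensitive patterns before text reaches an external AI tool. The pattern names, placeholder format, and `redact` function are illustrative assumptions, not any vendor's actual product; real DLP tools use far richer detection (trained classifiers, exact-data fingerprints, and contextual rules) than simple regexes.

```python
import re

# Hypothetical DLP-style patterns; real products detect far more than this.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders before the prompt is sent.

    Returns the redacted text plus labels of the pattern types that fired,
    which a DLP tool could log or alert on.
    """
    hits = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(label)
            prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt, hits

# Example: scrub a prompt containing fake personal data before submission.
clean, findings = redact("Patient SSN 123-45-6789, contact jo@example.com")
```

A pre-submission filter like this is only one layer; blocking on detection or routing flagged prompts for human review are common alternatives to silent redaction.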