If you’re a Verizon customer, you expect the best considering those monthly bills. We live on our phones, keeping important contacts, financial info, and maybe even some photos we don’t want the world to see on our trusty devices.
So, when it was revealed last week that Verizon left PINs, names, addresses, and the account info from millions of customers out in the open for anyone to download, you can be forgiven if you thought, “WTF?”
What might make you even angrier: It wasn’t hackers behind the breach. It was good old human error, compounded by a crisis many companies are facing: a dearth of tech workers with cybersecurity chops and the dizzying use of third-party contractors.
Verizon put the number of customers exposed at 6 million, but the Mountain View security firm UpGuard, which discovered the exposed data on a publicly accessible Amazon cloud storage account, said it was as many as 14 million, and that it took the telecommunications giant more than a week to fix the problem.
“The long duration of time between the initial June 13th notification to Verizon by UpGuard of this data exposure, and the ultimate closure of the breach on June 22nd, is troubling,” UpGuard officials posted on their blog.
Verizon said no one accessed the exposed data in that time. “There has been no loss or theft of Verizon or Verizon customer information,” Verizon said in a statement.
The Verizon debacle joins a lengthy list of incidents where companies and government agencies have accidentally published people’s confidential information, a problem that experts say may be getting harder to fix as more companies move their storage to the cloud.
Chris Vickery, director of cyber-risk research at UpGuard, found the Verizon data trove sitting in a critical data repository managed by a third-party vendor based in Israel. The repository had been misconfigured, a human error that left it unprotected.
A chronic shortage of skilled tech workers makes it hard to find employees with the skills and training to consistently avoid such mistakes, Vickery says. Tech workers setting up cloud systems or in-house servers can misunderstand the settings on the software they’re configuring, or cut corners to make data more easily accessible within the organization.
“If you have a large amount of people using any product to store data, and that product allows for public access, then a certain percentage of people for whatever reason are going to turn on those public access settings,” he says. “It’s just the laws of statistics—you have sufficient number, somebody’s gonna do it.”
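The kind of misconfiguration Vickery describes often comes down to a single access-control entry. The sketch below is hypothetical, not UpGuard’s actual tooling: it checks an S3-style bucket ACL (the JSON shape Amazon’s cloud storage returns) for grants to the global “AllUsers” group, which is the setting that makes a bucket world-readable.

```python
# Hypothetical check for world-readable S3-style ACLs (illustrative only).
# An S3 bucket ACL is a list of grants; a grant to the global "AllUsers"
# group means anyone on the internet can perform the granted action.

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_permissions(acl: dict) -> list[str]:
    """Return the permissions an S3-style ACL grants to the public."""
    return [
        grant["Permission"]
        for grant in acl.get("Grants", [])
        if grant.get("Grantee", {}).get("URI") == ALL_USERS
    ]

# A misconfigured bucket: one careless grant exposes every object to download.
misconfigured_acl = {
    "Owner": {"DisplayName": "example-vendor"},  # hypothetical owner name
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "DisplayName": "example-vendor"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group", "URI": ALL_USERS},
         "Permission": "READ"},  # the human error: public read access
    ],
}

print(public_permissions(misconfigured_acl))  # ['READ'] means world-readable
```

One grant out of two is wrong, and the whole repository is open for anyone to download, which is exactly the failure mode in the Verizon case.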
Vickery has a reputation for finding these types of breaches, from personal information about U.S. voters to membership data from an HIV-positive dating app and user information from a Mac security tool.
Modern cloud systems from vendors like Amazon and Google usually provide a variety of security options to suit customer needs.
“At least we know at the infrastructure level, often there are very rigorous procedures and policies in place,” says Mark Testoni, president of SAP National Security Services.
He echoes Vickery’s contention that the problem is with techies who are not properly trained in cloud security. The trouble is, with many cloud systems designed to be easy to use and set up, workers who haven’t received proper training can find themselves in over their heads, put in charge of systems they haven’t had the time or guidance to learn.
“The bar for entry is lowered, so there’s a lot of situations where people are put in unfair roles that they weren’t necessarily trained or hired for,” Vickery says.
Even software developers often lack formal security training, says Kayne McGladrey, director of information security services at Boulder, Colorado security consulting firm Integral Partners. And even those who have it can face pressure from employers, impatient to see new features and fixes in production, to roll code out quickly, he says.
“Anything that reduces that time to value is pushed aside—and often, that’s security,” says McGladrey, who regularly gives talks on security issues for the Institute of Electrical and Electronics Engineers, an industry group.
Training also doesn’t protect against malicious employees who might deliberately leak data, of course, and its value shrinks when workers are victims of deception, like targeted phishing attacks. Some research has found anti-phishing training to be only marginally effective, meaning a determined attacker can still trick employees into sending sensitive data where it doesn’t belong.
One potential solution is security software that can detect when information moves to unusual places or when servers are configured in overly permissive ways. Vickery says UpGuard offers tools that would catch a server set to be publicly accessible.
Security companies are pouring serious money into that kind of software, but even then, someone has to monitor for alerts of suspicious behavior. Companies generally aren’t willing to immediately block anything that gets flagged, for fear that false positives will impede productivity, which means they must expend the resources to have someone watching for security alerts.
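A detection tool of the kind described above might work along these lines. This is a minimal sketch with hypothetical config records and rule names, not any vendor’s product: it flags risky settings as alerts for a human to review rather than blocking anything automatically, reflecting the trade-off between false positives and unwatched warnings.

```python
# Minimal sketch of a configuration auditor (hypothetical, illustrative only).
# Each rule maps a setting name to a predicate that decides whether its
# value is risky; matches become alerts, not automatic blocks.

RISKY_RULES = {
    "public_access": lambda v: v is True,          # world-readable storage
    "encryption": lambda v: v in (None, "off"),    # data stored in the clear
}

def audit(server: dict) -> list[str]:
    """Return human-readable alerts for one server's config record."""
    return [
        f"{server['name']}: risky setting {key}={server.get(key)!r}"
        for key, is_risky in RISKY_RULES.items()
        if is_risky(server.get(key))
    ]

# Hypothetical inventory: one misconfigured server, one safe one.
servers = [
    {"name": "crm-backup", "public_access": True, "encryption": "off"},
    {"name": "billing-db", "public_access": False, "encryption": "aes256"},
]

for server in servers:
    for alert in audit(server):
        print(alert)   # each alert still needs a person to read and act on it
```

The design choice here mirrors the article’s point: the tool only goes “beep,” and the alerts are worthless unless someone is resourced to watch them.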
“You don’t just provide the product, a magical box that you put in your building and it goes beep, and provided it goes beep, everything’s working,” McGladrey says.
Some help may ultimately come from insurers, as cyber-risk policies become more prevalent in business, Vickery says. If insurance companies drop clients who don’t follow security guidelines, or refuse to pay out when those clients suffer breaches, the guidelines are likely to be taken more seriously.
“People will do the minimum amount required to get the insurance and make sure their claim is not denied by the insurance company when they have a disaster,” Vickery says. “The insurance companies are going to be the regulators.”