The back-to-school sales circulars are arriving, a reminder that fall is on its way. For most organizations, fall also brings an annual budgetary exercise for which many mid-level managers and executives will be unprepared.
Unfortunately, according to Willis Towers Watson, “Among executives, there is little consensus on how to allocate cyber budgets,” although responses clustered closely around “technology to harden cyber-defenses” and “IT talent acquisition and skills training/development.” Be ready for 2019 budgetary questions and planning by starting early and investigating essential cyber security technologies, rather than resorting to a panicked, late-night whirlwind of RFPs and industry reports.
Mitigating Insider Threats Is The Top Priority
According to Veriato, “a 53% majority (of organizations) have confirmed insider attacks against their organization in the previous 12 months.” This includes human error, which according to AIG “continues to be a significant factor in the majority of cyber insurance claims.”
“The primary risk factor was too many users with excessive access privileges,” according to Veriato. Furthermore, Thales Security found that “the single most dangerous threat actor category that our survey identified was privileged users.” These findings and others led Gartner to recommend Privileged Access Management (PAM) as the top priority for CISOs in 2018, noting that “this project is intended to make it harder for attackers to access privileged accounts and should allow security teams to monitor behaviors for unusual access.”
The challenge most organizations face when trying to limit or control privileged access with a PAM product is that they take a project approach. A project has a clear start, middle and end, after which the people assigned to it can move on to other projects. PAM is not a tactical effort. Organizations that successfully control privileged access understand that it requires a program approach and long-term strategic planning.
This is because PAM programs are not about installing and configuring PAM products; rather, they’re about changing the behavior of privileged users. Requiring users to stop sharing passwords via yellow sticky note and instead store those passwords in a centralized vault requires a change in how people do their job. This becomes obvious when setting up automated password rotation so that administrators must use the PAM solution to check out a privileged account.
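To make that check-out step concrete, here is a minimal sketch of what an automated check-out might look like against a hypothetical PAM vendor’s REST API; the URL, endpoint path, token handling and field names are all illustrative assumptions rather than any specific product’s interface.

```python
import requests

# Hypothetical endpoint, token and field names: every PAM product exposes its
# own API, so treat this purely as a sketch of the check-out workflow.
PAM_BASE_URL = "https://pam.example.com/api/v1"
API_TOKEN = "replace-with-a-service-token"


def check_out_privileged_account(account_id: str, reason: str) -> dict:
    """Request a time-boxed check-out of a privileged account.

    The PAM solution releases the current password and rotates it again when
    the check-out window expires or the account is checked back in.
    """
    response = requests.post(
        f"{PAM_BASE_URL}/accounts/{account_id}/checkout",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"reason": reason, "duration_minutes": 60},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    creds = check_out_privileged_account("domain-admin-01", "Patch window CHG-1234")
    print("Checked out until", creds.get("expires_at"))
```

The point of the workflow is the behavior change: administrators no longer hold standing passwords, and every use of a privileged account is requested, logged and followed by an automatic rotation.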
All but the smallest organizations should budget for a PAM program that runs the full first year. When considering the cost of the software plus maintenance, or of the SaaS subscription, plan to negotiate a two- or three-year discount from the vendor, because most organizations will not have fully deployed and integrated the PAM solution into both their network environment and their organizational culture in less than a year.
Organizations should budget for either professional services from the vendor or consulting from a third-party consultancy. The benefit of professional services is rapid deployment of the solution; however, many organizations find that a “quick start” or “jump start” approach does not scale and glosses over the thorny political and process issues associated with cultural change. The benefit of a third-party consultancy is its independence from the vendor and greater flexibility in the scope and duration of the services offered. Unless a team has prior domain experience with PAM, attending a training class should never be considered an adequate investment on its own.
However, both the consultants and the professional services team will go home at some point. Successful organizations should plan on at least two-and-a-half FTEs for a PAM program in the first year. Two of the FTEs will deploy the design (provided by the vendor or the consultancy) and a project manager should be assigned at a minimum of 50% to help coordinate meetings and working sessions with all the stakeholders and each logical group of privileged account owners.
Quickly Identifying Breaches Is The Second Priority
Dwell time is the time that a threat actor can stay undetected on an organization’s network. Many recent cyber security reports have found that the average dwell time is two months or more. FireEye, for example, found that, “The Americas median dwell time decreased slightly from 99 days in 2016 to 75.5 days in 2017.” The risk is that a threat actor with time to spare can exfiltrate data, use an organization’s infrastructure for launching attacks and create downstream liability by attacking partners, suppliers and vendors.
Part of the reason for these long dwell times is the inability of a modern Security Operations Center (SOC) team to process and investigate the daily volume of alerts from their deployed Security Information and Event Management (SIEM) solution. Cisco found that “44% of security operations managers see more than 5,000 security alerts per day.” Yet, “among organizations that receive daily security alerts, an average of 44%…are not investigated.” A more recent report by BAE Systems found that “on average, of the alerts that make it through the current security tools these organizations have in place, fewer than 20% are actually investigated.” Worse, BAE also found that a “shocking 7% – as many as over 1,200 U.S. medium-sized businesses – are doing nothing with the alerts they receive.” SOC teams are overburdened and drowning in false positives, and threat actors use smokescreen attacks as tradecraft to inflate the number of alerts an organization receives.
Companies will be breached. Thales Security found that “almost half (46%) of U.S. companies experienced a data breach in the last year.” Yet SANS found that “20% (of respondents) said they did not know if they had been breached.”
Deception technologies offer a compelling solution to cut through the noise of false alarms while recognizing the reality that breaches will continue despite an organization’s best efforts. In the typical cyber security kill-chain, the threat actor gains an entry point through a compromise, performs reconnaissance on the compromised system, and then tries to move laterally and escalate their privileges. Threat actors will investigate the cookies stored on the system and may look for credential hashes using tools like Mimikatz. This data, collected during reconnaissance, helps the threat actor to decide where to move laterally on the network.
The underlying premise of deception technologies is to leave falsified credentials, cookies, administrative password hashes and unencrypted Excel spreadsheets named “passwords.xlsx” on as many machines on the network and in the cloud as is practical. This data points to virtual machines deployed on the organization’s network or cloud. As an enticement to the threat actor, these virtual machines appear to have exploitable vulnerabilities caused by common service misconfigurations or missing security patches.
The catch is that the entire infrastructure – the Excel files full of passwords, the administrative password hashes, the browser cookies, the virtual machines – is fake. Deception technologies run this false infrastructure at scale. When a threat actor interacts with a virtual machine, whether connecting to an SMB share named “Payroll” or logging in with an administrative password, an alert is sent to the SOC. Unlike the thousands of false positives, this alert is genuine and shows a breach in progress. A SOC team can then decide how best to engage the threat actor, from immediately quarantining the compromised machine to monitoring the actor’s tactics, techniques and procedures (TTPs) to build better defenses. The organization’s risk is reduced once the threat actor engages with the fake infrastructure, as no real data is stored there.
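As a heavily simplified illustration of the honeytoken idea behind these platforms, the sketch below seeds a decoy credentials file on a host and records it in an inventory, so that any later use of those credentials anywhere on the network can be raised to the SOC as a high-fidelity alert. The file name, location and inventory format are assumptions; commercial deception products automate this at far greater scale and realism.

```python
import csv
import json
import secrets
from pathlib import Path

# Hypothetical decoy location; a real deployment would vary names and paths per
# department and would typically produce genuine-looking .xlsx files instead.
DECOY_PATH = Path.home() / "Documents" / "passwords.csv"
INVENTORY_PATH = Path("decoy_inventory.json")


def seed_decoy_credentials() -> dict:
    """Write a decoy credentials file and return a record for the SOC inventory."""
    fake_user = "svc_backup_admin"
    fake_password = secrets.token_urlsafe(12)  # never used by any real account
    DECOY_PATH.parent.mkdir(parents=True, exist_ok=True)
    with DECOY_PATH.open("w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["system", "username", "password"])
        writer.writerow(["payroll-db01.example.com", fake_user, fake_password])
    return {"file": str(DECOY_PATH), "username": fake_user, "password": fake_password}


if __name__ == "__main__":
    record = seed_decoy_credentials()
    # SOC detection rule: any authentication attempt using these credentials,
    # on any system, is by definition a threat actor acting on stolen decoy data.
    INVENTORY_PATH.write_text(json.dumps([record], indent=2))
    print("Decoy seeded at", record["file"])
```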
Although this technology is new and appears easy to deploy, a program approach is still necessary for long-term success. Multiple parts of the organization need to be involved in both the initial deployment and the ongoing operation of the deception infrastructure. Organizations should assign two FTEs and a half-time project manager to a deception project for the first three to six months. The FTEs will deploy and configure the solution, and the project manager will handle coordination with multiple groups within the organization. Successful deception technology deployments are high-touch at first. The assigned staff need to consider how to:
- Avoid false positives by having the network/security team create exclusions in any automated or manual vulnerability scans or other network scanning services
- Create convincing deception virtual machines that look identical to other machines on the network, including departmental, regional and organizational customization of hostnames, installed operating systems, installed software packages and file naming conventions (a hostname-generation sketch follows this list)
- Update breach investigation policies to cover when to cut off a threat actor’s connection while they are attacking a deception machine
- Update breach investigation and recovery procedures to include the new options provided by the deception solution
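For the hostname point above, here is a minimal sketch of generating decoy names that follow an existing convention without colliding with real machines; the site codes, roles and naming pattern are hypothetical and would normally be derived from the organization’s CMDB or inventory export.

```python
import random

# Hypothetical convention: <site>-<role><two-digit index>, e.g. "nyc-app07".
SITES = ["nyc", "lon", "sgp"]
ROLES = ["app", "db", "fs", "web"]


def generate_decoy_hostnames(count, existing):
    """Return decoy hostnames that blend in with real machines but never collide."""
    decoys = []
    while len(decoys) < count:
        name = f"{random.choice(SITES)}-{random.choice(ROLES)}{random.randint(1, 99):02d}"
        if name not in existing and name not in decoys:
            decoys.append(name)
    return decoys


if __name__ == "__main__":
    print(generate_decoy_hostnames(5, existing={"nyc-app01", "lon-db03"}))
```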
As this is an emerging technology, it is unlikely that an organization’s staff will have prior domain experience. If the deception solution vendor offers professional services beyond the initial proof of concept, organizations should plan to take advantage of them. If the vendor has consulting or systems integrator partners, organizations can look to them to build a program that deploys the solution across multiple parts of the organization.
Securing Application Passwords Is The Third Priority
An organization that follows the first two recommendations will see these benefits:
- Accessing administrative accounts will require an approved check-out of those credentials under the PAM program, and administrative account passwords may be rotated as often as every check-out.
- Falsified credentials will lead external and internal threat actors into deception environments, mitigating the effects of a breach.
Yet dozens or hundreds of highly privileged credentials used by applications will remain static, with minimal monitoring for inappropriate use. If compromised, these credentials can lead to massive breaches through data exfiltration.
Application to Application Password Management (AAPM) is one of the tragedies of modern cyber security. Organizations charge software developers in a traditional software development life cycle (SDLC) or DevOps environment with developing applications rapidly to meet the needs of the business and of customers. The bonuses and compensation for these developers are not based on writing secure software; rather, they are based on delivering usable software quickly, knowing that a patch or future release can address any vulnerabilities. Faced with this compensation structure and the challenges of modern software development, most development teams choose to hard-code or embed secrets and credentials in their software.
Threat actors know this. GittyLeaks, Git Secrets, Repo Supervisor, Truffle Hog and Git Hound are all examples of open-source tools that search code stored on GitHub repositories for AWS Secret Keys, hard-coded passwords, SSH keys and static usernames. GitHub recognizes this is a challenge and posted a guide for teams that have accidentally leaked credentials.
However, not all software development teams use GitHub, nor is GitHub the problem. The challenge is that developers may store AWS Secret Keys, SSH keys, hard-coded passwords and usernames (collectively, “secrets”) in configuration files or registry keys, or compile them into an application’s source code. If a threat actor obtains one of these secrets, whether through brute force, an incorrectly secured configuration file or another means, most organizations would find it difficult to identify that a breach had occurred and to change the compromised secret quickly enough to stop it. Developers and DevOps teams fear changing these secrets in case the application does not come back online. IDC estimates that the “cost of a failure in a critical application is $500,000 to $1 million” per hour.
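To make the anti-pattern concrete, the snippet below shows the kind of embedded secret those scanners look for; the key, connection string and database driver are invented for illustration. Because the values are compiled into the application, rotating them means a code change, a review, a build and a redeployment, which is exactly why teams hesitate to touch them.

```python
# ANTI-PATTERN (illustrative values only): secrets embedded directly in source.
AWS_SECRET_ACCESS_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # fake example key
DATABASE_URL = "postgresql://app_user:Sup3rS3cret!@db01.example.com:5432/payroll"


def connect_to_database():
    # Anyone with read access to this repository, a backup of it, or the
    # compiled artifact now holds what amounts to a working credential.
    import psycopg2  # assumes a PostgreSQL application using psycopg2

    return psycopg2.connect(DATABASE_URL)
```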
An effective AAPM program requires both training and tools to succeed, but it should not require new employees. In the first year, organizations should plan on assigning one or more software architects to identify existing secrets storage methods and develop mitigation plans. Organizations following a traditional SDLC model should task existing project managers with coordinating the transition to the new secrets storage and the associated code releases. Development teams following Agile, Scrum or other methodologies should likewise plan to incorporate AAPM into existing job responsibilities. Larger organizations may consider a dedicated full-time program manager for the first year to provide oversight and coordination across all development teams.
Both commercial and open-source application security testing tools exist that will help organizations rapidly identify the secrets stored in existing applications. Many of these tools use static code analysis; organizations should plan on licensing enough copies to incorporate them into existing code review processes, and developers should be trained on their use. This approach will give an organization visibility into the size of the problem.
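For a sense of what these scanners do under the hood, here is a heavily simplified sketch of regex-based secret detection over a source tree. Real tools such as Truffle Hog add entropy analysis, git history scanning and hundreds more patterns, so the three patterns below are illustrative only.

```python
import re
from pathlib import Path

# A few illustrative patterns; production scanners ship many more and add
# entropy checks to catch secrets that do not match any known format.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "Hard-coded password": re.compile(r"(?i)password\s*=\s*['\"][^'\"]{6,}['\"]"),
}


def scan_tree(root):
    """Return (file, line number, pattern name) for every suspected secret."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        for lineno, line in enumerate(lines, start=1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings


if __name__ == "__main__":
    for file, lineno, name in scan_tree("."):
        print(f"{file}:{lineno}: possible {name}")
```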
Organizations with operational PAM programs should check whether their PAM vendor offers a secrets storage solution as part of AAPM. Many PAM vendors have incorporated this functionality into their solutions or platforms, but the market is not consistent, and several dedicated secrets storage solutions for applications exist. If an organization already uses a PAM vendor with an AAPM solution, it should use that solution unless there is a compelling reason to introduce another vendor into the organization. This approach is also faster because it builds on existing infrastructure, so installing and configuring the secrets provider can be treated as a short project. Software development policies should also mandate the use of the secrets provider for all new applications.
Once the organization has identified the applications with hard-coded secrets and has deployed a new secrets provider for automatic storage and rotation of those secrets, it should give each development team training and direction on how to incorporate the new secrets provider into their applications. This is a program that requires extensive coordination and communication across teams and across the organization. Thankfully, it does not introduce new processes: the existing software development, testing and release processes can be used, because the work is a series of code changes that replace hard-coded secrets with API calls (or similar) that retrieve and use secrets stored securely in the secrets provider.
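As an example of the kind of code change involved, the sketch below replaces a hard-coded database password with a runtime lookup against HashiCorp Vault via the hvac client, used here only as one example of a secrets provider; the secret path and field names are assumptions, and PAM vendors’ AAPM SDKs expose similar retrieval calls.

```python
import os

import hvac  # HashiCorp Vault client, shown here as one example secrets provider


def get_database_password() -> str:
    """Fetch the database password from the secrets provider at runtime.

    Nothing is hard-coded: the Vault address and an application token (or a
    stronger auth method such as AppRole) come from the environment, and the
    secret itself lives at a path the security team controls and rotates.
    """
    client = hvac.Client(
        url=os.environ["VAULT_ADDR"],
        token=os.environ["VAULT_TOKEN"],
    )
    # Path and field names are illustrative; adjust to your KV mount layout.
    secret = client.secrets.kv.v2.read_secret_version(path="payroll-app/db")
    return secret["data"]["data"]["password"]
```

Because the change is localized to where credentials are read, each team can ship it through its normal release process rather than a special one-off project.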
Budgeting Means Planning To Succeed
Planning for cyber security programs in 2019 well before an organization’s budgetary cycle is both proactive and practical. These three recommended programs all mitigate existing risks and reduce the likelihood of future incidents. By taking the time now to prepare for budgetary conversations this fall, you’ll help your organization defend against dangerous cyber-threats.