Attackers live in production because that’s where valuable data resides. According to Ponemon, organizations spend $3 million annually combating bad actors through their Security Operations Centers (SOCs). The same study found that reducing false positives is the single most important activity for security teams.
In this article, I’ll explain how application security teams prioritize risks more efficiently when they understand their production environment.
Security Teams Lack Production Visibility
Production Visibility Is a Manual Process
Architecture visibility is a manual effort for most teams. Determining what, precisely, exists in production typically includes these steps:
- Ask engineering teams to document and explain their code
- Test code modules in the QA or staging environments
- Monitor web traffic to understand their data flows
This manual process results in an incomplete (and often inaccurate) view of production.
Scaling production visibility is challenging. Many companies have one application security engineer for every 100 developers. Often, the engineers’ code documentation differs from what’s actually deployed. Software security teams must piece together bits of information from many sources to prioritize risks across an ever-changing threat landscape. Rather than reducing risk, teams get stuck searching for visibility.
Vulnerability Management Needs Production Insights
Traditional application security testing (AST) tools excel at finding vulnerabilities. Vulnerability awareness, while important, inundates enterprises with thousands of unresolved issues. It’s no surprise that certain software applications are more susceptible to exploitation than others – and that some apps contain more sensitive information. The prioritization breakdown occurs when teams try to manage vulnerabilities without context or any correlation to business impact.
Application security teams who understand their deployed applications see a larger return on their efforts. The following sections detail how production context can affect vulnerability prioritization.
Example 1: The Vulnerability Doesn’t Actually Exist in Production
Not every package declared in a manifest is actually loaded at runtime. Vulnerabilities detected by software composition analysis (SCA) tools in the pipeline may therefore be irrelevant once an application is deployed. These false positives, security findings that don’t actually create risk, generate a massive cost for businesses that could invest those remediation efforts elsewhere.
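As a minimal illustration (the dependency name `legacy_xml_lib` is hypothetical), a package can appear in a project’s manifest yet never be loaded on the code path that actually runs in production:

```python
import importlib
import json

def export_report(data, fmt="json"):
    """Serializes a report; only loads the legacy library on the XML path."""
    if fmt == "xml":
        # 'legacy_xml_lib' is a hypothetical dependency that might carry
        # a known CVE; it is never imported unless fmt == "xml".
        legacy = importlib.import_module("legacy_xml_lib")
        return legacy.dumps(data)
    return json.dumps(data)

# If production only ever calls the JSON path, an SCA finding against
# legacy_xml_lib never materializes at runtime.
print(export_report({"id": 1}))
```

A pipeline-only scanner flags `legacy_xml_lib` either way; runtime visibility is what reveals that the vulnerable code never loads.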
Example 2: Attackers Can’t Reach the Vulnerability
The core risk management approaches are avoid, reduce, transfer, and accept. For each code weakness, it’s essential to ask, “Can an attacker reach this vulnerability?” Pieces of software that aren’t internet-facing will often include more accepted risks – even if only temporarily. Publicly accessible vulnerabilities, on the other hand, are far more likely to be prioritized for remediation. Application security teams that understand their deployed architecture target the substantial risks more efficiently.
Example 3: Breaches Vary in Business Significance
Security breaches are inevitable. Attackers only need to be right one time to break into a system. The susceptible data in a breach is, therefore, a vital piece of the prioritization puzzle. Data flows are the key to understanding data sensitivity. Correlating software vulnerabilities to their data flow’s producer, consumer, request protocol, encryption level, and content allows security teams to prioritize the correct issues confidently.
Here are two examples of how context affects vulnerability prioritization. Bionic ASPM scores each vulnerability based on severity, connections, and exploitability, all of which depend directly on where the application is deployed.
The first example shows the highest possible risk score because the application service where the vulnerability occurs is in production and has:
- Multiple critical policy violations (a policy violation could be a vulnerability, an application misconfiguration, or a specific standard or guardrail an organization wants to enforce)
- Access to sensitive customer data (PII)
- A connection to Tableau, a third-party service
- A connection to the internet
In the second example, the service where the vulnerability occurs is in development and has:
- Low-risk policy violations
- No connections to sensitive data
- No third-party or internet connection
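The contrast between the two examples can be sketched as a toy scoring function. The fields and weights below are assumptions for illustration, not Bionic ASPM’s actual model:

```python
# Illustrative only: combines a static severity score (0-10) with
# deployment context. Weights are arbitrary, chosen to show the idea.
def risk_score(severity, in_production, has_pii, third_party, internet_facing):
    """Raises a finding's priority based on where and how it is deployed."""
    score = severity
    if in_production:
        score *= 2       # live systems carry real exposure
    if has_pii:
        score += 3       # sensitive data raises breach impact
    if third_party:
        score += 1       # integrations add attack surface
    if internet_facing:
        score += 3       # publicly reachable vulnerabilities come first
    return min(score, 25)

# Same severity-9 finding, two very different deployments:
prod = risk_score(9, True, True, True, True)       # production, internet-facing
dev = risk_score(9, False, False, False, False)    # isolated dev service
print(prod, dev)
```

The identical CVE scores near the maximum in the first deployment and stays low in the second, which is exactly the prioritization signal a repository-only view cannot provide.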
Production Is Impossible to Reproduce
Deployed Code Doesn’t Always Match the Repository
Monorepos can deploy to multiple microservices. Multiple repositories can combine to deploy a single application. To add to the complexity, the main branch of a repo might not be what’s deployed to production today. Code repositories are an essential part of the story – but they can’t give complete visibility of what’s running in the real world.
Environment Variables Affect Runtime Behavior
Environment variables (often loaded from a .env file) prevent hardcoding. Rather than typing in the IP address of a database each time it’s needed, you can simply reference DB_CUSTOMER, for example. The best practice is to avoid checking these values into a code repository. If the values aren’t sensitive, they may be stored somewhere locally. Or, if it’s something important like a password, it may be stored in a secrets manager. In either case, you won’t understand how environment variables affect the runtime environment by looking at the repository. The runtime must also be analyzed to confirm that the environment variables were entered correctly. Cross-environment contamination is a common mistake with .env values.
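A small sketch of the pattern, using the DB_CUSTOMER variable from the text (the default value here is a hypothetical placeholder):

```python
import os

# DB_CUSTOMER comes from the runtime environment, not the repository;
# the fallback value is illustrative only.
db_host = os.environ.get("DB_CUSTOMER", "localhost:5432")

def customer_db_url():
    """Builds the connection URL from whatever the environment supplies."""
    # If staging's value leaked into production (cross-environment
    # contamination), nothing in the repository reveals it; only
    # inspecting the running process would.
    return f"postgresql://{db_host}/customers"

print(customer_db_url())
```

The repository shows only the variable name; the value, correct or contaminated, exists solely in the deployed environment.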
Configuration Files Can Introduce Weaknesses
Configuration files (configs) contain information used by applications during runtime. The config contents often vary between development, staging, and production environments. Furthermore, configuration files are usually stored locally rather than in a code repository. The uncertainty of what’s running in production can seriously affect your security posture.
Environment variables are often stored inside configuration files. Every security concern discussed in the “.env” section applies to configs.
Additionally, configuration files can add or remove security guardrails. Common mistakes include:
- Users are granted elevated privileges.
- Sensitive log files output to unsafe locations.
- Incorrect values are assigned to variables.
- Variable values are unexpectedly overwritten.
- Variables or configuration files are named unintuitively, leading to undesired behavior.
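The overwrite and elevated-privilege mistakes above can be sketched with hypothetical config contents (all names and values here are illustrative):

```python
# Hypothetical base and environment-specific configs; merging them in
# the wrong order silently replaces hardened production values.
base_config = {"debug": False, "db_role": "readonly", "log_path": "/var/log/app"}
dev_overrides = {"debug": True, "db_role": "admin", "log_path": "/tmp/app"}

def load_config(base, overrides):
    """Later values win: each override replaces the base value."""
    merged = dict(base)
    merged.update(overrides)
    return merged

# If a dev override file ships to production by mistake, the service
# now runs with debug on, elevated DB privileges, and logs in /tmp,
# and none of that is visible in the code repository.
config = load_config(base_config, dev_overrides)
print(config["db_role"])
```

Because the merge happens at startup, only runtime analysis shows which values the application is actually using.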
Command Line Arguments Appear Only at Runtime
Database locations and other pertinent information can be passed on the command line. Typically, this is done instead of defining environment variables. If command line arguments modify the locations a program connects to (or its guardrails), they’re undetectable outside the runtime.
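A minimal sketch of the idea (the flag name and default are assumptions for illustration): an argument supplied at launch never appears in the repository, the .env file, or any config file.

```python
import argparse

# Hypothetical CLI for a service whose database location can be
# overridden at launch time.
parser = argparse.ArgumentParser()
parser.add_argument("--db-host", default="db.internal:5432",
                    help="overrides the database location at launch")

def resolve_db_host(argv):
    """Returns the database host this process will actually use."""
    return parser.parse_args(argv).db_host

# Launched as `app --db-host staging-db:5432`, production silently
# points at staging; only inspecting the running process reveals it.
print(resolve_db_host(["--db-host", "staging-db:5432"]))
```

Static analysis of the repository would report the default host; the running process may be using something else entirely.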
How Security Teams Measure Their Production Security Posture
Eliminating software vulnerabilities early in development is essential. Preventing risks from seeing the light of day keeps customers safe while helping security teams sleep at night. But, in a world that isn’t vulnerability-free, production tells you what matters the most. And, perhaps more importantly, it tells you what doesn’t matter.
Read my blog on measuring right to learn how Application Security Posture Management (ASPM) compares to other ways of measuring your production security posture.