Sensitive Data Exposure
Mitigating the Risks of Sensitive Data Exposure in Applications
Any industry that collects, stores, or processes sensitive data is at risk of a data breach. In 2020, the average data breach was estimated to cost $3.86 million to contain, counting both direct and indirect costs. Failing to protect data, comply with privacy laws, and spot vulnerabilities puts companies in danger of a massive breach. This, in turn, places them at risk of hefty penalties or even closure.
What is sensitive data exposure?
Sensitive data is any information that is meant to be protected from unauthorized access. It can include anything from personally identifiable information (PII), such as Social Security numbers, to banking information and login credentials. When this data is accessed by an attacker as the result of a data breach, users suffer sensitive data exposure. Breaches that expose sensitive credentials can cost millions of dollars and destroy a company's reputation along the way. In the 21st century, the spread of mobile devices has increased internet usage dramatically. As a result, banks, hospitals, retailers, and many other industries have made it their mission to build a user-friendly, efficient online presence, much of it delivered through applications.
Attackers target applications with vulnerabilities that leave sensitive data unprotected. Today, data breaches are common and pose a bigger threat than ever as legacy application security falls far behind advanced attack techniques used to exploit application vulnerabilities and gain access to sensitive data.
The biggest sensitive data exposures
Data exposure has become a term recognized around the world. Billions of people are at risk of sensitive data exposure, which should put security at the top of the list of deployment priorities. That, however, is often not the case, as developers rush to meet the demands of digital transformation. Attackers know that fast-paced development cycles are prone to vulnerabilities and target the ones they can exploit in an application attack. Two of the largest breaches of the past decade affected more than 3 billion people, and the largest of them went unnoticed for three years.
The largest instance of sensitive data exposure to date impacted 3 billion user accounts. The Yahoo! data breach began in 2013, when hackers gathered credentials for roughly 1 billion users, including email addresses, passwords, and security questions and answers. A different set of hackers targeted Yahoo! in 2014, this time affecting 500 million users. Neither breach was disclosed until 2016, three years after the original intrusion. When all was said and done, 3 billion accounts were compromised and customer confidence sank, dropping the company's value by hundreds of millions of dollars.
In 2017, Equifax, one of the leading credit reporting agencies in the United States, was the victim of a breach. The credentials and PII of roughly 147 million people were stolen by hackers who exploited an unpatched vulnerability in Apache Struts, an open-source Java web framework, along with an expired encryption certificate that kept malicious traffic from being inspected. With access to the Equifax system, the attackers found credentials stored in plain text and used them to access the accounts of both users and administrators. They sent HTTP requests carrying malicious code, and because no suspicious behavior was detected, these executions succeeded, bypassing authorization controls and maintaining access for nearly two months. Equifax took a huge hit to its reputation, in part because it failed to report the breach for over a month after discovering it.
Ways in which sensitive data can be exposed
Any time an organization lacks adequate security controls, sensitive data is at risk of exposure. To strengthen mitigation strategies against potential application attacks, development and security teams must first have a firm grasp of the ways sensitive data can be exposed, including:
Data in transit
Data is often on the move, carrying commands and requests across networks to other servers, applications, or users. Data in transit is highly vulnerable, especially when it moves across unprotected channels or to the application programming interfaces (APIs) that allow applications to communicate with one another. One attack that targets data in transit is the man-in-the-middle (MITM) attack, which intercepts traffic and monitors communications. Cyber criminals sit between the two communicating parties, able to intercept all data in motion, including login credentials. Another class of attack takes advantage of weaknesses in Secure Sockets Layer (SSL) and Transport Layer Security (TLS), the protocols used to encrypt data in transit so that it is difficult to read if intercepted. An SSL spoofing or downgrade attack can mimic a secure connection and deceive users into interacting with malicious content, and weaknesses in how encrypted traffic is handled can leave room for injection attacks such as cross-site scripting (XSS) that run corrupted browser-side requests.
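To make the transport-layer point concrete, here is a minimal sketch, assuming Java 11+ and a hypothetical API endpoint (api.example.com), of forcing an HTTPS connection that only negotiates modern TLS versions so that data in transit stays encrypted:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import javax.net.ssl.SSLParameters;

public class SecureTransitExample {
    public static void main(String[] args) throws Exception {
        // Only negotiate modern TLS versions; older SSL/TLS protocols are
        // where downgrade and interception attacks live.
        SSLParameters tlsOnly = new SSLParameters();
        tlsOnly.setProtocols(new String[] {"TLSv1.3", "TLSv1.2"});

        HttpClient client = HttpClient.newBuilder()
                .sslParameters(tlsOnly)
                .build();

        // Hypothetical endpoint. Note the https:// scheme -- credentials and
        // other sensitive data should never travel over plain http://.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/accounts"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
    }
}
```

The design choice here is to fail closed: if a server (or an interceptor) only offers a downgraded protocol, the handshake fails rather than silently exposing the payload.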
Data at rest
Data at rest is housed in a system, be it a computer or a network. It is often thought to be less vulnerable than data in transit, but it is also more valuable. Attackers use different vectors to get hold of stored data, often deploying malware such as Trojan horses or computer worms, which reach data-bearing systems through direct downloads from a malicious USB drive or through malicious links sent via email or instant message. If data is housed on a server, attackers may reach files stored outside the normal authenticated areas of access. This raises the probability of a directory traversal (path traversal) attack, in which attackers gain access to otherwise restricted areas of a server.
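As a hedged illustration of blocking the path traversal scenario described above (the base directory and file names are hypothetical), the sketch below normalizes the user-supplied file name and refuses any path that escapes the intended directory:

```java
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SafeFileAccess {
    // Hypothetical storage root for application files.
    private static final Path BASE_DIR = Paths.get("/var/app/reports");

    public static Path resolveUserFile(String userSuppliedName) throws IOException {
        // Resolve and normalize the requested path, collapsing any "../" segments.
        Path requested = BASE_DIR.resolve(userSuppliedName).normalize();

        // Reject anything that escapes the base directory (e.g., "../../etc/passwd").
        if (!requested.startsWith(BASE_DIR)) {
            throw new IOException("Path traversal attempt blocked: " + userSuppliedName);
        }
        return requested;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(resolveUserFile("q3-summary.pdf")); // allowed
        try {
            resolveUserFile("../../etc/passwd");
        } catch (IOException blocked) {
            System.out.println(blocked.getMessage()); // traversal rejected
        }
    }
}
```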
Attacks that expose sensitive data
There are different types of application attacks that can expose sensitive data. These include:
SQL injection attacks
SQL injection attacks are the most frequent type of application attack; applications with exploitable vulnerabilities experienced an SQL injection attack 65% of the time. During an SQL injection attack, bad actors manipulate SQL requests so that they execute malicious commands. If servers do not have a strong line of defense for identifying manipulated code, cyber criminals can use those commands to retrieve sensitive data. Depending on the strength of the command or request programmed into the malicious injection, attackers can even gain persistent access to unauthorized areas of the application, able to come and go as they please.
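A minimal sketch of the underlying mechanics, using standard JDBC with a hypothetical accounts table: the commented-out concatenation lets attacker input rewrite the query itself, while the parameterized version binds the input strictly as data:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountLookup {

    // VULNERABLE pattern (shown only as a comment): input such as
    //   x' OR '1'='1
    // turns "SELECT ... WHERE owner = 'x' OR '1'='1'" into a query that
    // returns every row in the table.
    //   String query = "SELECT id, balance FROM accounts WHERE owner = '" + input + "'";

    // SAFE pattern: the placeholder is bound as data and never parsed as SQL.
    public static ResultSet findAccounts(Connection conn, String owner) throws SQLException {
        PreparedStatement stmt =
                conn.prepareStatement("SELECT id, balance FROM accounts WHERE owner = ?");
        stmt.setString(1, owner);
        return stmt.executeQuery();
    }
}
```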
Network compromise
When a network is compromised, all data is left at risk of exposure. This is especially true if attackers maintain a constant yet silent presence, as is common in attacks like session hijacking. The time a user is logged in is referred to as a session and is labeled with a unique session ID. If attackers obtain this ID, they can access the session cookies that carry activity and credentials across different websites. With an exploitable vulnerability, bad actors can launch attacks that leave few indicators of compromise (IOCs). If they remain undetected, cyber criminals have data at their disposal, leaving users at risk of sensitive data exposure or identity theft.
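One common hardening step, sketched below under the assumption of a standard javax.servlet (Servlet 3.1+) application, is to keep the session cookie out of reach of scripts and plain HTTP and to rotate the session ID once a user authenticates:

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.SessionCookieConfig;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class SessionHardening implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // The session cookie is never readable by JavaScript (blunts XSS-based
        // theft) and is never sent over unencrypted HTTP (blunts interception).
        SessionCookieConfig cookie = sce.getServletContext().getSessionCookieConfig();
        cookie.setHttpOnly(true);
        cookie.setSecure(true);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Nothing to clean up for this example.
    }

    // Hypothetical hook to be called from the login handler after credentials
    // are verified.
    public static HttpSession onLoginSuccess(HttpServletRequest request) {
        // Issuing a fresh ID means any session identifier an attacker captured
        // before login is useless afterward (prevents session fixation).
        request.changeSessionId();
        HttpSession session = request.getSession();
        session.setMaxInactiveInterval(15 * 60); // expire idle sessions after 15 minutes
        return session;
    }
}
```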
Broken access control attacks
Networks and applications are programmed with limits on what users can and cannot access. When access control is broken, users gain access to areas outside those limits, some of which house sensitive data. Broken access control attacks are common, ranking No. 1 in the 2021 OWASP Top 10. Part of the reason is that these flaws tend to slip past legacy security scanning tools such as dynamic application security testing (DAST), because finding them takes a deeper understanding of how data moves within an application. The false-negative results produced by DAST tools leave vulnerabilities unpatched, which can result in a successful broken access control attack, putting user confidentiality and web servers in danger of exposure or complete takeover.
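To show what an access control check looks like in practice, here is a hedged sketch (the statement record, in-memory store, and method names are hypothetical) that verifies ownership on the server side instead of trusting whatever ID the client submits:

```java
import java.util.Map;

public class StatementService {

    // Minimal record holding the owner and the sensitive contents (Java 16+).
    public record BankStatement(String owner, String body) {}

    // Hypothetical in-memory store mapping statement IDs to statements.
    private final Map<String, BankStatement> statements;

    public StatementService(Map<String, BankStatement> statements) {
        this.statements = statements;
    }

    public BankStatement getStatement(String statementId, String authenticatedUser) {
        BankStatement found = statements.get(statementId);
        if (found == null) {
            throw new IllegalArgumentException("No such statement");
        }
        // Broken access control is exactly this check being absent: without it,
        // any logged-in user could read any statement by guessing or
        // incrementing IDs in the request.
        if (!found.owner().equals(authenticatedUser)) {
            throw new SecurityException("Not authorized to view this statement");
        }
        return found;
    }
}
```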
Ransomware attacks
Ransomware is a type of malware that encrypts files on the infected device. It commonly reaches devices via an attachment or link that users believe comes from a trusted source. Once clicked, the ransomware is downloaded and files are encrypted into unreadable form that attackers leverage for ransom. The attackers hold the key needed to decrypt the information and send a demand for money or information in exchange for it. Because they control the decryption key, they effectively control all the information on the compromised system and can do with it as they please.
Phishing attacks
Phishing attacks often fool users by leading them to believe they are visiting or logging into a trusted website. Targets are most often contacted via email or text message by attackers posing as legitimate organizations. If targets are tricked into thinking the attacker represents a trusted source, they can be lured into providing sensitive details that bad actors then use to hack into their accounts, steal their credit card information, or acquire Social Security numbers.
Insider threat attacks
Insider threats are a risk all organizations face, as they most often involve a current or former employee. Anyone within the company who has access to sensitive details could initiate a data breach by misusing that access to steal information. Such misuse often goes unnoticed, because organizations focus on attacks from outside sources and spend little time building defenses against insider attacks.
Impact of sensitive data exposure
The 2020 Cost of a Data Breach Report, conducted by Ponemon Institute and published by IBM Security, showed that the global average cost of a data breach hit $3.86 million. It also pointed out a large weakness in cybersecurity: on average, breaches went undetected or uncontained for 280 days, one day longer than in the previous year's study. The impacts of sensitive data exposure come from both direct and indirect sources, jeopardizing an organization's value and reputation.
Sensitive data exposure impact on brand
Attackers who gain access to a system and are left to rummage around undetected in unauthorized areas can cause an immense amount of damage, sacrificing the integrity of an organization. Organizations suffer when they are the victims of a data breach: users begin to see them as untrustworthy or insecure, even after the breach is patched, and become more hesitant to hand over personal information. Client confidence is a huge factor in an organization's success, and without it, organizations can fail. Once a data breach has reached large proportions and affected millions, it catches the attention of the media. Media exposure and negative associations with brand security taint a company's reputation and can follow it for years to come.
Cost of sensitive data exposure
Loss of confidence in an organization is one of the largest hits to revenue that companies face, accounting for roughly 40% of the average total cost of a data breach (about $1.5 million of the $3.86 million average). The time it takes to detect a breach is also a huge factor driving up costs: the longer it takes to detect and contain, the higher the bill. Faster detection could save large amounts of money and even salvage some sensitive customer details in the process. Companies that detect and contain breaches in less than 200 days save, on average, $1.1 million on containment and cleanup. Other expenses involved in bouncing back from a breach include rebranding and reputation repair.
Operational outage/interruption due to data breach
Any interruption of business and network operations can cause financial losses. When malicious activity is detected within an organization, operations come to a halt, and it takes time to get systems back into an operating state, resulting in a loss of client interest and confidence. Applications make online tasks faster and more efficient, complementing the busy lifestyle of the average 9-to-5 worker. The longer operations are down, the more impatient clients become, and the more business is lost.
Cost of compliance fines
Organizations that collect any type of sensitive data are held responsible in the wake of a data breach. The push for data protection has led to laws that hold organizations accountable for the proper collection and storage of data. To stay within the law, organizations must keep up to date and make sure they steer clear of compliance fines, which drives up costs further and often leaves companies hiring outside experts who know these laws. Take the United States as an example, where privacy and breach notification requirements are spread across more than 50 separate state and territorial laws.
Looking at more specific fees tied to compliance, Health Insurance Portability and Accountability Act (HIPAA) fines are based on the level of damage caused or negligence displayed and can reach $50,000 per violation. Per the Cost of a Data Breach Report from IBM and Ponemon Institute, the healthcare industry faces the highest average breach cost of any sector: $7.13 million.
The European Union introduced the General Data Protection Regulation (GDPR) to keep pace with digital transformation. Any organization that handles the data of EU residents is subject to penalties if that data is not handled properly. GDPR penalties vary with the amount of data exposed and the severity of the violation, and can reach €20 million or 4% of annual global turnover, whichever is higher.
Online credit card use is growing rapidly, and systems that store payment information handle both money and PII, making them attractive targets. To stay current with the Payment Card Industry Data Security Standard (PCI DSS), organizations of all sizes must bear ongoing compliance costs in order to accept credit card payments. The amount varies by size, and full compliance may also require a third-party audit, adding to the overall cost. Keeping data protected is a lot of work, but the cost of protection and compliance is small compared to the losses that follow a data breach.
Avoiding sensitive data exposure
Most data breaches are the result of weak application security. Taking steps to protect applications from the inside out can prevent the breaches that lead to sensitive data exposure. Protecting credentials should be a top priority during development, but legacy security testing takes time and resources: after testing, a certified expert reviews the findings and decides which application vulnerabilities need patching. Every step along the way is subject to error and, under grueling deployment demands, often gets pushed aside.
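Because protecting credentials starts with never storing them in plain text, here is a minimal sketch, using only the standard javax.crypto API and illustrative (not prescriptive) parameters, of deriving a salted PBKDF2 hash for storage:

```java
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordStorage {

    private static final int ITERATIONS = 310_000; // illustrative work factor
    private static final int KEY_LENGTH_BITS = 256;

    public static String hashForStorage(char[] password) throws Exception {
        // A random per-user salt prevents precomputed (rainbow table) attacks.
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        KeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_LENGTH_BITS);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec)
                .getEncoded();

        // Store salt and hash together; never store the plaintext password.
        return Base64.getEncoder().encodeToString(salt) + ":"
                + Base64.getEncoder().encodeToString(hash);
    }
}
```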
Assessment/POC of sensitive data exposure
One of the first steps in avoiding sensitive data exposure is an in-depth assessment of application security. Using a proof-of-concept (POC) exploit, cybersecurity teams stage an attack to prove it can be done; the theory is that if they can breach the system, so can bad actors. When security teams find weaknesses, they must also weigh each vulnerability against the likelihood of exploitation to decide whether it is worth the time, cost, and energy to mitigate. The high cost of a potential breach makes security important, but with rapid release dates pushed onto developers, it often falls down the list of priorities.
Penetration testing
Penetration testing gives security teams a better idea of the types of vulnerabilities to which an application is exposed. A penetration test is a simulated attack; because it is launched manually, it requires planning before execution. While a penetration test adds insight about an application's defenses, it takes a highly skilled expert to run, and the results lack risk prioritization and context. Further, it consumes valuable application security expertise to crawl through the findings. On top of these issues, penetration testing occurs far into the development process, which drives up cost significantly, and because the tools rely on signature-based engines, they are bound to miss true vulnerabilities (false negatives). All of this elongates development cycles and leaves development and security teams frustrated.
Legacy application security scanning
Often as a supplement to penetration testing, security teams use legacy application security scanning tools to identify potential vulnerabilities. These consist of legacy static application security testing (SAST) and dynamic application security testing (DAST) tools. Both take an outside-in security approach.
Legacy SAST tools are problematic because they capture only a point-in-time view and have a high degree of inaccuracy. The result is piles of security alerts delivered in PDF reports that require manual triage and diagnosis by specialized application security experts. Because many of these are false positives, the amount of wasted time can quickly spiral. Legacy SAST tools also rely on signature-based engines that often miss true vulnerabilities, and the sheer volume of alerts they generate can overwhelm security and development teams, leading to alert fatigue and missed vulnerabilities.
Legacy DAST tools give security teams an idea of how their applications will perform if attacked. Before planning and staging, a DAST tool scans the application from the outside, probing for potential weaknesses in perimeter security. Using methods like SQL injection, XSS, and even brute-force attacks, cybersecurity teams then stage attacks against the flagged weaknesses to test for exploitability. While legacy vulnerability scanning with DAST tools is a useful step in understanding application vulnerabilities, it leaves room for error, and the hours of planning and staging, combined with in-depth analysis of the findings, push back deployment schedules and add unnecessary stress to security teams.
Legacy scanning, like penetration testing, produces huge volumes of security alerts that must be triaged and diagnosed by specialized application security experts. Legacy application security scanning itself also requires specialized expertise. With cybersecurity staff, including application security professionals, hard to find and retain, this can be problematic. In addition to the above, both legacy SAST and DAST tools struggle to analyze application programming interfaces (APIs) connected to applications. Thus, while the application code may be deemed safe, the API connections may not be.
Web application firewalls (WAFs)
Perimeter defense web application firewalls (WAFs) also take an outside-in security approach. WAFs have been in use for two decades. They are time-consuming to configure and implement and present even greater challenges for ongoing tuning and management. All of this time requires significant resources. In addition, because WAFs rely on signature-based engines and detect all attack probes rather than those that can exploit a true vulnerability, WAFs produce a huge number of security alerts. The resulting PDF reports require significant time from security operations (SecOps) teams to triage and diagnose. Often, there are so many alerts that SecOps teams develop alert fatigue and miss true vulnerabilities.
Security instrumentation is continuous and accurate
A better means of securing and protecting applications is inside-out application security. Embedding security within the software replaces the point-in-time views of penetration testing and legacy application security scanning with continuous analysis from development through production. During development, interactive application security testing (IAST) lets developers integrate security into their integrated development environment (IDE) and continuous integration/continuous deployment (CI/CD) pipeline, giving them real-time vulnerability alerts based on application runtime behavior. This eliminates both the false positives and the missed vulnerabilities that plague legacy penetration testing and application security scanning, and it removes development roadblocks and delays.
For applications in production, security instrumentation enables runtime application self-protection (RASP). RASP security continuously monitors applications at runtime, identifies attacks that can exploit application vulnerabilities, and blocks them before exploitation can take place. In contrast to WAFs, RASP solutions are fast to deploy and easy to manage.
Advanced solutions to secure sensitive data
Sensitive data exposure is a serious issue, one that can destroy an organization. Data breaches will continue to be a serious problem, and application vulnerabilities will remain one of the biggest, if not the biggest, causes of them. Organizations seeking to tap the opportunities of digital transformation must not only accelerate development cycles and code releases but also transform their legacy application security approaches to meet the demands of modern software.
Contrast is the clear customers’ choice
Contrast is named a Customers’ Choice in the 2021 Gartner Peer Insights “Voice of the Customer”: Application Security Testing report. With the highest percentage of 5-star ratings, this is the third consecutive year Contrast has received this powerful endorsement from customers.
Built for Developers. Trusted by Security.
Learn Secure Code
CROSS SITE SCRIPTING (XSS)
Learn about Cross site scripting (XSS) and how it affects your Java source code
SQL INJECTION
Learn about SQL injection and how it affects your Java source code
CLIENT SIDE INJECTION
Learn about client-side injection and how it can affect your source code