Data: The gold mine of cybercrime, say multiple reports
’Tis the season. No, not the one just past, filled with lights, gifts, vacations, and a measure of hope and good cheer. Now, in just about every industry, ’tis the season of reports on the previous year.
And cybersecurity, based on the findings of a couple of reports that dropped in the past few weeks, appears to have continued working its way down the ladder of success in 2021, illustrated by more software vulnerabilities and more data breaches or compromises.
The National Vulnerability Database (NVD), an arm of the National Institute of Standards and Technology within the U.S. Department of Commerce, issued a chart in December showing that the number of software vulnerabilities in 2021 set a record for the fifth year in a row, at 18,378.
And just last week, the nonprofit Identity Theft Resource Center (ITRC) released its 2021 Annual Data Breach Report, which says the 1,862 data compromises it tracked set a record, up 68% from 2020 and up 23% from the previous record high set in 2017.
Of course, anything involving statistics needs some context. The NVD found that while the total number of vulnerabilities set a record, those of high severity dropped 16%, from 4,381 in 2020 to 3,646 last year, while those of medium and low severity increased.
That would suggest a somewhat promising trend: high-risk vulnerabilities ought to be the priority, and reducing them could improve overall cybersecurity. But yet another report, this one from Bugcrowd, found a 186% increase over the past year in the critical vulnerabilities submitted to its platform, which it calls Priority One submissions. Those can allow exploits such as privilege escalation from an unprivileged user to administrator, remote code execution, and financial theft.
So, it apparently depends in part on where you’re getting your numbers.
Another element of context is the actual impact of vulnerabilities and breaches. The ITRC report found that while 2021 breaches were at a record level, the number of individuals affected dropped 5% from 2020. That looks like another very good trend, and the decline over the past several years has been far steeper than 5%: in 2018, about 2.28 billion people were affected worldwide; last year it was 294 million, a drop of 87% in four years.
More context
But that needs some context too, said ITRC President and CEO Eva Velasquez. “It should not be misinterpreted as a success story — 294 million in a single year is still unacceptably high,” she said, adding that “threat actors have simply shifted their tactics from accumulating massive amounts of data to being more precise in their efforts. We may look back at 2021 as the year when we moved from the era of identity theft to identity fraud.”
She said as organizations try to improve security measures like fraud detection and verification, criminals increase their efforts to defeat those improvements.
For example, when states moved to add verification and authentication requirements, such as driver’s license information, to apply for unemployment, “the threat actors concentrated their efforts on auto insurance entities. Rather than hunting for large amounts of data, they went right to a known source for that specific data, and were successful,” she said.
James E. Lee, chief operating officer at the ITRC, added that besides cybercriminals becoming more strategic, some types of attacks may be underreported. He said that is likely the case with supply chain attacks “since 607 cyberattack-related notices did not include information that would have revealed the root cause of an attack.”
Jamie Boote, senior consultant with the Synopsys Software Integrity Group, said yet another element of context is how the counting is done. “The NVD looks at vulnerabilities in software, not vulnerability classes,” he said. “A vulnerability class is surprisingly long-lived. The first buffer overflow exploit in the wild was found in a worm that spread in 1988. Nearly 16 years later, the buffer overflow was present in the 2004 OWASP [Open Web Application Security Project] Top 10 vulnerability list.”
“These vulnerability classes such as remote code execution, injection attacks, misconfigurations, and others are common pitfalls,” he said, “and as long as the languages in use allow these classes to exist, they will continue to spread.”
Known unknowns
Still other elements of context are what the late U.S. Defense Secretary Donald Rumsfeld famously called “known unknowns.”
Obviously, the more software that is written, the more vulnerabilities there are, just as the more cars that are on the roads and the more miles they drive, the more accidents there are. But if there are twice as many cars going twice as many miles while the number of accidents, injuries, and deaths increases by just 10%, that's a statistical improvement, even though more accidents are never a good thing.
Similarly, it would be useful to know how the average number of software vulnerabilities per 10,000 lines of code has changed. If that rate is declining, that's progress even if the raw number is increasing. But we know we don't know that.
Another known unknown, as Lee noted, is how many vulnerabilities or breaches aren’t getting reported — or, on the other hand, if a record number of vulnerabilities and breaches is due to reporting getting better thanks to pressure from government or private sector compliance requirements.
“There is more regulation that puts the onus on supply chain security, and as a result, scanning of third-party and open source components are on the rise,” Boote said, “so it’s fairly normal to see a rise in vulnerability counts as more software is being scanned. I expect the numbers to continue to increase as more industries find themselves under cybersecurity and supply chain regulations.”
But whether the trends are encouraging or ominous, there is general agreement on a couple of fundamentals. First, any number of breaches or vulnerabilities is too many. Second, the causes of increasing vulnerabilities and breaches, and what to do about them, are broadly understood.
Andrew Kilbourne, managing director with the Synopsys Software Integrity Group, said one reason for an increase in strategic cyberattacks could be that major breaches like those of credit reporting giant Equifax or health care giant Anthem are now more than four years in the rearview, and the attackers who stole the data of hundreds of millions of people may just now be starting to exploit it.
“Some of those attackers are nation states,” Kilbourne said, “and they can sit on the data and wait for things to get complacent. The credit agencies gave everybody free credit monitoring for a few years when it happened, but those years have ended. And maybe a lot of people don’t have files locked or their credit frozen.”
Another reason is the pandemic, which prodded organizations to use more open source, third-party, and legacy code components and to rush applications into development and production without rigorous software testing.
Deeper in debt
Kilbourne said beyond that, too many organizations don't maintain the security of their software by keeping it up-to-date and installing available patches, a problem commonly called "security debt."
“It’s like trying to catch up to a running horse,” Kilbourne said, adding that the problem many times is that organizations know about it but simply don’t address it.
“Large corporations will hire us to test their software but then claim the severity of the defects we find are less than they really are so they won’t have to spend the money to fix them.”
As has been shown for decades now, that kind of short-term gain can lead to long-term pain. Given that software is essentially running just about every business, if your software is vulnerable, your business is vulnerable.
The good news is that there are multiple ways available to make software more trustworthy by building security into it. The ITRC’s Velasquez said for organizations to protect and defend the data they hold, “it’s essential that everyone practice good cyber hygiene.”
That includes a list of measures that should by now be familiar, including:
- Unique passwords with 12 characters or more across all accounts.
- Multifactor authentication.
- Protecting devices with a PIN or biometric lock.
- Ensuring that employees don't reply to, or click on links in, emails when they didn't initiate the communication. This helps them avoid phishing and other scams.
For creators of software, it means using the multiple testing tools available to scan code for defects while it is being written, while it's running, and while it's interacting with users.
And Boote said security now has to go beyond cyber hygiene. “Mobile and online applications have found that the data they collect from their users are far more profitable than other monetization options available, and as such, more data is harvested and stored in more places,” he said. “An identity only has to be stolen once for use in fraud, and increasing the amount of this data that can be stolen will only increase exposure.”
Lee agrees. "Gone are the days when a network firewall and good end-point protection was all you needed," he said. "Now, effective cybersecurity requires specific protections for web applications, cloud apps, containers with native security, runtime solutions for enterprise apps already in production, robust pen testing, and a patching regimen that measures time to patch in minutes, not months or years."
The bottom line, Kilbourne said, is that awareness has to lead to action. “We found some severe vulnerabilities for a major, global bank,” he said, “but they insisted on calling them ‘medium’ because they said they didn’t have the budget to fix them.”
“That won’t work. Organizations can’t sweep defects under the carpet or negotiate their way out of security debt. The focus needs to be on fixing, not just finding.”