When breach information sharing declines, so does cybersecurity, authors say

Taylor Armerding
7 min read · Jan 30, 2023

People respond to incentives, hence the political cliché: “If you want more of something, subsidize it. If you want less of something, tax it.”

That principle applies to more than politics. Welcome to the world of information sharing — or the lack of it — about cybersecurity breaches. There is general agreement that such sharing would be, at least in theory, a very helpful thing: if everybody knows how and why something bad happened to one organization, they have a better chance of learning how to prevent it from happening to themselves.

But if you want that, you need to “subsidize” it by offering some liability protection to companies that are willing to share how and why they got breached, especially if some of that sharing is about how they screwed up.

And at present that rarely happens in any meaningful way because existing incentives discourage it. Companies that disclose what led to a breach — including things they failed to do, did wrong, or could have done better — are much more likely to get “taxed” for doing so, through everything from regulatory sanctions to class-action lawsuits brought by customers who were damaged by the breach.

In fact, according to three university professors, perverse incentives mean that information sharing on cyberattacks is going from bad to worse. Daniel Schwarcz (University of Minnesota Law School), Josephine Wolff (Fletcher School of Law and Diplomacy, Tufts University), and Daniel Woods (University of Edinburgh), in a paper titled “How Privilege Undermines Cybersecurity,” note that lawyers are increasingly in charge of the entire incident response process after a breach.

Nothing in writing

This means they are directing the activities and communications of not just their client company, but also any cyber insurer and/or forensic cybersecurity firm that may be involved. And in their quest to maintain the confidentiality of information related to the breach that could lead to legal or regulatory damages, they’re increasingly making sure that information doesn’t exist, at least in written form.

One could easily argue that’s a logical response to an incentive created by the courts, which have increasingly denied attempts to keep reports or other written communication about a breach confidential through attorney-client privilege.

A famous and apparently somewhat precedent-setting case the authors cite is the 2019 breach of financial giant Capital One, which compromised the personal data of 100 million of its customers, including credit card applications, Social Security numbers, and bank account numbers.

Capital One hired cybersecurity firm Mandiant to help it respond and recover, and Mandiant eventually generated a report that included analysis of the extent of the compromise, where the company’s security controls failed, and steps the company should take to improve its cybersecurity. Capital One’s attorneys tried to keep that report private, citing attorney-client privilege, but the court denied it.

And other lawyers quickly took note, concluding that written information about a breach, if it has to be disclosed, would give a “roadmap” to potential plaintiffs looking to sue for damages from a breach. So to eliminate that roadmap, they have increasingly adopted an iconic piece of advice from an old Boston political boss: “Never write if you can speak; never speak if you can nod; never nod if you can wink,” a saying updated years later by former New York governor Eliot Spitzer, who was brought down by a prostitution scandal: “Never put anything in an email.”

In a recent “Skating on Stilts” podcast, Wolff told host Stewart Baker, an attorney at Steptoe & Johnson, that lawyers she interviewed said they “don’t have the forensics people write reports anymore. One lawyer said it used to be 75% of the time he’d ask for a report, and now it’s maybe 20% to 25%. And you’re only going to do it in cases where you did everything right, and you just had a really sophisticated attacker.”

Beyond that, she and her coauthors also found that lawyers “frequently direct forensic providers to refrain from making recommendations to clients about how to enhance their cyber defenses,” because that would obviously imply that the client company could and should have done better. “If there’s anything that could be used against you, there is this fear that you’re not going to be able to protect it, so you’re going to try to keep that from being written down,” Wolff said.


But this, while it may bring a measure of protection from regulatory or legal damages to the client that got breached, is not good for cybersecurity overall. “Collectively, these lawyer-driven strategies substantially impair impacted firms’ ability to learn from cybersecurity incidents and implement long-term remediation efforts,” the authors declared.

And Wolff told Baker, “From the perspective of somebody like me who wants to study cyberattacks, who wants to learn from them and collect more data, it’s really heartbreaking.”

Indeed, according to the paper (confirmed by almost daily headlines), there’s not nearly enough learning going on. Cyberattacks “have not only cost victims countless billions of dollars, but have undermined consumer privacy, distorted world geopolitics, and even resulted in death and bodily harm.”

This doesn’t surprise Christine Marciano, president and CEO of Cyber Data Risk Managers. She said the short-term benefits from eliminating written reports and recommendations may hurt breached companies in the long term.

“It may expose attorneys and their clients to potential risk and liability over the long term due to the lack of forensics documentation to support their incident response and recommendations that can help their clients learn from and understand how to prevent a future security incident,” she said.

Collective effort needed

But changing the current incentive structure will take a collective effort. And so far, it isn’t coming from government, even though there have been multiple initiatives focused on the issue.

  • The Cybersecurity Information Sharing Act of 2015 grants certain protections to organizations that share “cyber threat indicators” and “defensive measures” for a “cybersecurity purpose,” including protection from liability and waiver of any privileges for sharing such information. But according to the paper’s authors, “these protections are subject to a host of limitations and caveats.”
  • The Strengthening American Cybersecurity Act, which became law in March 2022, mandates increasingly stringent federal requirements covering the reporting of cyber incidents, but it applies only to critical infrastructure sectors. As proposed, it would require a full description of the incident, including the estimated date range and impact on the operations of the impacted entity; a description of the vulnerability exploited and the defenses in place at the time; and the categories of information that may have been compromised. But the deadline for the so-called “rulemaking” process to be completed is still more than two-and-a-half years away — plenty of time for lobbying to water it down.
  • The Federal Communications Commission has proposed new breach notification rules for telecommunications companies. But the focus of the proposal is faster reporting of what happened and what information was compromised, not necessarily how or why it happened.

Overall, the authors call existing government reporting requirements “fairly minimal,” since they don’t require “many details about how those incidents were perpetrated or what steps were taken to remediate them.”

They propose a solution they say would involve both carrot and stick. It would allow for some privilege — as in confidentiality — protections, but that would be coupled with “new requirements that breached firms disclose specific forensic evidence and analysis.”

Schwarcz, who also participated in the podcast, said he and his coauthors aren’t trying to eliminate the risk of litigation. “We don’t inherently think litigation is unhelpful,” he said via email. “It can provide compensation to individuals who are harmed, and it can deter excessively lax security measures. Our concern is that firms seeking to avoid litigation risk will end up employing strategies that ultimately undermine their cybersecurity.”

The authors suggest that reporting requirements could be modeled after those of the Payment Card Industry Data Security Standard (PCI DSS), which requires that a certified investigator establish the facts of a breach.

“A second, and more ambitious, model might build on the PCI DSS to establish a mandatory forensic evidence collection pipeline that was entirely distinct from incident response,” they wrote. “Private firms could coordinate this process, or the obligation could be placed on independent technology providers … to preserve server logs, disk images, files, and other forensic evidence, which would be turned over to plaintiffs’ attorneys as part of the discovery process.”

Political reality

In the podcast, Baker said while he didn’t have any material objection to their proposal, he thought it was politically unrealistic. “I don’t think there’s a way you can get that through any legislature I’m familiar with,” he said, “in part because it’s not clear who wins, so nobody’s lobbying for it, and everybody assumes they lose, so they’ll all lobby against it.”

Any long-term solution is likely to require some kind of government involvement, however. And Emile Monette, director of government contracts and value chain security with Synopsys, said any workable solution is going to need buy-in from all the players. “It’s all about incentives,” he said. “It needs to be clear what a reporting company will get for sharing. I think it would have to include some WIFM [what’s in it for me?]. Ideally, government would offer some liability protection or other safe harbor, or treat participation in such a program as a mitigating factor in determining whether the company might be subject to some penalty.”

Marciano said she thinks both sides are engaging in an “arms race” — attorneys for breached companies are eliminating written information, while the attorneys for potential plaintiffs are seeking “too much information.”

She recommended requiring a summary forensics report that wouldn’t have to include “sensitive information on a defendant’s cyber security vulnerabilities and measures taken or not taken,” because making that information public “further increases their security risk.”



Taylor Armerding

I’m a security advocate at the Synopsys Software Integrity Group. I write mainly about software security, data security and privacy.