Insider security threats: You need to take them seriously

Taylor Armerding
7 min read · Sep 8, 2020


Photo by NeONBRAND on Unsplash

Every organization knows it can be hurt by external attacks. That’s why organizations spend time and money on security measures, both physical and digital, to defend against malicious outsiders.

But they can also be hurt by insiders — malicious or not. Sometimes the damage can be worse than from an external attack, since insiders frequently don’t have to breach or break into anything; they have privileges that give them easy access to sensitive, confidential information.

That’s the point of declaring September Insider Threat Awareness Month 2020.

The designation comes from several federal agencies, led by the National Counterintelligence and Security Center (NCSC). And while the website dedicated to the initiative, maintained by the Center for Development of Security Excellence (CDSE), says it’s meant to increase awareness and promote reporting, it immediately adds that it is “not about curtailing protected free speech or suppressing legitimate whistleblowing.”

Instead, “it is about preventing the exploitation of authorized access to cause harm to an organization or its resources.”

Since this is coming from government agencies, most of the focus is understandably on threats from within government or its contractors. The website features brief case studies on insider attacks. One is Shamai “Samuel” Leibowitz, a linguist working as a contractor for the FBI, who pleaded guilty in 2009 to leaking classified documents to a blogger who made them public. Another is Christopher Paul Hasson, a U.S. Coast Guard lieutenant, who pleaded guilty in 2019 to three felony weapons charges and one felony drug charge in connection with studying violence, stockpiling assault weapons and expressing intent to attack journalists, Democratic politicians and minorities.

Both public and private sectors at risk

But insider threats are just as present in the private sector, and in many cases public and private overlap. Much of the nation’s critical infrastructure, including the electrical grid and water treatment and distribution facilities, is in private hands.

The potential damage from insiders is similar to that done by outside attackers: theft of confidential or classified information, services or production brought to a halt by ransomware or other malware, and even injury or loss of life caused by taking control of critical infrastructure or devices that are part of the Internet of Things (IoT).

In the private sector, data theft can lead to a now-familiar parade of horribles: identity theft, brand damage, loss of customers, compliance sanctions, legal liability, recovery costs and more. In the public sector it can undermine national security, especially when the attack comes from, or on behalf of, a hostile nation state.

The theme of this year’s Insider Threat Awareness Month is resilience, enabled by what the agencies call “proactive insider threat reporting” that can help organizations “deter, detect, and mitigate” the threat. One of the slogans is similar to those we have seen at airports since 9/11: If you see something, say something.

But it all begins with understanding the different forms the threats can take. As noted above, they can come both from malicious attackers and from loyal employees who are simply clueless or careless.

In fact, while the CDSE website focuses almost exclusively on malicious insiders, the reality is that most insider-enabled breaches come from those who aren’t malicious — people who are loyal but fall for social engineering or phishing attacks from the outside.

Larry Trowell, principal consultant at Synopsys, said that while it is always “more dramatic to imagine the world of corporate espionage, the more common story is that someone misconfigured a server or clicked a random rogue phishing email.”

But however a threat presents itself, there is plenty of material to help with understanding it. Because this is not a new problem. The Leibowitz case is from more than a decade ago.

Deter, detect, mitigate

Nor are the methods to address it new. “Deter, detect, and mitigate” echoes a 2015 paper by Ted Harrington, executive partner at Independent Security Evaluators, titled “The Enemy You Know: An Analysis of the Insider Threat,” except that he labeled the defenses “prevention, deterrence, or mitigation.”

Harrington, who writes more about the insider threat in the forthcoming book “Hackable: How to Do Application Security Right,” said the motivations of insiders aren’t necessarily all that different from those of external attackers. They can range from a simple desire for notoriety to political activism, resentment, money or geopolitical interests.

The difference, he wrote in his 2015 paper, is that inside attackers “have additional trust and access.” He divides them into four categories: accidental (unintentionally clueless or careless), opportunistic (giving in to a bribe or some other temptation), disgruntled (formerly loyal but now out for revenge because of a perceived slight or injustice), and determined (premeditated malice).

Each category requires a different type of defense. Employees who are loyal but careless can harm their organization through risky actions that are now depressingly familiar: using weak passwords and reusing the same ones across different sites and services, clicking on malicious links in emails, and falling victim to social engineering, where an attacker poses as a trusted member of an organization and tricks the person into giving up credentials or introducing malware.

Defenses against those risks are also well established: data encryption, requiring multifactor authentication, limiting employee access only to what they need to do their jobs, and training.
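
To make the first of those defenses concrete, here is a minimal sketch of encrypting a sensitive record at rest in Python. It is an illustration only, assuming the widely used third-party cryptography package; the data and names are invented, not drawn from any of the sources quoted here.

```python
# Minimal sketch: encrypting a sensitive record at rest with symmetric encryption,
# using the Fernet scheme from the third-party "cryptography" package.
from cryptography.fernet import Fernet

# In practice the key would live in a secrets manager or HSM, never in source code.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"customer: Jane Doe, account: 0000-0000"   # made-up example data
token = fernet.encrypt(record)          # ciphertext is safe to store on disk or in a database
print(fernet.decrypt(token).decode())   # only holders of the key can read it back
```

Encryption doesn’t stop a careless employee from clicking a phishing link, but it limits what a stolen laptop or an exfiltrated database is worth to whoever ends up with it.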

Helping employees develop what many security awareness trainers call “a healthy paranoia” can transform them from being your “weakest link” into a major asset.

Opportunistic insiders don’t start with an intent to harm their company, but their highest loyalty is to themselves. Given an unexpected opportunity to profit, or simply to gain notoriety from stealing and/or exposing an asset, they take it.

Negative incentives

The best defense against this kind of attacker is for an organization to make employees aware that they are likely to be caught if they try to abuse their privileges.

Harrington said technology, including employee monitoring, could help. “Logging, monitoring, and digital rights management are a few examples that are effective,” he wrote.
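
As a rough, hypothetical illustration of the kind of logging Harrington describes (a simplified sketch, not an example from his paper), the snippet below records every attempt to touch a sensitive resource so it can be reviewed, or alerted on, later.

```python
import logging

# Append-only audit trail of who touched what, and when.
logging.basicConfig(
    filename="audit.log",
    format="%(asctime)s %(levelname)s %(message)s",
    level=logging.INFO,
)

# Hypothetical set of assets considered sensitive in this example.
SENSITIVE = {"payroll_db", "source_code_repo", "customer_records"}

def read_resource(user: str, resource: str) -> None:
    if resource in SENSITIVE:
        # Every access to a sensitive asset is written to the audit log.
        logging.info("access user=%s resource=%s", user, resource)
    # ... the actual read would happen here ...

read_resource("jsmith", "payroll_db")
```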

Trowell is dubious about relying on monitoring, however. “The time and energy it would take to monitor all the actions and interactions of employees would be vast and could not realistically extend past the buildings and the devices owned by the employer,” he said.

“Even when advanced AI is used to try to predict malicious actions, if people are determined, they will find a way. It is better to train employees to be more careful and to watch for actions that seem careless or suspect.”

Disgruntled employees can be easy or difficult to spot, depending on how open they are about whatever happened to turn them against their employer. If there are external signs of anger, disinterest, or a vengeful attitude, William Evanina, director of the NCSC, advocates “proactive reporting” by other employees who observe them.

But the response doesn’t always have to be punitive, according to William Leitzau, director of the Defense Counterintelligence and Security Agency (DCSA). In a post on the CDSE website, he wrote that proactive reporting can enable “proactive intervention and assistance before those behaviors become a risk.”

Even without external signs, though, organizations can make it more difficult for potential attackers. There are numerous examples online about how disgruntled employees are likely to leave digital “breadcrumbs” at work.

Those can include attempting to access servers or data they shouldn’t, or that are unrelated to their duties. They may start moving data onto external USB thumb drives, or try to use apps that aren’t approved by IT, such as Dropbox.
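
To make that concrete, here is a hypothetical sketch of how such breadcrumbs might be spotted in an audit log like the one above: it flags entries where a user touches a resource outside what their role normally needs. The roles, users, and resources are invented for the example.

```python
from collections import defaultdict

# Hypothetical mapping of roles to the resources those roles normally need.
ROLE_RESOURCES = {
    "accounting": {"payroll_db", "expense_reports"},
    "engineering": {"source_code_repo", "build_server"},
}
USER_ROLE = {"jsmith": "accounting", "alee": "engineering"}

def flag_anomalies(entries):
    """entries: iterable of (user, resource) pairs parsed from the audit log."""
    flagged = defaultdict(list)
    for user, resource in entries:
        allowed = ROLE_RESOURCES.get(USER_ROLE.get(user, ""), set())
        if resource not in allowed:
            flagged[user].append(resource)  # access outside the user's normal duties
    return dict(flagged)

print(flag_anomalies([("jsmith", "payroll_db"), ("jsmith", "source_code_repo")]))
# {'jsmith': ['source_code_repo']} -- worth a closer look
```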

Malicious and dangerous

Finally, malicious insiders are the most dangerous for several reasons. They are likely to have the skills to be more effective at covering their tracks. And they know what they are looking for.

One of the most notorious examples this year involves Greg Priore, who had managed the rare book and archive collection in the Oliver Room at the Carnegie Library in Pittsburgh starting in 1992. He designed the room with a “defense in depth” strategy: a series of overlapping systems to thwart theft. But, almost 30 years later, he pleaded guilty to what was called “the most extensive theft from an American library in at least a century,” with the stolen treasures estimated to be worth more than $8 million.

While Priore didn’t have outside help, malicious insiders frequently have the support and resources of an external group.

An example of that is Dongfan “Greg” Chung, a Chinese national who became a U.S. citizen and worked for both Rockwell and Boeing for more than 20 years. During that time, he stole and sent to China more than 250,000 documents relating to the U.S. space shuttle program, B-1 bomber, C-17 military cargo plane, F-15 fighter jet, and Chinook 47 and 48 helicopters.

He was convicted in 2009 of economic espionage on behalf of China. “The damage to the U.S. is impossible to quantify,” Harrington said.

How to defend against such determined, skilled adversaries? It comes down to detecting “anomalous” behavior — the kinds of things an organization should be monitoring for to detect external attacks as well.

Potent privileges

Many of the same monitoring tools already mentioned can help. But one in particular can at least limit the damage even if an attacker isn’t caught immediately — the principle of least privilege.

In the middle of a widespread work-from-home paradigm due to an ongoing pandemic, that can be difficult, given that there are thousands more endpoints, mobile devices and cloud servers that may be outside a company’s networks and therefore harder to track and control.

But limiting access privileges means limiting the “places” an attacker can go. It also makes it easier to spot someone trying to elevate privileges on their own.
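
In code, least privilege often comes down to denying by default and granting only what a role explicitly needs, as in this illustrative sketch (roles and permissions invented for the example). A denied request is also exactly the kind of anomaly worth logging and reviewing.

```python
# Deny by default: a user gets only the permissions their role explicitly grants.
ROLE_PERMISSIONS = {
    "support": {"read:tickets"},
    "engineer": {"read:tickets", "read:source", "write:source"},
    "admin": {"read:tickets", "read:source", "write:source", "manage:users"},
}

def require(role: str, permission: str) -> None:
    if permission not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} may not {permission}")

require("engineer", "write:source")        # within the role's needs: allowed
try:
    require("support", "manage:users")     # outside the role's needs: denied
except PermissionError as exc:
    print(f"denied (and worth logging): {exc}")
```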

The reality, of course, is that insider threats will never disappear, just as crime in general will never disappear. But organizations, both public and private, don’t have to make it easy for attackers.

As is the case with any kind of security, if you make yourself a difficult target, criminals are likely to go looking for an easier one.


Taylor Armerding

I’m a security advocate at the Synopsys Software Integrity Group. I write mainly about software security, data security and privacy.