When it comes to online collaboration, don’t trust until you verify
Nothing good goes uncorrupted. Or unexploited.
That's a depressing but important reality in just about any area of life, but especially in cybersecurity. Unfortunately it applies even to collaboration, which is in general a very good thing. People who collaborate are usually able to create or do something better together than any one of them could have done alone.
In cybersecurity, collaboration means software vulnerabilities frequently get discovered more quickly and therefore get patched before hackers can exploit them. A two-decades-old slogan declares that “given enough eyeballs, all bugs are shallow.” It’s a bit like crowd-sourcing security.
But that assumes everybody is honest, transparent, and trying to find and fix bugs, not exploit them. And assuming the best of people you don’t know and will never meet in person can get you in trouble — one of the core lessons in any class on how to spot “social engineering” attacks.
Now those classes have a fresh new example courtesy of “Zhang Guo.” Who isn’t really Zhang Guo at all. He (or she) is, or was, a persona — one of a group of social media personas — deployed by the North Korean government to pose as cybersecurity researchers looking to “collaborate” with legitimate security researchers.
According to Google’s Threat Analysis Group (TAG), which last month flagged what it called an “ongoing campaign” that had been functioning for at least the previous several months, the personas did a pretty good job of looking legit. They were all over social media — 10 Twitter accounts with hundreds of followers and also multiple identities on Telegram, Keybase, LinkedIn, and Discord. They even used plain old email.
They created a research blog with “write-ups and analysis of vulnerabilities that have been publicly disclosed, including ‘guest’ posts from unwitting legitimate security researchers, likely in an attempt to build additional credibility with other security researchers,” TAG’s Adam Weidemann wrote in a blog post.
But the goal was to get their targets to download malware that would enable the theft of their research.
The method was a targeted, sophisticated social engineering attack. After making connections with specific researchers, the posers would ask if they would be willing to “collaborate” on a vulnerability research project.
They would then “provide the researcher with a Visual Studio project,” according to Weidemann’s post.
“Within the Visual Studio project would be source code for exploiting the vulnerability, as well as an additional DLL (dynamic link library) that would be executed through Visual Studio Build Events. The DLL is custom malware that would immediately begin communicating with actor-controlled C2 (command and control) domains.”
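That infection vector works because Visual Studio Build Events run arbitrary shell commands whenever a project is built, so simply compiling a shared project can execute a planted payload. As a rough sketch of the defensive check this suggests, the snippet below scans a project file for build-event commands before anything is built. The tag names reflect common MSBuild conventions, and the sample project fragment is hypothetical, not taken from the actual campaign.

```python
import xml.etree.ElementTree as ET

# MSBuild elements that can run arbitrary commands at build time.
SUSPECT_TAGS = {"PreBuildEvent", "PostBuildEvent", "PreLinkEvent", "Command"}

def find_build_events(project_xml: str):
    """Return (tag, command) pairs for any build-event commands found
    in a Visual Studio project file, ignoring XML namespaces."""
    root = ET.fromstring(project_xml)
    hits = []
    for elem in root.iter():
        tag = elem.tag.split("}")[-1]  # strip the namespace prefix, if any
        if tag in SUSPECT_TAGS and elem.text and elem.text.strip():
            hits.append((tag, elem.text.strip()))
    return hits

# Hypothetical project fragment illustrating the pattern described above:
sample = """<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <PreBuildEvent>rundll32.exe payload.dll,StartW</PreBuildEvent>
  </PropertyGroup>
</Project>"""

for tag, cmd in find_build_events(sample):
    print(f"{tag}: {cmd}")
```

Reviewing any such commands before the first build — ideally inside a disposable VM — is a cheap habit that would have surfaced this particular trap.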
Some researchers were compromised simply by visiting the attackers’ blog. “Shortly thereafter, a malicious service was installed on the researcher’s system and an in-memory backdoor would begin beaconing to an actor-owned command and control server,” Weidemann wrote, which indicated that the North Koreans were exploiting a “zero-day” (unknown) vulnerability in Windows 10 or the Google Chrome browser.
“At the time of these visits, the victim systems were running fully patched and up-to-date Windows 10 and Chrome browser versions,” Weidemann wrote. “At this time we’re unable to confirm the mechanism of compromise, but we welcome any information others might have.”
Nobody is immune
All of which is proof that even savvy cybersecurity practitioners can fall for a well-executed scam. Security researcher Zuk Avraham told Wired magazine that after Zhang Guo contacted him, he might have agreed to collaborate if he’d had the time. “I guess being busy saved me here,” he said.
Lance Spitzner, director of research and community at the SANS Institute and a long-time security awareness trainer, said “absolutely anyone, including me, can fall victim to social engineering attacks, especially customized ones that have been highly researched. The more of a target you are, the more time and effort bad guys will put into these attacks.”
Thomas Richards, principal security consultant with the Synopsys Software Integrity Group (SIG), agreed. “All humans are vulnerable to some degree based on assumptions, trust relationships, and experience,” he said.
And Michael Borohovski, director of software engineering with the Synopsys Software Integrity Group, said “regardless of attribution or anything else, even the most seasoned experts could fall prey to this sort of attack.”
“This was an innovative effort on two fronts: actual, real exploits, coupled with social engineering security researchers to run the exploits by, with the attacker masquerading as someone asking for help and generating trust through social proof,” he said. “This is literally the perfect storm — pretending to be a legitimate security researcher asking nicely for help, leading to profit.”
Spitzner agrees that the North Korean attack was impressive but said it isn’t unprecedented. “In many ways the internet has become the Wild, Wild West — a lawless landscape where anything can and does go. I’m quite sure there are other, just as sophisticated, social engineering campaigns going on right now — we just don’t know about them,” he said.
Richards said the social media personas, the blog, and the credible backstory of being legitimate security researchers gave the campaign a high level of sophistication. But even against a sophisticated attack, he said, the lessons that have been taught for decades about how to spot social engineering still apply.
“It still boils down to the old saying, ‘don’t open attachments from people you do not know.’ Treat every attachment as malicious and have proper sandboxes set up to run code that is untrusted until you can verify it,” he said.
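One concrete form of that verification habit is to fingerprint a received file and check the hash against threat-intelligence sources before ever opening it. The sketch below, with a hypothetical attachment name, shows the hashing step; the lookup and the sandbox are up to the reader.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in chunks, so the file is
    fingerprinted without being opened by any application."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: hash the attachment, look the digest up in a
# threat-intelligence service, and only then examine the file inside
# an isolated sandbox VM.
attachment = Path("untrusted_project.zip")  # hypothetical filename
if attachment.exists():
    print(sha256_of(attachment))
```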
Spitzner said the North Korean hack won’t really change the message in the classes he conducts. “I would focus on the common indicators — sense of urgency, too good to be true, why are they doing this, etc.,” he said. “The social engineering modalities are constantly changing, but the common indicators rarely change.”
Collaborate with caution
There is no major concern that this campaign will have any permanent chilling effect on collaboration among ethical researchers. It’s more likely to reinforce the mantra to treat any interaction with an unknown person with “polite paranoia.”
“I think it will make security researchers more wary of newcomers to the scene,” Richards said. “Folks who are established will probably continue to work together but I expect there to be a period of paranoia.”
And Spitzner said most people in the cybersecurity industry “have a very healthy and suspicious approach to interacting with strangers. This is why trust is so paramount. This will probably be more of a good example why we must all be careful about who we connect and share with, but I don’t think it will put an end to that sharing.”
Meanwhile, Google’s TAG offers this advice: “If you are concerned that you are being targeted, we recommend that you compartmentalize your research activities using separate physical or virtual machines for general web browsing, interacting with others in the research community, [and] accepting files from third parties and your own security research.”