ChatGPT embraced by hackers, but some AI experts say it’s not botmageddon—yet

Taylor Armerding
7 min read · Jan 23, 2023

There is no technology that can’t, and won’t, be used for both good and evil. So, no surprise, that’s what’s happening with ChatGPT, the latest, hottest tech toy.

The chatbot was created by OpenAI, a research firm that declares its mission is to “ensure that artificial general intelligence benefits all humanity.” Which sounds good. But the company might want to tweak that language a bit, since “all humanity” includes bad humanity — cybercriminals and cheaters. And it is apparently starting to benefit them in multiple ways — probably not what OpenAI has in mind.

For the first few weeks after its beta release at the end of November, most of the headlines were about the entertaining and relatively benign uses of ChatGPT: having it compose poetry, a newspaper story or opinion column, an essay, instructions, even short stories and novels, all in whatever style the user specifies.

But that couldn’t last. Before the Happy New Year greetings had faded, the happy headlines did too, pivoting to highlight the ways ChatGPT was being, or could be, misused and abused.

  • The New York Times reported that university professors are overhauling the way they allow students to write essays, since an increasing number have used ChatGPT to do it. One Northern Michigan University professor who caught a student using it plans to “require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity. In later drafts, students [will] have to explain each revision.”
  • Bruce Schneier, author, blogger, and chief of security architecture at Inrupt, coauthored an op-ed in the New York Times (reposted on his blog) warning that “for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic processes — not through voting, but through lobbying.” That lobbying could include using a bot like ChatGPT to generate comments submitted in regulatory processes, letters to the editor in local newspapers, and millions of comments per day on news articles, blog entries, and social media posts.
  • Check Point Research reported earlier this month that ChatGPT is becoming increasingly adept not only at generating phishing emails but also at writing malicious code. According to the report, that coding “skill” could be used for good, but participants in cybercrime forums — some with little or no coding experience — were already using it for malicious tasks like crafting phishing emails and writing software that could enable espionage, ransomware, and spam. “This script can easily be modified to encrypt someone’s machine completely without any user interaction. For example, it can potentially turn the code into ransomware if the script and syntax problems are fixed,” the researchers wrote.

Primitive, for now

This is not a catastrophe — yet. The current version of ChatGPT is clearly primitive. Even those who are excited or worried about it acknowledge that it needs human supervision.

A Wired magazine post noted that “because of how ChatGPT works — by finding statistical patterns in text rather than connecting words to meaning — it will also often fabricate facts and figures, misunderstand questions, and exhibit biases found in its training data. This is likely to complicate efforts to use the technology widely, by, for example, mixing misleading or biased information into search results.”

And some experts dismiss the media hype as overblown. Stephane Bortzmeyer, a computer networks engineer, in a comment on a different blog post by Schneier, acknowledged that the chatbot “automates some boring tasks such as reading the documentation,” but added that “it also makes big mistakes and, if you’re not an expert, you cannot spot them.” He contended that the Check Point report “seems more a PR attempt to benefit from the current interest in ChatGPT.”

Gary McGraw, cofounder of the Berryville Institute of Machine Learning and author of several books on software security, is equally unimpressed.

ChatGPT, he said, “is constrained by the dataset it learned on.” And while that dataset may be prodigious, “it learned its autoassociative predictive behavior based on that. It doesn’t continue to learn — it stopped learning. We just query it and it uses the statistics it’s built up to answer.”

“So it doesn’t have any understanding of what it’s squawking about. It’s just predicting the next word or token in the sequence,” he said. “It’s just making noise — very impressive noise in some cases. But this is not general AI [artificial intelligence] at all — it’s not even approaching that.”
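
McGraw’s description can be illustrated with a toy example. The sketch below is a deliberately crude “language model” in Python: it counts which word most often follows which in a scrap of training text, then generates output by always emitting the statistically likeliest next word. ChatGPT uses a large neural network trained on vastly more data, not raw word counts, but the sketch shows what “predicting the next token from learned statistics” means in practice.

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: learn which word follows which,
# then generate by always picking the statistically likeliest continuation.
# This is a deliberate oversimplification; ChatGPT uses a large neural
# network, not raw bigram counts.
training_text = "the cat sat on the mat and the cat slept on the mat"
words = training_text.split()

# Count, for each word, the words observed immediately after it
next_counts = defaultdict(Counter)
for current_word, following_word in zip(words, words[1:]):
    next_counts[current_word][following_word] += 1

def generate(start: str, length: int = 6) -> str:
    """Emit one word at a time using only the learned statistics."""
    output = [start]
    for _ in range(length):
        candidates = next_counts.get(output[-1])
        if not candidates:
            break  # nothing was ever seen after this word; the "model" is stuck
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

# Fluent-sounding but meaningless: the model has no idea what a cat or a mat is
print(generate("the"))  # "the cat sat on the cat sat"
```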

Doing the grunt work

Jamie Boote, senior consultant with the Synopsys Software Integrity Group, said the current value of a bot like ChatGPT isn’t that it breaks much new ground, but that it can draw on the massive amount of data it was trained on to do what has been done before, much more quickly.

“You used to need a human brain to do a lot of programming grunt work,” Boote said, “but because ChatGPT has been trained — probably months and months or years and years of training of this model — the result is that all that upfront uploaded work means ChatGPT can respond in seconds.”

That would mean, he said, that instead of a manager overseeing “junior developers who put out imperfect code, he becomes a manager of essentially an infinitely large team of developers who all interpret his English-language spoken requests and turn them into code.”
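
In practice, that manager-of-machines workflow amounts to little more than a prompt and an API call. Below is a minimal sketch, assuming the openai Python package (its pre-1.0 interface, current when this was written) and an API key; the model name and the request itself are purely illustrative.

```python
import os

import openai  # pre-1.0 interface of the openai package

openai.api_key = os.environ["OPENAI_API_KEY"]

# A plain-English request of the kind Boote describes, handed to the model
# and turned into code in seconds. The request is an illustrative example.
request = (
    "Write a Python function that reads a CSV file and returns the rows "
    "where the 'status' column equals 'failed'."
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=request,
    max_tokens=300,
    temperature=0,
)

# The output is the "imperfect junior developer" code Boote mentions;
# it still needs human review before it goes anywhere near production.
print(response.choices[0].text)
```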

Still, Schneier contends on his blog that “the technology will only get better. Where it matters here is that it gives less skilled hackers — script kiddies — new capabilities.”

Check Point wrote much the same in its report. “Although the tools we present in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.”

And some big players are betting on it getting much more sophisticated quickly. Microsoft has already invested $1 billion in OpenAI and is reportedly considering another $10 billion.

Some experts say this is part of a technological wave called generative AI, a term that covers algorithms that can generate text, images, or other data — things that, as demonstrated, include college essays, phishing, and malware.

As the Times story put it, “The chatbot generates eerily articulate and nuanced text in response to short prompts, with people using it to write love letters, poetry, fan fiction — and their schoolwork.”

White-collar disruption

Add phishing and malware to that list. And the concerns about it go well beyond enabling cybercrime. ChatGPT and other AI tools could also allow automation to extend far beyond where it is now. Instead of robots only working assembly lines, generative AI could disrupt the entire white-collar and artistic world by eliminating the need for humans to do work that takes more mind than muscle — compose music, create visual art, take photos, or write everything from ad copy to press releases, research papers, and news stories — yes, including posts just like this one. (Disclaimer: I swear that ChatGPT had nothing to do with writing this story.)

Indeed, a group of artists has filed a federal class-action lawsuit against AI-art companies Stability AI, Midjourney, and DeviantArt for alleged violations of the Digital Millennium Copyright Act, violations of the right of publicity, and unlawful competition for collecting millions of digital images and then creating “derivative” images from them.

According to the complaint, the artists “seek to end this blatant and enormous infringement of their rights before their professions are eliminated by a computer program powered entirely by their hard work.”

Far from perfection

Sorting that out will likely take years. But, given that there is no way to ban or eliminate a technology once the digital horse is out of the barn, so to speak, is there a way to mitigate, or at least manage, the rapidly growing malicious use of AI?

One possibility seems obvious. If AI can be used to attack, why can’t it be used to defend? Humans will never write perfect software code, but if a technology can be trained to create “eerily articulate and nuanced text,” why can’t it be trained to write perfect code by being fed data on all the known vulnerabilities that have had to be patched in software, and then avoiding all of them?

That may eventually happen, according to Boote, but it will likely take considerable time.

“AI is good but it’s not perfect,” he said, “which is why we’re not putting it in charge of important things like running our nuclear reactors, launching our missiles, or driving our cars. But how can we use it to help us secure code? Does AI help with security testing or can it enforce policies when it recognizes a policy violation — that someone is not doing what they should be doing the way they should be doing it?”
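
To make the “policy violation” part of that question concrete, the sketch below is a deliberately simple, rule-based check of the kind Boote describes, written in Python. The rules and names are hypothetical examples chosen for clarity; an AI-assisted tool would presumably learn or suggest such checks rather than rely on a fixed list, but the enforcement step would look much the same.

```python
import re

# Hypothetical policy rules a team might declare off-limits in its code.
POLICY_RULES = {
    "use of eval() on arbitrary input": re.compile(r"\beval\("),
    "shell=True in subprocess calls": re.compile(r"subprocess\.\w+\(.*shell\s*=\s*True"),
    "hard-coded credential": re.compile(r"(password|secret)\s*=\s*['\"]"),
}

def check_policy(source_code: str) -> list[str]:
    """Return human-readable descriptions of policy violations found in the code."""
    violations = []
    for line_number, line in enumerate(source_code.splitlines(), start=1):
        for description, pattern in POLICY_RULES.items():
            if pattern.search(line):
                violations.append(f"line {line_number}: {description}")
    return violations

sample = 'password = "hunter2"\nresult = eval(user_input)\n'
for violation in check_policy(sample):
    print(violation)  # flags the hard-coded credential and the eval() call
```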

McGraw also doubts that is imminent. “Where did ChatGPT’s coder version of this large language model get trained? On everything on GitHub. Is all that code bug free? No,” he said, adding that “ChatGPT isn’t learning to code by doing it — at all. Current iterations of this technology don’t work that way. You can’t really code if you don’t have a design understanding of what’s going on.”

“You can make stuff that compiles [code] and you can steal snippets and put them together. But I’m not worried that these autocoding systems are going to be replacing actual developers, especially senior design developers, anytime soon.”

McGraw said the biggest danger is that people who don’t understand AI will think it’s more powerful than it is, for both good and evil. He said there have been important advances in deep neural networks, “and some of this autoassociative predictive behavior is extremely cool.”

But that, he said, is only in constrained environments. “Human life and conversations are much more open-ended, so the idea that we’re approaching general AI very quickly is incorrect. Rumors of the death of humanity are greatly exaggerated,” he said.


Taylor Armerding

I’m a security advocate at the Synopsys Software Integrity Group. I write mainly about software security, data security and privacy.