Cybersecurity experts join the 2025 guessing game

Taylor Armerding
7 min read · Jan 7, 2025


Yogi Berra was right: Predictions are hard, especially about the future. But predictions are also useful, even if they’re only guesses and not guarantees. Especially if they are informed guesses. And fortunately, in the world of cybersecurity, we have a robust cadre of very informed speculators willing to put their forecasting reputations on the line.

So as we all put 2024 into what will relatively quickly become the proverbial “dustbin of history,” here are some informed opinions about what are likely to be challenging, troubling, disruptive, and even thrilling trends and events during 2025 in the world of cybersecurity, brought to you by a range of experts who know how to guess much more accurately than most of us.

A to Z on A and I

Boris Cipot, senior security engineer, Black Duck

AI and machine learning will continue to drive automated threat detection. Real-time identification and response to emerging threats will be the focus of this technology. In AppSec, we can expect that AI may also enable more adaptive testing and security automation.

Tanya Janca, head of education and community at Semgrep, secure coding trainer at She Hacks Purple

A very prominent trend I see is every single startup making huge AI promises, with results ranging from terrible to amazing, leaving the public generally dismissive of new claims.

I’ve heard many people say they are “over it,” but I also see a lot of exciting research in this area that I think is very promising. I’m also seeing quite a few mistakes in AI development — the same security mistakes we’ve made with every new technology (APIs, serverless, IoT, etc.). I’m hoping we can learn the AI security lesson a little faster than we have in the past, and apply all the AppSec and secure coding best practices that we know and love.

Rebecca Herold, CEO and founder, The Privacy Professor

The use of AI, for bad and for good, is going to blast off and be adopted far more widely than it already has been, even though most AI tools haven’t been thoroughly vetted. By the end of the year, that will result in rampant intellectual property (IP), privacy, and discrimination violations, along with growing numbers of associated private lawsuits and class actions.

Cybercriminals and cyberbullies will also dramatically increase their use of AI to commit crimes (network breaches, successful impersonation phishing, and other tactics and frauds) and to wreak havoc, extort, harass, and embarrass others through AI-generated porn images of children, teens, and adults.

Companies and nation-states will also use AI to spy or gather intel on their competitors or enemies.

On the healthcare front, AI will be used to make beneficial medical breakthroughs, but also to cause patient harm, thanks to a lack of associated cybersecurity and privacy controls.

Sammy Migues, principal at Imbricate Security

AI will continue to get used inappropriately to save a buck today without regard for the downstream costs. To put it somewhat facetiously, with AI, organizations will be able to hit all their dubious metrics such as response time for security-related service tickets (“Have you tried turning the SQL injection off and back on again? HMU later and LMK!”), create reams of written policy and process overnight (“Encrypt every message, hash passwords with care, encode all outputs, and of inputs beware!”), provide 24x7 support (“Hi, I’m ELIZA. What’s your name?”), and deliver security sign-off in CI/CD (“Your code has 0 security issues. You cannot continue until your code has 0.00 security issues. Please try again. Have you tried asking ELIZA to open a ticket to explain the policies to you?”).

Kelvin Lim, director, security engineering, Black Duck

Attackers will leverage AI for highly sophisticated and automated attacks, such as AI-generated phishing emails and deepfakes. Cybersecurity vendors will continue to counter these threats with AI-powered security solutions that can detect and respond to attacks in real time. One challenge organizations may face will be streamlining testing tools. According to the 2024 Global State of DevSecOps report, 82% of organizations have between 6 and 20 security testing tools in use.

Critical infrastructure in the bull’s eye

Craig Spiezle, managing director at Agelight Advisory Group and chairman emeritus of the Online Trust Association

I think we will see a shift from financially motivated exploits (i.e., ransomware) to state-sponsored attacks that target specific segments of the U.S. economy and infrastructure. We are already seeing threats responding to tariffs, and I think supply chains specific to key industries will be disrupted.

Aligned with this, I expect more targeted attacks on ICS [industrial control systems] and the plethora of IoT devices, many of which are not hardened and are legacy devices with unknown and undocumented software libraries. This speaks to the importance of security teams conducting a bottom-up audit of all such systems, identifying risks, and taking steps to isolate or, ideally, retire them.

Kelvin Lim

There will be an increase in ransomware attacks on critical infrastructure in energy, healthcare, and transportation services. Several governments, such as Singapore’s, are taking steps to mitigate ransomware attacks on critical infrastructure. The increased use of ransomware-as-a-service will make it easier for less-skilled attackers to launch sophisticated ransomware attacks.

Software on wheels

Dennis Kengo Oka, senior principal automotive security strategist at Black Duck

We’ll likely see two main trends in 2025 in the automotive industry. First is the continued development of the software-defined vehicle, with increased focus on consolidated, in-house software development, managing the software supply chain, and increased use of open source software. This will also include adoption of a service-oriented architecture, with standardized APIs [application programming interfaces] running on top of a middleware/automotive OS [operating system]. To meet higher customer expectations and handle rapid requirement changes, this type of development demands a faster pace of software delivery with shorter release cycles. Thus, more organizations are adopting DevSecOps to ensure a tighter feedback loop between development teams, security teams, and operations teams.

The second trend will be the adoption of GenAI to help with activities throughout the development life cycle. Examples include using AI to manage requirements, generate code, generate test cases, perform AI-guided testing, analyze test results, perform auto-triage, and automatically fix issues found in software.

Privacy? What privacy?

Jules Polonetsky, executive director, Future of Privacy Forum

On our radar is preparing for a second wave of state-level legislation on privacy, security, kids’ wellbeing, and AI. Many laws are going into effect in 2025, and many new state laws are being proposed. On the LLM [large language model] front, the focus will move to agentic AI and issues around identifying that a user is human, as well as a range of AI personalization and ethics issues. Data flows from the U.S. to foreign adversaries will increasingly be a priority.

Rebecca Herold

2025 is going to be a very disruptive year for cybersecurity, privacy, and long-time, oft-forgotten IP topics. I think dormant-to-date Log4j vulnerabilities will be exploited, causing significant privacy breaches and cybersecurity events.

Also, legacy systems, applications, and other tech that still has legacy vulnerabilities (perpetual bad passwords, still-used long-unsupported tech, unaware public) will be exploited to cause some new and interesting, possibly financially business-ending, and/or human-deadly, security incidents and privacy breaches.

And IoT products will silently proliferate, creating unnoticed pathways into businesses, other types of organizations, and people’s homes, taking more data than ever and surveilling in more ways than ever before, while leaving no trace due to the lack of security capabilities within the IoT products themselves.

Only the shadow knows

Tim Mackey, head of software supply chain risk strategy, Black Duck

Beware the impact of shadow governance. With increasing regulatory interest in cybersecurity problem spaces and implementations, combined with interest within technical communities to meet governance targets, we are facing the prospect of “compliant noncompliance” in 2025. At its core, compliant noncompliance exists when one team responsible for implementing compliance with a standard, regulation, or law believes that compliance was attained without direct involvement from a legal or GRC [governance, risk, and compliance] team. Without an internal audit of the implementation, the technical team might report compliance, leading to the mistaken belief the business is compliant when it isn’t.

More weak links

Akhil Mittal, senior manager, professional services consulting, Black Duck

The software supply chain remains one of the weakest links in cybersecurity, as seen with SolarWinds and Log4j. By 2025, real-time monitoring will be crucial, though full visibility will take time. Implementing a Software Bill of Materials (SBOM) will provide transparency into third-party components and libraries. As regulations evolve, SBOMs will become mandatory for tracking third-party code.
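At bottom, an SBOM is structured data about the components inside an application. As a rough sketch (the field names follow the CycloneDX JSON format, but this is a toy fragment, not a complete or authoritative SBOM), reading one and enumerating its components might look like this:

```python
import json

# A minimal, illustrative SBOM fragment in CycloneDX-style JSON.
# Real SBOMs carry many more attributes (hashes, licenses, suppliers).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"type": "library", "name": "openssl", "version": "3.0.7",
     "purl": "pkg:generic/openssl@3.0.7"}
  ]
}
"""

sbom = json.loads(sbom_json)

# Enumerate third-party components so they can be matched against
# vulnerability advisories (e.g., Log4Shell affected log4j-core < 2.15.0).
for comp in sbom["components"]:
    print(f'{comp["name"]} {comp["version"]}')
```

The value of the format is exactly this kind of mechanical enumeration: once components are listed with package URLs, matching them against advisories can be automated rather than rediscovered in a crisis.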

Boris Cipot

A popular topic will be the discussion around SBOMs and the tooling that monitors the components listed in those files. The biggest problem SBOMs are trying to solve is the transparency customers currently lack about the applications they are using in their organizations and the threats those applications bring to their supply chain overall. However, what most people fail to understand is the need to also monitor the components listed and react to every notice of new issues.

Bringing Sec to the Apps

Andrew Bolster, senior manager, R&D engineering, Black Duck

Vulnerability triage will become increasingly context-driven and almost ‘subjective’, based on where potential vulnerabilities are identified in the SDLC. This will require significantly increased visibility between security, operations, and engineering teams, to make the vulnerability context available to security teams.
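To make “context-driven” concrete, here is a hedged sketch of what such triage could look like. The factors and weights are entirely hypothetical, not any vendor’s scoring model; the point is only that deployment and code context can reorder raw severity scores:

```python
# Illustrative only: scale a CVSS-style base score (0-10) by context
# about where the finding lives. The multipliers are made-up weights.

def triage_score(cvss_base: float, internet_facing: bool,
                 in_production: bool, reachable_in_code: bool) -> float:
    """Adjust a base severity score using deployment/code context."""
    score = cvss_base
    score *= 1.3 if internet_facing else 0.8     # exposure to attackers
    score *= 1.2 if in_production else 0.6       # live vs. pre-production
    score *= 1.0 if reachable_in_code else 0.5   # is the vulnerable code called?
    return min(score, 10.0)

# A critical finding in an internal, pre-production service whose vulnerable
# function is never called ranks below a medium finding that is
# internet-facing, in production, and reachable.
low_context = triage_score(9.8, internet_facing=False,
                           in_production=False, reachable_in_code=False)
high_context = triage_score(6.5, internet_facing=True,
                            in_production=True, reachable_in_code=True)
print(low_context < high_context)
```

Gathering the boolean inputs above is precisely the cross-team visibility problem Bolster describes: exposure lives with operations, reachability with engineering, and the score with security.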

Sammy Migues

The nature of application security programs will begin to change. We’ll see some contraction in application (product, software) security program size as a way to cut costs. The initial attempts at this reduction in centralized team size will effectively create a program management office and then inappropriately disperse discrete pieces of AppSec responsibility to security champions, development teams, QA, Ops, and others. Those groups might have time for the processes, but they aren’t prepared for the day-to-day interactions, bridge-building, and shared responsibility required to ensure the organization is collectively deploying and maintaining appropriately secure software.


Written by Taylor Armerding

I write mainly about software security, data security, and privacy.
