Discussing emerging cybersecurity trends and the impact of AI.
  • ai
  • technology
  • cybersecurity
  • adversaries
  • ai impact on jobs
  • tech industry ethics
A watercolour robot
AI and trends in the cybersecurity ecosystem
2023-11-21

Today, I’m writing about emerging #cybersecurity trends.

You’ve read all the emerging cybersecurity topics a million times. Should we cover the conversations happening in boardrooms across the country, or get into the really interesting stuff? Let’s get the mundane business out of the way first.

  • Cybercriminals continue to develop their capabilities, reducing time-to-market and evolving continually to stay ahead of detection, regardless of what the cyberdefenders do.
  • There continues to be a shortage of technology and cybersecurity skills in the pipeline.
  • Businesses are shifting some investment from compliance-orientated measures (checkbox cybersecurity) to active cybersecurity operations focused on detecting, triaging and responding to threats. Why? Remember Log4j. In an ideal world, proactive cybersecurity would address everything. In the world I live in, we have incident responders on standby.

That last point is interesting, and speaks to something that is easy to forget: defenders are part of a cybersecurity economy. That economy consists of people operating for and against the good of broader commerce — good, honest people trying to protect their businesses, and people trying to extract money from them. As in all economies, there are forces at play that are sometimes in balance. If the cyberdefenders get ahead, the cybercriminals must work hard to find other means of income. If the cyberdefenders get complacent, the cybercriminals make gains.

The first cybersecurity trend: AI is an emerging technology - notice the vast capabilities that have become available in the past three years, and the speed at which new capabilities are released. The last time I wrote an internal blog about AI for a company I work with, it was 100% right when I started writing but 40% wrong four days after I published; a humbling experience for a technology leader. AI is going to offer new capabilities to cyberdefenders and cybercriminals alike. Technology is morally neutral - almost every kind of technology can be used for good and for evil.

Of course, everyone is worried about the potential for AI to disrupt. If you think the immediate threat is an AI going rogue and destroying human society, I will politely disagree. The next AI conflict will be human vs human. The immediate concern is cybercriminals equipping themselves with AI to up their game. Their goals are data theft, fraud and other kinds of illegal financial gain, as they always have been. AI makes them more efficient and effective.

This is already happening: dark web AI solutions exist to help cybercriminals achieve objectives in social engineering, ransomware deployment and worse. Because AI large language models are trained to exhibit human-like behaviour, it is no surprise that cybercriminals can use them to produce convincing social engineering emails. Cyberdefenders, in turn, will be able to use LLMs to detect LLM-generated social engineering. And so the next phase of the cybersecurity economy has cyberdefenders and cybercriminals equipping themselves with AI to gain an advantage over their adversary.
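To make the defender's side of that arms race concrete, here is a minimal sketch of automated email triage. In practice the scoring step would be an LLM classifier; a simple keyword heuristic stands in for it here so the example runs self-contained. The signal list, weights and threshold are all illustrative assumptions, not a real product's configuration.

```python
import re

# Signals commonly weighted by phishing filters (illustrative, not exhaustive).
# In a real pipeline, an LLM would replace this table with a learned judgement.
SIGNALS = {
    r"\burgent(ly)?\b": 2,                    # manufactured urgency
    r"\bverify your (account|identity)\b": 3, # credential-harvesting lure
    r"\bpassword\b": 1,
    r"\bclick (here|the link)\b": 2,
    r"\bgift ?card\b": 3,
    r"\bwire transfer\b": 3,
}

def phishing_score(email_text: str) -> int:
    """Sum the weights of every signal found in the email body."""
    text = email_text.lower()
    return sum(weight for pattern, weight in SIGNALS.items()
               if re.search(pattern, text))

def triage(email_text: str, threshold: int = 4) -> str:
    """Route an email: quarantine if the score crosses the threshold."""
    return "quarantine" if phishing_score(email_text) >= threshold else "deliver"
```

The point of the sketch is the shape of the loop, not the heuristic: score, compare to a tunable threshold, route. Swapping the scorer for an LLM changes the quality of the judgement without changing the triage structure.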

At the moment, there is considerable discussion about how to make sure AI is safe and not at all evil. This is a laudable and valuable goal, which should be supported; the people who give us standards and regulations do their bit to keep society going. However, it should be emphasised that cyberdefenders are going to be using AI to keep society safe, which is also a laudable goal. I support the idea of safe, regulated AI, but not regulation so bureaucratic that cyberdefenders lose ground in the cybereconomy. Cybercriminals will not be taking any notice of any AI regulation.

The second trend prediction: cyberdefenders have a superpower that cybercriminals can’t match. Cybercriminals rely on stealth and anonymity, but this is also a disadvantage, because they can’t easily set up information-sharing forums among known identities. For people trying to maintain human society, human trust and trustworthy communication are our superpowers. Expect to see major developments in identity-managed, cross-organisational forums so that trust can be managed across wide communities. These will be a means to share information rapidly, and also a platform for AI assistants to aid cyberdefenders in the cybersecurity conflict. And because it harnesses our superpower, this is a platform the cybercriminals will struggle to counter.