How AI is changing the cybersecurity landscape

Peter Hedberg, vice president of cyber underwriting at Corvus Insurance, outlines the challenges and opportunities of AI and the steep learning curve facing cyber underwriters.

Since the launch of ChatGPT a year ago, the status of AI has rapidly evolved from headline-grabbing novelty to serious discipline. Where once talk of its impact ranged wildly from achieving world peace to unleashing Armageddon, depending on your disposition, organisations now generally take a more nuanced view.

Preparedness for its impact on cybersecurity is also rapidly improving. However, a degree of friction still exists within many organisations, and that is making the job harder for chief information security officers (CISOs).

Many professionals are overly zealous about AI’s potential, excited about the prospect of cutting out hours of corporate drudgery. CFOs, meanwhile, may be blinded by talk of the massive cost efficiencies AI can generate.

CISOs, naturally, have a different perspective as they race to protect ever-increasing cyber attack surfaces. Disappointingly, they often have to fight for the technology the company needs in light of this mounting threat. There may also be a disconnect between the CISO, steeped in cybersecurity know-how, and the person charged with buying insurance across multiple lines, including cyber.

However, a blasé CFO or generalist insurance buyer should know that, unfortunately, in the near term AI is likely to do more harm than good when it comes to cybersecurity.

Chief among the ways it increases cyber risk is the ability of large language models like GPT-4, Google’s PaLM and Meta’s LLaMA to rapidly eliminate the giveaway signs of phishing attacks. The grammatical, punctuation and spelling errors that we have come to rely on to identify a hoax – the outdated phraseology, the stilted syntax – can all be cleaned up in an instant to create a highly credible-looking call to action.

If you add to that the hierarchical cultures of many institutions – is the lowly paralegal really going to question the veracity of an email purporting to come from the law firm’s partner? – it is easy to see how seamless such an attack could be.

AI can also turn everyone into a coder – and, by extension, a potential attacker. AI-powered code tools can scan software for vulnerabilities in a heartbeat, while generative AI allows individuals with little programming knowledge to write ransomware. (We recently had ChatGPT do this for us. It initially refused, but by breaking our instructions down into three steps we persuaded it to oblige.)
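
For defenders, the same scanning capability is already easy to automate. The snippet below is a minimal, illustrative sketch of programmatic vulnerability scanning: it drives the open-source Bandit static analyser rather than a generative model, assumes Bandit is installed, and uses a placeholder ./src path.

```python
# Minimal sketch: automate a vulnerability scan of a source tree.
# Assumes the open-source Bandit analyser is installed (pip install bandit);
# the "./src" path below is a placeholder.
import json
import subprocess


def scan_for_vulnerabilities(source_dir: str) -> list[dict]:
    """Run Bandit recursively over source_dir and return its findings."""
    result = subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json", "-q"],
        capture_output=True,
        text=True,
    )
    # Bandit exits non-zero when it finds issues, so parse stdout either way.
    report = json.loads(result.stdout)
    return report.get("results", [])


if __name__ == "__main__":
    for finding in scan_for_vulnerabilities("./src"):
        print(f'{finding["filename"]}:{finding["line_number"]} '
              f'[{finding["issue_severity"]}] {finding["issue_text"]}')
```

The point is not the specific tool but the speed: what once required a manual review now runs in seconds, for attacker and defender alike.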

Although unproven, it seems a reasonable assumption that AI has contributed to the recent rise in ransomware attacks. These have spiked in 2023 after last year’s lull and in October were up almost 55 percent year on year, affecting 348 victims, the Corvus indicator shows.

Another worrying possibility is the use of AI to spread misinformation via deepfake technology.

Companies do, however, need AI to fight AI, and it is vital that those guarding the purse-strings understand this rather than regarding defensive AI as just another irksome component of a swelling cybersecurity cost centre.

We need better AI to detect where AI is used, whether that be ransomware, malware or phishing. We also need AI for threat vector intelligence, to keep abreast of what is going on in the dark web, to conjure up new threat scenarios, and to generate playbooks to tell us how to respond.
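
To make the “AI to detect AI” point concrete, the sketch below shows the simplest possible building block of such tooling: a text classifier trained to flag phishing-style messages. It assumes scikit-learn is available, and the example emails and labels are invented placeholders; production detection would combine far richer signals than wording alone, precisely because AI-polished phishing no longer betrays itself through clumsy language.

```python
# Minimal sketch of a phishing-text classifier, the simplest building block of
# AI-assisted detection. Assumes scikit-learn is installed; the sample emails
# and labels are invented placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your password now or your account will be suspended",
    "Please wire the attached invoice to the new account before close of business",
    "Reminder: the team stand-up moves to 10am tomorrow",
    "Minutes from last week's board meeting are attached for your records",
]
labels = [1, 1, 0, 0]  # 1 = phishing-style, 0 = legitimate

# TF-IDF features over unigrams and bigrams feeding a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = ["Your mailbox is over quota, confirm your credentials via this link"]
print(model.predict_proba(suspect))  # [[p(legitimate), p(phishing)]]
```

Real-world tooling layers models like this with sender reputation, link analysis and behavioural signals, but the principle of machine-learned pattern recognition turned against machine-generated attacks is the same.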

There’s also evidence that AI can reduce the cost of a data breach – IBM puts the saving at an average of $1.8mn against an average breach cost of $4.5mn, a reduction of roughly 40 percent – and cut the time needed to contain one.

Asset inventories and the effectiveness of the controls we deploy are other areas where AI can help.

Counterintuitively, AI can also deliver security advice in plainer English than many a human IT specialist. What’s more, it may even help ease the staffing shortage the wider IT security field is currently experiencing. This could help to secure C-suite buy-in.

However, when investing in or deploying AI it is vital to understand where liability is apportioned. In this nascent industry insureds may struggle to obtain recovery. Put bluntly, if a company’s use of AI goes awry, good luck suing OpenAI. At the risk of stating the obvious, read the contract. Caveat emptor applies to users as well.

AI’s rise coincides with a period of renewed competition in the cyber insurance market, and it is essential that underwriters do not jeopardise a recent marked improvement in insureds’ security postures for the sake of premiums. Underwriters should be rewarding companies employing a zero-trust model, which has identity validation at its core.

We underwriters face a steep learning curve of our own. There is currently no specific policy provision for AI, yet we are already dealing with the consequences. The next 12 months are likely to be very instructive in terms of how AI risk is handled.

The rapid evolution of AI underscores the need for cyber insurance and for continuous dialogue between all parties in the risk transfer process.

It is going to take us a while to learn how to interact with AI, and we are likely to encounter some unpleasant surprises along the way.

I believe that together we can remove paying a ransom from the list of options. If we unite in our goal of blocking bad actors’ paydays, take a joined-up approach within our organisations, and combine cyber insurance with an always-on threat detection and risk management solution, we will go a long way to mitigating this new source of cyber risk.