Posted: 23/10/2025
"AI is transforming cybersecurity in both positive and negative ways. On one hand, it enhances security by automating threat detection, reducing false positives, and accelerating response times. AI-driven cybersecurity tools can analyse vast amounts of data to identify anomalies and predict attacks before they happen.
"Businesses are increasingly relying on AI-powered security solutions to cut costs while strengthening their defences. However, AI also empowers cybercriminals. Attackers use AI to automate phishing campaigns, create deepfake scams, and develop adaptive malware that can evade traditional security measures. AI-generated threats are evolving rapidly, making it harder for organisations to keep up. Some experts argue that businesses must rethink their security strategies to counter AI-driven attacks effectively."
The above was the response given by Copilot when asked how AI could affect cybersecurity. Copilot conceded that AI empowers cybercriminals, enhancing their ability to evade traditional security measures and so obliging businesses to innovate in order to safeguard against AI-driven attacks. Equally, Copilot highlighted AI's potential to enhance cybersecurity and the opportunity for businesses to harness AI in strengthening their defences against increasingly sophisticated cyberattacks.
Let us start with the bad news: AI can and has increased the scale and sophistication of cybersecurity threats. For example, what used to demand personal time and effort, such as telephoning unsuspecting individuals and charming them into divulging sensitive information, can now, in large part thanks to AI, be streamlined into automated mass emails sent to a far larger pool of recipients, lowering the effort and raising the reward for cybercriminals.
Indeed, McKinsey estimates a 1200% surge in phishing attacks since the rise of generative AI in 2022. Likewise, password hacking naturally lends itself to automation, enabling cybercriminals to target a far higher number of accounts in a shorter period of time, leading to a higher success rate.
More sinister still is the weaponisation of AI to "poison" the training data that feeds an AI system's learning processes. Detecting these incidents is particularly challenging given the subtlety of the changes and the opacity of most AI systems. For more information on AI's ability to threaten digital ecosystems, see our article available here.
AI's proficiency in producing deepfake imagery and video footage also poses a real threat to security measures. When facing litigation, businesses must now be alive to the risk of deepfake evidence infiltrating court proceedings, and it is vital that they are supported by lawyers who understand these risks.
For more information on deepfake evidence in court proceedings, including an exemplar video showing deepfake footage, see our article available here: Seeing isn't always believing - navigating the challenges of falsified evidence.
The increased scale of AI-driven cybercrime is therefore a significant concern, and businesses need to protect themselves against these risks. Now for the good news: how businesses can use AI to innovate against novel cybersecurity threats.
A government survey into the use of AI in cybersecurity by businesses revealed that:
If more businesses were to embrace and properly harness the capabilities of AI in the fight against ever-more-sophisticated cyberattacks, this could yield significant benefits. AI can help to protect businesses against cybersecurity threats across prevention, detection and response.
When using AI software to address cybersecurity threats, some caution should be exercised to ensure that the software is appropriate, safe and lawful. The law is evolving in this area. For example, the EU AI Act, a comprehensive piece of AI regulation, came into force on 1 August 2024. The Act adopts a "sliding-scale" approach, assigning levels of regulation to different AI uses in proportion to the risk they pose.
From a cybersecurity perspective, this translates into heightened regulation and scrutiny of high-risk AI systems, including increased human oversight of high-risk AI-driven processes. Likewise, as observed in the UK government's voluntary Code of Practice for the Cyber Security of AI, it remains important to counterbalance AI software with proportionate human oversight and to supplement it with adequate staff training and reporting mechanisms.
For more information on the developing law in this area, including the UK's Cyber Security and Resilience Bill, see our article available here.
It is evident, however, that enlisting the help of AI software will be indispensable for businesses seeking to protect themselves against cybersecurity threats. Indeed, in June 2023, the EU Agency for Cybersecurity (ENISA) published its Framework for AI Cybersecurity Practices (FAICP), which observes that 'AI's ability to identify patterns and adaptively learn in real time as events warrant can accelerate detection, containment and response.'
The FAICP also observes that AI's role in protective measures against cybersecurity attacks should be complementary to rather than substitutive of human input, stating that AI can help to 'reduce the heavy load on analysts working in security operations centres (SOCs) and enable them to be more proactive. These workers will likely remain in high demand, but AI will change their roles.'
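To illustrate the kind of pattern-spotting the FAICP describes, the sketch below uses a simple machine-learning model to flag unusual network activity. It is a minimal, purely illustrative example in Python, assuming synthetic traffic data and the open-source scikit-learn library; it is not drawn from the FAICP itself and does not represent any particular vendor's product.

    # Illustrative sketch: unsupervised anomaly detection on synthetic
    # network-activity data. All figures below are invented for the example.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Simulated "normal" traffic: [requests/min, avg payload KB, distinct IPs/hr]
    normal = rng.normal(loc=[60, 4.0, 12], scale=[10, 1.0, 3], size=(500, 3))

    # A handful of hypothetical attack bursts (e.g. credential stuffing)
    attacks = rng.normal(loc=[900, 0.5, 400], scale=[50, 0.1, 40], size=(5, 3))

    events = np.vstack([normal, attacks])

    # The Isolation Forest learns what "typical" traffic looks like and
    # isolates statistical outliers for human review
    model = IsolationForest(contamination=0.01, random_state=0).fit(events)
    labels = model.predict(events)  # -1 = flagged as anomalous, 1 = normal

    print(f"Flagged {int((labels == -1).sum())} of {len(events)} events for review")

In practice, a security operations centre would feed real telemetry into far more sophisticated models, with analysts reviewing the flagged events, reflecting exactly the complementary human oversight that the FAICP and the UK Code of Practice envisage.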
Ironically, AI is likely the solution to a problem of its own making. As eloquently put by Copilot in an impressive display of self-awareness: "Ultimately, AI is both a weapon and a shield in cybersecurity. Organisations must leverage AI responsibly while staying vigilant against AI-powered threats."
This article was co-authored by Puja Patel, a trainee solicitor in the employment team.