Love on(the)line: the darker side of AI

Posted: 11/02/2025


A new dawn

On 13 January 2025, Prime Minister Keir Starmer set out a blueprint to ‘turbocharge’ AI and unveiled details of the government’s AI Opportunities Action Plan. In a press release about the announcement, AI is heralded by the government as having the ability to ‘transform the lives of working people – it has the potential to speed up planning consultations to get Britain building, help drive down admin for teachers so they can get on with teaching our children, and feed AI through cameras to spot potholes and improve roads’. The Secretary of State for Science, Innovation and Technology, Peter Kyle, said: ‘AI has the potential to change all of our lives but for too long, we have been curious and often cautious bystanders to the change unfolding around us. With this plan, we become agents of that change.’

It is clear from this announcement that there is a real desire for the UK to be at the forefront of AI development, with AI placed in the driving seat to power the government’s plans for change. There is much to be excited about, but whilst investment in AI offers a number of positive opportunities for change and growth, there is a darker side to these technologies which continues to sully the otherwise rosy picture of their implementation and use.

In an earlier article, ‘Guarding your reputation: defending against AI-driven defamation’ and in episode six of Penningtons Manches Cooper’s ‘Beyond the code: the legalities of AI’ podcast, the reputation management team explored the evolving ways in which our reputation – both online and offline – is vulnerable to the often-unforgiving realms of social and other online media, and how generative AI could pose dangers to one’s reputation online.

The team discussed a fictional story – centred around an Olympic gymnastics hopeful – to illustrate how irresponsible use of generative AI can quickly lead to reputational damage, and considered the strategies available to those affected by inaccurate and defamatory AI-hallucinated content.

Looking for love

But these are not the only challenges that generative AI poses. Naturally, at this time of year and with Valentine’s Day upon us, thoughts turn to all things romance. Many now turn to the internet in their pursuit of love, with a plethora of apps and websites offering the ability to meet and match with a number of users within a few swipes or clicks.

One such app, Tinder, launched in 2012, describes itself as having ‘revolutionized how people meet, growing from 1 match to one billion matches in just two years. This rapid growth demonstrates its ability to fulfil a fundamental human need: real connection. Today, the app has been downloaded over 630 million times, leading to over 100 billion matches, serving approximately 50 million users per month in 190 countries and 45+ languages…’. But with the dawn of generative AI, it is precisely that ‘real connection’ so many seek which is put at risk by bad actors.

The brave new (digital) world in which we find ourselves is an ever-changing technological landscape, and the ways in which people seek human connection are a far cry from the likes of Hamlet’s love letter to Ophelia in Shakespeare’s Hamlet, in which he implores her to ‘Doubt thou the stars are fire, Doubt that the sun doth move, Doubt truth to be a liar, But never doubt I love.’ Alas, the modern-day dater should be wary of online professions of love, and needs to exercise a greater degree of caution about whether the person they are talking to (and potentially falling in love with) is really who they say they are.

Deep fakes

Enter stage left: deep fakes. The Public Sector Fraud Authority’s Introduction to AI Guide with a focus on Counter Fraud describes deep fakes as instances where data is used to mimic real online interactions – such as a person’s voice or image – in a way that can have the illusion of being real.

Concerningly, it is reported that deep fakes are increasingly being used in so-called ‘romance scams’ to trick victims into believing they are talking to a real person in order to steal large sums of money. The Times recently reported the story of a British woman who fell victim to an online scammer on Tinder posing as a handsome US Army colonel, who gained her trust and convinced her to part with £20,000 of her savings in a matter of weeks. It was reported that the scammer led his victim to believe that he was looking for love after his wife had died, and sent her a series of videos to convince her that he was real.

However, the videos – including his voice and image – were fake, created by AI. Martin Richardson, a senior partner at National Fraud Helpline, told The Times that, ‘This was an incredibly unusual fraud in which the scammer used every possible method to convince the victim that he was genuine. Not only did the fraudster create AI videos but he also sent physical items such as… trinkets, keepsakes and an ornament. Combining AI and fake letters and sending items in the post shows a level of sophistication from a very determined scammer.’

The rise of ‘romance fraud’

Sadly, this is not an isolated incident, nor is it the only type of ‘romance fraud’ on the rise, as Charlie Shillito, senior associate in the commercial dispute resolution team, explores in his article. And whilst money is often the motivating factor for many online scammers, it is not always what they try to obtain from you – it can also be personal information. This serves as a timely reminder to maintain good ‘online health’ – not only from a general data privacy perspective, but also to mitigate the potential for being targeted by online scammers.

As McAfee notes, some romance fraudsters profile their potential victims before contacting them, using information gathered online to tailor their approach. Sadly, McAfee reports having seen cases where scammers target widowers with fake profile pictures that resemble the widower’s deceased spouse. It offers a couple of tips for making it tougher for scammers to find and use such information:

  • Make social media profiles more private so that personal information is only visible to the people you want to share it with. This also helps reduce the amount of personal information that is publicly available and accessible by search engines.
  • Watch what is posted on public forums to reduce the amount of information available for scammers to harvest.

Looking ahead

As noted in the previous article, as generative AI continues to advance, so too do the risks it poses to individuals and businesses alike – not only reputational damage caused by incorrect and fabricated content being produced and circulated online, but also the potential for unsuspecting people to fall victim to bad actors who use often highly convincing AI-generated images and voices to seduce them out of their money. As before, remaining vigilant is key as the development and integration of AI into our daily lives continues apace.


