James Tumbridge is an intellectual property lawyer, arbitrator and mediator. Through his engagement with policymakers, he advises clients on how to prepare for legislative change and regulatory compliance. James chairs the Digital Services function for the City of London, and has been an advisor to various MPs and MEPs on a range of issues.
He took part in the AI Global Summit, advised the government on the Data Protection Act 2018 and the Online Safety Act 2023, and helped design the first guidance for public sector use of GenAI, published in 2023.
Robert Peake is an intellectual property lawyer with over 15 years’ experience. His practice covers the intersection of intellectual property, technology, and regulation, and he has a particular interest in emerging legal issues raised by new technologies. He completed his LLM at the London School of Economics, focussing on the liability of internet intermediaries for IP infringement, and returns regularly as a visiting practitioner.
AI has the power to enhance marketing efforts with improved customer engagement, improved insights, and personalised content creation. But the benefits also come with hidden risks. In our latest post, James Tumbridge and Robert Peake discuss the pitfalls businesses can encounter when adopting AI solutions for marketing strategies – and how to avoid them:
The mass adoption of artificial intelligence (AI) by businesses in recent years has been spurred on by promises of increased efficiency and effectiveness. AI solutions offer potentially significant advantages in marketing and promotional activities, including improving customer engagement and retention, identifying new markets or customers, and accelerating content creation. Those advantages, however, come with legal and business risks which should be properly understood in order to make the right risk assessment and avoid common pitfalls.
1. Transparency – a cautionary tale – do customers know their data will be used to train AI and target offerings?
Many businesses generate and hold significant amounts of data relating to (and perhaps belonging to) their customers. The digitising of business services and transactions means that those data holdings can often be enriched by drawing on data from elsewhere. Retail businesses, for example, will know their customers’ browsing and purchasing history across properties those retailers own, but may also draw inferences from the broader activities of those customers online.
AI applications afford the opportunity to analyse large amounts of data in ways that previously would have been impractically costly in time and resource. A challenge for organisations wishing to leverage the data they have access to from their customer interactions (and data which may be collected from other sources) is that they may not have a compliant legal basis on which to use that data to train an AI model.
The challenge for business is to be transparent with customers about the personal data they collect, and how it will be used. From a legal perspective – and a reputational one – a customer should not find that their personal data is being used in a way that they did not expect. It is common for businesses to disclose that customer data may be used to improve their service offering, and to communicate future offers which may be of interest to customers. As AI solutions are increasingly adopted, businesses should be reviewing their customer privacy notices and terms and conditions to ensure that they sufficiently address the possible use of customer data by AI tools, including for model training purposes.
Consent is not a panacea
In addition to transparency obligations under data protection laws (like the GDPR), direct marketing typically requires an individual’s consent, and the processing of personal data for those purposes may also need to rest on consent unless a business can point to another legal basis for such marketing.
A recent decision of the UK High Court illustrates that obtaining consent does not mean that an organisation can send targeted marketing to that individual without risk. In the case of RTM v Bonne Terre Limited [2025], the court examined the online marketing directed at the claimant, a ‘recovering gambling addict,’ by Sky Betting and Gaming (‘SBG’), ultimately finding in the claimant’s favour.
The claimant had a long history of problematic gambling behaviour, and had previously closed his account with the defendant on numerous occasions. It was not disputed that when he reopened his SBG account in 2017, he would have been presented with the cookie consent banner SBG used at the time; SBG contended that, by clicking ‘accept and close’, he provided the necessary consent for the use of cookies. Although he could not recall doing so, the claimant appears to have amended his marketing preferences a few months later, and began to receive direct marketing from SBG.
When the GDPR came into force, SBG conducted an update of customer consents and preferences, following which the claimant confirmed his consent to receive direct marketing. Again, the claimant did not recall having done so.
SBG’s evidence was that it collects ‘extensive customer data regarding use of [its] service over time.’
The court described that ‘raw data’ as being stored in a data warehouse, where it is ‘operated on by systems created by the data science team.’ A subset of data on each customer, referred to as a ‘feature store’, would contain roughly 500 data points at any given time; these would feed into marketing profiles for each customer.
SBG subjected the claimant to extensive personalised and targeted marketing; at the height of his activity, the claimant was losing nearly £1,800 per month on an income of £2,500. The court examined the nature of the claimant’s consents to SBG’s use of cookies and personalised marketing, and concluded that although the claimant had given those consents, they could not be seen to meet the legal threshold of being freely given, specific, informed and unambiguous. Those consents were inseparably linked to the claimant’s uncontrolled craving to gamble.
The court also found that the profiling of the claimant ‘was parasitic on the obtaining of the data and the ultimate delivery of the marketing, and had no other standalone purpose so far as he was concerned; it necessarily discloses no distinct basis for lawful processing.’
Accordingly, SBG could not rely on legitimate interests as a legal basis for its targeted marketing.
The decision addresses marketing in a highly regulated space, and turns on the claimant’s particular circumstances, but it nevertheless serves as a useful illustration of the importance of keeping automated processes under review in order to identify where they may not be performing in line with expectations.
2. Do they know when they are interacting with an AI agent?
One of the most prominent trends in AI applications is ‘agentic AI,’ which refers to AI tools which mimic human interactions with individuals. A familiar example of an AI agent is the chatbot, often deployed in customer service functions by businesses ranging from retailers to financial institutions. Whilst AI agents can be effective, and appreciated by customers as an alternative to a long wait in a telephone queue, organisations must consider the need for transparency with those interacting with their agents.
There is the potential for reputational risk where individuals are not made aware that they are interacting with an AI agent rather than a human. Data protection compliance is also an important consideration when deploying AI agents. An individual’s data may be processed in ways that are unexpected in order for the agent to respond to queries or offer suggestions, or to further train an AI model.
Profiling of individuals may arise in the context of AI agents, such as when a user is identified as having certain characteristics and then associated with a particular ‘group’ in order to influence the way in which the agent interacts with that individual. Profiling typically involves the analysis of aspects of an individual’s personality, behaviour, interests and habits to make predictions or decisions about them.
Under the GDPR, profiling falls within the broader scope of automated decision-making, and where it is used, engages additional rights for individuals to object, and to seek human intervention or review, in circumstances which may be particularly impactful on the individual. In order to be able to exercise those rights, individuals must be informed of the use of automated decision-making in the context of the interaction.
An example of an impactful deployment of an AI agent is where it might be used to recommend particular products or services for a user, and those on offer are determined by analysing the user’s characteristics, based perhaps on a combination of a broad set of data already held about that individual together with additional information submitted in response to the AI agent’s queries. Where the consequences of the interaction may have a significant impact for the user – such as whether a particular insurance product is available, and its cost and conditions – relying exclusively on an AI agent poses significant challenges for legal compliance.
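By way of illustration, the following is a minimal sketch of one common mitigation: routing significant automated outcomes to a human reviewer and informing the user that automated decision-making is involved. The insurance-quote scenario, names and threshold are all hypothetical, chosen purely to show the pattern rather than any particular implementation.

```python
from dataclasses import dataclass

@dataclass
class AgentDecision:
    product: str            # product the agent proposes (hypothetical)
    monthly_premium: float  # price derived for this user
    declined: bool          # whether cover was refused outright

SIGNIFICANT_PREMIUM = 200.0  # illustrative threshold only

def is_significant(decision: AgentDecision) -> bool:
    """Treat refusals and unusually high premiums as 'significant' outcomes
    that should not rest on the automated decision alone."""
    return decision.declined or decision.monthly_premium > SIGNIFICANT_PREMIUM

def handle(decision: AgentDecision) -> str:
    # The user is always told an automated process is involved, so they can
    # exercise their rights to object or to seek human review.
    notice = ("This quote was produced by an automated system; "
              "you may object or request human review.")
    if is_significant(decision):
        # Queue for a human reviewer rather than returning the outcome directly.
        return f"{notice} Your case has been referred to a human reviewer."
    return (f"{notice} Proposed product: {decision.product} "
            f"at £{decision.monthly_premium:.2f}/month.")

print(handle(AgentDecision("Standard Cover", 45.0, False)))  # automated outcome
print(handle(AgentDecision("Standard Cover", 0.0, True)))    # referred to a human
```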
AUTOMATED DECISION-MAKING AND PROFILING: segmentation and ‘lookalike’ audiences; inferred special category data
Profiling can also often engage the additional legal obligations which govern the use of sensitive personal data (‘special categories’ of personal data under the GDPR). An immediate consequence is the need to obtain the explicit consent of the individual for the use of that sensitive data, unless another legal basis is available, such as where processing is required to meet a legal obligation.
Organisations may not be aware that sensitive personal data is being used for their marketing communications; this could be in relation to targeting or to the content of communications where generative AI is involved (on which, see below).
To illustrate, consider the widely used tool of ‘lookalike audiences’ to target users online; the data points used to segment target audiences can lead to those groups being delineated on the basis of special category data. This can occur where personal traits which, in isolation, may be considered relatively anodyne, combine to produce a classification indicating political opinions, religious affiliation, or other sensitive personal characteristics.
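To make that mechanism concrete, here is a deliberately simplified sketch; every attribute, rule and segment name is hypothetical. The point is that no single input is sensitive, yet the resulting segment is, in effect, delineated by an inferred religious affiliation.

```python
from typing import Dict, Set

# Hypothetical indicators: each is relatively anodyne on its own, but two or
# more together are treated here as approximating a religious affiliation,
# which is special category data under the GDPR.
SENSITIVE_COMBINATION = {"faith charity", "halal grocer", "religious festival events"}

def assign_segment(traits: Dict[str, Set[str]]) -> str:
    signals = traits["interests"] | traits["purchases"]
    if len(signals & SENSITIVE_COMBINATION) >= 2:
        # The label looks neutral, but the group is defined by inferred religion.
        return "lookalike-segment-47"
    return "general-audience"

print(assign_segment({
    "interests": {"five-a-side football", "religious festival events"},
    "purchases": {"halal grocer", "garden tools"},
}))  # -> lookalike-segment-47
```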
BIAS: do you know whether the data sets used to train AI models are sufficiently representative?
Linked to the use of profiling, and the use of sensitive personal data, are concerns around how an AI model has been trained. The use of personal data to train an AI model requires a legal basis, but it also requires consideration of the suitability of the range of personal data for the purpose for which the trained model will eventually be deployed.
An AI tool used to target online marketing, for example, may analyse a user’s online footprint to categorise them into a profile within a trained model, so as to optimise the promotional material users will see when accessing a webpage, or receive via direct messaging. If the underlying model has been trained on an insufficiently representative range of personal data, it may be unsuitable for deployment across an organisation’s target markets. It may also produce unconsciously biased outputs, and in some cases may even breach the Equality Act, so ensuring a suitable training data set is important. An AI model trained solely or principally on personal data from a single country or region may be of little value if it is to be deployed to optimise a business’s targeting of potential customers globally.
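As a simple illustration (the regions, figures and tolerance below are entirely hypothetical), one basic pre-deployment check is to compare the composition of the training data against the market the model is intended to serve:

```python
from collections import Counter

def representation_gaps(train_regions, target_shares, tolerance=0.10):
    """Flag regions whose share of the training data deviates from the
    intended deployment market by more than `tolerance`."""
    counts = Counter(train_regions)
    total = sum(counts.values())
    gaps = {}
    for region, expected in target_shares.items():
        actual = counts.get(region, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[region] = (round(actual, 2), expected)
    return gaps

train = ["UK"] * 80 + ["DE"] * 15 + ["JP"] * 5   # training data is heavily UK-weighted
market = {"UK": 0.30, "DE": 0.25, "JP": 0.45}    # intended global customer base
print(representation_gaps(train, market))
# {'UK': (0.8, 0.3), 'JP': (0.05, 0.45)} -> the model is likely unsuitable as-is
```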
AI-assisted targeting and the risk of discriminatory outcomes
An example may be drawn from what is known as dynamic pricing, where goods or services may be offered to customers at different price levels depending on a range of factors such as their geographical location (sometimes their precise location), the day (e.g. a weekday or a weekend) and the time of day.
With rich data sets on online users, and the assistance of AI, organisations have an increasing ability to target and tailor their pricing on a granular level. Businesses need to be mindful of the risks of targeting tailored offerings to individuals in a way that may be discriminatory. This may be the case where AI is routinely offering inferior pricing to a particular ethnic group.
Such an outcome may stem from an overreliance on the location of users when determining pricing; it may be that a postcode is heavily inhabited by a particular ethnic group, the result being that individuals of that group receive different – and potentially inferior – offers than those residing elsewhere.
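A simple monitoring sketch follows; the groups, prices and threshold are hypothetical. It shows one way to surface the disparity described above: comparing the average prices actually offered across postcode-defined groups, and flagging material gaps for investigation.

```python
from statistics import mean

def price_disparities(offers, max_ratio=1.05):
    """offers: mapping of group label -> prices quoted to that group.
    Returns groups whose average offer exceeds the cheapest group's
    average by more than `max_ratio`."""
    averages = {group: mean(prices) for group, prices in offers.items()}
    baseline = min(averages.values())
    return {group: round(avg / baseline, 2)
            for group, avg in averages.items()
            if avg / baseline > max_ratio}

offers = {
    "postcode area A": [102.0, 99.0, 105.0],
    "postcode area B": [118.0, 121.0, 117.0],  # area with a distinct ethnic profile
}
print(price_disparities(offers))  # {'postcode area B': 1.16} -> investigate why
```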
REPUTATIONAL RISK: GenAI without proper supervision; creative content, but also agentic AI which could produce unintended responses
There are myriad reported examples of GenAI being deployed by businesses without proper supervision and vetting, and the results can prove highly embarrassing and potentially harmful for those brands that are caught out. An example from Canada illustrates how GenAI, in combination with an AI agent, can trigger both reputational and legal consequences for a business. Following a death in the family, a customer asked the AI agent on Air Canada’s website about bereavement fares; they were informed that an application for a bereavement fare reduction could be made retroactively.
When the passenger later sought to apply for the fare reduction, it became apparent that the AI agent had given information which was at odds with the airline’s policy. The customer was eventually successful in a claim before British Columbia’s Civil Resolution Tribunal (a small claims tribunal) and was awarded compensation of roughly $800. Air Canada unsuccessfully argued that it could not be liable for misstatements by its online AI agent; a position that received very wide press coverage, considerably amplifying the airline’s reputational damage from the incident.
IP INFRINGEMENT: risk of inadvertent copying from GenAI outputs
In the context of marketing and advertising content, the use of generative AI can be particularly attractive. Advantages of GenAI can include the rapid production of creative content, potentially at lower cost by bypassing traditional design teams, whether in-house or at external agencies. Placing increased reliance on AI-generated content from applications such as the popular ChatGPT, though, comes with risks, both legal and reputational.
Numerous high-profile legal disputes are playing out in the UK and US courts, principally between the developers of GenAI applications and the owners of copyright material that the underlying AI models used as training data. A question to be resolved in those disputes is whether GenAI applications such as ChatGPT may themselves infringe copyright (i) by, in essence, comprising within them copies of copyright works, and (ii) by reproducing copyright works in response to user prompts. The outcome of those disputes may, therefore, prove disruptive to the use of those tools.
Here again, a properly considered (and monitored) organisational policy on the use of AI is an important tool in seeking to mitigate risks. Those responsible for marketing activities should be aware of the risks of infringing third party intellectual property rights by, for example, prompting a GenAI tool using imagery or text obtained online, or by directing the creation of output ‘in the style of’ another brand. Not only might the output, if deployed, bring embarrassment for the business; it could also give rise to a claim for copyright infringement, trade mark infringement or passing off.
In the event of a legal dispute over AI-generated content, the prompts used to create the disputed text or images may also serve as evidence of infringement, posing an additional legal challenge for a business that has ill-advisedly used GenAI.
AI tools offer considerable opportunities for those conducting marketing, but not without risk. Those who stand to benefit most from AI will be those who engage in proper deliberation and planning before rolling out AI-based initiatives, and who ensure that programmes are monitored appropriately in order to identify and address challenges as they arise.