Questions every board should be asking about AI, data and cyber security in 2026
2026 marks a decisive shift in corporate risk and governance. Artificial intelligence, data protection and cyber security are no longer discrete disciplines managed in isolation. They have converged into a single, systemic source of enterprise risk – one that directly affects operational resilience, regulatory exposure, and organisational trust.
For boards of directors, this convergence fundamentally changes what ‘good governance’ looks like. A cyber incident can now expose personal data to public AI models, trigger regulatory investigations across multiple regimes, derail operations for weeks, and permanently damage brand credibility – all from a single point of failure. At the same time, the rapid adoption of generative AI inside organisations has introduced new risks around intellectual property loss, privacy breaches, fraud, and accountability.
In this environment, resilience can no longer be delegated solely to technology teams or compliance functions. Regulators, courts, insurers and investors are increasingly asking not whether an organisation was breached, but whether harm was foreseeable – and whether reasonable, proportionate steps were taken at board level to prevent it.
This Q&A with our technology sector team explores the key questions every board should be asking – before attackers, regulators, or the market ask them first.
Artificial intelligence and data
AI is now embedded across core business functions, from customer engagement and product development to HR, finance and internal decision-making. That creates opportunity, but it also turns data governance, accountability and regulatory readiness into board-level issues.
This section focuses on the questions that help boards move from enthusiasm and experimentation to controlled adoption: understanding where AI is already in use, what data is being shared or repurposed, whether oversight is credible, and how to evidence ‘reasonable steps’ as expectations harden across regulators, insurers, investors and counterparties.
Do we know the true extent of ‘shadow AI’ within our organisation?
Author: Joanne Vengadesan
As artificial intelligence tools become increasingly accessible, employees are adopting them – often informally – to boost productivity, spark creativity, or streamline repetitive tasks. This quiet, decentralised use of unapproved AI tools is commonly referred to as ‘shadow AI’.
The challenge for many organisations is not just identifying who is using AI, but understanding how, why, and where these tools are being embedded into everyday workflows. Surveys across industries consistently show that a large proportion of employees use generative AI without approval, often believing they are acting helpfully or harmlessly. Some 38% of employees acknowledge sharing sensitive work information (including code and strategy) with AI tools without their employer’s permission. However well intentioned, this behaviour may be creating risks and liabilities that leadership cannot see or manage, such as:
- Data leakage and confidentiality breaches: employees may inadvertently input sensitive, client, or proprietary information into public AI systems, risking exposure, reputational damage, loss of privilege, or regulatory breaches. Customers and clients are increasingly worried about their data reaching unsanctioned AI systems, and unapproved leakage of customer data could lead to legal action;
- Compliance and legal exposure: unapproved tools may fail to meet data protection standards or contractual obligations. The outputs generated may also raise IP ownership issues, which is especially important if shadow AI is used to develop code or other products and services;
- Security vulnerabilities: externally hosted AI systems may introduce new threat vectors, including poorly secured APIs or third-party data processing practices outside organisational control;
- Quality and reliability risks: shadow AI use may lead to inconsistent work quality, unreviewed AI-generated content, or decisions made on the basis of inaccurate outputs.
Actions organisations should take:
- Establish clear policies and guidance: define which AI tools are permitted, how they may be used, and what data employees must never share. Guidance must be practical, accessible, and non-technical;
- Provide approved, secure AI alternatives: offering sanctioned AI platforms reduces the temptation to experiment with risky and unauthorised tools;
- Educate and upskill staff: regular training helps employees understand risks, safe usage practices, and when to involve oversight. Employees must appreciate that using shadow AI could lead to reputational damage and possibly legal action from third parties and regulators;
- Implement monitoring and governance: AI registers, risk assessments, and transparent reporting channels help organisations identify shadow AI usage and transition it into safe, managed practice (a minimal register sketch follows this list);
- Foster a culture of openness: encourage employees to experiment with AI – but safely – and remove the stigma associated with asking for approval.
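By way of illustration, the sketch below (in Python) shows what a single entry in such an AI register might capture. The field names, risk categories and example data are assumptions for illustration, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskRating(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIRegisterEntry:
    """One row in an organisation-wide AI register (illustrative fields only)."""
    tool_name: str                  # eg a public chatbot or coding assistant
    owner: str                      # accountable business owner
    use_case: str                   # what the tool is actually used for
    approved: bool                  # sanctioned tool vs known shadow AI
    data_types: list[str] = field(default_factory=list)  # data the tool sees
    risk_rating: RiskRating = RiskRating.MEDIUM
    last_reviewed: date | None = None

# Example: recording a known instance of shadow AI so it can be
# risk-assessed and transitioned into safe, managed practice.
register = [
    AIRegisterEntry(
        tool_name="Public LLM chatbot",
        owner="Head of Marketing",
        use_case="Drafting campaign copy",
        approved=False,
        data_types=["product roadmap (confidential)"],
        risk_rating=RiskRating.HIGH,
    ),
]

# Simple governance query: surface unapproved, high-risk usage for escalation.
for entry in register:
    if not entry.approved and entry.risk_rating is RiskRating.HIGH:
        print(f"Escalate: {entry.tool_name} ({entry.use_case}), owner: {entry.owner}")
```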
Does our board have sufficient ‘AI literacy’ to uphold and discharge its duties?
Authors: Joanne Vengadesan and Dan Lovett
As businesses integrate AI into their products, services, processes and internal decision-making, boards of directors and other senior executives must ensure they have the appropriate level of AI knowledge and skills to provide effective oversight. Directors need to understand the technology as well as the governance issues associated with it. AI literacy is becoming essential to fulfilling directors’ legal duties and ensuring that AI-driven opportunities are taken forward responsibly.
Without an understanding of AI, directors may struggle to interrogate assumptions, evaluate commercial decisions, and ensure that AI projects are legally compliant and ethical. Key risk areas to upskill on include:
- Data privacy and security;
- Contractual issues;
- Intellectual property infringement;
- Compliance with the EU AI Act and/or sector-specific regulations as well as voluntary codes of practice and other guiding principles;
- Ethics and reputational risk.
The Companies Act 2006 requires directors to exercise reasonable care, skill and diligence. As AI becomes mainstream, what is considered ‘reasonable’ is shifting. A lack of AI literacy will inevitably make it harder for directors to demonstrate that they exercised informed judgment in areas such as data governance, fairness and transparency, cyber security, intellectual property and the use of automated decision-making.
Boards of directors need to consider whether they have the right mix of skills and expertise to oversee AI’s strategic and operational impact. Do they have sufficient collective understanding of AI to challenge management effectively and are they confident they know enough to be able to assess AI-related risks?
Article 4 of the EU AI Act imposes a specific obligation on providers and deployers of AI systems to ensure that staff possess sufficient AI literacy. However, the Digital Omnibus on AI (which contains targeted simplification measures) proposes to replace this with an obligation on the Commission and member states to foster AI literacy. Boards should monitor developments on the Digital Omnibus closely.
Actions organisations should take:
- Review board composition to determine whether specialist AI expertise is sufficient;
- Consider appointing a director to lead AI projects or appointing a non-executive director with AI expertise. Alternatively, the board may consider creating an advisory panel of external AI experts or a committee to lead AI projects;
- Engage external legal or technical advisers or consultants to brief the board on emerging regulation, risk management and best practice;
- Ensure regular training throughout the organisation, from the top down, on AI fundamentals, with regulatory updates relevant to the particular sector;
- Embed AI oversight within existing governance structures, such as risk, audit or technology committees. Ensure that AI forms part of the organisation’s strategy and that policies and procedures are in place to maintain a consistent approach across all AI projects.
Do we understand the privacy implications of AI training data?
Author: Dan Lovett
As artificial intelligence becomes increasingly embedded in business processes, the question of what data these systems learn from has taken centre stage. Yet many organisations still underestimate the privacy and regulatory implications tied to AI training data, particularly when that data includes personal or sensitive information.
Even when this information appears low risk, training models on personal data can have unintended consequences. Anonymised datasets can be re-identified, confidential information may inadvertently surface in model outputs, and organisations may find themselves processing far more personal data than intended. Models can often reproduce or infer sensitive details embedded in training datasets, and malicious actors may attempt to recover training data from deployed models. The result could be personal data breaches attracting fines from data regulators and legal action from individuals.
When using AI to scrape data in the public domain, there are even greater data protection issues to consider. The UK data protection regulator, the Information Commissioner’s Office (ICO), is increasingly examining whether organisations can rely on legitimate interests as a lawful basis when scraping personal data for AI training. If organisations fail to demonstrate a lawful basis, the ICO can issue Enforcement Notices requiring destruction of AI models and may impose fines.
Actions organisations should take:
- Map the ‘data lineage’ of all AI models, confirming whether the AI uses your data for training and documenting how both your organisation and any third parties may use that data. For any AI training, ensure there is a clear and documented legal basis for every data point ingested;
- Use privacy-preserving techniques, such as training AI models on datasets which do not contain personal data. Synthetic data can be particularly useful: it is artificially generated data that mimics the statistical patterns, structure and characteristics of real-world data without containing any actual personal or sensitive information about real individuals (see the sketch after this list);
- Demand transparency from AI vendors about training sources, model governance and how data provided to the AI will be used;
- Implement clear internal AI policies that set boundaries for staff usage and data inputs.
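To make the synthetic data point concrete, the sketch below illustrates the underlying idea: fit aggregate statistics from a real dataset, then sample entirely artificial records that mimic those patterns. The column names and distributions are invented for illustration; production approaches use dedicated tooling, preserve correlations between fields and typically add formal privacy guarantees.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Stand-in for a real dataset containing personal data (values invented here).
real = pd.DataFrame({
    "age": rng.normal(45, 12, 1000).clip(18, 90),
    "annual_spend": rng.lognormal(8, 0.5, 1000),
})

# Fit simple per-column statistics from the real data...
stats = {col: (real[col].mean(), real[col].std()) for col in real.columns}

# ...then sample entirely artificial records that mimic those patterns.
# No synthetic row corresponds to any real individual, so no personal
# data is carried into model training.
synthetic = pd.DataFrame({
    col: rng.normal(mean, std, len(real)) for col, (mean, std) in stats.items()
})

print(synthetic.describe())  # similar aggregate shape, zero real records
```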
Are our AI systems compliant with the August 2026 EU AI Act deadline?
Author: Tom Perkins
From 2 August 2026, the remaining operative provisions of the EU AI Act (other than Article 6(1)) come into force. This marks the end of the transitional period and the beginning of active enforcement for a wide range of AI systems. For developers, providers, and deployers of AI, this date represents a hard regulatory line: any system falling within scope must meet the relevant EU AI Act compliance obligations or face significant legal and commercial consequences.
The most substantial impact falls on high-risk AI systems, which include tools used in biometric identification, employment, essential private services, education and critical infrastructure. At the same time, the obligations for general-purpose AI and general-purpose AI models also become fully operational.
To comply, organisations must implement a comprehensive set of controls, including:
- Robust risk management frameworks, covering identification, analysis, mitigation, and continuous evaluation of risks throughout the system lifecycle;
- Accuracy, robustness, and cybersecurity safeguards, ensuring the system performs reliably under expected conditions and is resilient to adversarial attacks or data integrity threats;
- Human oversight mechanisms, designed to prevent or minimise risks to safety and fundamental rights, and to ensure human intervention remains possible;
- High quality, relevant, and representative training, validation, and testing data, with documented data governance processes;
- Post market monitoring systems, enabling ongoing assessment of system performance, incident reporting, and rapid remediation of emerging risks.
The EU AI Act also imposes extensive record-keeping and documentation obligations on providers, deployers, importers, and distributors. This includes maintaining technical documentation, activity logs, conformity assessment records, and evidence of compliance processes.
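The Act does not prescribe a log format, but one practical way to generate this kind of evidence is a structured, append-only activity log. The sketch below is a minimal illustration; the schema and event names are assumptions rather than regulatory requirements.

```python
import json
import logging
from datetime import datetime, timezone

# Structured, append-only log of AI system activity, kept to evidence
# record-keeping and post-market monitoring processes.
logging.basicConfig(filename="ai_activity.log", level=logging.INFO,
                    format="%(message)s")

def log_ai_event(system_id: str, event: str, detail: dict) -> None:
    """Append one auditable record per AI system event (illustrative schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,      # eg "inference", "human_override", "incident"
        "detail": detail,
    }
    logging.info(json.dumps(record))

# Example: evidencing that human oversight intervened on a flagged output.
log_ai_event(
    system_id="cv-screening-v2",
    event="human_override",
    detail={"reviewer": "hr-ops-3", "reason": "score contradicted references"},
)
```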
Non-compliance carries severe penalties. The most serious breaches, such as engaging in prohibited AI practices, can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher. Breaches of high-risk AI obligations can trigger fines of up to €15 million or 3% of global annual turnover. Importantly, once the August 2026 deadline passes, systems classified as high-risk are immediately subject to enforcement, with no further grace period.
Actions organisations should take:
- Catalogue all AI systems currently in use, under development, or procured from third parties;
- Determine your role under the Act – provider, deployer, importer or distributor – in respect of each AI system, as obligations differ significantly;
- Conduct a high-risk classification assessment to identify whether any system falls within the high-risk categories;
- Conduct a full EU AI Act compliance audit and implement remediation measures well ahead of the deadline.
Are we prepared for the ‘Failure to Prevent’ fraud offence?
Author: Charlotte Hill
The introduction of the Failure to Prevent fraud offence under the Economic Crime and Corporate Transparency Act 2023 (ECCTA) marks a fundamental shift in how organisations must think about fraud risk – particularly in an era where artificial intelligence is becoming deeply embedded in everyday business processes.
Under ECCTA, an organisation can be held criminally liable if an employee, agent, subsidiary or other ‘associated person’ commits a fraud intending to benefit the organisation, and the organisation does not have reasonable fraud prevention procedures in place. Crucially, liability attaches even where senior leaders are unaware of the misconduct.
This represents a significant compliance challenge at a time when AI tools – often deployed rapidly and with limited oversight – create new vectors for misconduct. For example, an employee might use generative AI systems to fabricate financial or performance data, manipulate reports, or automate sophisticated forms of deception that are harder to detect through conventional controls. If that AI-enabled fraud benefits the organisation, the organisation may find itself exposed to criminal prosecution unless it can demonstrate the existence of ‘reasonable procedures’ to prevent such conduct.
Government guidance updated in October 2025 makes clear that the offence is broad in scope, capturing employees, agents and subsidiaries acting for or on behalf of the organisation. The guidance emphasises that it is advisory rather than a safe harbour: even strict compliance with it does not automatically amount to a defence. Organisations must design fraud prevention procedures tailored to their own structure, risks and activities.
The offence came into force on 1 September 2025 and applies to large organisations – those meeting at least two of the following thresholds: more than £36 million turnover, more than £18 million total assets, or more than 250 employees. But importantly, the principles set out in the guidance are considered good practice for organisations of all sizes, especially those adopting AI into critical decision making, operations or client delivery.
AI and fraud risk: why this matters now
AI systems offer speed, efficiency and insight – but they also magnify the potential for fraud. ECCTA was designed to make it easier to hold organisations to account precisely because modern fraud is often perpetrated internally by individuals with intimate knowledge of systems and controls. AI intensifies this: tools that automate data analysis can just as easily automate data manipulation.
The question boards and executives must ask is not simply ‘Do we have fraud controls?’ but ‘Are our controls fit for a world where AI can accelerate and conceal misconduct?’
The guidance highlights six principles for reasonable prevention procedures – top level commitment, risk assessment, proportionate controls, due diligence, communication and training, and ongoing monitoring and review. These principles must now be viewed through an AI risk lens. Without clear governance, human oversight, auditability and guardrails around AI use, organisations risk falling short of the ‘reasonable procedures’ standard.
Actions organisations should take:
With ECCTA in full force, forward-thinking organisations should act decisively. Below is a practical starting point:
- Map where AI is used across the organisation, including informal or ‘shadow AI’ adoption;
- Update fraud risk assessments to reflect AI-enabled misconduct scenarios;
- Implement clear policies governing AI use, including prohibitions, approval pathways and monitoring rules;
- Strengthen data governance and audit trails, ensuring AI-generated outputs are reviewable and verifiable;
- Deliver targeted training on AI risks, fraud indicators and reporting channels;
- Review and test controls regularly, documenting enhancements to demonstrate ongoing monitoring and review.
Cyber security and resilience
Cyber resilience is no longer just about preventing attacks; it is about sustaining operations and recovering credibility at pace. As organisations become more interconnected (cloud, SaaS, third parties, AI-enabled tooling), a single incident can cascade into prolonged outage, data exposure, regulatory scrutiny and reputational harm.
This section sets out the questions boards should ask to test whether resilience is real rather than assumed: supply chain dependencies, operational continuity, crisis communications, and the rapidly evolving fraud landscape.
Have we mapped the cyber resilience of our third party supply chain?
Author: Sarah Kenshall
Always remember that your cloud-based AI systems sit on vulnerable physical infrastructure made up of telecoms, power centres, data centres and connected devices. The same is true of your AI solution providers, who in turn are likely to have built their solutions on licensed-in products and services of other third-party providers, whether in terms of the model or the training data.
Complex digital supply chains like this present specific risks around accountability, business continuity and operational resilience from a security perspective.
Actions organisations should take:
- Have a register of all the third-party AI and data suppliers who interface with your systems and a clear view of the critical providers (acknowledging that this is no small task, best done through a detailed audit and risk assessment exercise; a minimal sketch of such a register follows this list);
- Upgrade all such supply agreements by applying an operational resilience lens to them, ideally bringing them in line with the risk-based regime contained in the UK Cyber Security and Resilience Bill which is expected to receive royal assent in 2026. Although the bill is sectoral, it deals with systemic risk created by dependency on digital suppliers and is worth aligning with irrespective of whether your business falls within its scope. Ultimately, it falls to the customer to contractually express the minimum levels of operational resilience required of a supplier;
- Run joint incident simulations with critical suppliers to ensure both teams know how to collaborate and identify gaps in resilience, and are able to respond quickly to outages and security breaches. These steps may not prevent them but will put your business in a much stronger position to deal with them when they occur;
- Ensure that contracts with third-party providers include clear terms for, among other things, warranties relating to data provenance and authorisations, liability, indemnity and service descriptions. When a systems outage or breach occurs, determining liability can be complex. This is best achieved by having a fully fleshed out product or service specification and a clear allocation of rights and responsibilities between the parties in relation to that specification.
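As a minimal illustration of the supplier register suggested above, the sketch below records each provider together with its known fourth-party dependencies and flags concentration risk where several critical suppliers rest on the same underlying provider. All names and fields are invented.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Supplier:
    """One entry in a third-party supplier register (illustrative fields only)."""
    name: str
    service: str
    critical: bool                  # essential to continuing operations?
    depends_on: list[str] = field(default_factory=list)  # known fourth parties

suppliers = [
    Supplier("ExampleModelCo", "LLM API", critical=True, depends_on=["CloudHost A"]),
    Supplier("ExampleDataCo", "Licensed training data", critical=False),
    Supplier("ExampleSaaSCo", "CRM platform", critical=True, depends_on=["CloudHost A"]),
]

# Concentration check: a single fourth party underpinning several critical
# suppliers is exactly the systemic dependency a resilience review should flag.
fourth_parties = Counter(
    dep for s in suppliers if s.critical for dep in s.depends_on
)
for dep, count in fourth_parties.items():
    if count > 1:
        print(f"Concentration risk: {count} critical suppliers depend on {dep}")
```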
Could our business survive an operational outage?
Author: Oliver Kidd
The risk: the cyber-attack on JLR in September 2025 paralysed its production lines for weeks and resulted in a 43% drop in wholesale volumes. JLR was forced to suspend production at its factories throughout September, with Britain’s largest carmaker only returning to normal levels by mid-November.
The incident highlights the importance not only of having a Business Continuity Plan (BCP) in place, but also of ensuring that contingency plans can practically mitigate varying degrees of attack severity and business disruption. Organisations’ ever-increasing reliance on technology, along with the interconnectivity of systems between organisations and their partners, creates complex supply chains and an environment in which cyber risks are expanding rapidly. Where previously BCPs may have assumed restoration in a matter of days, the JLR incident shows that modern ransomware can paralyse operations for weeks or even longer – a disruption that could prove fatal to many businesses.
Actions organisations should take:
- Make sure to review and stress-test BCPs against varying degrees of severity, including a ‘long-tail’ scenario (30+ days);
- Review your organisation’s insurance policies for ‘systemic event’ exclusions. The term still lacks a clear industry-wide definition, but such exclusions are typically drafted to capture a cyber incident that impacts multiple entities in a single act (eg a cloud hosting outage). Exclusions of this kind could leave your organisation without the business interruption cover from which you might otherwise expect protection.
Does our crisis plan include a reputation management strategy for responding to a cyber security incident?
Author: Adele Ashton
Cyber security incidents are no longer viewed solely as technical failures – they can be reputation-defining events. Even when the operational impact of an incident is contained, lasting damage can be caused if an organisation is unprepared or its communications are poor or delayed.
Key reputational risks include:
- Loss of customer and stakeholder trust;
- The public creating its own narrative of events;
- Regulatory scrutiny and legal exposure;
- Long-term damage to the brand and negative commercial impact.
When a cyber security incident occurs, legal notification, technical remediation and public narrative must be synchronised to prevent longer term issues. It is therefore essential that an organisation’s crisis plan includes a robust and proactive reputation management strategy to protect stakeholder confidence and safeguard the long-term value of the brand.
Actions organisations should take:
A board of directors should ensure that its current crisis plan includes a strategy for managing the reputational risks of a cyber security incident. An effective strategy should include the following:
- A designated trained communications lead within the crisis management team who has pre-agreed authority to issue public statements/updates quickly. A single spokesperson for communications is preferable to ensure consistent messaging;
- Ready-prepared template holding statements to deal with the most common cyber security incidents, for example ransomware with potential data theft, a confirmed data breach, a service outage or a third-party incident impacting the business. The messaging should reflect empathy, transparency and accountability;
- A template statement should be prepared for each group of stakeholders, such as customers, employees, investors, regulators and partners, as well as for social media and the press. There should be clear guidance to employees on what they can/cannot share publicly;
- Stakeholder mapping which identifies the key stakeholder groups and sets out the prioritisation as to who should be notified, how and when;
- A defined plan for monitoring media and social networks, particularly with an eye to emerging narratives and misinformation so that inaccuracies can be quickly corrected;
- A plan for rebuilding confidence after the incident is resolved, to include post-incident updates regarding remediation and improvements, targeted communications to key stakeholders and monitoring of brand impact and customer sentiment.
Are our colleagues, suppliers and clients capable of detecting a deepfake CEO?
Author: Charlotte Hill
The rapid rise of AI-enabled impersonation has created one of the most pressing fraud risks facing organisations today: the deepfake CEO. With hyper-realistic audio and video now straightforward to generate, attackers no longer need technical sophistication or insider access – they simply need a convincing clone of an executive’s voice or face, produced using widely available generative AI tools.
Recent global data underscores the scale and speed of the threat. In 2024, a deepfake attack occurred every five minutes, while digital document forgeries surged by 244% year on year, surpassing physical counterfeits for the first time (according to Entrust’s 2025 Identity Fraud Report). These attacks are increasingly professionalised, fuelled by ‘fraud-as-a-service’ platforms that give criminals easy access to sophisticated tools for identity manipulation, voice cloning and biometric spoofing.
This shift marks a turning point. Traditional verification measures – call-backs, email confirmations, even video-based approvals – are losing their reliability. Cybercriminals can now generate hyper-realistic deepfake video calls, spoof live facial recognition, or create AI-authored instructions that mimic an executive’s communication patterns with startling accuracy. The question is no longer if an organisation will encounter such an attack, but whether the humans on the receiving end – colleagues, suppliers, clients – are trained and equipped to recognise that something is amiss.
The data is particularly stark for synthetic identity and onboarding fraud. Digital identity verification providers observed that digital forgeries accounted for 57% of all document fraud in 2024, with national ID cards hit hardest. Fraudsters now routinely blend document manipulation with deepfake face-swap techniques to bypass onboarding checks – attacks that previously required specialist skills but can now be launched by amateur actors using consumer tools.
For businesses, the threat landscape is no longer limited to phishing emails or invoice redirection scams. AI-generated impersonations of CEOs and CFOs are being used to authorise fraudulent payments, instruct internal teams to bypass controls, and pressure external partners to act quickly. Hyper-realistic synthetic audio can be deployed to ‘approve’ multimillion-pound transfers. Deepfake video calls can be used to trick suppliers or clients into changing bank details or sharing sensitive information. And because these attacks are so convincing, even experienced professionals may not recognise them until it is too late.
The 2025 Identity Fraud Report makes clear that deepfakes and digital forgeries are the fastest-growing categories of fraud. The combination of readily available generative AI tools and highly scalable attack methods means no organisation is insulated – regardless of size, sector or digital maturity.
This creates an urgent need for a new kind of organisational readiness: not just stronger technology, but stronger people. A fraud-resistant culture, AI-specific training and robust identity verification processes are now essential pillars of corporate resilience. If colleagues, suppliers and clients cannot reliably distinguish between a real executive and an AI-generated imitation, the organisation is effectively exposed.
Deepfakes are no longer a future threat – they are a present operational reality. The organisations that act now will be the ones protected tomorrow.
Actions organisations should take:
- Conduct AI-specific fraud training for all staff, focusing on deepfake awareness and real-world attack simulations;
- Implement multi-factor executive authentication, including out-of-band confirmations and secure approval workflows (see the sketch after this list);
- Mandate verification protocols for all high-risk requests, irrespective of source or seniority;
- Strengthen supplier and client onboarding, including document analysis tools and deepfake detection technology;
- Audit communication channels, ensuring processes do not rely solely on voice, video or email for critical authorisations;
- Establish a rapid response escalation procedure for suspected impersonation attempts across all departments.
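By way of illustration, out-of-band confirmation can be made mechanical rather than discretionary. The sketch below encodes one such rule: above a threshold, a request can never be verified through the channel it arrived on. The threshold, channel names and logic are invented for illustration.

```python
# Sketch of an out-of-band approval rule for high-risk payment requests:
# above a threshold, the channel a request arrives on can never be the
# channel used to verify it. Threshold and channel names are illustrative.

HIGH_RISK_THRESHOLD_GBP = 25_000
INDEPENDENT_CHANNELS = {"registered_phone_callback", "in_person", "signed_portal"}

def approve_payment(amount_gbp: int, request_channel: str,
                    confirmations: set[str]) -> bool:
    """Approve only if confirmed via a channel independent of the request."""
    if amount_gbp < HIGH_RISK_THRESHOLD_GBP:
        return True  # routine controls apply below the threshold
    independent = confirmations & (INDEPENDENT_CHANNELS - {request_channel})
    return len(independent) >= 1

# A convincing 'CEO' on a video call is not, by itself, authorisation:
print(approve_payment(250_000, "video_call", {"video_call"}))                # False
print(approve_payment(250_000, "video_call", {"registered_phone_callback"})) # True
```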
Board action checklist
Infrastructure
• maintain a single AI register covering approved tools, known shadow AI, owners, use cases and risk ratings;
• map data lineage for each AI use: what data goes in, where it goes, whether it trains models, retention and locations;
• map supplier dependencies (critical vendors, sub-processors, key fourth parties where known).
Policy and guardrails
• publish ‘allowed AI’ rules in plain English: permitted tools, prohibited data types, approval pathway, human review standards;
• create simple data input rules (what can/cannot be pasted, and how to sanitise);
• make reporting safe: clear escalation routes for accidental misuse and suspected incidents.
Approved tools
• provide sanctioned AI platforms so teams are not forced into public tools by default;
• configure minimum controls (access, retention, logging, vendor settings where available);
• provide practical templates (safe prompting, red-line examples, review checklists).
Training and literacy
• run a board-level AI literacy programme (repeatable, short-form): AI basics, privacy, IP, cyber, assurance, and governance expectations;
• deliver role-based training for high-risk teams (finance, HR, procurement, client-facing);
• include deepfake/impersonation scenarios and response drills for anyone who approves payments or sensitive changes.
Governance and accountability
• assign a senior accountable owner for AI governance and a named board sponsor;
• route AI and cyber risk through existing committees (risk/audit/tech) with a clear cadence;
• require documented decision records for material deployments (risks considered, controls adopted, why acceptable).
Risk assessment and assurance
• standardise pre-deployment AI risk assessments (and periodic reviews) covering: purpose, data risks, security, human oversight, auditability;
• keep evidence of ‘reasonable steps’: approvals, logs, controls, training records, incident learnings;
• prioritise based on risk: heavier assurance for high-impact/high-risk use cases.
Privacy and data governance
• confirm lawful basis and purpose limitation for any training/fine-tuning and scraping of public data;
• minimise personal data in AI workflows; use privacy-preserving approaches where appropriate (eg synthetic data);
• demand vendor transparency on training use, retention/deletion, sub-processing and data location.
Regulatory readiness
• run an EU AI Act readiness sprint: classify systems, confirm your role (provider/deployer/importer/distributor), and identify high-risk use;
• build a remediation plan ahead of 2 August 2026 (documentation, oversight, monitoring, incident reporting);
• ensure procurement and due diligence can evidence compliance (or credible progress) when asked.
Third-party contracts and supplier resilience
• review key supplier contracts: security obligations, audit rights, incident notification, service levels, data provenance, liability/indemnities;
• identify critical providers and set minimum resilience requirements;
• run joint incident simulations with critical suppliers.
Incident response
• stress-test BCPs for prolonged disruption (including 30+ day outages and systemic supplier failures);
• review cyber insurance assumptions/exclusions (including systemic event wording);
• maintain a joined-up crisis plan: comms lead, holding statements, stakeholder sequencing, monitoring and post-incident trust rebuild.
Introducing our technology lawyers
AI, data and cyber risk have converged, and boards are being judged on what they knew, what they asked, and what they did. Our technology team supports organisations navigating the complexities of these areas with pragmatic, commercially focused advice that helps you innovate securely, meet regulatory expectations, and protect trust.
We advise on the full lifecycle: AI strategy and governance, data protection and privacy compliance (often across jurisdictions), cyber incident readiness and response, and the contracts that underpin modern digital supply chains.
We also help clients manage the pressure points that surface when something goes wrong – regulatory investigations, disputes, and recovery of losses – bringing a joined-up approach across legal, technical and reputational considerations.
Whether you’re deploying generative AI internally, building AI-enabled products, responding to a serious incident, or stress-testing third-party risk, we focus on outcomes: clearer accountability, stronger controls, and decision-making you can defend – to regulators, customers, insurers and the market.