AI presents valuable opportunities to enhance efficiency, improve decision-making, and drive innovation. However, developing or implementing AI without understanding the associated risks can lead to significant legal, reputational, and security challenges. Preparing to meet the challenges of new AI rules takes time.
New rules on the development and deployment of Artificial Intelligence are being enacted worldwide. Some, like the new EU AI Act, are broad in scope and will apply to covered types of AI system (not all AI is caught) used or made available in the EU, regardless of who developed or is providing the AI.
The UK does not have a centralised framework like the AI Act, but it does have rules on AI that are enforced. It has adopted a more flexible approach, based on a set of principles that guide existing regulators on how to address AI. Those regulators have in turn produced their own guidelines on AI risk, creating a complex framework for compliance.
Our audits and AI risk assessments establish whether you have effective controls in place to manage AI legislative risk. Where gaps are found we can help you revise and implement policies and procedures to support your obligations. We work with some of the leading institutions in this area to monitor changes which may have an impact on your business and help you adapt.
Our services include:
We can help you navigate the new rules applying to AI and put in place effective governance and record-keeping to help you manage the risk.
Our AI training expert Rob Bateman can help bring your teams up to speed on AI ethics risk and improve their AI literacy. This training is likely to be a requirement if you are deploying certain higher-risk AI solutions into the EU.
We offer:
AI third-party risk management and vendor due diligence: Let us help you evaluate AI-related risks from your vendors and ensure their compliance. We can help guide you on embedding ethical principles into your AI development and deployment.
We can draft tailored contracts with robust AI safeguards to help protect your business data. We review clauses with suppliers and partners and assess their policies and contractual terms for compliance. Our work also includes conducting thorough vendor due diligence reviews to ensure third-party risks are managed and your potential contractual liability is mitigated.
We can review vendor documentation for privacy and security compliance and raise enquiries on your behalf to address risks.
AI system technical documentation support (EU AI Act): We help you prepare and maintain the detailed technical records required for EU AI Act compliance and for standards such as ISO 42001.
1. “We’re just using a general AI tool like ChatGPT/Gemini — do we really need to comply?”
Quite possibly, yes. If you're using generative AI in the EU or for EU-facing users, you may have duties under the AI Act, even if you didn't build or fine-tune the model yourself.
Generative AI applications like ChatGPT, Claude, Gemini, and Llama are a type of "general-purpose AI (GPAI)" under the AI Act, and using GPAI systems carries specific obligations in certain contexts.
2. “We didn’t train or fine-tune the model — does that mean we’re not responsible?”
Not quite. If you're deploying the model — e.g. using it or integrating it into a product, chatbot, analytics tool, or customer-facing service — you may still have obligations as a deployer under the AI Act.
3. “Does it matter how we’re using it?”
Yes — context is everything. You have more obligations if the model is:
- Used in high-risk contexts (e.g. hiring, education, law enforcement).
- Used in a user-facing way (e.g. customer support, synthetic media).
- Producing outputs that people rely on or act on.
4. “Do we need to label AI-generated content?”
Often, yes.
If your users interact with the AI (e.g. chatbot) or see its output (e.g. AI-generated images, videos, audio, or text). From 2 August 2026, you may need to:
- Inform users if they’re dealing with an AI chatbot.
- Check that the provider has put a “watermarking” system in place to ensure content can be detected as AI-generated.
- Label the content yourself in certain contexts (e.g. if you’re using generative AI to inform the public about matters of public interest or to produce “deepfakes”, with some exceptions).
5. “What if we just use it internally — like for editing or writing?”
It depends. If no one outside the organisation sees or relies on the AI’s output, transparency duties may not apply — but if it’s used to make important decisions about people (e.g. employee appraisals, job candidates), then risk assessment rules apply. You still need to review documentation from the provider and use the system responsibly.
6. “What documents or controls may we need?”
You should have:
- An “AI literacy” programme to ensure people can use generative AI responsibly
- A copy of the provider’s documentation
- A risk assessment if using it in a sensitive context
- A transparency and labelling policy if people interact with the model or see its output
- A record of compliance showing you reviewed and followed the provider’s usage instructions
Privacy Partnership Law Ltd is regulated by the Solicitors Regulation Authority (registration number 829686).
Privacy Partnership Law Ltd is a company registered in England and Wales (company number 13211514) with a registered office at 7 Eland Rd, London SW11 5JX. VAT number 401788010. It forms part of the Privacy Partnership Group of Companies.
Copyright © 2025 Privacy Partnership Law Ltd. All Rights Reserved. No part of this website may be copied or reproduced without permission.