Privacy Partnership Law

Artificial Intelligence

New rules on the development and deployment of Artificial Intelligence are being enacted worldwide. Some, like the new EU AI Act, are broad in scope and will apply to covered types of AI system (not all AI is caught) used or made available in the EU, regardless of who developed or is providing the AI.


There is sometimes a misunderstanding that there are no rules in place to govern AI in the UK. This is not the case. The UK does not have a centralised framework like the EU AI Act; instead, it adopts a more flexible approach based on a set of principles that guide existing regulators on how to address AI. This has led those regulators to create their own guidelines for addressing AI risk, resulting in a complex compliance landscape.


AI Governance

We can help you navigate the new rules applying to AI and put in place effective governance and record keeping to help you manage the risk. We create AI Asset Registers so that you can identify where your risks lie and respond quickly to the requirements of new laws, and we help you develop appropriate policies and procedures.


Our audits and AI risk assessments establish whether you have effective controls in place to manage AI legislative risk. Where gaps are found, we can help you revise and implement policies and procedures to support your obligations. We work with some of the leading institutions in this area to monitor changes which may affect your business and help you adapt.


We can also manage AI vendors to make sure they are not introducing unnecessary regulatory risk into your AI ecosystem. 

AI presents valuable opportunities to enhance efficiency, improve decision-making, and drive innovation. However, developing or implementing AI without understanding the associated risks can lead to significant legal, reputational, and security challenges. Preparing to meet the challenges of new rules on AI takes time. Privacy Partnership has the expertise to help you leverage AI effectively while ensuring:


  • Full compliance with regulatory obligations 
  • Maximised benefits from AI technologies 
  • Sustainable, long-term success 


Our pragmatic approach focuses on responsible AI adoption. We help you understand your legal requirements, build your team's AI knowledge, and implement AI at a risk level appropriate for your organisation.  



Privacy Partnership’s AI services 

Compliance and risk management:

 

  • EU AI Act applicability and scoping assessment: Check whether your company is covered by the AI Act, and what obligations you may have. 
  • High-risk AI system compliance audits and gap analysis (EU AI Act): Assess your high-risk AI systems against EU AI Act and identify any compliance gaps. 
  • AI-specific Data Protection Impact Assessments (DPIAs / PIAs): Evaluate and mitigate data protection risks unique to your AI systems. 
  • AI governance framework development and implementation: Establish clear policies, roles, and procedures for responsible AI use in your organisation. 
  • AI ethics and responsible AI advisory: Get expert guidance on embedding ethical principles into your AI development and deployment. 
  • AI bias and fairness assessments: Identify and address potential biases in your AI models to ensure fair outcomes. 
  • AI third-party risk management and vendor due diligence: Evaluate AI-related risks from your vendors and ensure their compliance. 
  • AI incident response planning and support: Prepare for and manage AI-related incidents, like model failures or bias discoveries. 


Training and awareness:

 

  • AI literacy for legal, compliance and leadership teams: Equip your key teams with a foundational understanding of AI concepts, risks, and opportunities. 
  • Deep dive: EU AI Act for practitioners: Provide in-depth training on the EU AI Act's requirements for those implementing it. 
  • Deep dive: GDPR and AI: Understand how GDPR rules apply specifically to your AI systems and data usage. 
  • Responsible AI and ethics workshops: Engage your teams in practical workshops to apply AI ethics in their daily work. 
  • AI risk management workshops: Learn to identify, assess, and manage AI-specific risks using established frameworks like NIST. 


Technical and documentation support:

 

  • AI system technical documentation support (EU AI Act): Get help preparing and maintaining the detailed technical records required for EU AI Act compliance. 
  • Transparency and explainability strategy development: Develop strategies to make your AI systems' operations and decisions clear and understandable. 
  • Data governance for AI audits and advisory: Ensure your AI data is high-quality, properly managed, and suitable for its intended use. 
  • AI compliance monitoring and reporting frameworks: Continuously track and report on your AI systems' adherence to legal requirements. 


Privacy Partnership can help you build a framework for AI governance and compliance that supports your goals, minimises disruption, and allows you to innovate and grow. 


We can co-ordinate with the appropriate experts and consultants to provide quality management expertise in the standards you are seeking to implement. Please ask us for our guidance note on quality management systems (QMS) for more information.


AI Literacy Training

Struggling to get AI governance taken seriously? Our AI training expert Rob Bateman can help bring your teams up to speed on AI ethics risk and improve their AI literacy. This training is likely to be a requirement if you are deploying certain higher-risk AI solutions into the EU.

Not sure what AI Literacy training is?

Follow Rob Bateman on LinkedIn to find out more.

1. “We’re just using a general AI tool like ChatGPT/Gemini — do we really need to comply?”

Yes — maybe. If you're using generative AI in the EU or for EU-facing users, you may have duties under the AI Act, even if you didn't build or fine-tune the model yourself.

Generative AI applications, like ChatGPT, Claude, Gemini, and Llama, are a type of “General Purpose AI (GPAI)” under the AI Act, and using GPAI systems carries specific obligations in certain contexts.


2. “We didn’t train or fine-tune the model — does that mean we’re not responsible?”

Not quite. If you're deploying the model — e.g. using it or integrating it into a product, chatbot, analytics tool, or customer-facing service — you may still have obligations as a deployer under the AI Act.


3. “Does it matter how we’re using it?”

Yes — context is everything. You have more obligations if the model is:
- Used in high-risk contexts (e.g. hiring, education, law enforcement).
- Used in a user-facing way (e.g. customer support, synthetic media).
- Producing outputs that people rely on or act on.


4. “Do we need to label AI-generated content?”

Often, yes.

This applies if your users interact with the AI (e.g. a chatbot) or see its output (e.g. AI-generated images, videos, audio, or text). From 2 August 2026, you may need to:

- Inform users if they’re dealing with an AI chatbot.

- Check that the provider has put a “watermarking” system in place to ensure content can be detected as AI-generated.

- Label the content yourself in certain contexts (e.g. if you’re using generative AI to inform the public about matters of public interest or to produce “deepfakes”, with some exceptions).

5. “What if we just use it internally — like for editing or writing?”

It depends. If no one outside the organisation sees or relies on the AI’s output, transparency duties may not apply — but if it’s used to make important decisions about people (e.g. employee appraisals, job candidates), then risk assessment rules apply. You still need to review documentation from the provider and use the system responsibly.


6. “What documents or controls do we need?”

You should have:

- An “AI literacy” programme to ensure people can use generative AI responsibly
- A copy of the provider’s documentation
- A risk assessment if using it in a sensitive context
- A transparency and labelling policy if people interact with the model or see its output
- A record of compliance showing you reviewed and followed the provider’s usage instructions




Privacy Partnership Law Ltd is regulated by the Solicitors Regulation Authority with registration number 829686.

Privacy Partnership Law Ltd. is a company registered in England and Wales (company number 13211514), with its registered office at 7 Eland Rd, London SW11 5JX. VAT number 401788010. It forms part of the Privacy Partnership Group of Companies.


Copyright © 2025 Privacy Partnership Law Ltd. All rights reserved. No part of this website may be copied or reproduced without permission.
