WITNESS expresses concern regarding the changes in the third draft of the General-Purpose AI Code of Practice (CoP) under the EU AI Act. These modifications aim to lessen the burden of applying the EU AI Act on General-Purpose AI (GPAI) model providers, but in doing so they directly weaken transparency, risk assessment and copyright provisions, and undermine fundamental rights. We also believe these changes harm the information environment and the ability to detect the use of AI and ensure content authenticity. They prioritise lighter implementation over stronger commitments to robust rights protections.

The EU’s third draft of the AI Code of Practice weakens crucial safeguards to no clear purpose. WITNESS, grounded in its global work on this issue, knows the importance that marginalized communities place on clarity and accountability around the use of AI, as well as transparency on models and outputs, particularly in content and communication where truth is at risk. Innovation is not incompatible with – indeed, it aligns with – people’s confidence in AI systems and critical human rights-based safeguards. We should not sacrifice fundamental rights for corporate convenience. The CoP must rigorously reflect and strengthen the principles and safeguards enshrined in the EU AI Act.

 

– Sam Gregory, Executive Director, WITNESS

While one of the goals of the document is to provide a detailed framework for AI providers to ensure their models are safe, secure, and compliant with relevant regulations, we are concerned by the reduced emphasis on some commitments in the latest version of the CoP, as well as a possible narrowing of the scope of the systemic risks that GPAI providers must mitigate.

Below, WITNESS offers feedback on specific sections of the CoP, with the goal of strengthening the text so that the final version reinforces AI Act obligations rather than creating loopholes that could undermine them.

Transparency: Robust Guardrails Start at the Model Level

This new version of the CoP has heavily eroded transparency provisions and weakened requirements for third-party evaluations of systemic risks within GPAI models. The current draft leaves significant room to exempt companies from external assessment on the basis of internal expertise or assessments of “similarly safe” models (Measure II.11.1). Such concessions would be unprecedented in the regulation of technologies in other industries. The final draft of the CoP must demand robust third-party assessments by default for all GPAI models.

Robust regulation at the model level will result in the right guardrails and assessments of outputs. However, to ensure that AI systems are not only explainable but also that their decisions and outputs can be audited and understood by downstream users, the final draft should establish a general requirement for transparency across all GPAI applications, not just in the context of identifying and addressing serious incidents.

Protecting the Information Environment: A Call for Stronger Output Transparency on GPAI

The CoP does make some effort towards serious incident reporting (Commitment II.12), including requirements for tracking outputs from models using techniques such as watermarks, metadata, or other provenance methods. But currently there is no explicit requirement for output transparency in GPAI models, in either the AI Act or the CoP. However, Article 50 of the AI Act outlines binding obligations for providers of deepfake tools, which presents an opportunity to address the broader issue of output transparency in content and communications. While this is not inherently part of the voluntary guidelines of the CoP, there is a clear gap in ensuring information integrity that the CoP could address more effectively.

Looking ahead from Article 50, it is important to recognize that providers of deepfake tools will struggle to meet the transparency requirements if there is no pipeline responsibility back to the model creators. This challenge will be particularly significant in the case of open-source models, where accountability and traceability may be more complex to manage. Therefore, to achieve greater transparency and safeguard against misinformation, a more cohesive approach involving all stakeholders in the model development and deployment process is needed.

Another way to address this gap would be to expand the definition of transparency to include provisions for output transparency. This would involve ensuring that AI systems can be traced back to their origins (i.e., the model itself) and that the use of AI in any communicative or content output is clearly identifiable and understandable. This approach would align with the need for pipeline responsibility, particularly for open-source models. A review of the transparency concept within the CoP should ensure that AI systems provide mechanisms for users to identify the source and nature of generated content.
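
To make this concrete, below is a minimal sketch of what such output transparency could look like in practice: a signed provenance record that travels with a generated output and lets a downstream user check which model produced it and that it is AI-generated. This is a hypothetical illustration only, written with Python standard-library primitives; it is not a C2PA implementation or a proposed standard, and the model identifier, field names, and signing key are assumptions made for the example.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key held by the model provider (assumption for illustration).
PROVIDER_KEY = b"provider-secret-key"


def build_provenance_record(content: bytes, model_id: str) -> dict:
    """Attach a minimal, verifiable provenance record to a generated output.

    The record states which model produced the content and when, and is signed
    so that tampering with either the content or the claim is detectable.
    """
    record = {
        "model_id": model_id,  # e.g. "example-gpai-model-v1" (hypothetical name)
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,  # explicit disclosure that the output is AI-generated
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance_record(content: bytes, record: dict) -> bool:
    """Check that the content matches the record and that the signature is valid."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record.get("signature", ""))
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )


if __name__ == "__main__":
    output = b"An AI-generated paragraph of text."
    record = build_provenance_record(output, model_id="example-gpai-model-v1")
    print(verify_provenance_record(output, record))        # True: untouched output
    print(verify_provenance_record(b"edited text", record))  # False: content no longer matches
```

In practice, such a signal would more likely be carried through standardised metadata, watermarking, or fingerprinting rather than a bespoke JSON record; the point is simply that traceability from an output back to its originating model is technically feasible when providers build it in at the point of generation.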

Systemic Risks: Risks to Fundamental Rights Must Not Be Optional

This draft positions fundamental rights risks as optional. The transparency framework should ensure that companies are held accountable for how their models are used, especially in high-risk situations. Transparency isn’t just about what is shared—it’s also about ensuring that the right information is available to assess the risks associated with a given AI model. Additionally, transparency will enable companies to define and communicate the intended scenarios and use cases for their models, ensuring clearer guidance on their appropriate deployment and applications. 

Harmful manipulation, identified as a type of systemic risk, highlights this issue. Even if a model was not designed for manipulation, if there is a reasonable possibility it could be used that way, the risk must be addressed. The CoP also mentions “Other types of risks,” including risks to fundamental rights and society as a whole. If companies can choose which risks to assess, fundamental rights violations and harms emerging from these technologies may go unnoticed or lack proper remedy mechanisms.

The risk assessment process for GPAI models is currently unclear, and there is a lack of understanding of the range of purposes into which a model can be integrated. While the assessment process shows some recognition of the ripple effects caused by integrating multiple models, the regulations fail to offer adequate protections or protocols to manage these complexities, leaving significant gaps in comprehensive risk mitigation. This uncertainty highlights the need for a final draft with a standardized risk assessment framework that is not left to the discretion of individual providers. As many of our civil society peers have echoed, the serious fundamental rights risks that model providers assess and mitigate should not be optional, and neither should the responsibility of protecting fundamental rights. Other critics, including the Ada Lovelace Institute and HuggingFace, have pointed out concerning reductions in the range of risks covered.

A truly comprehensive understanding of risk must consider the real-world applications of models and their effects on diverse users. The methodologies outlined in Measures II.4.4 and II.4.5 are primarily technical and fail to address these crucial aspects. And while Measure II.4.8 acknowledges the importance of stakeholder consultation, the phrasing – specifically “where possible and relevant” – diminishes the significance of this step. Sociotechnical evaluation must be regarded as an integral component of the risk assessment process, not merely as a secondary or optional commitment to consult.

Copyright Protection Must Not Be Based on Metadata Standards Not Intended For This Purpose

The CoP’s dedicated copyright section raises significant concerns regarding metadata usage. While implementing appropriate, human rights-respecting copyright policies and ensuring lawful data mining are crucial, the final draft must not pursue methods that compromise privacy or misuse metadata. Specifically, we are worried that the CoP might inadvertently promote the use of metadata standards for rights management, such as the Coalition for Content Provenance and Authenticity (C2PA) standard, which was not designed for this purpose.

Relying on such standards risks creating gaps in intellectual property protection, undermining content ownership and fair use, and jeopardising privacy through potential misuse of Personally Identifiable Information (PII). The C2PA makes explicit commitments to human rights and privacy in its guiding principles, but it has prioritised content provenance and authenticity as its purposes and explicitly excludes rights management from its core design.

Furthermore, utilising metadata standards for rights management could create a discriminatory two-tiered system. Only those with the technical capacity to leverage these standards would have their works fully protected, leaving others vulnerable. This creates an uneven playing field where content protection is determined by technical compatibility, not inherent rights.

Other Considerations

The CoP text would benefit from clearer definitions and stronger wording to reduce ambiguity. Communicating updates and maintaining documented information are critical practices that signatories should be required to follow. However, the CoP does not make clear which specific protocols and standards in these areas it refers to.

One last point we would like to call attention to is the safety and security chapter of the CoP. As highlighted by the Ada Lovelace Institute, the current version of the document “largely draws on companies’ existing safety and security practices, and voluntary obligations under international agreements that they should already comply with”. While there is value in consolidating and standardizing industry practices in the text, if this section remains as it is, the CoP may fail to introduce new safety and security standards where these are not yet part of industry practice.

Recommendations and Conclusion 

The third draft of the General-Purpose AI CoP is a step forward, but several points must still be addressed for the final text to better align with the protection of fundamental rights and the mitigation of risks. To address the gaps listed above, we suggest the following:

Systemic Risk Assessment: 

  • GPAI providers should not be the sole decision-makers on which systemic risks to assess and mitigate. There needs to be a more comprehensive and standardized approach.
  • The CoP should treat sociotechnical evaluations as an integral component of the risk assessment process, not merely as a secondary or optional consultation step.

Transparency Requirements: 

  • The transparency concept within the CoP should be reviewed to ensure that GPAI systems provide mechanisms for users to identify the source and nature of generated content, which is essential for the meaningful implementation of Article 50.
  • The CoP should establish a general requirement for transparency across all GPAI applications, not just in the context of identifying and addressing serious incidents.
  • The final draft of the CoP must demand robust third-party assessments by default for all GPAI models.

Copyright: 

  • Copyright protection must be based on inherent rights. While implementing copyright policies and ensuring lawful data mining are crucial, the final draft must not pursue methods that compromise privacy or misuse metadata.

 

Published 28th March, 2025

