In this edition of our global AI bulletin, we will be looking at:

  • Global – Report published on AI governance in central banks.
  • Asia – Hong Kong: Report highlights responsible AI adoption in finance; Privacy Commissioner publishes generative AI use checklist.
  • Europe – Commission updates guidelines for responsible AI use in research; Report published on AI privacy risks and large language models; European Commission launches AI Continent Action Plan; European Commission publishes third draft of General-Purpose AI Code of Practice.
  • Middle East – Gulf state to use AI to write laws.
  • UK – Bank publishes report on AI in the financial system; AI Private Members’ Bill passes first reading.
  • US – White House issues new AI policies for federal agencies; Newly revised Texas AI Bill; House of Representatives’ new AI Bill; Department of Energy flags 16 potential sites for AI-energy projects; Institute publishes AI report on adversarial machine learning.

Global

Global: Report published on AI governance in central banks

On April 28, 2025, the Bank for International Settlements published a report on AI governance and implementation in central banks. The report highlights the potential benefits of AI, such as automation, productivity, and operational efficiency.

The report notes that central banks are increasingly adopting AI for tasks such as information retrieval, computer programming, and data analytics, although many are still in the early stages of adoption. Effective AI use requires robust governance to address privacy, cybersecurity, and ethical concerns.

The report also emphasizes the importance of data quality, collaboration with private actors, and the use of both open- and closed-source AI models.

Impact: Central banks should:

  • Balance the need for computational power with security and cost considerations.
  • Continue improving data management practices and advancing AI literacy to fully harness AI’s benefits.
  • Establish robust governance frameworks to address privacy, cybersecurity, and ethical concerns.
  • Focus on curating high-quality data and metadata to ensure transparency, traceability, and machine readability.
  • Develop specialized training programs and promote AI literacy among staff and wider audiences.

Asia

Hong Kong: Report highlights responsible AI adoption in finance

On April 9, 2025, the Hong Kong Institute for Monetary and Financial Research published a report on generative AI (GenAI) in financial services. The report highlights the steady adoption of GenAI among Hong Kong’s financial institutions.

The report draws on survey and interview findings. It captures market participants’ views on the current state of GenAI adoption among local financial institutions. It also outlines the expected trajectory of GenAI development in Hong Kong.

Currently, 75% of surveyed institutions have implemented or are piloting GenAI use cases. This figure is expected to rise to 87% within three to five years. Challenges include model accuracy, data privacy, security, and resource constraints. However, advancements in technology and regulatory engagement are expected to facilitate broader adoption.

Impact: The report aims to inform best practices for responsible GenAI innovation and adoption, as well as industry-wide capacity building. Financial institutions should:

  • Adapt and expand existing technical skills infrastructure to meet growing GenAI demands
  • Increase collaboration with regulators and developers
  • Support the harmonization of regulations across jurisdictions
  • Develop physical AI infrastructure to ensure a level playing field over the long run

Hong Kong: Privacy Commissioner publishes generative AI use checklist

On March 31, 2025, the Office of the Privacy Commissioner for Personal Data published a checklist to guide organizations on generative AI use by employees. The checklist helps organizations create policies that comply with the Personal Data (Privacy) Ordinance.

The checklist covers permissible AI tool use, input data guidelines, lawful and ethical use, data security measures, and violation policies. It also provides practical tips for supporting employees in using generative AI tools.

Impact: The checklist aims to ensure responsible AI use and enhance data protection. Businesses should:

  • Ensure existing policies align with the checklist
  • Educate staff on responsible AI use and data protection practices
  • Strengthen data security to protect personal information
  • Regularly audit AI use to ensure adherence to privacy laws

Europe

EU: Commission updates guidelines for responsible AI use in research

On April 10, 2025, the European Commission published updated guidelines for responsible AI use in research. The guidelines provide recommendations for researchers, research organizations, and funding bodies, setting out non-binding common directions on the responsible use of generative AI.

The guidelines emphasize four key principles:

  • Reliability: Ensuring research quality and addressing bias
  • Honesty: Promoting transparent and fair research practices
  • Respect: Safeguarding privacy, intellectual property, and cultural heritage
  • Accountability: Ensuring researchers take responsibility for their work from conception to publication

These principles are based on the European Code of Conduct for Research Integrity.

Impact: Researchers should use AI transparently and responsibly, respecting privacy and intellectual property. Research organizations are advised to promote ethical AI use and track AI system evolution. Funding bodies should ensure transparency in AI applications and monitor AI use.

EU: Report published on AI privacy risks and large language models

On April 10, 2025, the European Data Protection Board published a report on AI privacy risks and mitigations for large language models (LLMs). The report provides a thorough explanation of LLMs, detailing their development history and the principles behind their operation.

The report:

  • Provides a comprehensive risk management methodology to identify, assess, and mitigate privacy risks
  • Emphasizes continuous monitoring throughout the AI lifecycle
  • Outlines the roles and responsibilities of stakeholders under the EU AI Act and GDPR
  • Compares various LLM service models and their specific privacy challenges
  • Offers both quantitative and qualitative risk assessment strategies

Impact: Among other things, businesses should implement robust data protection and privacy measures, carry out thorough risk assessments regularly, maintain transparency and accountability in data processing, address biases and ensure human oversight, and manage vendors and third parties effectively.

EU: European Commission launches AI Continent Action Plan

On April 9, 2025, the European Commission (EC) launched its AI Continent Action Plan. The plan is part of a broader strategy to boost Europe’s competitiveness, security, and technological sovereignty in the AI domain.

The plan is structured around five key pillars:

  • Building a large-scale AI data and computing infrastructure and establishing AI factories across Europe to boost private investment and expand data center capacity.
  • Creating a single market for data and setting up data labs to organize high-quality data from diverse sources.
  • Launching strategies to integrate AI into industries like healthcare and the public sector.
  • Educating, training, and retaining AI experts within the EU while attracting skilled talent from abroad.
  • Providing guidelines, codes of practice, and establishing a service desk for businesses to navigate the AI Act.

Impact: The plan builds on the InvestAI initiative, announced by the EU in February 2025, which aims to mobilize €200 billion for AI investments. The EC also plans to introduce the Cloud and AI Development Act (Act) to boost private sector investment in cloud computing and data centers across the EU. You can respond to the EC’s consultation on the Act by June 4, 2025. There is also another consultation, closing on the same date, seeking input on priorities, challenges, and solutions for AI adoption. Alongside these consultations, the EC will be talking with industry leaders and public sector bodies to refine the Apply AI Strategy.

EU: European Commission publishes third draft of General-Purpose AI Code of Practice

On March 11, 2025, the European Commission published the third draft of the General-Purpose AI Code of Practice (Code) under the EU AI Act (Act), which it describes as “a more streamlined structure with refined commitments and measures”.

The new Code for general-purpose AI models (GPAI models) is structured into four parts: commitments, transparency, copyright, and safety and security. It provides guidelines for GPAI model providers to comply with the Act, particularly for models posing systemic risks, with a compliance deadline of August 2, 2025. Non-compliance could result in fines of up to 3% of annual global turnover or €15 million, and potential bans on the models. The draft Code also introduces a new Model Documentation Form to help GPAI model providers comply with the transparency requirements of the Act.

Impact: Compliance with the final-form Code will be vital for all in-scope providers, as it will provide a presumption of conformity with the relevant provisions of the Act until formal standards are established. The final version of the Code is expected to be published by August 2025.

Middle East

UAE: Gulf state to use AI to write laws

On April 20, 2025, the Financial Times reported that the UAE plans to use AI to write new laws. The AI will help create, review, and amend legislation, overseen by a new unit called the Regulatory Intelligence Office.

The UAE will use AI to track the impact of laws by creating a database of federal and local laws, including public sector data like court judgments and government services.

Impact: The proposed initiative aims to speed up lawmaking by 70%. However, researchers warn of potential challenges, such as the AI becoming inscrutable to users, biases inherited from its training data, and uncertainty over whether the AI will interpret laws the same way humans do.

UK

UK: Bank publishes report on AI in the financial system

On April 9, 2025, the Bank of England’s Financial Policy Committee (FPC) published a report on AI’s impact on the financial system. The report highlights AI’s potential to boost productivity and enhance decision-making in finance. It also identifies risks such as model errors, data biases, and systemic risks from common AI models.

Key focus areas of the report include AI in banks’ and insurers’ decision-making and AI-driven trading strategies in financial markets. The report also details the FPC’s approach to tracking AI developments through surveys and market intelligence. It highlights how collaboration with international bodies is crucial to safeguard the financial system against emerging threats.

Impact: The FPC will monitor AI developments in banks and insurers, engaging stakeholders to ensure safe AI adoption and support financial stability. Businesses should stay informed about AI regulations, adapt their risk management, and invest in operational resilience to manage disruptions effectively.

UK: AI Private Members’ Bill reintroduced to Parliament

On March 4, 2025, the Artificial Intelligence (Regulation) Private Members’ Bill (AI Bill) passed its first reading in the House of Lords. It was first introduced in November 2023 but did not progress at that time. Among other things, the AI Bill:

  • Provides for the establishment of an ‘AI Authority’ to oversee the regulatory approach to AI in the UK according to certain regulatory principles such as safety, fairness, and accountability.
  • Sets a requirement for businesses developing, deploying or using AI to have a designated AI officer responsible for the safe use of AI.
  • Requires that the AI Authority set up a program to engage the public in meaningful, long-term discussions about the opportunities and risks of AI.

Impact: It is rare for a Private Members’ Bill to become law, as such bills often lack the parliamentary time and support needed to progress through all the legislative stages. However, the debate around the AI Bill could influence the government’s overall approach to AI regulation.

U.S.

U.S.: White House Office of Management and Budget issues new AI policies for federal agencies

On April 7, 2025, the White House Office of Management and Budget (OMB) released two memoranda setting forth the most recent guidance for federal agencies concerning the procurement and use of AI: (1) OMB Memorandum M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (Use of AI Memo); and (2) OMB Memorandum M-25-22, Driving Efficient Acquisition of Artificial Intelligence in Government (Procurement of AI Memo).

The Use of AI Memo directs agencies to remove bureaucratic barriers and to accelerate federal use of AI by encouraging innovation and by investing in and supporting a competitive U.S. AI marketplace. It also directs agencies to appoint a Chief AI Officer.

The Procurement of AI Memo mandates timely, cost-effective AI acquisition, safeguarding taxpayer dollars, and ensuring compliance with privacy and civil rights. The Procurement of AI Memo aims to streamline how federal agencies procure and use AI technologies, emphasizing transparency, accountability, and risk mitigation in government AI deployments.

Both memos require agencies to update procedures, manage risks, and ensure transparency and accountability in AI implementation.

Impact: The Trump Administration appears to be adopting a “forward-leaning, pro-innovation and pro-competition mindset”, signalling a departure from the previous administration’s stance. The Federal Government will remove certain restrictions on the use of innovative AI within the Executive Branch. By adopting AI, the White House aims to make agencies more agile, cost-effective, and efficient, so that they can improve public services and enhance America’s global leadership in AI innovation.

Texas: Newly revised Texas AI bill dubbed the “most-watched artificial intelligence bill in America”

Texas Rep. Giovanni Capriglione filed the Texas Responsible AI Governance Act (TRAIGA) (H.B. 1709) on December 23, 2024. The bill is touted as one of the most comprehensive state-level AI bills yet introduced and aims to establish a regulatory framework for the use of AI systems by both private businesses and state agencies in Texas. The original bill was met with criticism for being overly burdensome and potentially crippling innovation. These criticisms led to the introduction of TRAIGA 2.0 on March 14, 2025, a revised version of the bill that scaled back many of the original’s more controversial provisions.

TRAIGA 2.0 reflects a shift toward a more balanced AI governance approach, addressing earlier criticisms of overreach while still prioritizing transparency, fairness, and public trust. The bill has passed the Texas House and is heading to the Texas Senate.

Impact: Passage of the bill could significantly influence how AI is developed and deployed in Texas by requiring transparency, fairness, and accountability in high-risk AI systems. It introduces mandatory algorithmic impact assessments, consumer rights to explanations and human review, and a regulatory sandbox to encourage innovation under oversight. If passed, it may serve as a model for other states seeking to balance AI innovation with public trust and ethical safeguards.

U.S.: House of Representatives introduces new AI bill

The Creating Resources for Every American to Experiment with Artificial Intelligence Act, known as the CREATE AI Act of 2025 (H.R. 2385), is a bipartisan bill introduced on March 26, 2025 in the U.S. House of Representatives by Representatives Jay Obernolte (R-CA) and Don Beyer (D-VA). It aims to establish the National Artificial Intelligence Research Resource (NAIRR), a shared infrastructure to expand access to computing power, datasets, and educational tools for AI research and development across the U.S. economy.

Key objectives of the bill include expanding equitable participation in AI development across academia, startups, and public institutions, expanding access to AI research tools, supporting responsible AI innovation, and fostering competition with respect to AI within the U.S.

Impact: The bill is designed to significantly broaden access to artificial intelligence research and development resources across the United States, potentially impacting the democratization of AI innovation (with the establishment of the NAIRR) and maintaining U.S. leadership in the global AI race by accelerating domestic breakthroughs with respect to AI. The CREATE AI Act of 2025 also promotes responsible AI use by embedding ethical and safety considerations into federally supported research initiatives.

U.S.: Department of Energy flags 16 potential sites for AI-energy projects

On April 3, 2025, the Department of Energy (DOE) identified 16 potential development sites for AI-energy projects. The DOE plans to co-locate data centers and new energy infrastructure on its land to support booming AI-driven power demand. The DOE published a Request for Information (RFI) (now closed) to assess industry interest in these projects. The sites offer conditions favorable to rapid data center construction, including existing energy infrastructure. Developers will also have the option to fast-track permitting for new energy generation, such as nuclear reactors.

The sites, spread across five states, include Idaho National Laboratory and facilities in Paducah, Kentucky, and Portsmouth, Ohio.

Impact: The deadline to respond to the DOE RFI was May 7, 2025. The DOE sought input from data center developers, energy developers, and the broader public to further advance this partnership. Feedback from the RFI will help foster collaborations between private companies and public entities and support the construction of AI facilities at specific DOE sites. The goal is to have these AI infrastructures operational by the end of 2027.

U.S.: Institute publishes AI report on adversarial machine learning

On March 24, 2025, the National Institute of Standards and Technology (NIST) published an AI report on adversarial machine learning (AML). The report highlights the need for secure, robust AI systems as they become essential to the digital economy and daily life.

The report:

  • Provides a taxonomy and defines terminology in AML
  • Includes key machine learning methods, life cycle stages of attacks, and attacker goals, objectives, capabilities, and knowledge
  • Identifies current challenges in AI system life cycles and describes methods for mitigating and managing attack consequences

Impact: The report is aimed at those responsible for designing, developing, deploying, evaluating, and governing AI systems. It aims to provide voluntary guidance on identifying, addressing, and managing risks associated with AML. NIST will update the report annually, collaborating with US and UK AI institutes, industry, and academia.

Co-authored by Jon Botham (Knowledge).

The materials on the Eversheds Sutherland website are for general information purposes only and do not constitute legal advice. While reasonable care is taken to ensure accuracy, the materials may not reflect the most current legal developments. Eversheds Sutherland disclaims liability for actions taken based on the materials. Always consult a qualified lawyer for specific legal matters. To view the full disclaimer, see our Terms and Conditions or Disclaimer section in the footer.


