Policy quarterly update - Q1 2023
The EU AI Act
In December 2022, the EU Council adopted its common position on the AI Act. The latest compromise text (PDF) and the December press release set out the highlights and amendments. Negotiations with the EU Parliament are currently underway, with a view to adopting the Act by the end of 2023.
The EU AIA is the most ambitious and comprehensive AI regulation to date. It includes definitions of AI, a classification of AI systems into four risk categories, obligations for providers and users of AI systems, and fines for parties that don’t comply. The EU AIA is relevant to companies outside the EU too; Recital 11 states: ‘this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union.’
It remains to be seen what the regulation will mean for generative AI and whether the EU AIA will need to be updated to address it more specifically.
Regardless, the EU AIA is the first regulation of its kind anywhere and will impact AI providers and users beyond the EU.
New AI risk management ISO/IEC standard
Published in February, ISO/IEC 23894 offers guidance on managing risks connected to the development and use of AI. The new standard is aimed at organisations across all sectors. It is intended to be used in connection with the risk management standard ISO 31000:2018; however, it focuses on risk management and good practices specifically in the context of AI. Importantly, the new standard offers concrete examples of effective risk management implementation.
New privacy-by-design ISO standard
The new ISO 31700 privacy-by-design standard was published at the end of January. The standard establishes high-level privacy-by-design requirements for consumer goods and services. The aim is that the individual "need not bear the burden of striving for protection when using a consumer product." Instead, consumer privacy needs should be considered in the initial design and throughout the product lifecycle, ensuring that personally identifiable information is processed in a way that doesn't compromise an individual's privacy.
BS ISO/IEC 5392 Information technology - Artificial intelligence - Reference architecture of knowledge engineering
The draft BS ISO/IEC 5392 standard defines a reference architecture for Knowledge Engineering (KE) in AI. The reference architecture describes KE roles, activities, constructional layers, and components, along with their relationships both among themselves and with other systems, from systemic, user, and functional views. The document also defines KE terms. The draft is open for comments until 29 March.
Guidance and best practices
UN PET guidelines
In February, the United Nations published a guide on PETs for official statistics. It focuses on PETs that protect data during analysis and dissemination for official statistics. Targeting professionals from National Statistical Offices, the guide offers practical considerations for choosing PETs appropriate to the specific problems they can solve. The guide covers seven PETs (secure multiparty computation, homomorphic encryption, differential privacy, synthetic data, distributed learning, zero-knowledge proofs, and trusted execution environments) and is complete with detailed case studies and an overview of current standards-making activities.
NIST AI Risk Management Framework
In January, NIST released its AI Risk Management Framework. The AI RMF 1.0 is intended to help manage AI-related risks to individuals, organisations, and society. The framework outlines the characteristics of trustworthy AI systems and provides guidance on framing the risks related to them. It also provides guidance on how to govern, map, measure and manage those risks.
ICO draft guide
Earlier guidance efforts include the Information Commissioner’s Office (ICO) draft guidance on PETs, published in September 2022. The document discusses seven PETs (homomorphic encryption, secure multi-party computation, federated learning, trusted execution environments, zero-knowledge proofs, differential privacy, and synthetic data) and outlines how they can help with data protection compliance.
Royal Society report
The January report by The Royal Society takes a look at six PETs (trusted execution environments, homomorphic encryption, secure multi-party computation, federated learning, differential privacy, and privacy-preserving synthetic data) and summarises recommendations for future development. The report explores barriers to PETs adoption and recommends the development of PETs-specific standards. It also offers use cases that illustrate the potential benefits of PETs and how they may be able to help construct responsible data governance systems.
Differential Privacy watch
In the USA: In January, the Electronic Privacy Information Center (EPIC) urged the National Institute of Standards and Technology (NIST) to endorse the adoption of differential privacy in NIST's paper on de-identifying government data sets. EPIC has long advocated for the adoption of differential privacy. In 2021 it called it "the only credible technique to protect against [reidentification] attacks, including those that may be developed in the future."
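To make the technique concrete, here is a minimal, illustrative sketch of the Laplace mechanism, the textbook construction behind differential privacy. The function names and parameters below are ours for illustration only; they are not drawn from EPIC's or NIST's materials.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-transform sampling from the Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one record
    # changes the count by at most 1. Adding Laplace(1/epsilon) noise to
    # the true count therefore satisfies epsilon-differential privacy
    # for this single query.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: a noisy count of even-numbered records.
noisy = dp_count(range(100), lambda r: r % 2 == 0, epsilon=1.0)
```

The key point, and the reason EPIC argues the guarantee holds against future attacks, is that the protection comes from calibrated noise and a provable bound, not from assumptions about what an attacker knows.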
New Department for Science, Innovation and Technology:
The UK government is moving responsibility for data policy from DCMS to the newly created Department for Science, Innovation and Technology. The new department's aim is to ensure the development of science and technology, and it will focus on five technologies (AI, quantum, engineering biology, semiconductors, and telecoms), along with life sciences and green sciences. The effects of the move on AI policy remain to be seen.
Synthetic Data Industry Connections Group
The need for synthetic data standardisation and best practices is increasingly recognised. At Hazy, we are eager to see guidance develop in the industry and are part of a recently set up Synthetic Data Industry Connections Group at IEEE. The group’s aim is to exchange best practices, publish guides, and establish a standard-setting group in the near future.
In January, in response to a government request for feedback on how it can better regulate emerging technologies, we suggested that it speed up the introduction of best practices and/or standards for synthetic data.
In February, the Financial Conduct Authority (FCA) published its feedback statement on the synthetic data call for input. This is the outcome of the FCA's call for input, to which we responded in 2022.
Government’s white paper on AI governance
The Office for Artificial Intelligence is working on a white paper on AI governance. It will establish the UK’s position on governing and regulating AI, and is likely to follow the pro-innovation, risk-based approach set out in the government’s policy paper from July 2022. No publication date has been announced, but rumours in the AI space have it coming out in Q2 of this year.
CDEI AI Assurance Repository
Following the report on barriers and enablers of AI assurance published in December, the CDEI are now looking to create an AI assurance repository. This will be a portfolio of case studies of AI assurance good practices. There is still time to submit case studies.
FCA Synthetic Data Expert Group
The FCA is looking for members to join its Synthetic Data Expert Group. It will be a sub-group to the Innovation Advisory Group that was launched in February. Applications are open until 8 March.
PETs prize winning solutions
The winning solutions to the first set of prize challenges related to PETs will be announced this spring. The aim of the challenges is to accelerate the adoption and development of PETs. The winners of Phase 1 were announced in November 2022. You can find more information on the prize challenges here.
Read Diana's previous blog on the latest developments in the PETs space.