News – 25.06.2025

The Office of the Data Protection Ombudsman has published guidelines on considering data protection in the development and use of artificial intelligence systems

Main contact

Sonja Heinonen-Tilli

Senior Associate, Attorney at Law

+358 50 4682 182

The Office of the Data Protection Ombudsman has prepared guidelines on how organizations should take into account the requirements of data protection legislation when developing or deploying artificial intelligence systems. If the AI system involves the processing of personal data, data protection legislation must be followed in its development and use.

An organization must assess the data protection risks of an AI system before processing personal data and, based on that assessment, decide on the security measures or other actions needed to manage the risks. Under the General Data Protection Regulation, a data protection impact assessment must be carried out in particular when the planned processing of personal data is likely to pose a high risk to people’s rights and freedoms. The development of AI systems often meets the criteria for a high risk. A data protection impact assessment helps the organization comply with the requirements of data protection legislation.

An organization must choose an appropriate legal basis for the processing of personal data, as defined in the General Data Protection Regulation, when developing or using an AI system. A legal basis is also needed when personal data is used to train the AI system, and it must already exist when the personal data is collected and used for development and training. Legal bases for processing under the General Data Protection Regulation include, for example, the consent of the data subject, a contract, a legal obligation, and a legitimate interest of the controller or a third party.

Additionally, organizations must pay attention to the data protection principles stipulated in the General Data Protection Regulation, such as data minimization and purpose limitation. An organization must always carefully define which personal data is necessary and for what purposes it is used in the AI system. When personal data is no longer needed for the development or use of the AI system, it must be anonymized or deleted. The AI system must also be designed so that data subjects can exercise their rights.

The processing of personal data in an AI system must be communicated to the data subjects openly and in an understandable manner. Data subjects must also be provided with the information required by the General Data Protection Regulation about the processing of their personal data, such as the purpose of the processing and how long the data will be retained. The organization developing or using the AI system is responsible for complying with the requirements of data protection legislation and must be able to demonstrate compliance, for example by documenting data protection impact assessments and other measures taken.

The use of AI systems is also regulated by the EU AI Act, which entered into force in 2024. It defines, for example, prohibited uses of AI and sets requirements for high-risk AI systems, in particular systems that can have a significant harmful impact on people’s safety or fundamental rights. High-risk AI systems may be placed on the market or used only if they meet the requirements laid down in the AI Act.


This article is also available in Finnish.