
Human-centric AI

The increasingly widespread use of artificial intelligence (AI) by both public authorities and private companies offers the potential to create added value for society across almost all industries, as well as in socially important fields such as healthcare, education, transport, public administration, security, and environmental protection. Alongside this potential, however, it is important to recognize that the use of AI entails significant challenges arising from its growing ability to make complex decisions and perform actions without human involvement or control. The use of artificial intelligence may therefore have substantial implications for fundamental rights, democracy, and the rule of law, as well as for the social and economic balance of society.
One of the core principles of Estonia’s digital state is that it must be human-centric: the use of digital solutions is not an end in itself, but a means to enhance people’s well-being. This requires that the development and use of artificial intelligence consistently consider and promote values such as human dignity, fairness, equal treatment, privacy, and security, ensuring that AI systems operate in alignment with human interests. At the same time, it is essential that public trust in AI solutions is maintained and strengthened. Human-centricity and trustworthiness are key components for realizing the social and economic benefits that artificial intelligence can offer and for ensuring that innovation remains responsible and sustainable.

Legal Framework

The use of artificial intelligence and automated decision-making systems in the public sector does not take place in a legal vacuum. Both in Estonia and more broadly within the European Union, a number of laws and principles define the framework governing the use of these technologies, particularly with regard to preventing discrimination, protecting fundamental rights, and processing data. Even today, the development and use of artificial intelligence must be based on the Constitution of the Republic of Estonia as well as sector-specific legislation addressing personal data protection, cybersecurity, consumer rights, administrative procedures, and other key issues.
The most comprehensive regulatory instrument is the European Union Artificial Intelligence Act (Regulation (EU) 2024/1689). It establishes harmonised quality and transparency requirements for high-risk artificial intelligence systems used by both public authorities and private companies across all EU Member States. Providers of high-risk AI systems, including public sector bodies, must comply with a range of obligations, including:

  • Risk management: Risks, including those related to bias and discrimination, must be identified, assessed, and mitigated.

  • Data quality and data governance: Training, validation, and testing datasets must be relevant, representative, and, to the extent possible, free of errors and complete, and appropriate measures must be taken to reduce bias.

  • Technical documentation: Systems must be accompanied by up-to-date documentation demonstrating compliance with regulatory requirements.

  • Transparency and user information: Users must be provided with clear and sufficient information about the system’s capabilities, limitations, and expected performance.

  • Human oversight: Systems must be designed in a way that enables human oversight, including the ability to intervene or override decisions where necessary.

  • Accuracy, robustness, and cybersecurity: Systems must achieve an appropriate level of accuracy, remain resilient to errors, faults, and unexpected conditions, and be protected against unauthorized access, manipulation, and cyberattacks.
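To make the risk-management and bias obligations above more concrete, the sketch below computes one widely used bias indicator, the demographic parity difference, over a system's decisions. This is a hypothetical illustration only: the group labels, toy decisions, and the 0.1 review threshold are invented for this example and are not prescribed by the AI Act.

```python
def demographic_parity_difference(decisions, groups):
    """Largest gap in positive-decision rates between any two groups."""
    rates = {}
    for g in set(groups):
        group_decisions = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(group_decisions) / len(group_decisions)
    return max(rates.values()) - min(rates.values())

# Toy data: 1 = benefit granted, 0 = benefit denied
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"Demographic parity difference: {gap:.2f}")

# A risk-management process might flag the system for human review
# when the gap exceeds a tolerance agreed in advance.
if gap > 0.1:
    print("Flag: positive-decision rates differ notably across groups")
```

A single metric like this is never sufficient on its own; in practice a risk-management process would combine several fairness indicators with qualitative assessment of the system's context and impact.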


It is important to note that individuals have the right to contact the authority that made an automated decision in order to obtain an explanation of how the decision was made and to request that the decision be reviewed by a human.
 

Measures to Ensure Human-Centric AI

Ensuring that AI solutions remain human-centric and trustworthy requires more than merely imposing legal obligations through legislation. In light of the rapid development of artificial intelligence and the continuous emergence of new use cases, it is essential to consistently advance approaches for properly assessing potential AI-related risks and challenges, as well as identifying effective measures to mitigate them. For this reason, a range of support services and activities have been developed in Estonia to promote human-centric artificial intelligence, led by the Ministry of Justice and Digital Affairs and its partners.

  • AI Support Toolbox - Ongoing support is provided throughout the initiation, implementation, and deployment of AI projects, in order to strengthen the baseline capacity of public authorities to launch projects with an AI component, to introduce data-driven decision-making processes, or to enhance existing AI capabilities. An overview of the available services can be found on this page.

  • Algorithmic Bias Risk Management Tool - The tool focuses on managing bias-related risks in algorithmic and AI systems and consists of three components:

    • guidelines describing the nature and background of AI systems and algorithmic bias, as well as approaches for identifying and mitigating bias;

    • a methodology providing detailed instructions for setting up and conducting a risk management process; and

    • a workbook that facilitates the documentation of information required for risk management.

  These materials were developed as part of the EquiTech project (Project 101144709 – Improving response to risks of discrimination, bias and intolerance in automated decision-making systems to promote equality).

  • Algorithmic Bias Risk Management e-course - This e-course addresses how to identify potential sources of bias and the risks arising from them in order to prevent discrimination- and bias-related risks in professional practice, while ensuring the trustworthiness of digital solutions and public confidence. The course is available in both Estonian and English. The materials were developed within the EquiTech project.

  • AI Register - To increase transparency in the public sector and to promote inter-institutional cooperation and the reuse of algorithms, information on AI use cases is collected through a dedicated form. The collected data will be published shortly in the AI Register currently under development. Until then, this page provides short overviews of artificial intelligence use in the Estonian public sector. In recent years, AI-based solutions have been implemented in the Estonian public sector on approximately 200 occasions.

 

Estonia also actively participates in international cooperation to promote human-centric and ethical artificial intelligence, working with organizations such as the European Commission, the Council of Europe, UNESCO, OECD, and the Freedom Online Coalition.
 

Characteristics of Human-Centric AI

To be human-centric and trustworthy, artificial intelligence must above all be grounded in the following principles:

  • Respect for human dignity and autonomy: AI solutions must be developed and used in a manner that respects human dignity, including ensuring that they do not undermine or diminish human independence and the right to self-determination. AI must not coerce or manipulate people into actions that are contrary to their interests. Individuals must be appropriately informed and retain the freedom to refuse interaction with AI systems. Human oversight and control are essential to achieving these goals: AI solutions must be designed and implemented in a way that allows people to intervene in the system’s operation in order to prevent undesirable impacts.

  • Equality and fairness: AI solutions must respect human diversity and avoid unjust bias or discrimination based on gender, nationality, age, or other characteristics. The use of AI should ensure fair access to the benefits it provides, including accessibility for persons with disabilities. This requires, among other things, the involvement of affected stakeholders in the development of AI systems and consideration of their needs.

  • Privacy and personal data protection: The development and use of AI must ensure a high level of protection of individual privacy, including personal data, throughout the entire system lifecycle. This is a key challenge, as AI systems often rely on large volumes of personal data for training or decision-making. In addition, the advanced analytical capabilities of AI may enable far-reaching inferences about individuals based on behavioral patterns and other data, including conclusions about health, sexual orientation, physical characteristics, political views, and other sensitive attributes. Such inferences may constitute a significant intrusion into privacy and create risks of unlawful discrimination.

  • Robustness and safety: AI must operate accurately, securely, safely, and reliably in order to prevent harm to people and the environment. Accuracy requires that AI decisions achieve an appropriate level of correctness and minimize errors, taking into account the purpose of the system and the potential impact of its decisions. In addition, AI systems must be protected against misuse, (cyber)attacks, and other vulnerabilities. Unintended harmful effects must also be avoided, meaning that AI should remain resilient even when used in unintended or improper ways or when encountering unexpected situations.

  • Transparency: The core operating principles, objectives, and impacts of AI solutions must be understandable and subject to scrutiny. Where AI decisions have a significant impact on individuals, those decisions should be explainable to an appropriate extent. Transparency helps build trust, improve the quality and accuracy of AI, and identify risks or shortcomings in order to prevent or minimize harm to people and the environment. In addition to documenting and auditing AI processes, transparency also requires providing individuals with the relevant knowledge and tools needed to sufficiently understand and interact with AI systems.
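The documentation and auditing of AI processes mentioned above can be made concrete with a simple decision log: each automated decision is recorded together with its inputs and a short human-readable explanation, so that it can later be explained to the affected person and reviewed by an auditor. The sketch below is a minimal illustration; the record fields, system name, and explanation text are invented for this example and are not prescribed by any regulation.

```python
import json
from datetime import datetime, timezone

def record_decision(log, system_id, inputs, decision, explanation):
    """Append an auditable, timestamped record of one automated decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_id,
        "inputs": inputs,
        "decision": decision,
        "explanation": explanation,
    }
    # Store as JSON so records can be archived and searched later.
    log.append(json.dumps(entry))
    return entry

audit_log = []
entry = record_decision(
    audit_log,
    system_id="benefit-eligibility-v1",       # hypothetical system name
    inputs={"income": 950, "household_size": 3},
    decision="granted",
    explanation="Income below the eligibility threshold for household size 3",
)
print(entry["decision"])
```

A real deployment would also need access controls, retention rules, and safeguards so that the log itself does not become a privacy risk, since it contains personal data.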

  • Accountability: It is essential that AI developers, providers, and users are accountable for the lawful, ethical, and proper use of AI systems and for the consequences they generate. This requires the establishment of clear rules defining the specific responsibilities of the different actors, as well as ensuring that effective remedies are available in cases where risks materialize or harm occurs.

  • Societal and environmental well-being: The development and use of AI must take into account its broader impact on society and the environment, and measures should be taken where necessary to mitigate negative effects. At the same time, the state should encourage the development and deployment of AI solutions that contribute to addressing global challenges such as climate change and environmental issues, or that promote societal values and interests such as justice, democracy, sustainability, and overall quality of life.
     
