
Human-centric AI

The increasingly widespread use of artificial intelligence (also referred to as AI) by both public authorities and private companies offers the potential to create added value for society across almost all industries, as well as in socially important fields such as healthcare, education, transport, public administration, security, and environmental protection. Alongside this potential, however, it is important to recognize that the use of AI entails significant challenges arising from its growing ability to make complex decisions and perform actions without human involvement or control. The use of artificial intelligence may therefore have substantial implications for fundamental rights, democracy, and the rule of law, as well as for the social and economic balance of society.
One of the core principles of Estonia’s digital state is that it must be human-centric: the use of digital solutions is not an end in itself, but a means to enhance people’s well-being. This requires that the development and use of artificial intelligence consistently consider and promote values such as human dignity, fairness, equal treatment, privacy, and security, ensuring that AI systems operate in alignment with human interests. At the same time, it is essential that public trust in AI solutions is maintained and strengthened. Human-centricity and trustworthiness are key components for realizing the social and economic benefits that artificial intelligence can offer and for ensuring that innovation remains responsible and sustainable.

Cloud Services, On-Premises Solutions, and Risk Management

Not all AI and software solutions are alike. It is important to distinguish between cloud products and solutions under an institution's own control, because the risks, the responsibilities, and the assessments required before adoption depend on this distinction.

Cloud Product

A service or software that is not hosted under the institution's own control. An AI solution that is not operated on the customer's premises is also a cloud product.

On-Premises Solution

Software or a service that is hosted and managed within the institution's own infrastructure or under its direct control.

Measures to Ensure Human-Centric AI

Ensuring that AI solutions remain human-centric and trustworthy requires more than merely imposing legal obligations through legislation. In light of the rapid development of artificial intelligence and the continuous emergence of new use cases, it is essential to consistently advance approaches for properly assessing potential AI-related risks and challenges, as well as identifying effective measures to mitigate them. For this reason, a range of support services and activities have been developed in Estonia to promote human-centric artificial intelligence, led by the Ministry of Justice and Digital Affairs and its partners.

  • AI Support Toolbox - Ongoing support is provided throughout the initiation, implementation, and deployment of AI projects in order to strengthen the baseline capacity of public authorities to launch projects with an AI component, to introduce data-driven decision-making processes, or to enhance existing AI capabilities. An overview of the available services can be found on this page.

  • Algorithmic Bias Risk Management Tool - This tool focuses on managing bias-related risks in algorithmic and AI systems and consists of three components:

    • guidelines describing the nature and background of AI systems and algorithmic bias, as well as approaches for identifying and mitigating bias;

    • a methodology providing detailed instructions for setting up and conducting a risk management process; and

    • a workbook that facilitates the documentation of information required for risk management.

    These materials were developed as part of the EquiTech project (Project 101144709 – Improving response to risks of discrimination, bias and intolerance in automated decision-making systems to promote equality).

  • Algorithmic Bias Risk Management e-course - This e-course addresses how to identify potential sources of bias and the risks arising from them in order to prevent discrimination- and bias-related risks in professional practice, while ensuring the trustworthiness of digital solutions and public confidence. The course is available in both Estonian and English. The materials were developed within the EquiTech project.

  • AI Register - To increase transparency in the public sector and to promote inter-institutional cooperation and the reuse of algorithms, information on AI use cases is collected through a dedicated form. The collected data will be published shortly in the AI Register currently under development. Until then, this page provides short overviews of artificial intelligence use in the Estonian public sector. In recent years, AI-based solutions have been implemented in the Estonian public sector on approximately 200 occasions.
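The bias-identification step addressed by the risk management tool and e-course above can be made concrete with a small example. The sketch below computes the statistical parity difference, one widely used bias metric; it is an illustration only, and the function name, data, and group labels are hypothetical rather than part of the official methodology.

```python
# Illustrative only: statistical parity difference, one common measure of
# algorithmic bias. A value near 0 means both groups receive positive
# outcomes at similar rates; large absolute values suggest possible bias.

def statistical_parity_difference(outcomes, groups, positive=1):
    """Difference in positive-outcome rates between two groups.

    outcomes: list of model decisions (e.g. 1 = benefit granted)
    groups:   parallel list of group labels (exactly two distinct values)
    """
    labels = sorted(set(groups))
    if len(labels) != 2:
        raise ValueError("expected exactly two groups")
    rates = []
    for label in labels:
        decisions = [o for o, g in zip(outcomes, groups) if g == label]
        rates.append(sum(1 for o in decisions if o == positive) / len(decisions))
    return rates[0] - rates[1]

# Hypothetical data: group "a" has a 0.8 positive rate, group "b" has 0.4
outcomes = [1, 1, 1, 1, 0, 1, 0, 1, 0, 0]
groups = ["a"] * 5 + ["b"] * 5
print(statistical_parity_difference(outcomes, groups))  # difference of 0.4
```

A risk management process would compare such a metric against a threshold chosen for the system's context and document the result, for example in the workbook described above.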

 

Estonia also actively participates in international cooperation to promote human-centric and ethical artificial intelligence, working with organizations such as the European Commission, the Council of Europe, UNESCO, OECD, and the Freedom Online Coalition.
 

Characteristics of Human-Centric AI

To achieve human-centricity and trustworthiness, artificial intelligence must above all be grounded in the following principles:

  • Respect for human dignity and autonomy: AI solutions must be developed and used in a manner that respects human dignity, including ensuring that they do not undermine or diminish human independence and the right to self-determination. AI must not coerce or manipulate people into actions that are contrary to their interests. Individuals must be appropriately informed and retain the freedom to refuse interaction with AI systems. Human oversight and control are essential to achieving these goals: AI solutions must be designed and implemented in a way that allows people to intervene in the system’s operation in order to prevent undesirable impacts.

  • Equality and fairness: AI solutions must respect human diversity and avoid unjust bias or discrimination based on gender, nationality, age, or other characteristics. The use of AI should ensure fair access to the benefits it provides, including accessibility for persons with disabilities. This requires, among other things, the involvement of affected stakeholders in the development of AI systems and consideration of their needs.

  • Privacy and personal data protection: The development and use of AI must ensure a high level of protection of individual privacy, including personal data, throughout the entire system lifecycle. This is a key challenge, as AI systems often rely on large volumes of personal data for training or decision-making. In addition, the advanced analytical capabilities of AI may enable far-reaching inferences about individuals based on behavioral patterns and other data, including conclusions about health, sexual orientation, physical characteristics, political views, and other sensitive attributes. Such inferences may constitute a significant intrusion into privacy and create risks of unlawful discrimination.

  • Robustness and safety: AI must operate accurately, securely, safely, and reliably in order to prevent harm to people and the environment. Accuracy requires that AI decisions achieve an appropriate level of correctness and minimize errors, taking into account the purpose of the system and the potential impact of its decisions. In addition, AI systems must be protected against misuse, (cyber)attacks, and other vulnerabilities. Unintended harmful effects must also be avoided, meaning that AI should be resilient even when used in unintended or non-purposeful ways or when encountering unexpected situations.

  • Transparency: The core operating principles, objectives, and impacts of AI solutions must be understandable and subject to scrutiny. Where AI decisions have a significant impact on individuals, those decisions should be explainable to an appropriate extent. Transparency helps build trust, improve the quality and accuracy of AI, and identify risks or shortcomings in order to prevent or minimize harm to people and the environment. In addition to documenting and auditing AI processes, transparency also requires providing individuals with the relevant knowledge and tools needed to sufficiently understand and interact with AI systems.

  • Accountability: It is essential that AI developers, providers, and users are accountable for the lawful, ethical, and proper use of AI systems and for the consequences they generate. This requires the establishment of clear rules defining the specific responsibilities of the different actors, as well as ensuring that effective remedies are available in cases where risks materialize or harm occurs.

  • Societal and environmental well-being: The development and use of AI must take into account its broader impact on society and the environment, and measures should be taken where necessary to mitigate negative effects. At the same time, the state should encourage the development and deployment of AI solutions that contribute to addressing global challenges such as climate change and environmental issues, or that promote societal values and interests such as justice, democracy, sustainability, and overall quality of life.
     

On What Basis Should a Service's Trustworthiness Be Assessed?

Compliance with Requirements

Check the service's compliance with information security and data protection requirements, and whether the data is processed within the European Union.

Acceptable Risks

Assess whether the risks associated with the service can be mitigated or accepted by the institution.

Service Provider Background

Verify that the service provider and related parties do not pose an unacceptable risk.

Access to Data

Check who has access to the data and under what conditions the data is transferred and processed.
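The four assessment criteria above can be recorded in a simple checklist that yields a first-pass risk score. The sketch below is purely illustrative: the field names and the scoring thresholds are our own assumptions for demonstration, not an official assessment methodology; they merely mirror the green/yellow risk scores used on this page.

```python
# Illustrative checklist for the four assessment criteria above.
# Field names and green/yellow/red thresholds are assumptions for
# demonstration, not an official assessment methodology.
from dataclasses import dataclass


@dataclass
class ServiceAssessment:
    name: str
    meets_security_and_data_protection_requirements: bool  # compliance
    data_processed_in_eu: bool                             # compliance
    risks_acceptable_or_mitigable: bool                    # acceptable risks
    provider_background_acceptable: bool                   # provider background
    data_access_conditions_clear: bool                     # access to data

    def risk_score(self) -> str:
        checks = [
            self.meets_security_and_data_protection_requirements,
            self.data_processed_in_eu,
            self.risks_acceptable_or_mitigable,
            self.provider_background_acceptable,
            self.data_access_conditions_clear,
        ]
        failed = sum(not c for c in checks)
        if failed == 0:
            return "🟩"  # no open issues
        if failed <= 2:
            return "🟨"  # usable with caution and mitigation
        return "🟥"      # unacceptable without substantial mitigation


# Hypothetical service that passes every check except EU data processing
example = ServiceAssessment(
    name="ExampleAI",
    meets_security_and_data_protection_requirements=True,
    data_processed_in_eu=False,
    risks_acceptable_or_mitigable=True,
    provider_background_acceptable=True,
    data_access_conditions_clear=True,
)
print(example.name, example.risk_score())  # prints: ExampleAI 🟨
```

In practice such a score is only a starting point for orientation; a real assessment weighs the criteria against the institution's own risk tolerance and the sensitivity of the data involved.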


Evaluated Products and Services

Listed below are the most widely used AI tools and services for which a trustworthiness assessment has been prepared, together with a risk score for initial orientation.
The assessments can be accessed by clicking on the product/service name.

🟨 ChatGPT

🟩 Atlassian AI

🟩 Texta

🟨 Adobe Firefly

🟩 Gemini

🟩 Azure AI

🟩 Notion

