Artificial intelligence (AI) driven technologies are increasingly present in our daily lives, including in home appliances and social media applications. Ever smarter and more sophisticated, they are also used by public services in the context of justice, welfare and healthcare to solve problems and make decisions independently of human intervention.
This brings great opportunities but also involves risks with real and serious consequences for individuals, including potential breaches of their human rights. AI systems should therefore be controlled and moderated by public authorities.
The publication “Unboxing Artificial Intelligence: 10 steps to protect human rights” is aimed at those involved in the development or application of AI systems.
Recommendations include regular monitoring of AI systems to ensure they do not undermine any human rights, as well as evaluating how decision makers collect and influence the inputs, and interpret the outputs, of AI systems.
Oversight bodies should be set up to handle complaints from individuals negatively impacted by a decision informed solely or significantly by an AI system.
Moreover, public authorities should track the number and types of jobs created and lost as a result of AI developments and adapt education curricula accordingly.
The publication is available in English and French.
Source: Council of Europe (CoE)