Guiding Principles For AI Development

A meeting of data protection authorities from around the world has highlighted the development of artificial intelligence and machine learning technologies (AI) as a global phenomenon with the potential to affect all of humanity. The authorities called for a coordinated international effort to develop common governance principles on the development and use of AI, in accordance with ethics, human values and respect for human dignity.

The 40th International Conference of Data Protection and Privacy Commissioners (conference) released a declaration on ethics and data protection in artificial intelligence (declaration). While recognising that AI systems may bring significant benefits for users and society, the conference noted that AI systems often rely on the processing of large quantities of personal data for their development. In addition, it noted that some data sets used to train AI systems have been found to contain inherent biases, resulting in decisions which unfairly discriminate against certain individuals or groups.

To counter this, the declaration endorses six guiding principles as its core values to preserve human rights in the development of AI. In summary, the guiding principles state:

  1. Fairness principle

    AI should be designed, developed and used in respect of fundamental human rights and in accordance with the fairness principle. The design, development and use of AI should have regard to individuals' reasonable expectations in relation to the use of personal data and should consider the impact of AI on society at large. Systems should be developed in a way that facilitates human development and does not obstruct or endanger it.

  2. Continued attention and vigilance

    Continued attention, vigilance and accountability for the potential effects and consequences of AI should be ensured by promoting accountability of stakeholders; fostering collective and joint responsibility; investing in awareness and education; and establishing demonstrable governance processes.

  3. Systems transparency and intelligibility

    AI systems transparency and intelligibility should be improved by investing in scientific research on explainable artificial intelligence; making organisations' practices more transparent; ensuring individuals are informed appropriately when they are interacting with AI; and providing adequate information on the purposes and effects of AI systems.

  4. Ethics by design

    An "ethics by design" approach should be adopted. AI should be designed and...
