Regulatory Alert: Guidelines for Responsible and Trustworthy Artificial Intelligence in the Financial Technology Industry in Indonesia

Published date: 26 March 2024
Subject Matter: Finance and Banking, Technology, Financial Services, New Technology, Fintech
Law Firm: Anggraeni and Partners
Author: Ms Setyawati Fitrianggraeni, Sri Purnama and Jericho Xavier

Setyawati Fitrianggraeni, Sri Purnama, Jericho Xavier Ralf1

BACKGROUND

Artificial Intelligence (AI), a blend of computer science, machine learning, and big data, is increasingly being adopted in various industries, especially in financial technology (Fintech). This shift towards AI-driven processes aims to enhance business efficiency and transaction speed. However, it also introduces unprecedented risks, necessitating a behavioural framework or code of conduct to optimise AI usage and mitigate risks.

On 24 November 2023 at the Fifth Indonesia Fintech Summit & Expo 2023, Indonesia's Financial Services Authority (Otoritas Jasa Keuangan or OJK), in partnership with several other organisations,2 launched its code-of-ethics guidelines (Guidelines) to work towards responsible and trustworthy artificial intelligence within the Fintech sector.3 This framework aligns with global standards, including the Organisation for Economic Co-operation and Development (OECD) AI Principles4 and the

KEY PRINCIPLES

The fundamental principles for the use of Responsible and Trustworthy AI in Fintech include:

  1. Alignment with Pancasila:7 Ensuring AI development and usage align with national interests and ethical responsibilities based on Pancasila values.8
  2. Beneficial: AI applications should add value to business operations, enhance consumer welfare, improve decision-making abilities, reduce inequality, increase financial inclusion, and support sustainable economic growth.9
  3. Fair and Accountable: AI applications must be fair and non-discriminatory, protect consumer privacy, and prevent harm. There should be a risk mitigation framework to ensure relevant and proportional contributions of AI applications.10
  4. Transparent and Explicable: Fintech companies must have control over AI processing and be able to explain it to consumers, from input to output, including risk and mitigation strategies.11
  5. Robustness and Security: AI applications should be robust, secure against cyber-attacks, and developed by competent or certified experts in AI. Continuous testing and validation are required for technical processing and security aspects.12

IMPLICATIONS

These guidelines require Fintech companies to adopt ethical, fair, transparent, accountable, robust, and secure AI practices. This involves regular updates, testing, and validation of AI models and algorithms, ensuring data integrity and privacy, and maintaining scalability and human involvement in decision-making processes.

CONSIDER

Fintech firms...
