Are Ethical Safeguards for AI Adequate?

As the development of AI gains momentum and countries the world over vie to lead in the technology, what can we do to ensure it is used responsibly? By Tong Suit Chee

The push to advance artificial intelligence (AI) has gained significant momentum in recent years as research and development on the technology expands rapidly. Around the world, more companies are harnessing its capabilities, while countries from the United States to China have launched initiatives to support its growth and to prepare for a world that will be markedly changed by it.

Singapore’s push for AI

Singapore is no different: it is gearing up to be a global leader in AI as part of its Smart Nation push. The initiative, launched in 2014, focuses on the digital transformation of transport; home and environment; business productivity; health and enabled ageing; and public-sector services. In 2017, AI was named one of four frontier technologies to advance the Republic’s digital-economy goals, and AI Singapore was created soon after to build deep national capabilities in AI, grow local talent, develop an AI ecosystem and put Singapore on the world map.

Professor Goh Yihan, director of the Centre for AI and Data Governance and dean of the Singapore Management University’s (SMU) School of Law, believes AI has advanced significantly in Singapore in recent years, and that people are already benefiting from services such as voice assistants, language translation, GPS route optimisation and credit card fraud alerts. He said, “On the government’s end, AI has been used for detecting drowning incidents, fraudulent activities and local speech recognition.”

He also noted that the Land Transport Authority and the Agency for Science, Technology and Research have formed a partnership to oversee and manage autonomous vehicle research, test-bedding and the development of applications by industry partners and stakeholders. Since 2015, the authority has facilitated autonomous vehicle trials by various technology developers in designated locations, with the target of serving residents in three neighbourhoods by 2022.

Elsewhere, the Info-communications Media Development Authority (IMDA) is preparing Singapore’s workforce for the digital economy through initiatives like the TechSkills Accelerator to drive the development of AI talent.

“The government is tackling AI governance challenges by not setting a prescriptive rule and working with industry participants. This approach will help them when shaping the principles that will govern AI and position Singapore as a leading digital economy.”
— Prof Goh Yihan, director of the Centre for AI and Data Governance

Managing AI ethical issues

Given the rapid adoption of AI, questions have been raised about the issues surrounding the use of the technology. For instance, who is responsible when a driverless car hits someone? And what happens when a machine goes rogue or makes an irresponsible decision?

Regulatory bodies have taken steps to lower the odds of mishaps taking place. The IMDA has launched three interlinked AI governance initiatives aimed at engaging key stakeholders, including the government, industry, consumers and academia. These are:

  1. The Advisory Council on the Ethical Use of AI and Data. This is an industry-led initiative that brings together international and local leaders in AI, advocates of social and consumer interests and academia to examine the legal, ethical, regulatory and governance issues arising from commercial deployment of the technology, and advise the government on areas requiring regulatory or policy attention.
  2. The Model AI Governance Framework, which translates ethical principles into implementable practices for voluntary adoption by organisations. It embodies two sets of principles: decisions made by or with the assistance of AI should be explainable, transparent and fair; and AI systems and the decisions made with them should be human-centric and safe (a minimal sketch of the explainability principle follows this list).
  3. The Centre for AI and Data Governance, a research programme that aims to develop and advance international thought leadership, scholarship and discourse on the legal, regulatory, ethical and policy issues arising from the use of AI and data. Issues explored here include the legal liabilities associated with AI, intellectual property rights, the societal impact of AI, and equal access to AI products and services across different segments of society.
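
The Framework does not prescribe any particular tooling, but its explainability principle can be illustrated with a minimal sketch: a toy linear scoring model that reports each feature’s contribution alongside the decision it produces. All feature names, weights and figures below are hypothetical.

```python
# Minimal sketch of an "explainable decision": a linear scoring model
# whose per-feature contributions are reported with each outcome.
# All feature names, weights and thresholds here are hypothetical.

FEATURES = ["income", "years_employed", "existing_debt"]
WEIGHTS = {"income": 0.5, "years_employed": 0.3, "existing_debt": -0.8}
THRESHOLD = 1.0  # score above which the application is approved

def decide(applicant: dict) -> None:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in FEATURES}
    score = sum(contributions.values())
    outcome = "approved" if score >= THRESHOLD else "declined"
    print(f"Decision: {outcome} (score {score:.2f}, threshold {THRESHOLD})")
    # Surface each feature's contribution so the outcome can be explained,
    # largest influence first.
    for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {value:+.2f}")

decide({"income": 2.0, "years_employed": 1.5, "existing_debt": 0.4})
```

The point is not the arithmetic but the audit trail: every automated outcome can be traced back to the inputs that drove it, which is what the Framework’s call for explainable and transparent decisions amounts to in practice.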

Meanwhile, other regulators have put forth rules of their own on the use of AI in the industries they oversee. For instance, the Monetary Authority of Singapore last year issued a set of principles to guide firms offering financial products and services on the responsible and ethical use of AI and data analytics. Among other things, the authority states that AI-driven decisions should be held to the same ethical standards as human-driven decisions.
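
MAS leaves the testing method to firms; the sketch below shows one hypothetical way of applying the same standard used to audit human decision-makers, by comparing approval rates across customer groups and flagging large gaps for human review. The records and the threshold are invented for illustration.

```python
# Illustrative fairness check: compare approval rates across groups,
# the same audit one might apply to human-made decisions.
# The decision records below are made up for illustration.

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates = {}
for group in {d["group"] for d in decisions}:
    subset = [d for d in decisions if d["group"] == group]
    rates[group] = sum(d["approved"] for d in subset) / len(subset)

for group, rate in sorted(rates.items()):
    print(f"Group {group}: approval rate {rate:.0%}")

# Flag a large gap for human review rather than auto-judging it unfair.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Approval-rate gap exceeds 20 percentage points; review for bias.")
```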

While these moves are a step in the right direction, SMU’s Prof Goh noted that AI technology is still too nascent for firm rules. He pointed out, “The government is tackling AI governance challenges adequately by not setting a prescriptive rule and working with industry participants and academia. This collaborative approach will help regulators when shaping the principles that will govern AI and position Singapore as a leading digital economy and smart nation.”

He added that approaches such as the Model Framework may not provide all the answers, but they offer an opportunity for all to grapple with fundamental ideas and practices that may prove key in determining the development of AI.

Stakeholders play a role, too

It’s not just regulatory bodies that have a part to play in ensuring AI technology is used responsibly. Stakeholders like research institutes and companies leveraging this technology have a duty to see that proper safeguards are in place.

At the N.1 Institute for Health at the National University of Singapore (NUS), researchers have developed and validated CURATE.AI, an AI platform that optimises clinical efficacy and safety across several combination-therapy indications. Before any clinical study commences, a rigorous and detailed review is undertaken, in which the clinical study protocol is examined by the medical and ethics review boards at NUS and the National University Hospital. In addition, the Health Sciences Authority is consulted as an expert resource where needed during the review.
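
The institute’s actual platform is far more involved, but the published idea behind CURATE.AI — fitting a small, patient-specific map from drug doses to measured response and using it to guide dosing — can be sketched as follows. The data, the quadratic model and the safety cap are all hypothetical.

```python
import numpy as np

# Simplified illustration of individualised dose optimisation in the
# spirit of CURATE.AI: fit a quadratic dose-response curve to one
# patient's own data, then choose the dose with the best predicted
# response inside a clinician-set safety cap. All numbers are hypothetical.

doses = np.array([10.0, 20.0, 30.0, 40.0, 50.0])       # mg, observed doses
responses = np.array([0.21, 0.48, 0.60, 0.55, 0.34])   # measured efficacy marker

coeffs = np.polyfit(doses, responses, deg=2)  # patient-specific quadratic map
curve = np.poly1d(coeffs)

max_safe_dose = 45.0  # cap set by the treating clinician, not the model
candidates = np.linspace(doses.min(), max_safe_dose, 200)
best = candidates[np.argmax(curve(candidates))]

print(f"Recommended dose for review: {best:.1f} mg "
      f"(predicted response {curve(best):.2f})")
```

Consistent with the safeguards Prof Ho describes below, the output is a recommendation for review, not a prescription: the treating doctors retain final approval over the drugs and dosages given.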

“We have clearly outlined procedures in place to properly implement our clinical trials,” said Prof Dean Ho, the institute’s director. “Our number one priority is the patient’s welfare. Also, the patient’s doctors have the final approval on the combination of drugs and the dosages given.” He added that patients must give their informed consent before the treatment starts. In addition, national guidelines and the university’s data protection office help ensure that patient data is properly safeguarded and de-identified.
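The article does not describe NUS’s pipeline, but a common de-identification step looks like the following sketch: direct identifiers are stripped and the patient ID is replaced with a salted pseudonym before the data is used for analysis. All field names and the salt are assumptions for illustration.

```python
import hashlib

# Illustrative de-identification step for trial records: direct
# identifiers are dropped and the patient ID is replaced with a salted
# pseudonym before analysis. Field names and the salt are hypothetical;
# real pipelines follow institutional and national guidelines.

SALT = "study-specific-secret"  # kept separately from the dataset
DIRECT_IDENTIFIERS = {"name", "nric", "address", "phone"}

def de_identify(record: dict) -> dict:
    pseudonym = hashlib.sha256((SALT + record["patient_id"]).encode()).hexdigest()[:12]
    cleaned = {k: v for k, v in record.items()
               if k not in DIRECT_IDENTIFIERS and k != "patient_id"}
    return {"pseudonym": pseudonym, **cleaned}

record = {"patient_id": "P001", "name": "Tan A. B.", "nric": "SXXXXXXXA",
          "dose_mg": 30.0, "response": 0.60}
print(de_identify(record))
```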

With these ethical safeguards in place, Singapore is in a fairly strong position to minimise the abuse of AI technology and to keep its AI initiatives on a sound ethical footing.
