Singapore Guidelines on Artificial Intelligence: How Singapore Policies Impact the future of AI

Ketaki Joshi
November 10, 2022

As artificial intelligence (AI) becomes increasingly embedded in everyday business practices, products, services, processes, and decision-making, both corporations and consumers are paying closer attention to how their data is used. Other concerns include bias (by ethnicity, gender, profession, and so on), threats to individual privacy, and the black-box nature of many AI systems.

These concerns have prompted governments around the world, including Singapore's, to formulate legal frameworks to control and govern AI and its impacts.

In 2019, Singapore unveiled ‘The National AI Strategy’ 1, a key step in its Smart Nation journey, to deepen the use of AI technologies, identify and allocate resources to key focus areas, and pave the way to becoming a leader in developing and deploying "scalable, impactful AI solutions" in key verticals by 2030. One of the country's stated aims for this strategy is to show “how Government, companies, and researchers can work together to realise positive impact from AI. Talent, data, regulation, and effective deployment are key elements needed to enable AI applications that serve society”.

Singapore pursues a balanced approach to promoting AI: facilitating innovation, safeguarding consumer interests, and serving as a common global reference point by enabling good governance and enforcing ethical practices in AI implementation. This approach is designed to oversee matters such as the justification for automation in AI-enabled decision-making, the desired level of human involvement, and the risks involved along with their mitigation. In Singapore, the Personal Data Protection Commission (PDPC) 2 oversees all matters related to data and AI, covering both AI developers and user companies (backroom operations, front-end usage companies, and companies that sell or distribute devices or equipment with AI-powered features) (Rai, Murali, 2020) 3. The Singapore Academy of Law (SAL) 4 examines how existing law applies to AI systems and the issues currently under consideration affecting industries that rely on AI systems and/or robotics. In the same vein, the follow-up amendments in the Personal Data Protection Regulations 2021 (PDPR 2021) 5, in force from October 2021, provide minor clarifications on what constitutes 'significant harm' and egregious mishandling of personal data.

Critical updates in Singapore over the past two years include the Cyber Security Agency of Singapore (CSA) 6 announcements in October 2021, as well as a court decision on the scope of the PDPA on 25 May 2021 (Bellingham, Alex v Reed, Michael [2021] SGHC 125) (Crompton, Buttoo, 2022) 7. The Government of Singapore also launched AI Verify, a testing framework and toolkit for AI governance, for companies that want to test their AI capabilities (Lee, Mulrow Peattie, 2022) 8.

Here’s an overview of AI policy initiatives in Singapore, categorized by policy instruments:

Figure 2: AI policy initiatives in Singapore. Source: OECD.AI (2021), powered by EC/OECD (2021), database of national AI policies, accessed on 26/08/2022.

Singapore has also taken a friendlier position on the cryptocurrency space. Discussions are underway on how to approach automated contracting 9, which will have implications for AI. The views reflected in B2C2 Ltd v Quoine Pte Ltd 10 also suggest that ‘for truly autonomous systems, the enquiry may be different. Would a court still look to the state of mind of the programmer, or would it look to, say, the (typically) opaque subroutines of the algorithm during subsequent system operation to determine knowledge, and attribute that to the relevant party? Extremely complex factual enquiries may result from such an approach’.

The Monetary Authority of Singapore (MAS) 11, in February 2022, announced the release of five white papers detailing assessment methodologies for the Fairness, Ethics, Accountability and Transparency (FEAT) principles, to guide the responsible use of AI by financial institutions (FIs). 

Although the global discourse on AI regulation is still evolving, common standards are beginning to emerge. It is high time for organizations to ensure they have reliable procedures in place to address AI risks and comply with all applicable laws and regulations. Organizations should build or adopt a risk-management and compliance framework that allows them to scale AI projects rapidly while deploying AI safely.

For insights on how regulators are responding to the Ethical Use of AI globally, download our whitepaper:


1. The National AI Strategy:

2. Personal Data Protection Commission:

3. Rai, Murali, 2020:

4. Singapore Academy of Law:

5. Personal Data Protection Regulations 2021:

6. Cyber Security Agency of Singapore:

7. Crompton, Buttoo, 2022:

8. Lee, Mulrow Peattie, 2022:

9, 10. Publication by Norton Rose Fulbright:

11. Monetary Authority of Singapore announcement:
