Using Artificial Intelligence Responsibly

Artificial intelligence ethics

Machine learning is a revolutionary technology that has begun to fundamentally disrupt the way companies operate. The natural response has been for businesses to rush to build it into their processes, and some early adopters are already well ahead in integrating AI into their systems. But as we are also aware, deploying AI does not end there. It raises serious concerns, from cyber attacks to the unethical use of AI, and these are issues that are vital to dissect and understand before deciding on the next course of action.

In this article, we will touch on the concerns around using Artificial Intelligence (AI), how to use AI responsibly, and the framework the government has put in place to ensure this is practised. According to IMDA, a trusted ecosystem is essential: one in which organisations can benefit from tech innovations while consumers are confident enough to adopt and use AI. While AI continues to grow and evolve at a rapid pace, Singapore believes that a balanced approach to AI ethics and governance will support innovation while safeguarding the interests of consumers.

The Problem: AI Creates Unique Governance Challenges

As artificial intelligence progresses, so do the challenges that come with it. We live in a world filled with uncertainty, and the ability to build learning systems that can cope with this basic reality, even to a certain extent, is an immense opportunity. Many issues arise from the deployment of AI. For one thing, these systems rely heavily on data, so it is common for companies to collect personal data at scale, creating potential privacy issues in the process. The Personal Data Protection Act is in place, but safeguards can still fail or be circumvented, and the consequences of such lapses can be severe.

Second, collecting, cleaning and processing high-quality data is a costly and complex task, and many automation tools and programming frameworks are used in this area. We are all swimming in data; turning it into insights that organisations can use to improve their business is a significant advantage, but one that not everyone is able to leverage.
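To make that effort concrete, here is a minimal sketch of the kind of routine cleaning step an organisation might automate before any model sees the data. The column names ('email', 'age', 'nric'), the valid-age range and the file name are hypothetical, chosen purely for illustration.

```python
import pandas as pd

def clean_customer_records(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative cleaning pass; the column names and thresholds
    are hypothetical examples, not a prescribed standard."""
    df = df.drop_duplicates()                        # remove exact duplicate rows
    df = df.dropna(subset=["email"])                 # drop records missing a key field
    df = df[df["age"].between(0, 120)]               # filter out clearly invalid values
    df = df.drop(columns=["nric"], errors="ignore")  # remove a direct identifier to limit privacy exposure
    return df.reset_index(drop=True)

# Hypothetical usage:
# raw = pd.read_csv("customer_records.csv")
# clean = clean_customer_records(raw)
```

Even a simple pass like this has to be designed, tested and maintained, which is part of why data preparation consumes so much of an AI project's budget.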

As AI-powered systems evolve with data and use, their behaviours become hard to anticipate; and when they misbehave, they are harder to debug and maintain. Take, for instance, Ask Jamie, the chatbot on the Ministry of Health website. Its misaligned replies became the talk of the town, and the bot was taken down from the MOH site until it could be properly fixed. Even if the error was trivial, it is certainly not something the government would want to be known for. AI is, after all, only as good as the interactions it has over time.

The Solution: Model AI Governance Framework

The Model AI Governance Framework provides comprehensive, readily implementable guidance to private sector organisations on addressing key ethical and governance issues when deploying AI solutions. Having such a framework in place gives everyone an open and transparent reference point to follow and to practise, and organisations that disregard it should be held accountable for any misuse of AI that causes harm to others. As Singapore progresses, so does the framework, evolving to keep up with new developments and considerations.

Second Edition Model Framework

According to the second edition of the Model Framework, decisions made by AI should be explainable, transparent and fair, and AI systems should be human-centric. Under these guiding principles, four key areas are identified to help organisations put the framework into practice.

1. Internal Governance Structures and Measures: The framework calls for clear roles and responsibilities within the organisation, standard operating procedures (SOPs) to monitor and manage risks, and staff training. With these clearly identified and addressed, the organisation will be better placed to embrace and manage such a high intensity of change.

2. Determining the Level of Human Involvement in AI-augmented Decision-making: Implementing AI should not simply mean less human involvement; the two must be balanced. AI exists to improve how people work and live, and an appropriate degree of human involvement is required to minimise the risk of harm to individuals.

3. Operations Management: To minimise bias in data and models, organisations should implement a risk-based approach to measures such as explainability, robustness and regular tuning (see the illustrative sketch after this list).

4. Stakeholder Interaction and Communication: AI policies should be made known to users, users should be given a channel for feedback where possible, and communications should be easy to understand. This reduces misunderstanding and makes transparency easier to achieve.
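As a concrete illustration of the kind of bias check point 3 refers to, here is a minimal sketch that computes a simple disparate-impact ratio, comparing positive-outcome rates across two groups of users. The group labels, the sample data and the four-fifths threshold mentioned in the comments are assumptions for illustration only; they are not prescribed by the Model Framework.

```python
from collections import Counter

def disparate_impact(outcomes, groups, protected="group_b", reference="group_a"):
    """Ratio of positive-outcome rates between a protected group and a reference group.
    Values well below 1.0 suggest the model may be disadvantaging the protected group."""
    totals = Counter(groups)
    positives = Counter(g for g, y in zip(groups, outcomes) if y == 1)

    def rate(g):
        return positives[g] / totals[g] if totals[g] else 0.0

    return rate(protected) / rate(reference) if rate(reference) else float("nan")

# Hypothetical example: approval decisions (1 = approved) for two groups of applicants.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups = ["group_a"] * 5 + ["group_b"] * 5
print(f"Disparate impact ratio: {disparate_impact(outcomes, groups):.2f}")
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8 for review.
```

In practice, a check like this would sit alongside explainability and robustness measures as part of a regular, risk-based review cycle.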

These initiatives play a vital role in Singapore’s National AI Strategy. They embody the plan to develop a human-centric approach to AI governance that builds and sustains public trust, creating an AI ecosystem grounded in public confidence, the involvement of all stakeholders, and measures that ensure AI is integrated smoothly and ethically. The Model Framework and ISAGO (which helps organisations assess how well their AI governance practices align with the Model Framework, and provides an extensive list of useful industry examples and practices to help organisations implement it) will create opportunities for future developments, such as training professionals in ethical AI deployment, laying the groundwork for Singapore, and the wider world, to acknowledge and address AI’s impact on society for the better.

The Way Forward

There is increasing awareness among business leaders that a responsible approach to AI is needed to ensure the beneficial and trustworthy use of this transformative technology. However, many are unsure how to do this at scale while creating value for their companies. The framework is a step forward for these companies, giving them a concrete way to address those concerns, and strong government support and assistance will benefit most companies in their AI implementation. The steps we take today are essential to our future: the Model Framework has been recognised as a firm foundation for the responsible use of AI and for its future evolution, and taking this step forward is important to ensure a safe and trusted ecosystem for future generations in their use of AI.

Conclusion

Of course, any change is hard at first, but if AI truly brings wonders to the company and adds value for its customers, it is worth the effort. After all, we live in a highly digitalised era where automation and AI are the norm. Most businesses are already deploying AI, yet many are still on the fence about ethical AI. So if you believe you have much to learn but have no idea where to start, this blog will certainly help narrow down your options!

Learn more about AI Ethics & Governance in Action!

Having the right training and resources is not enough to implement lasting change if you fail to build the right lines of accountability. In other words, to do the right thing, employees must have the right incentives and be recognised for doing the right thing. Unsurprisingly, that is one of the biggest challenges Responsible AI practitioners report. That is why, at Aventis, we have curated a two-day course to tackle such issues head on!

This certification provides guidance on the issues to consider and the measures that can be implemented to build stakeholder confidence in AI, and to demonstrate reasonable efforts to align internal policies, structures and processes with relevant accountability-based practices in data management and protection. The 2-Day Professional Certification in AI Ethics and Governance in Action will give you an in-depth understanding of the various areas to watch out for when deploying or using third-party AI technology.

For more information, you can get in touch with us at (65) 6720 3333 or training.aventis@gmail.com

 

References

A 5-step guide to scale responsible AI

Model Artificial Intelligence Governance Framework Second Edition

Artificial Intelligence