MedCity Influencers, Artificial Intelligence

Can We Build a Trustworthy ‘AI’ While Models-As-A-Service (MaaS) Is Projected To Take Over?

Model building is simpler than ever thanks to automation and increasingly intelligent tooling. With a surplus of MLOps tools, users of all sizes and backgrounds are now focused on manufacturing models. But the question remains: are all of these models usable?

Regulators in a few geographies have defined the components and frameworks, but what does it take to create ‘trustworthy AI’? And what challenges do we see in delivering it?

AI continues to be incorporated into everyday business processes, industries and use cases. However, one concern remains constant – the need to understand ‘AI’. Unless this happens, people won’t fully trust AI decisions.

The opacity of these systems, often referred to as ‘black-box AI’, raises several ethical, business and regulatory concerns, creating stumbling blocks to adopting AI/ML, especially for mission-critical functions and in highly regulated industries. No matter how accurately a model makes predictions, unless there is clarity on what goes on inside it, trusting the model blindly will always be a valid concern for all stakeholders. So how does one get to trust AI?

AI decisions – To trust or not to trust

To trust any system, accuracy is never enough; the justification behind a prediction is just as important. A prediction can be accurate, but is it the correct prediction? This can be determined only if there is enough explanation and evidence to support it. Let us understand this through an example from the healthcare industry.

Healthcare is considered one of the most exciting application fields for AI, with intriguing applications in radiology, diagnosis recommendations and personalization, and drug discovery. Due to growing health complexities, data overload and a shortage of experts, diagnosing and treating critical diseases has become more complex. Although AI/ML solutions for such tasks can provide the best balance between prognostic ability and diagnostic scope, the significant problems of ‘explainability’ and ‘lack of trust’ remain. A model’s prediction will be accepted only if it comes with strong supporting evidence and an explanation that satisfies every stakeholder, i.e., doctors, patients and governing bodies. Such explanations can help the physician judge whether the decision is reliable and create an effective dialogue between physicians and the AI models.

Achieving trustworthy AI, though, is harder than it sounds. AI systems are, by nature, highly complex. Taking them from ideation through research and testing into production is hard, and keeping them in production is even harder. Models behave differently in training than they do in production. If an AI model is to be trusted, it cannot be brittle.

What are the critical components of trustworthy AI?

  • AI explainability

In machine learning, explainability refers to understanding and comprehending your model’s behaviour, from input to output. It resolves the “black box” issue by making models transparent. It should be noted that ‘transparency’ and ‘explainability’ are quite different. Transparency clarifies what data is being considered, which inputs produce which outputs, and so on. Explainability covers a much larger scope: explaining the technical aspects, demonstrating the impact of changing a variable, showing how much weight each input is given, and so on.

For example, if an AI algorithm predicts a cancer prognosis from the patient data provided, the doctor would need evidence and an explanation of that prognosis; without them, it merely acts as an unreliable suggestion. A minimal example of surfacing such input weightings follows below.
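
To make the idea of input weightings concrete, the snippet below is a minimal sketch assuming a scikit-learn logistic regression trained on synthetic data; the feature names (e.g. marker_a, tumor_size) and the data itself are hypothetical placeholders, not drawn from any real clinical model. For a linear model, each input’s contribution to a single prediction can be read off directly from its coefficient.

```python
# Minimal sketch: per-prediction input weightings for a linear model.
# Feature names and data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["marker_a", "marker_b", "age", "tumor_size"]  # illustrative only
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to the log-odds of one
# prediction is simply coefficient * (feature value - training mean).
x_new = X[0]
contributions = model.coef_[0] * (x_new - X.mean(axis=0))
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.3f}")
```

For non-linear models, the same question requires attribution methods such as LIME or SHAP, which are touched on in the impediments section below.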

  • AI accountability

Much has been said about the opacity, or black-box nature, of AI algorithms. An ‘AI’ solution should lay out the roles and responsibilities for running it, and therefore who is answerable when it fails. Capturing such artifacts and registering them as records can provide a traceable, auditable, in-depth lineage.

  • Promise of consistency

AI models in production can behave differently from how they did in training and test environments, and models do suffer from drift in data or goals. Even if models are periodically retrained, there is no guarantee that their outputs will be consistent in production. Frequent failures reduce a model’s reliability and create mistrust in users’ minds. A simple drift check is illustrated below.
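
As one illustration of a consistency check, the sketch below compares the distribution of a single feature at training time with its distribution in production using a two-sample Kolmogorov–Smirnov test. The data and the alert threshold are illustrative assumptions, not a prescribed monitoring setup.

```python
# Minimal sketch: flagging data drift on a single feature with a two-sample
# Kolmogorov-Smirnov test. Data and alert threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)   # seen at training time
prod_feature = rng.normal(loc=0.4, scale=1.2, size=1_000)    # seen in production

stat, p_value = ks_2samp(train_feature, prod_feature)
if p_value < 0.01:  # hypothetical alerting threshold
    print(f"Possible drift (KS={stat:.3f}, p={p_value:.4f}) - review or retrain")
else:
    print("No significant drift detected on this feature")
```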

  • Underlying bias

AI/ML model predictions can be error-free and still carry underlying biases. Models replicate associations found in their training data, and biases prevalent in that data can easily creep into production. Taking the earlier healthcare example, the markers for a particular disease might differ between American and Asian patients; ideally, the model should be trained on, and able to distinguish, both populations while making predictions. A basic subgroup check is shown below.
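
One simple way to surface such bias is to report performance per subgroup rather than a single aggregate score. The sketch below assumes hypothetical group labels, ground truth and predictions; in practice these would come from the model and the validation data.

```python
# Minimal sketch: checking whether model performance differs across patient
# groups. Group labels, data and predictions are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import recall_score

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "group": rng.choice(["american", "asian"], size=1_000),
    "y_true": rng.integers(0, 2, size=1_000),
})
# Stand-in for real model output; replace with model.predict(X) in practice.
df["y_pred"] = np.where(rng.random(1_000) < 0.8, df["y_true"], 1 - df["y_true"])

# Recall per subgroup: a large gap suggests the model has not learned one
# group's disease markers as well as the other's.
for group, g in df.groupby("group"):
    print(group, recall_score(g["y_true"], g["y_pred"]))
```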

  • The need for continuous adaptation

In the real world, practitioners are constantly learning, particularly in prognosis and diagnosis. As research publications and studies accumulate, the accepted conclusions for the same pre-conditions can change incrementally or drastically. ‘AI’ solutions should ensure their knowledge stays equally up to date. One lightweight way of doing this is sketched below.
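
One lightweight pattern for keeping a model current is incremental updating as newly reviewed, labelled cases arrive, rather than waiting for infrequent full retrains. The sketch below uses scikit-learn’s partial_fit on synthetic data purely as an illustration; the batch sizes, cadence and model family are assumptions, not recommendations.

```python
# Minimal sketch: incrementally updating a model as newly labelled findings
# arrive, so its "knowledge" does not go stale. Data and batches are illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
model = SGDClassifier(random_state=0)

# Initial fit on historical data.
X_hist = rng.normal(size=(1_000, 5))
y_hist = (X_hist[:, 0] > 0).astype(int)
model.partial_fit(X_hist, y_hist, classes=[0, 1])

# Later: fold in each new batch of reviewed, labelled cases without a full retrain.
for _ in range(12):  # e.g. one batch per month
    X_new = rng.normal(size=(100, 5))
    y_new = (X_new[:, 0] > 0).astype(int)
    model.partial_fit(X_new, y_new)
```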

  • Human-in-the-loop controls

Fully automated decision-making makes it difficult to guard AI against uncertainty, and it is not possible to define the necessary controls without understanding how the system works. With human oversight, it becomes much easier to govern AI behaviour and ensure it reflects organisational, societal and business preferences. A simple routing control is illustrated below.
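
A common human-in-the-loop control is to act automatically only when the model is confident and to route everything else to an expert. The sketch below is illustrative; the confidence threshold and the route_prediction helper are hypothetical and would be set per use case and risk appetite.

```python
# Minimal sketch: a human-in-the-loop control that routes low-confidence
# predictions to expert review instead of acting on them automatically.
import numpy as np

CONFIDENCE_THRESHOLD = 0.85  # hypothetical, set per use case and risk appetite

def route_prediction(probabilities: np.ndarray) -> str:
    """Auto-approve only when the model is confident; otherwise escalate."""
    confidence = float(probabilities.max())
    predicted_class = int(probabilities.argmax())
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto: class {predicted_class} (confidence {confidence:.2f})"
    return f"human review: confidence {confidence:.2f} below threshold"

print(route_prediction(np.array([0.05, 0.95])))  # confident -> automated path
print(route_prediction(np.array([0.55, 0.45])))  # uncertain -> escalated to expert
```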

Regulatory guidelines in achieving trustworthy AI

Globally, the regulation of AI has become a common point of discussion for governments, and many countries are setting up regulatory environments to decide what they deem acceptable uses of artificial intelligence (AI).

The European Union published its Ethics Guidelines for Trustworthy Artificial Intelligence in 2019. The guidelines put forward seven key requirements that AI systems should meet in order to be deemed trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

In October 2022, the White House Office of Science and Technology Policy (OSTP) unveiled its Blueprint for an AI Bill of Rights, a non-binding set of guidelines for the design, development, and deployment of AI systems. This came a year after the White House announced its intent to develop a “bill of rights” to guard against harms from the powerful technologies being created.

The Monetary Authority of Singapore (MAS), in February 2022, announced the release of five white papers detailing assessment methodologies for the Fairness, Ethics, Accountability and Transparency (FEAT) principles, to guide the responsible use of AI by financial institutions (FIs).

While we can draw relevant elements from these guidelines, additional ones can now be added to the list above, given how the technology has advanced over the past few years.

So what elements are needed for an AI framework to be trustworthy?

What are some of the impediments to trustworthy AI?

The components of ‘trustworthy AI’ are fairly standardized across geographies and use cases, yet adoption remains a work in progress. There are many reasons for this.

  • Explanations and evidence are highly contextual!

The complex nature of AI makes it difficult for humans to interpret the logic behind AI predictions, and whatever is generated today is understandable only to an AI expert. Typically, data science or ML teams look at these explanations and try to understand the model’s behaviour, but when it comes to relating them in business terms, much is lost in translation. Explainability should be translated into a language that all users can understand; the purpose gets diluted if only a few users can understand the explanations. It falls to the ‘AI’ builders to offer such easy-to-understand explanations to all types of users, and it is not easy to design explanation templates that are acceptable to all users across all use cases. Explanations and evidence are highly contextual to the use case, the user, and the geography!

  • Explanations need to be true-to-model. 

Just to address regulatory or user requirements, we have seen case studies where explanations are deployed using surrogate models or unrealistic synthetic data. Methods like LIME and SHAP use perturbed, synthetic samples to explain models. While these provide a good starting point, they cannot be used as the sole explainability approach given the sensitivity of the use case, and studies have shown that these methods can be fooled. Explanations should be consistent, accurate and true-to-model. One basic fidelity check is shown below.
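
One basic way to test whether an explanation is true-to-model is to measure fidelity: how well the interpretable stand-in reproduces the black box’s own outputs, rather than the ground truth. The sketch below uses a shallow decision tree as a global surrogate for a random forest on synthetic data; it is an illustration of the idea under those assumptions, not a full audit of methods like LIME or SHAP.

```python
# Minimal sketch: measuring how faithfully an interpretable surrogate mimics a
# black-box model ("fidelity"). A low score means its explanations are not
# true-to-model. Data and models here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Train a shallow, readable tree to imitate the black box's outputs,
# then score it against the black box (not against the ground truth).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
fidelity = surrogate.score(X_test, black_box.predict(X_test))
print(f"Surrogate fidelity to the black box: {fidelity:.2%}")
```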

  • Human bias

Even after all these explanations are provided, ‘AI’ solutions can face bias from experts, who tend to trust another human more than a machine’s predictions. The reasons may not be intellectual so much as a preference for human relationships.

  • Nexus trail

Often, users expect to dig further into an explanation or piece of evidence to find the source of the learning and validate it. While a simplified tree could be built around this, achieving a fully dynamic nexus trail is very complex, as the authenticity of the trail path can itself be debated.

Conclusion

Many users may assume that ‘AI’ would have matured by now, given the multiple hype cycles of recent times. It has become something of a pattern: whenever we find a solution to one aspect of the ‘AI’ usage problem, we tend to overestimate its sufficiency and market it as the ‘blue pill’ of ‘AI’ supremacy. In reality, like any technology, it will take years to perfect the template and make it foolproof; by then, it will have become a norm in the industry and found mass acceptance. To get there, we need all blocks of innovation to cooperate and co-validate: regulators, academia, corporates and customers. The definitions of these ideal components of ‘trustworthy AI’ will continue to change in the coming months and years, but ‘AI’ may find short-term acceptance within a limited scope. We have already seen an uptick in the number of ideal use cases identified for ‘AI’ today, and almost all of them treat a ‘human-in-the-loop’ component as the critical criterion for sensitive use cases. We will continue to see this trend for a few more years, until the balance of confidence flips between ‘AI’ and ‘human experts’.

Photo: Gerd Altmann, Pixabay



Vinay Kumar Sankarapu

Vinay did his Bachelor’s and Master’s at IIT Bombay, with a thesis on using deep learning to predict the remaining useful life of a turbine engine. Vinay started Arya.ai in 2013, while in college, along with his co-founder, Deekshith. He raised a first round of funding in 2014 to build India’s first deep learning platform for developers. Later, the platform was verticalized for the BFSI industry.

Today, Arya.ai is deployed at multiple Fortune 500 companies and large financial institutions. He has also authored a patent which is currently under review. Vinay and Deekshith were listed in ‘Forbes 30 Under 30’ at the age of 23. He was the youngest member of the AI task force set up by India’s Ministry of Commerce and Industry in 2017 to submit recommendations on AI adoption in India as part of Industry 4.0. He has been an esteemed speaker on strategies and concepts in AI and deep learning since 2014.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers. Click here to find out how.
