What’s wrong with Black Box AI, and how can we fix it?
Imagine this: you apply for a job online but receive an automated message telling you that you were not selected. You call the recruiter because you would like some feedback on your application and to learn how you could do better next time. However, the recruiter cannot help you: ‘That's just how the system works’.
Alternatively, suppose you go to the bank to apply for a small loan – to renovate your roof, for example. The bank clerk tells you that a notification popped up on his computer saying you are not eligible. When you ask why, the clerk apologises: he can’t give you an answer. ‘That's just how the system works’.
Both examples involve Black Box AI, in which a decision is made by a machine rather than a person. There’s nothing wrong with that in itself: automation saves time and energy that can be invested more profitably elsewhere. But it does go wrong when there is no control over, or insight into, the machine's decision-making process, as the two examples above illustrate. And believe it or not, this is the reality of many AI systems in use today.
What is Black Box AI?
Black Box AI is typically the product of deep learning models: AI systems that rely entirely on what they learn from data rather than on explicit, human-written rules. The black box is a metaphor for systems whose inputs and inner workings are not visible to users or other parties. All decisions and actions are performed by the AI, but the process by which it arrives at them cannot be traced.
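To make this concrete, here is a minimal sketch in Python (using scikit-learn, with entirely invented loan features and numbers) of what a black box looks like in practice: the model produces a decision, but nothing in its output explains that decision.

```python
# A toy "black box": a neural network trained on synthetic loan data.
# All features and figures here are made up purely for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))             # e.g. income, debt ratio, savings
y = (X[:, 0] - X[:, 1] > 0).astype(int)   # the hidden rule the model learns

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

applicant = [[0.2, 0.9, -0.5]]
print(model.predict(applicant))           # [0]: rejected
# The prediction comes with no reason: thousands of learned weights sit
# between the input and the decision, and none of them is human-readable.
```

The point is not that the prediction is wrong, but that nothing in the system can say why it was made.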
What problems does Black Box AI cause?
An impenetrable box like this not only makes the model difficult for programmers to understand; it also makes errors hard to detect, so they can go unnoticed for a long time. Consider AI discrimination: because some deep learning models (e.g. GPT-3) are trained on vast amounts of text scraped from the internet, they also absorb a great deal of incorrect, racist and sexist data. As a result, a system built on such a model might be inclined to select a man rather than a woman for a particular job, or to produce statements containing hateful language.
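The mechanism is easy to demonstrate. The sketch below (hypothetical features and numbers, invented for illustration) trains a simple model on a historically biased hiring dataset; the model dutifully learns the discrimination along with everything else.

```python
# A toy demonstration of learned bias. The dataset is synthetic: past
# hiring decisions were influenced by gender as well as by skill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000
skill = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)
# Biased history: being male added a bonus to the odds of being hired.
hired = (skill + 0.8 * is_male + rng.normal(0, 0.5, n) > 0.5).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, is_male]), hired)
print(model.coef_)  # a clearly positive weight on the gender column:
                    # the model has faithfully reproduced the bias in its data
```

In a small, interpretable model like this one, the biased coefficient is at least visible; buried inside a deep network, the same bias exists but cannot be read off anywhere.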
Another problem that goes undetected with Black Box AI is leakage. Suppose an AI system is developed for the medical sector to predict a patient's diagnosis from a number of abstract factors (e.g. age, medical history and lifestyle). Now imagine that, during training and testing, information leaks into the data that falls outside those intended factors, information (e.g. a doctor’s notes, or a patient ID) that effectively gives away the diagnosis. The AI will then produce seemingly perfect answers, creating the impression that it makes flawless predictions from the legitimate factors alone, which is not the case. Leakage like this usually stems from an error in the code or the data pipeline; hidden inside the black box, it makes the system look successful when it is nothing of the sort.
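A small sketch makes this failure mode tangible. The code below (synthetic data, invented column names) adds a ‘leaked’ feature derived directly from the diagnosis itself; the model then scores almost perfectly in testing, even though the legitimate features carry no real signal.

```python
# A toy example of target leakage in a diagnostic model. All data is
# synthetic, and the "leaked" column is a near-copy of the label, like a
# doctor's note written after the diagnosis was already made.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
age = rng.integers(20, 90, size=n)
lifestyle = rng.normal(size=n)                    # no real signal here
diagnosis = (rng.random(n) < 0.3).astype(int)     # ground-truth label
leaked_note = diagnosis + rng.normal(0, 0.01, n)  # the leak

X = np.column_stack([age, lifestyle, leaked_note])
X_tr, X_te, y_tr, y_te = train_test_split(X, diagnosis, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(model.score(X_te, y_te))  # ~1.0: "flawless", thanks entirely to the leak

# Remove the leaked column and the same model collapses to roughly the
# accuracy of always guessing the majority class (~0.7 here).
honest = RandomForestClassifier(random_state=0).fit(X_tr[:, :2], y_tr)
print(honest.score(X_te[:, :2], y_te))
```

In a transparent system, the suspiciously perfect score would immediately prompt questions; in a black box, it is simply reported as success.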
Is there an alternative?
Yes, there is. 'No Black Box AI' is the term for systems that are transparent, understandable and controllable: systems in which errors are detected and poor performance is avoided, and in which the user gains insight into the technology's decision-making or execution process.
A No Black Box AI system can be tailored to a wide range of applications, for both the developer and the user. In the example of the rejected loan applicant, the system could present the criteria the application failed to meet in clear, human language rather than in complicated code, so that the bank clerk could explain the decision or resolve the problem on the spot.
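As a sketch of what that could look like, the example below (synthetic loan data, made-up feature names and thresholds) trains a shallow decision tree, an interpretable model whose rules can be printed as plain, readable criteria.

```python
# A toy interpretable alternative: a shallow decision tree whose decision
# rules can be printed in plain language. All data here is invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
income = rng.normal(50, 15, size=500)   # thousands per year (synthetic)
debt_ratio = rng.random(500)
approved = ((income > 45) & (debt_ratio < 0.4)).astype(int)

X = np.column_stack([income, debt_ratio])
tree = DecisionTreeClassifier(max_depth=2).fit(X, approved)

print(export_text(tree, feature_names=["income", "debt_ratio"]))
# Prints rules along the lines of:
# |--- income <= 45.0
# |   |--- class: 0               (rejected: income too low)
# |--- income >  45.0
# |   |--- debt_ratio <= 0.4
# |   |   |--- class: 1           (approved)
# |   |--- debt_ratio >  0.4
# |   |   |--- class: 0           (rejected: debt ratio too high)
```

A clerk could read those criteria back to the applicant directly; no data scientist is needed to decode the decision.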
‘Explainable AI should be able to communicate the outcome naturally to humans, but also the reasoning process that justifies the result.’
— Prof. Senén Barro, Professor of Computer Science and Artificial Intelligence at the University of Santiago de Compostela in Spain
So, why does Black Box AI still exist?
One might ask: if there is an alternative, why does Black Box AI still exist? Why do the biggest international AI players still hide their systems in black boxes? Some people believe that No Black Box AI simply does not perform as well, and that truly good AI must be an indecipherable mystery. That, however, is a myth that deserves to be rejected.
What does explain the persistence of Black Box AI is that some companies want to bring AI models to market as quickly and as lucratively as possible. Even when important decisions are at stake, they put profit and power above interpretability and transparency, not to mention humanity. The result is a prevailing mindset in which people believe AI has to be complicated to be accurate, even though interpretable models are available for many of the same tasks.
Fortunately, at Nalantis, we develop 100% No Black Box AI models that are fully interpretable, verifiable and transparent. We are striving for AI that can be trusted. Would you like to join us in realising this? Contact us for more information.