Responsible AI: performance makes way for duty

Ever since the widespread adoption of ChatGPT, we have witnessed a proliferation of generative AI tools. After an initial phase of awe and amazement at the power of this innovation, it was inevitable that we would start to question the new technology. Now that the AI bubble has burst, we are all asking ourselves how responsible the use of AI tools actually is. Performance makes way for duty, and we can no longer ignore the need for responsible AI.

Responsible AI can be considered on two levels. On the one hand, there is the regulation and monitoring of artificial intelligence in order to eliminate risks and guarantee a secure environment. On the other hand, we want to build reliable AI tools that make a valuable contribution to the world in which we live.


Ethical issues

AI processes are complex. They have an almost miraculous ability to analyse data and generate output. However, they are not perfect: there is always a chance of errors, misleading results and even data-driven catastrophes. Consider the Netherlands, where the childcare benefits scandal first made the headlines in 2017. People with dual citizenship were unjustly and automatically flagged as possible fraudsters by an algorithm fed with erroneous data. Here’s what happened: the algorithm had been ‘retrained’ after a small group of people of Bulgarian origin had committed fraud. As a result, the system put everyone with a foreign background at a disadvantage.

There are countless other examples:

  • The Amazon HR tool reportedly had a built-in prejudice against women and people of colour, making candidates from these groups less likely to be recruited.

  • Banks in the USA have been making large-scale use of AI-driven credit scoring, which determines whether or not people are eligible for a loan. This can result in poor decisions if the AI is not trained on the right data, and legislation on this point is still sorely lacking.

  • In 2014, Daniel Santos faced the prospect of losing his job as a teacher at a secondary school in Houston. Why? Simply because an algorithm had marked him as unsuitable, a verdict that was hard to justify, as it came just a week after he had been elected teacher of the month. He lodged a complaint against the system, fully backed by the teachers’ union.

We cannot deny that AI can help us work more efficiently and automate processes. However, it can also be subject to prejudices (biases), violate our privacy and make unauthorised decisions. Responsible AI is different: it is geared towards protecting organisations and individuals against these risks.

Regulation and monitoring of AI processes

All over the world, rules and regulations are being drawn up for AI technologies. Those set down by the European Parliament deal mainly with eliminating discrimination in AI-controlled processes, from education to law enforcement, and with regulating automatic facial recognition in public areas.

We need more clarity about when, where and how AI is deployed. According to a study conducted by Stanford University in 2022, only 50% of the world’s population is aware of this. We are currently seeing a countermovement emerging from various directions, bent on creating more transparency. Key agenda items include identifying AI-generated texts, images, deepfake videos and chatbots.

OpenAI, the company behind the generative chatbot ChatGPT, is hard at work developing its Text Classifier, a tool intended to verify whether a text was written by a human or by AI. Meanwhile, WorldCoin, the organisation behind the Orb devices, is making every effort to safeguard our human digital identity through iris scanning. (According to WorldCoin, the privacy of the people being scanned is guaranteed and the company does not retain biometric data without prior permission.)

AI you can trust

At Nalantis, we understand the importance of rules and guidelines to manage the risks associated with artificial intelligence. We approach responsible AI from the source: we build only AI tools you can trust, tools that are deployed for positive change.

Based on our ‘no black box’ philosophy, we work diligently to eliminate bias and increase transparency. We do this by checking every step of the decision-making process and allowing outcomes to be adjusted where necessary. A prime example is our Talent Acquisition Platform: if a candidate is rejected, the reason is clearly explained and the recruiter still gets the chance to look at the rejected candidate’s CV. This means we are not deploying AI as a universal truth, but as a tool that helps us perform certain tasks more easily, faster and better.
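
To make the ‘no black box’ idea more concrete, here is a minimal sketch in Python of what such a traceable decision record could look like. All names and details (ScreeningDecision, override, the rejection reason) are hypothetical illustrations, not actual Nalantis code.

```python
from dataclasses import dataclass, field

# Hypothetical sketch (not Nalantis code): a decision record in which
# every automated outcome carries human-readable reasons and remains
# open to human override.

@dataclass
class ScreeningDecision:
    candidate_id: str
    outcome: str                                      # e.g. "rejected" or "shortlisted"
    reasons: list[str] = field(default_factory=list)  # explanation shown to the recruiter
    overridden: bool = False

    def override(self, recruiter: str, note: str) -> None:
        """A recruiter can always reverse the automated outcome."""
        self.outcome = "shortlisted"
        self.overridden = True
        self.reasons.append(f"Manual override by {recruiter}: {note}")

# The system rejects a candidate, but states why.
decision = ScreeningDecision(
    candidate_id="C-1042",
    outcome="rejected",
    reasons=["Required certificate not found in the CV"],
)

# The recruiter reads the reason, checks the CV anyway and overrules.
decision.override("recruiter_42", "certificate listed under a different name")
print(decision.outcome, decision.reasons)
```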

Other Nalantis applications help students gain insight into regional jobs that match their interests, competences and skills. We also deploy AI to improve services in cities and municipalities. In addition, our Flying Forward 2020 project is geared towards the regulation and smooth integration of autonomous vehicles by translating human legislation into their ‘language’. Robots taking over the world? Thanks to responsible AI, they know our rules.
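
As a toy illustration of what translating legislation into a machine’s ‘language’ might look like (the rule and the numbers below are invented for this post, not taken from the project), a written rule becomes a check the vehicle can evaluate before it acts:

```python
# Invented example rule, for illustration only: a hypothetical legal
# altitude ceiling for autonomous flight over urban areas, encoded as
# a check the vehicle evaluates before acting.

MAX_URBAN_ALTITUDE_M = 120  # hypothetical ceiling, in metres

def flight_is_permitted(altitude_m: float, over_urban_area: bool) -> bool:
    """Return True if the planned manoeuvre respects the encoded rule."""
    return not (over_urban_area and altitude_m > MAX_URBAN_ALTITUDE_M)

print(flight_is_permitted(altitude_m=90, over_urban_area=True))   # True: within the limit
print(flight_is_permitted(altitude_m=150, over_urban_area=True))  # False: rule violated
```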

Whether you want to gain greater insight into AI, apply it as a tool or build it yourself: AI has to be, above all, responsible, whether it is used by an organisation, a government body or a private individual. Would you like to know more about our ethical AI applications and collaborate with us on reliable tools? Let’s talk.



Written by Frank Aernout, CEO of Nalantis

Would you like to read more and stay updated on new developments?
Connect with Frank on LinkedIn.
