3 factors that must be considered to build AI we can trust

Trust is key to any effective technology implementation. If people don’t trust it, they won’t use it. This challenge comes up again and again, especially with AI. While we have mountains of data to train systems on, creating a system that users trust demands thoughtful use of that data to ensure meaningful results, and therefore trustworthy AI.

An example of how this shows up in the real world is the case of seatbelts. Seatbelts are safer for men than for women. Why? Data bias. When seatbelts were initially designed and safety tested, the test dummies were modeled on the average body dimensions of men who served in the U.S. Army in the 1960s. Could this existing data be used to create AI that predicts injury risks as accurately for women as for men? Unlikely. But does that mean it’s not trustworthy?

All systems have weaknesses. AI struggles more than traditional software because its performance depends on data quality, the properties of the population the data describes, and the ability of a machine-learning algorithm to learn a function. You need to study how those factors affect the problem you’re trying to solve in order to build in failsafe mechanisms that warn users when the AI becomes unreliable. In the seatbelt example, this means acknowledging that, because of the original data set, today’s seatbelts are safer for men than for women. That doesn’t mean they shouldn’t be made safer for everyone, and certainly not that we should stop using them altogether. It just means better data is needed.

Now let’s compare this example with a multibillion-dollar chip factory. The world is currently faced with a massive chip shortage, so it’s imperative that systems run as efficiently as possible. When an AI-powered manufacturing system correctly interprets data, it maximizes production yield and throughput, getting chips where they’re needed, when they’re needed. Failing to interpret the data correctly can result in costly delays.

The need for meaningful AI becomes even more important as we zoom in. Suppose you supply equipment to a chipmaker. To minimize repair time, you want to predict as early as possible when equipment parts will fail. If you can predict a failure early, you can schedule the swap during a periodic maintenance slot. If you can’t, the result could be an unscheduled equipment outage, which could cost millions per day.

To prevent those issues, you build a deep learning model that classifies defects based on pictures of parts. But because part failures are rare, you don’t have enough pictures of them, so you use data augmentation techniques like rotation and flipping. This provides more data, which could mean better accuracy, but not necessarily a trusted result. Yet.

To create truly meaningful AI, you need more than just high data volumes. An expert must verify that the data is meaningful in the real world. In this case, creating more data from flipped and rotated images is not meaningful, because the augmented pictures don’t reflect real-life failure points on the equipment. The AI could still be especially useful for locating suspicious areas and isolating patterns, but not for making production decisions.
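As a concrete illustration of the augmentation step above, here is a minimal sketch using plain NumPy. The `augment` helper and the image dimensions are illustrative assumptions, not details from any real inspection system:

```python
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Generate extra training samples from one defect photo.

    Flips and 90-degree rotations multiply the data eightfold,
    but every variant shows the *same* failure, just re-oriented.
    """
    variants = []
    for k in range(4):                       # 0, 90, 180, 270 degrees
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # mirrored copy
    return variants

# Caveat: if cracks on this part only ever appear near a mounting
# bolt at the top-left, an upside-down copy depicts a failure mode
# that cannot occur in the field -- more data, not more meaning.
photo = np.zeros((128, 128), dtype=np.uint8)  # stand-in for a real part photo
print(len(augment(photo)))  # 8 variants from 1 original
```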
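One way to act on that limitation is the kind of failsafe mechanism mentioned earlier: a wrapper that declines to answer when an input looks unlike anything the model was trained on. The `GuardedModel` class and the z-score threshold below are hypothetical choices, sketched for illustration:

```python
import numpy as np

class GuardedModel:
    """Failsafe wrapper: refuse to answer when an input looks
    unlike anything the model saw during training."""

    def __init__(self, model, train_features: np.ndarray, z_limit: float = 3.0):
        self.model = model
        self.mean = train_features.mean(axis=0)
        self.std = train_features.std(axis=0) + 1e-9  # avoid divide-by-zero
        self.z_limit = z_limit

    def predict(self, x: np.ndarray):
        # How many standard deviations is each feature from the
        # training data? Large values suggest the model is guessing.
        z = np.abs((x - self.mean) / self.std)
        if z.max() > self.z_limit:
            return None, "warning: input outside training distribution"
        return self.model.predict(x.reshape(1, -1))[0], "ok"
```

Flagging out-of-distribution inputs this way lets the system keep surfacing suspicious areas while leaving production decisions in human hands.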
3 critical areas that impact AI

Ultimately, the best way to understand where the weaknesses lie in your data set is to test it, test it again, and keep testing it. AI requires many iterations and confrontations with the real world. Each confrontation can expose different flaws and allows the AI to interact with scenarios that can’t be predicted.

When you first put an AI solution on the market, the key to making sure it is trustworthy is to look critically at three areas (minimal code sketches of each check follow below).

Remove bias in your data population: When your technology is making decisions for humans, it’s critical that the AI evaluates each case objectively. While companies are becoming more aware of the racial and gender bias that can be unintentionally built into algorithms, some forms of bias may be more difficult to track. In particular, analysts need to continuously evaluate results for things like income and education bias.

Control for drift: Particularly for predictive maintenance technology, drift can be a major factor influencing your results. Analysts need to pay close attention to data sets, not just in the first few weeks or months after deployment but over the first few years, to ensure that systems are truly spotting abnormalities and not getting clouded by benign deviations in the data. For example, this means ensuring that the AI is spotting signs of unexpected failures in equipment, not just normal wear and tear.

Check explainability: Humans have a hard time trusting what they can’t explain. Trustworthy AI has to be able to provide an explanation for every step in its decision-making process. Many frameworks for this already exist, and they should be included as standard in all solutions. However, it’s also critical to keep humans in the loop to check the AI’s work. If an explanation doesn’t make sense or feels biased, it’s a sign to take a deeper look at the data going in.

AI is only as trustworthy as the data you train it with. The more accurately that data represents the real world, the more trustworthy the AI will be. But mathematically capturing the world isn’t enough: a model misses basic human judgment and common sense. Explainability is the biggest challenge experts need to solve in the coming years, and it is key to helping humans trust and adopt AI.
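A minimal sketch of the first check: comparing accuracy per subgroup rather than overall. The income bands and the evaluation log here are invented for illustration:

```python
import pandas as pd

# Hypothetical evaluation log: one row per decision the AI made.
results = pd.DataFrame({
    "income_band": ["low", "low", "mid", "mid", "high", "high"],
    "correct":     [0,     1,     1,     1,     1,      1],
})

# Accuracy per income band; a gap here is a red flag even if
# overall accuracy looks fine.
by_group = results.groupby("income_band")["correct"].mean()
print(by_group)
print("accuracy gap:", by_group.max() - by_group.min())
```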
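For the second check, a simple drift monitor might compare the distribution of recent sensor readings against a reference window captured at launch. This sketch uses a two-sample Kolmogorov–Smirnov test; the synthetic readings and the significance threshold are assumptions, not values from a real factory:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=50.0, scale=2.0, size=5000)  # readings at launch
live = rng.normal(loc=50.6, scale=2.0, size=5000)       # readings months later

# Two-sample Kolmogorov-Smirnov test: has the distribution shifted?
stat, p_value = ks_2samp(reference, live)
if p_value < 0.01:
    print(f"drift suspected (KS={stat:.3f}); review before trusting alerts")
```

A small, benign shift like gradual sensor aging can trip the same alarm as a real abnormality, which is exactly why the article argues for watching these statistics over years, not weeks.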
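And for the third check, permutation importance is one widely available way to ask which inputs actually drive a model’s predictions. This sketch uses scikit-learn on synthetic data and stands in for whichever explainability framework a real solution would ship with:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffling a feature that matters should hurt accuracy;
# shuffling one that doesn't, shouldn't.
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(imp.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```

If the features the model leans on don’t make sense to a domain expert, that is the signal, per the article, to take a deeper look at the data going in.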

Arnaud Hubaux, PhD, is a product cluster manager at ASML. Ingelou Stol is a former journalist and works for High Tech Campus Eindhoven, including the AI Innovation Center, a European knowledge platform for AI developments, founded by ASML, Philips, NXP, and Signify. Stol is also the founder of Fe+male Tech Heroes.

https://www.fastcompany.com/90724601/3-factors-that-must-be-considered-to-build-ai-we-can-trust?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Feb 24, 2022

