How to build reliable AI, and why you should
As AI-powered apps fuel more and more companies' growth and regularly get caught up in GAFAM-related controversies, reliable AI is receiving more and more coverage. Still, many stakeholders are reluctant to invest in building it. Here's why you should, and how to do so.
Artificial intelligence (AI) has attracted keen interest from companies of all sizes. According to Statista, the global AI software market is poised to reach $126 billion by 2025. Compelling cases have painted AI as the technological shift that lets businesses thrive in productivity, competitiveness, customer experience, and more.
Yet, in the rush to incorporate AI into business strategies, you must pause and ask whether your AI app is trustworthy by design. If you assume the app is ready for deployment simply because it has been trained on large datasets, you risk fallout from latent or undiscovered flaws.
For example, the Apple Card caught attention for the wrong reasons months into its launch. Women customers found to their dismay that their male counterparts were granted higher credit limits. The incident raised questions about direct or indirect gender bias in the machine learning algorithm used to issue the cards.
As a key stakeholder in a profit-driven organization, ensuring your AI app remains consistently reliable is pivotal. Conventional AI development methods no longer suffice for increasingly complex, at-scale use cases. Instead, companies must build their applications on a reliable AI framework to add value for customers.
The 5 key traits of reliable AI
Reliable AI considers the implications of machine learning programs for customers' safety, fairness, and other sensitivities. Developing a reliable AI application requires moving away from the proverbial 'black box' AI, which prevents data scientists from explaining how certain decisions were made.
To make AI more predictable and interpretable, your data engineers must design AI to exhibit the following characteristics.
Reliable
The AI app must be reliable across the diverse data types and noise found in real-life applications. Not only must the app draw accurate conclusions from its inputs, it must do so within a specific time limit. This is crucial if you apply the AI app to mission-critical applications.
Transparent
Instead of being shrouded in secrecy, data movements and decision criteria are conveyed clearly to users. For example, an AI-driven credit card approval system should give customers the reasons for declining their applications. Transparency prevents miscommunication and fosters trust.
Reproducible
Reproducibility in machine learning is essential for data scientists to investigate and scale the AI model in real life. To achieve it, data scientists must obtain similar results each time they feed the same datasets to the same algorithm. Several factors influence AI reproducibility, including the code, the datasets, and the environment, as the sketch below illustrates.
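As a minimal sketch of what pinning those factors can look like in practice, the snippet below fixes the common random seeds and fingerprints the training data. The file name and seed value are illustrative assumptions, not a prescribed setup.

```python
import hashlib
import os
import random

import numpy as np

SEED = 42  # illustrative fixed seed so repeated runs start from the same state

def set_determinism(seed: int = SEED) -> None:
    """Pin the common sources of randomness before training."""
    random.seed(seed)
    np.random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)

def dataset_fingerprint(path: str) -> str:
    """Hash the dataset file so each run records exactly which data it saw."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

set_determinism()
# "train.csv" is a placeholder path; log the digest next to your code version.
print("dataset:", dataset_fingerprint("train.csv"))
```

Recording the seed, the data digest, and the library versions together is usually enough to rerun an experiment and compare results meaningfully.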
Auditable
Auditable AI ensures that an organization stays accountable and ethical by documenting both the development of the machine learning model and its field deployment. Such documentation not only helps you comply with industry regulations but also lets you resolve issues promptly, giving data scientists well-documented processes, explanations, and test results.
Unbiased
An unbiased AI must not show preferential treatment toward specific groups of users. Data engineers must work to mitigate preference-inclined behavior in the algorithm so that the AI app treats all users fairly.
Why is reliable AI important?
For all the good they have brought to humanity, AI technologies have drawn flak in recent times. Take Facebook's cover-up of an internal report suggesting Instagram harms users' self-esteem and mental health. Instead of making the research public, Facebook chose silence rather than disrupt the engagement of one of its largest user bases.
Given the spate of bad press that AI systems have suffered, businesses have every motivation to adopt a reliable AI framework.
Avoid reputational and financial damages
The haste to adopt AI capabilities to enhance services or internal business processes might backfire. Although introduced with good intentions, flaws like discriminatory behavior, unpredictability, and inaccuracy can result in reputational and financial fallout.
Facebook's reputation suffered a blow when its AI models' transparency was placed under tight scrutiny by the European Parliament. Dubious handling of data and manipulative practices alienate privacy-conscious customers. Meanwhile, a Hong Kong tycoon's lawsuit against an investment firm shows how an AI bot can go wrong: it lost more than $20 million, allegedly due to a misrepresented algorithm.
Comply with legislation
Companies have a collective responsibility to ensure that users are accorded fair treatment, privacy, and transparency when using AI technologies. Accordingly, leading nations have adopted legislation to regulate the digital space. The European Digital Services Act (DSA) and Digital Markets Act (DMA) set forth new frameworks for AI systems, including a call for transparency in conventional black box AI models.
In the US, the NIST is devising an AI risk management framework that protects privacy and promotes a bias-free environment. Meanwhile, the FTC has warned that it will investigate discriminatory AI algorithms under existing regulations. Transgressing such digital acts can result in hefty fines, as in the case of Criteo's $65 million fine for breaching the GDPR.
Leaving a positive impact on society
When deployed at scale, AI apps have significant impacts across the fabric of society. Any hint of discriminatory policies, intentional or not, will be felt by the general population. For example, the PredPol policing software has been accused of biased crime prediction that targets specific demographics, partly due to unreported crime data.
Meanwhile, successful implementations of reliable AI help address social challenges effectively. For example, Crisis Text Line uses machine learning to spot words associated with self-harm in text messages and route at-risk youths to prompt counseling. It's therefore wise to revise your AI strategy to reflect your company's stance on social responsibility.
Learn More
For an in-depth understanding of reliable AI and the role Data-Centric AI plays in it, download our ebook and access the 8 key benefits of a data-centric approach to AI.
What are the stages for building reliable AI?
Reliability in AI apps requires a mindful and meticulous approach throughout the design, training, and deployment of the machine learning model.
Design stage
The design stage involves engaging a group of annotators to label the training datasets. It is vital that the datasets remain free from bias, which could otherwise jeopardize the entire training process. Therefore, you should ensure the annotators have adequate knowledge and competencies to label the specific data types involved.
It's equally important to consider the broader perspective when annotating a specific subject. For example, engineering professionals may use industry terms to describe electronic components, while laypeople use everyday words.
In such cases, recruiting annotators from a single group will lead to skewed datasets, which will eventually affect the machine learning model trained on them. Your team of annotators should therefore comprise individuals with different fields, proficiency levels, and perspectives on the subject.
Preparatory stage
Considering that much of the complexity of implementing a reliable AI model lies in the dataset, you'll want to establish a feedback loop for the annotators. As data volume grows, systematic guidance helps annotators produce quality training data and resolve ambiguity.
For example, we recommend setting a disagreement metric to handle exceptions during annotation, as sketched below. The disagreement metric alerts you to data that annotators struggle to reach a consensus on. Besides enabling mutually agreed annotations, it also provides insight into the complexities of the dataset.
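As a minimal illustration of such a metric, the sketch below scores each item by the share of annotators who disagree with the majority label. The item IDs, labels, and threshold are invented for the example, not any specific tool's API.

```python
from collections import Counter

def disagreement(labels: list[str]) -> float:
    """Fraction of annotators who disagree with the majority label for one item."""
    if not labels:
        return 0.0
    _, majority_count = Counter(labels).most_common(1)[0]
    return 1.0 - majority_count / len(labels)

REVIEW_THRESHOLD = 0.3  # illustrative cut-off; tune it to your data and team

annotations = {
    "img_001": ["cat", "cat", "cat"],
    "img_002": ["cat", "dog", "dog"],  # annotators split: likely an ambiguous item
}
to_review = [item for item, labels in annotations.items()
             if disagreement(labels) > REVIEW_THRESHOLD]
print(to_review)  # ['img_002'] gets escalated to a supervisor
```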
Remember that human error is inevitable, regardless of how knowledgeable or well-trained the labelers are. It's therefore prudent to re-evaluate the labels for correctness, whether manually or via automated tools, and it helps to engage an experienced supervisor to oversee the annotation and give labelers timely feedback.
Live stage
The AI model faces its real test once deployed in the production environment. At this point, you must continuously monitor the AI app to ensure that it behaves predictably. You should also continually refresh the datasets to reflect the evolving environment in which the AI app operates.
To preserve data quality and relevance, we recommend a double feedback loop to keep training your AI model post-deployment. For example, Eidos and JellySmack improve their ML training efficiency by iterating the feedback process with automated and manual audits.
With the same strategy, you can home in on the granular aspects that substantially impact the retrained AI model: your data engineers can focus on increasing data consensus, revising annotation guidelines, or reflecting real-world changes in the datasets. The sketch below shows what one iteration of such a loop might look like.
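Here is a schematic of one iteration, assuming two feedback paths: an automated audit that routes confident predictions back into the training pool, and a manual audit for ambiguous ones. The confidence threshold and the injected predict, human_review, and retrain callables are assumptions standing in for your own stack.

```python
from typing import Any, Callable, NamedTuple

CONFIDENCE_THRESHOLD = 0.9  # illustrative cut-off between the two feedback paths

class Prediction(NamedTuple):
    label: str
    confidence: float

def feedback_iteration(
    predict: Callable[[Any], Prediction],
    human_review: Callable[[list], list],  # placeholder: labelers + supervisor
    retrain: Callable[[list], Any],        # placeholder retraining step
    production_batch: list,
    training_pool: list,
):
    """Run one pass of the double feedback loop over a batch of production data."""
    manual_queue = []
    for sample in production_batch:
        pred = predict(sample)
        if pred.confidence >= CONFIDENCE_THRESHOLD:
            # Automated audit path: confident predictions rejoin the pool directly.
            training_pool.append((sample, pred.label))
        else:
            # Manual audit path: ambiguous samples go to human labelers.
            manual_queue.append(sample)
    training_pool.extend(human_review(manual_queue))
    return retrain(training_pool)
```

Each iteration both grows the training pool and surfaces the samples where annotation guidelines may need revising.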
Leveraging the DCAI approach to build reliable AI
So far, scientific discussions on AI have centered on algorithms and machine learning models. However, both have reached a degree of maturity, so we should shift our focus to the data. After all, clean, unbiased data is the foundation of a reliable AI system.
Many AI systems show lackluster performance in production despite being trained with cost-intensive machine learning models. This calls for a Data-Centric Artificial Intelligence (DCAI) approach to strengthen the foundation of AI applications: data quality. Check out this DCAI manifesto to find out what this new ML training approach entails.
The DCAI principle advocates that clean and diverse data ensure optimal performance of your AI app.
Clean data means the training datasets were built with standardized tools and procedures. This is important to ensure the resulting machine learning model achieves high accuracy.
Diverse data takes into account the various scenarios that might occur in actual applications. It involves training the machine learning algorithm on data instances that cover the full range of situations it will face, so that it responds predictably in real life; the sketch below shows a simple coverage check.
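As a small example of checking that coverage, the snippet below counts samples per scenario tag and flags the gaps. The tag names and the minimum count are invented for illustration.

```python
from collections import Counter

# Each training sample carries a scenario tag (lighting condition, dialect,
# device type, ...). The tags and threshold below are invented for the example.
samples = [
    {"id": 1, "scenario": "daylight"},
    {"id": 2, "scenario": "daylight"},
    {"id": 3, "scenario": "night"},
]
REQUIRED_SCENARIOS = {"daylight", "night", "rain"}
MIN_PER_SCENARIO = 2  # illustrative floor per scenario

counts = Counter(s["scenario"] for s in samples)
gaps = {sc for sc in REQUIRED_SCENARIOS if counts[sc] < MIN_PER_SCENARIO}
print(gaps)  # {'night', 'rain'} -> collect or annotate more data for these cases
```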
If you're concerned about the reliability of your AI projects, adopting the DCAI approach helps in several ways.
Improves performance
Rather than banking solely on established AI models, DCAI refocuses machine learning efforts on producing better datasets. The approach recognizes that data quality trumps data quantity: a massive dataset is worthless if most of it is incorrectly labeled, biased, or redundant.
The DCAI framework emphasizes getting data annotation right from the start. It involves mitigative measures that prevent biased labels from compromising data quality and that ensure diversity in the datasets. As a result, your team has fewer corrective inspections on its plate, since data quality is established from the beginning.
Reduces development time
DCAI aligns with the Agile methodologies most app developers are familiar with. Instead of waiting for issues to pile up, data scientists take an iterative approach and perform continuous checks, applying auditing methods such as honeypots and consensus metrics (sketched below). As a result, the data engineering team can respond to changing environments, latent flaws, and new behaviors more efficiently.
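To make the honeypot idea concrete, here is a minimal sketch: items with known gold labels are seeded into the annotation queue, and each annotator is scored against them. The names, answers, and accuracy bar are invented for illustration.

```python
# Honeypot audit: seed the queue with items whose true labels are known,
# then score each annotator against those gold labels.
golden = {"img_010": "cat", "img_025": "dog"}  # invented gold labels

annotator_answers = {
    "alice": {"img_010": "cat", "img_025": "dog"},
    "bob":   {"img_010": "cat", "img_025": "cat"},
}

MIN_ACCURACY = 0.8  # illustrative bar for trusting an annotator's labels

for annotator, answers in annotator_answers.items():
    correct = sum(answers.get(item) == label for item, label in golden.items())
    accuracy = correct / len(golden)
    status = "ok" if accuracy >= MIN_ACCURACY else "needs feedback"
    print(f"{annotator}: {accuracy:.0%} on honeypots -> {status}")
```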
Empowers collaboration
As AI makes headway in mainstream applications, data scientists seek to work collaboratively with app developers and other software experts. DCAI envisions a technological ecosystem that allows the respective parties to streamline machine learning processes for increased efficiency.
For example, data engineers use Kili to reduce missing annotations by automatically keeping track of the labeling process. Kili also uses machine learning iteration to further shorten the time it takes to train and bring a complex machine learning model to market.
Conclusion
The call for accountability, responsibility, and ethics in the design of AI-powered products is deafening. Authorities are clamping down on AI programs that infringe human rights, stray from ethical boundaries, or pose substantial risks to consumers' well-being. As a business leader, you need to adopt a new AI training strategy or risk the consequences of releasing a flawed AI app. We've presented the DCAI framework as the paradigm shift in AI development. It's about time you paid equal, if not more, attention to the data that powers your AI model.
Learn More
For an in-depth understanding of reliable AI and the role Data-Centric AI plays in it, download our ebook and access the 8 key benefits of a data-centric approach to AI.