What’s Ahead for AI? A Steep Learning Curve

In the early months of 2023, ChatGPT rose in popularity seemingly out of nowhere, instilling a mix of excitement and panic across classrooms and boardrooms. At the same time, Microsoft and Google formally kicked off the AI arms race with major strategic announcements, leading many in the broader public to believe that we have reached peak AI. Whether or not that is true, these new applications didn’t happen by accident. Training a machine to learn, think, act and respond like a human takes massive amounts of data inputs across countless potential scenarios. A machine can’t validate a machine, and right now any machine learning algorithm enabling these applications is only as good as its training data.

It’s (Still) All About The Data

The use cases for AI are getting more and more complex as organizations in retail, banking, automotive, healthcare and other industries look to implement AI. Many are finding these implementations much more difficult than expected because they underestimate the work that goes into collecting data and training models properly. These organizations need different data sets and inputs, made up of authentic voices, documents, images and sounds, depending on the algorithm’s requirements. Essentially, it comes down to sourcing quality data at scale.

Foundation models (deep learning algorithms trained on broad sets of unlabeled data) can help, but they raise crucial questions about ethics and compliance. If the foundational data is flawed or biased, so too will the outcomes be. It is difficult for algorithms to ‘unlearn’ patterns, so it is important that biases are not built into the algorithm from the earliest phases of implementation. For example, the most powerful language model ever created, Generative Pre-trained Transformer 4 (GPT-4), released in March of this year, does not reveal what datasets it was trained on, citing competitive reasons. This raises serious ethical questions. Organizations will need to build governance and compliance into their development process and timelines to ensure that their machine learning models are not amplifying existing biases in datasets.
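One way to make that governance concrete is to audit how evenly a training set represents the groups it is supposed to cover before any model is trained on it. The sketch below is a minimal, illustrative example (the field names and thresholds are assumptions, not a specific Applause or GPT-4 process): it counts how often each value of a sensitive attribute appears and flags values that fall well below an even-split baseline.

```python
# Minimal, illustrative pre-training audit: flag under-represented attribute
# values so skew is caught in governance review rather than in production.
# Field names ("locale") and the tolerance value are assumptions for the sketch.
from collections import Counter

def audit_representation(records, attribute, tolerance=0.5):
    """Return attribute values whose count is well below an even-split baseline."""
    counts = Counter(r[attribute] for r in records)
    expected = len(records) / len(counts)          # even-split baseline
    return {
        value: count
        for value, count in counts.items()
        if count < expected * tolerance            # far below the baseline
    }

sample = [
    {"text": "...", "locale": "en-US"},
    {"text": "...", "locale": "en-US"},
    {"text": "...", "locale": "en-US"},
    {"text": "...", "locale": "en-US"},
    {"text": "...", "locale": "hi-IN"},
]
print(audit_representation(sample, "locale"))      # {'hi-IN': 1}
```

A check like this does not prove a dataset is unbiased, but it gives compliance reviewers a concrete artifact to sign off on early, when fixing the data is still cheap.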

Test… and Test Again

While AI language models have been and will continue to be trained on large amounts of data, organizations continue to underestimate how much data they actually need. More training data means more learning for algorithms. Early or small sample sizes make it difficult to identify trends and draw accurate correlations. Getting it right requires sufficiently representing all attributes of human nature, which means that organizations will still need people to test, develop and improve AI.
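To see why small samples mislead, consider a toy illustration (the numbers below are synthetic and purely for demonstration, not from any real model or dataset): the same underlying relationship is estimated from 30 data points and from 3,000, and only the larger sample lands consistently near the true value.

```python
# Toy, synthetic illustration of why small samples make correlations unreliable.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.3                                  # weak real signal

def estimated_correlation(n):
    x = rng.normal(size=n)
    y = true_effect * x + rng.normal(size=n)       # signal plus noise
    return np.corrcoef(x, y)[0, 1]

for n in (30, 3_000):
    estimates = [estimated_correlation(n) for _ in range(5)]
    print(n, [round(r, 2) for r in estimates])
# The n=30 estimates swing widely around the true value (~0.29),
# while the n=3,000 estimates cluster tightly near it.
```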

We’ve all witnessed how even the largest tech companies have recognized the need for rigorous testing that combines real-world external feedback with internal testing.

Avoid Costly Mistakes

Crowd-based testing can introduce a human element to help uncover issues that lab-based or structured test cases can miss, and can curate training data.

Crowdtesting allows companies to get feedback from a diverse group of users, which can help identify potential sources of bias in the model. By testing the model with a wide range of users in real-world scenarios, companies can identify issues that may have been missed during the development process and take steps to address them. This method also helps identify issues related to user experience and helps establish a feedback loop between companies and their users to ensure that the AI application continues to evolve and improve.

By leveraging the power of the crowd, companies can reduce the time it takes to label large amounts of data, ensure quality control, and improve the diversity and relevance of their training data. By having multiple people label each data point, companies can identify discrepancies and errors and take steps to address them. They can also have testers generate new examples of data that are relevant to the task at hand, improving the overall diversity and quality of their training data.
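In practice, the multi-labeler idea often boils down to a simple consolidation step like the sketch below. This is an illustrative example, not a specific Applause workflow, and the helper names are hypothetical: each item is labeled by several crowd testers, a majority label is accepted when agreement is high enough, and items with heavy disagreement are routed to expert review instead of going straight into the training set.

```python
# Minimal sketch of multi-annotator label consolidation (illustrative names,
# not a specific crowdtesting API): accept majority labels, flag disagreements.
from collections import Counter

def consolidate_labels(labels_per_item, min_agreement=2 / 3):
    """Return (consolidated, needs_review) from {item_id: [labels, ...]}."""
    consolidated, needs_review = {}, []
    for item_id, labels in labels_per_item.items():
        (top_label, votes), = Counter(labels).most_common(1)
        if votes / len(labels) >= min_agreement:
            consolidated[item_id] = top_label      # strong majority: keep it
        else:
            needs_review.append(item_id)           # disagreement: re-check
    return consolidated, needs_review

raw = {
    "img_001": ["cat", "cat", "cat"],
    "img_002": ["cat", "dog", "bird"],             # no reliable majority
}
print(consolidate_labels(raw))
# ({'img_001': 'cat'}, ['img_002'])
```

The disagreement list is as valuable as the consolidated labels: it points directly at ambiguous or poorly specified items, which is exactly where bias and labeling errors tend to hide.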

While the excitement and activity around AI so far have been incredible, the remainder of 2023 will be a year of learning for AI teams and users. Incorporating and working with data from a broad variety of people with different backgrounds, experiences, and ways of thinking and behaving will be time well spent toward eliminating bias and further advancing AI.

For more information visit the Applause website HERE.


About Author

Adonis Celestine is Senior Director and Automation Practice Lead at Applause. In this role, Adonis helps Applause’s clients to take a customer-centric approach to quality as part of their quality engineering evolution. He is an expert in test data management and compliance, as well as automation tools including Selenium, Cypress, Playwright, Tosca, UFT and Leapwork. Before joining Applause, Adonis was Associate Director and Lead Solutions Architect at Cognizant and held diverse quality engineering roles across the finance and telecommunications sectors, working for brands including Lloyds Banking Group, de Volksbank, DLL, Tele2 and Rabobank. Adonis is an accomplished writer and public speaker. He is the author of “Quality Engineering: The Missing Key to Digital CX” (2022), “Continuous Quality: The Secret of the Pharaohs” (2021), which won the EuroSTAR Software Testing Award, and “As the World Turns: A Predictive Test Approach with Machine Learning” (2019). He has delivered speeches and keynotes at events including TestNet, Testdag, Testcon, EuroSTAR, Romania Testing Conference and Belgium Testing Days. Adonis regularly writes for multiple tech publications and his articles have been featured in AI Journal, Digital IT News and AT Today.