In the early months of 2023, ChatGPT rose to sudden popularity, instilling a mix of excitement and panic across classrooms and boardrooms. At the same time, Microsoft and Google formally kicked off the AI arms race with major strategic announcements, leading many in the broader public to believe that we have reached peak AI. Whether or not that is true, these new applications didn’t happen by accident. Training a machine to learn, think, act and respond like a human takes massive amounts of data inputs across countless potential scenarios. A machine can’t validate a machine, and right now any machine learning algorithm enabling these applications is only as good as its training data.
It’s (Still) All About The Data
The use cases for AI are getting more and more complex as organizations across retail, banking, automotive, healthcare and other industries look to implement AI. Many are finding these implementations much more difficult than expected because they underestimate the work that goes into collecting data and training models properly. These organizations need different data sets and inputs, made up of authentic voices, documents, images and sounds, depending on the algorithm’s requirements. Essentially, it comes down to sourcing quality data at scale.
Foundation models (deep learning algorithms trained on broad sets of unlabeled data) can help, but they raise crucial questions about ethics and compliance. If the foundational data is flawed or biased, so too will the outcomes be. It is difficult for algorithms to ‘unlearn’ patterns, so it is important that biases are not built into the algorithm from the earliest phases of implementation. For example, OpenAI has not revealed what data sets its most powerful language model yet, Generative Pre-trained Transformer 4 (GPT-4), released in March of this year, was trained on, citing competitive reasons. This raises serious ethical questions. Organizations will need to build governance and compliance into their development processes and timelines to ensure that their machine learning models are not amplifying existing biases in data sets.
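One lightweight way to build that kind of check into the process, sketched below in Python with made-up field names (this is an illustration of the idea, not a prescribed workflow), is to audit the demographic balance of a training set before training begins, since skew baked in at this stage is hard to unlearn later:

```python
from collections import Counter

# Hypothetical training records; "group" stands for whatever sensitive
# attribute the governance process tracks (illustrative field names).
training_data = [
    {"text": "example 1", "group": "group_a"},
    {"text": "example 2", "group": "group_a"},
    {"text": "example 3", "group": "group_a"},
    {"text": "example 4", "group": "group_b"},
]

counts = Counter(record["group"] for record in training_data)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} examples ({n / total:.0%} of training data)")
# A heavily skewed share here is an early warning that the model may
# learn, and later amplify, that imbalance.
```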
Test… and Test Again
While AI language models have been, and will continue to be, trained on large amounts of data, organizations continue to underestimate how much data they actually need. More training data means more learning for the algorithm. Early or small sample sizes make it difficult to identify trends and draw accurate correlations. Getting it right requires sufficiently representing all attributes of human nature, which means that organizations will still need people to test, develop and improve AI.
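A quick way to see why small samples mislead: the sketch below (pure illustration, using NumPy) estimates the correlation between two variables with a known weak relationship at several sample sizes. Small samples produce unstable estimates that can badly over- or understate the true relationship, while larger samples converge toward it.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_CORR = 0.3  # the real relationship we are trying to detect

for n in (20, 200, 2000, 20000):
    # Draw n samples of two variables with a known weak correlation.
    cov = [[1.0, TRUE_CORR], [TRUE_CORR, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    est = np.corrcoef(x, y)[0, 1]
    print(f"n={n:>6}: estimated correlation = {est:+.3f}")
```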
We’ve all witnessed how even the largest tech companies have recognized the need for rigorous testing that combines real-world external feedback with internal testing.
Avoid Costly Mistakes
Crowd-based testing introduces a human element that helps uncover issues lab-based or structured test cases can miss, and it can also be used to curate training data.
Crowdtesting allows companies to get feedback from a diverse group of users, which can help identify potential sources of bias in the model. By testing the model with a wide range of users in real-world scenarios, companies can identify issues that may have been missed during development and take steps to address them. This method also surfaces user-experience issues and establishes a feedback loop between companies and their users, ensuring that the AI application continues to evolve and improve.
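As a rough illustration of how that feedback loop can surface bias, the sketch below groups crowdtester verdicts by demographic segment and flags segments whose failure rates diverge sharply from the overall average. The `segment` and `passed` fields are assumptions made for this example, not a real crowdtesting API.

```python
from collections import defaultdict

# Hypothetical crowdtest results: each record notes the tester's
# demographic segment and whether the AI's output passed their review.
results = [
    {"segment": "en-US, 18-30", "passed": True},
    {"segment": "en-US, 18-30", "passed": True},
    {"segment": "es-MX, 30-50", "passed": False},
    {"segment": "es-MX, 30-50", "passed": True},
    {"segment": "en-IN, 50+", "passed": False},
    {"segment": "en-IN, 50+", "passed": False},
]

def failure_rates(records):
    """Compute the per-segment failure rate from crowdtester feedback."""
    totals, failures = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["segment"]] += 1
        if not r["passed"]:
            failures[r["segment"]] += 1
    return {seg: failures[seg] / totals[seg] for seg in totals}

rates = failure_rates(results)
overall = sum(1 for r in results if not r["passed"]) / len(results)

# Flag segments where the model fails noticeably more often than average:
# a possible sign that training data under-represents that group.
for seg, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    flag = "  <-- review" if rate > overall * 1.5 else ""
    print(f"{seg}: {rate:.0%} failure rate{flag}")
```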
By leveraging the power of the crowd, companies can reduce the time it takes to label large amounts of data, ensure quality control, and improve the diversity and relevance of their training data. By having multiple people label each data point, companies can identify discrepancies and errors and take steps to address them. They can also have testers generate new examples of data relevant to the task at hand, improving the overall diversity and quality of their training data.
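To make the multi-labeler idea concrete, here is a minimal sketch (assuming three independent labels per item; the data is illustrative) that takes the majority vote for each data point and flags items where labelers disagree, so they can be routed back for review:

```python
from collections import Counter

# Hypothetical crowd labels: three independent labelers per data point.
labels = {
    "img_001": ["cat", "cat", "cat"],
    "img_002": ["cat", "dog", "cat"],
    "img_003": ["dog", "cat", "bird"],  # no majority: needs adjudication
}

def aggregate(votes, min_agreement=2):
    """Return (majority_label, needs_review) for one data point."""
    label, count = Counter(votes).most_common(1)[0]
    return label, count < min_agreement

for item, votes in labels.items():
    label, review = aggregate(votes)
    status = "needs review" if review else "accepted"
    print(f"{item}: {label!r} ({status})")
```

Agreement thresholds and adjudication workflows vary by task; the point is simply that redundant labels turn individual errors into detectable disagreements.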
While the excitement and activity around AI so far have been incredible, the remainder of 2023 will be a year of learning for AI teams and users alike. Incorporating and working with data from a broad variety of people with different backgrounds, experiences, and ways of thinking and behaving will be time well spent toward eliminating bias and further advancing AI.
For more information, visit the Applause website.