What’s New in AI?
A new wave of AI has entered the chat. Literally. Large Language Models (LLMs), like ChatGPT, and other generative AI solutions are being touted as the next big technological breakthrough, with applicability across industries. Being able to interact so directly with these technologies has brought more of the public consciousness into the global AI conversation. While new possibilities and potential are easier to imagine, we are also now hyper-aware of the ‘black box’ dilemma that AI practitioners have been struggling to correct for a long time. Issues such as bias, data manipulation, copyright and privacy violations, and the spread of misinformation threaten our ability to make advances that benefit humanity.
There is a renewed emphasis on examining ethical and responsible uses of artificial intelligence, creating reliable, safe, and trustworthy systems, and implementing regulations to protect privacy, intellectual property, and autonomy.
None of this is achievable without traceability.
What is Traceable Artificial Intelligence?
An auditable, verifiable, and validation-ready system requires an answer to the ever-important ‘where.’ Trustworthy solutions should be capable of following information backward from a prediction to the original training record(s) that produced it.
Traceability allows users to track an AI system’s predictions and processes, including the data it uses, the algorithms it employs, and the decisions it makes. With that visibility, we can understand how a system reaches its decisions, identify where errors or biases appear, and ensure accountability and transparency. Without traceability, we cannot be sure that an AI system is working as intended, and when erroneous behavior does occur, we cannot investigate and address the underlying issues.
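To make the idea concrete, here is a minimal sketch of what prediction-to-record traceability can look like in code. This is a toy nearest-neighbor classifier, not any vendor’s actual implementation; the class name, record IDs, and `learn`/`predict` methods are all hypothetical. The key point is that every prediction comes back paired with the IDs of the training records that produced it, answering the ‘where.’

```python
from collections import Counter

class TraceableKNN:
    """Toy nearest-neighbor classifier with provenance: each
    prediction reports the IDs of the training records that
    produced it. Illustrative only."""

    def __init__(self):
        # Each entry is (record_id, features, label).
        self.records = []

    def learn(self, record_id, features, label):
        # Store the record with a stable ID so it can be
        # traced, audited, or removed later.
        self.records.append((record_id, features, label))

    def predict(self, features, k=3):
        # Rank stored records by squared distance to the query.
        def dist(record):
            return sum((a - b) ** 2 for a, b in zip(record[1], features))

        nearest = sorted(self.records, key=dist)[:k]
        # Majority label among the k nearest records.
        label = Counter(r[2] for r in nearest).most_common(1)[0][0]
        # Provenance: exactly which learned records drove this output.
        provenance = [r[0] for r in nearest]
        return label, provenance

model = TraceableKNN()
model.learn("rec-001", (0.0, 0.0), "cat")
model.learn("rec-002", (1.0, 1.0), "dog")
model.learn("rec-003", (0.2, 0.1), "cat")
label, sources = model.predict((0.1, 0.0), k=2)
```

Here `label` is `"cat"` and `sources` is `["rec-001", "rec-003"]`: the output is auditable back to specific records, which is precisely what a dense neural network, on its own, does not provide.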
For example, a neural network built to generate art may have copyrighted imagery included in its training data. That imagery is a small part of a massive training data set. Without traceability, the company cannot locate and remove that data point even after the infringement is discovered. When a system isn’t transparent and causes harm, people lose trust in AI, and real progress is hindered.
Traceable AI Solutions for Compliance
The European Union Aviation Safety Agency (EASA) has recently issued a concept paper proposing new guidance for machine learning in aviation. The paper puts an emphasis on traceability for AI assurance, defining the capability as “the traceability of the data from their origin to their final operation through the whole pipeline of operations.” The agency goes on to say that each operation should be shown to be reproducible. This ML guidance is being shaped with neural networks in mind. The current standard for software certification in avionics is DO-178C, and no neural network-based machine learning technology has been certified under existing guidance for airborne ML applications.
Because of their ‘black box’ deficiencies, emerging AI/ML technologies struggle to comply with regulations across industries, especially where safety is a concern. In spaces where regulations do not yet exist, these technologies are introducing problems faster than protections can be developed. A number of legal challenges and liabilities have already arisen from deploying AI technologies without proper checks and balances.
If we focus on implementing traceable AI systems, as EASA proposes, we will enhance transparency, accountability, and ethical AI practices. AI practitioners with access to fully auditable systems will be able to identify and correct bias, protect privacy and intellectual property, become compliant, and ensure transparent and accountable decision-making.
Intelligent Artifacts’ GAIuS™ is an ExCITE-ready AI system capable of tracing data from outputs, through internal processes, all the way back to original learned records. As a certifiable solution, we are actively pursuing DO-178C certification with the goal of being the first AI/ML component certified on a platform for airborne applications.