To round out our ExCITE methodology, we end our blog series with Editability, a critical component of the responsible implementation and regulation of AI systems across the board. Editability enables the modification or removal of learned (trained) records from an AI's memory.
In our blog post on Traceability, we touched on the importance of examining ethical and responsible uses of AI given the widespread adoption of generative AI technologies.
To address these concerns, regulatory bodies are working to establish guidelines and standards for the development, deployment, and use of AI systems, ensuring that they are safe, fair, transparent, and compliant. As the world prepares for the era of AI regulations, organizations must understand the implications and proactively adapt their AI systems to meet the existing and forthcoming compliance requirements.
AI Regulations Are Coming
Software regulations specifically addressing AI/ML systems are in the works across the globe. As these regulations take shape, an AI's editing capabilities should be examined as part of the ExCITE methodology for analysis.
The Department of Defense has already begun to set guiding principles for the adoption of AI technologies. The Defense Innovation Unit has published 'Responsible AI Guidelines' stating that AI must be responsible, equitable, traceable, reliable, and governable.
The ExCITE methodology aligns with this Responsible AI initiative, and editability in particular addresses some of the biggest concerns shared by regulators and the public at large:
- Correcting errors
- Addressing bias
- Handling sensitive or classified information
- Adapting to changing circumstances
Editing Learned Data in an AI's Memory
Neural-network-based technologies, such as generative AI, undergo extensive training on vast datasets to achieve their impressive performance in real-world applications. However, the interconnected nature of neural networks, where the memory structure, data, and algorithm are tightly coupled, poses a significant challenge: it becomes nearly impossible to evaluate individual components and understand their impact, making the identification of problematic data akin to searching for a needle in a haystack. Additionally, these systems require the input data to be modeled rather than retained in some representation of its original form. This opacity, often referred to as the 'black box' problem, hinders their editability. Moreover, if modifications or updates are required post-deployment, complete retraining is necessary, a resource-intensive and time-consuming process that still offers no guarantee that errors or biases have been removed.
In non-neural-network-based deterministic systems, by contrast, where data, memory, and algorithm are loosely coupled, editability becomes an asset rather than a difficulty. Traceability enables us to pinpoint specific data points, and editability enables us to make necessary changes without retraining. Given the deterministic nature of these systems, where the same inputs always yield the same outputs, removing an erroneous record ensures that the same error will not recur. This assurance offers a level of reliability and confidence that is crucial for maintaining accuracy and mitigating issues.
Open-source and public data sets are highly likely to face attackers who inject malicious data to skew results, create erroneous behavior, and spread misinformation. An NN/LLM will find it difficult, and perhaps impossible, to isolate these records, which could mean throwing out the entire data set and starting from square one. A non-NN, deterministic system with editability features baked in, however, allows for continuous editing with no up-front modeling or retraining. Continuous editing improves a data set over time: as erroneous, biased, or malicious records are identified (traceability) and removed or changed, you are always working with the most up-to-date, accurate information.
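The contrast above can be sketched in a few lines of code. This is a minimal illustration, not a real product API: it assumes a deterministic, record-based memory where each learned record keeps a traceable identity, so a poisoned or erroneous record can be edited or excised directly, with no retraining step. All class, method, and record names are illustrative.

```python
class EditableMemory:
    """Sketch of a deterministic, record-based AI memory.

    Each record is stored close to its original representation, so
    individual records can be traced, edited, or removed in place.
    Illustrative only; names are assumptions, not a real API.
    """

    def __init__(self):
        self._records = {}  # record_id -> record data

    def learn(self, record_id, data):
        # Ingest a record; no up-front modeling or retraining required.
        self._records[record_id] = data

    def edit(self, record_id, new_data):
        # Correct an erroneous or biased record in place.
        if record_id not in self._records:
            raise KeyError(record_id)
        self._records[record_id] = new_data

    def forget(self, record_id):
        # Remove a record entirely (e.g. injected or sensitive data).
        self._records.pop(record_id, None)

    def recall(self, record_id):
        # Deterministic: the same query always yields the same record.
        return self._records.get(record_id)


memory = EditableMemory()
memory.learn("r1", {"fact": "routine maintenance record"})
memory.learn("r2", {"fact": "injected misinformation"})

# Excise the poisoned record; the rest of the memory is untouched.
memory.forget("r2")
assert memory.recall("r2") is None
assert memory.recall("r1") == {"fact": "routine maintenance record"}
```

Because retrieval is a direct, deterministic lookup, removing the bad record guarantees it can never influence an output again, which is exactly the assurance retraining a neural network cannot provide.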
Responsible AI - Here’s How Editability Can Help
Let's explore a few reasons why the ability to edit trained data is becoming increasingly important in the AI landscape, and how it enables organizations to navigate complex and evolving regulations successfully.
- Correcting errors: Trained AI models are not infallible; like humans, they will make mistakes. If errors are identified in the trained data, such as incorrect, biased, or confidential/copyrighted information, the ability to edit or modify that data allows those errors to be corrected, improving the accuracy and reliability of the system's output.
- Addressing bias: AIs can inadvertently inherit biases from the data they are trained on, as well as from the humans who build them in cases where up-front modeling is required. The ability to identify and edit or remove biased data from the training set makes it possible to mitigate or reduce bias in the AI's decision-making processes, ensuring fairer and more equitable outcomes.
- Handling sensitive information: Trained AI systems may process and store sensitive or confidential data. In certain cases, it may be necessary to remove or redact specific information from the training data to comply with privacy regulations or to protect sensitive information from being exposed.
- Adapting to changing circumstances: Real-world conditions and contexts can change over time. Being able to modify or update trained data allows AI systems to adapt to these changes and remain relevant and effective. For example, if an AI is trained on historical data and new data becomes available, incorporating the new information through updates, edits, or additions can improve the system's performance.
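The sensitive-information and adaptation points above boil down to simple operations over editable records. The sketch below shows a redaction pass that strips sensitive fields to satisfy a privacy request, without touching the rest of the data or retraining anything; the field names and record layout are assumptions for illustration only.

```python
# Illustrative redaction pass over editable training records.
# Field names (ssn, email, phone) are assumed for this example.

SENSITIVE_FIELDS = {"ssn", "email", "phone"}

def redact(record):
    """Return a copy of the record with sensitive fields removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

records = [
    {"id": 1, "text": "customer complaint", "email": "jane@example.com"},
    {"id": 2, "text": "product review", "ssn": "000-00-0000"},
]

# Every record keeps its useful content but loses the sensitive fields;
# new records can be appended the same way as circumstances change.
redacted = [redact(r) for r in records]
```

Because each record stays in a readable representation, a compliance edit like this is an ordinary data operation rather than a research problem.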
As AI regulations continue to develop worldwide, it is crucial to consider the editing capabilities of AI systems as part of the analysis process. Editability, in combination with Explainability, Computability, Interpretability, and Traceability (ExCITE), makes up the necessary features of a regulation-ready AI system. These components work together as a toolkit for examining the trustworthiness and transparency of any AI. Systems that satisfy the ExCITE paradigm will be responsible, equitable, traceable, reliable, and governable, as outlined in the DoD's AI Ethical Principles.
1. US Department of Defense, "DOD Adopts Ethical Principles for Artificial Intelligence"
2. Harvard Business Review, "AI Regulation Is Coming"
3. European Commission, "White Paper on Artificial Intelligence: A European Approach to Excellence and Trust"