When considering AI technologies for use in any application, there are three elements that should be evaluated separately: data, memory structure, and algorithm. These elements are analogous to those in traditional programming: the idea of an algorithm remains the same in both, memory structure corresponds to program storage, and whereas in traditional programming the code is written by a human, in AI it is derived directly from the data. In state-of-the-art systems, data, memory, and algorithm are often so tightly coupled that it is impossible to assess them individually or to see the impact each component has on the others. This becomes a problem, particularly when trying to determine why and where an AI has made an error. It is also a practical issue when attempting to push an AI solution into a production environment, since the lack of separation of concerns demands too much from specialized engineers.

There are many, many different AI techniques and technologies. Their popularity waxes and wanes with their promises and disappointments (today’s enthusiasm for neural nets repeats a cycle from the 1970s that eventually spurred an AI Winter). Regardless of technique or technology, these three components can guide their evaluation for your specific use case.
_____

Data will always be a product of its environment, but it must be collected, processed, and secured so that, for example, malicious attackers cannot inject bad data that could cause harm to a person or company. With black-box solutions, there is a high probability that you will not know bad data has been injected until an event or catastrophe occurs while your system is in production. Human overseers cannot sift through Big Data thoroughly enough to catch these malicious attackers, and AI systems are not yet able to reliably distinguish good data from bad without some human oversight.
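
As a minimal sketch of what “collected, processed, and secured” can look like in practice, the hypothetical ingestion gate below rejects records that fall outside expected ranges before they reach the model; the field names and bounds are illustrative assumptions, not prescriptions.

```python
# Sketch of a data-ingestion gate (illustrative field names and bounds).
# Records that fail validation are quarantined for human review instead of
# silently entering the training or inference pipeline.

EXPECTED_BOUNDS = {
    "temperature_c": (-40.0, 85.0),   # assumed plausible sensor range
    "pressure_kpa": (80.0, 120.0),
}

def validate_record(record: dict) -> bool:
    """Return True only if every expected field is present and within bounds."""
    for field, (low, high) in EXPECTED_BOUNDS.items():
        value = record.get(field)
        if value is None or not (low <= value <= high):
            return False
    return True

incoming = [
    {"temperature_c": 21.5, "pressure_kpa": 101.3},
    {"temperature_c": 9999.0, "pressure_kpa": 101.3},  # injected/corrupt value
]

accepted = [r for r in incoming if validate_record(r)]
quarantined = [r for r in incoming if not validate_record(r)]
print(len(accepted), "accepted;", len(quarantined), "quarantined for review")
```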

Memory structures can be file systems, records in a database, or more exotic forms in which data is distributed across a network. When data is highly processed and distributed, e.g., encoded into the weights of a neural network, it becomes difficult or impossible to trace prediction lineage from the output back through the system to the input. A memory structure that can be decoupled from data and algorithm for analysis, such as a traditional database or other non-exotic form, is preferable because the input data is kept intact.
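
A minimal sketch of the kind of non-exotic memory structure described above, assuming a simple SQLite table (table and column names are illustrative): each raw input is stored verbatim next to the prediction made from it, so any output can be traced back to the exact data that produced it.

```python
# Sketch: a conventional memory structure that preserves prediction lineage.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE predictions (
        id INTEGER PRIMARY KEY,
        raw_input TEXT NOT NULL,      -- the untouched input, preserved verbatim
        output TEXT NOT NULL,         -- what the system decided
        model_version TEXT NOT NULL   -- which algorithm/version produced it
    )
""")

conn.execute(
    "INSERT INTO predictions (raw_input, output, model_version) VALUES (?, ?, ?)",
    ('{"temperature_c": 21.5}', "normal", "rules-v1.2"),
)

# Lineage query: given a questionable output, recover the exact input behind it.
for row in conn.execute(
    "SELECT raw_input, model_version FROM predictions WHERE output = 'normal'"
):
    print(row)
```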

Algorithms will naturally vary from technique to technique.

For some industries, non-deterministic algorithms may provide an advantage. For example, a stochastic decision model may be beneficial in an oppositional game played by two or more agents, say the stock market. The randomness ensures that an opposing agent cannot know with certainty what its counterpart will do in a given situation. Stochastic decisions are also advantageous for expanding the solution space beyond what has already been learned or proven. This inconsistent mapping from inputs to outputs can be used to discover better solutions and to get the agent out of a local minimum or maximum. Think of what you need to do to get yourself out of a rut and think more creatively about a problem.
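
A minimal sketch of this idea, using a toy objective function and a hypothetical stochastic hill climber: occasional random jumps let the search escape a local maximum that a purely greedy climber would be stuck in. The function, step sizes, and jump probability are illustrative assumptions.

```python
# Sketch of how injected randomness can shake a search out of a local maximum.
import random

def objective(x: float) -> float:
    # Two humps: a local maximum near x=1 and a better one near x=4.
    return -(x - 1) ** 2 + 2 if x < 2.5 else -(x - 4) ** 2 + 5

def stochastic_hill_climb(x: float, steps: int = 500, epsilon: float = 0.2) -> float:
    random.seed(42)  # seeded only so the demo is repeatable
    for _ in range(steps):
        if random.random() < epsilon:
            candidate = random.uniform(0.0, 6.0)       # occasional random jump
        else:
            candidate = x + random.uniform(-0.1, 0.1)  # ordinary local step
        if objective(candidate) > objective(x):
            x = candidate
    return x

best = stochastic_hill_climb(x=1.0)
print(round(best, 2), round(objective(best), 2))  # escapes the hump near x=1
```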

For safety-critical industries, the algorithms used should be deterministic, meaning the same inputs produce the same outputs every time, provided nothing else has changed. When reviewing cases in which something has gone wrong as a result of a prediction provided by an AI, deterministic algorithms help users definitively identify what went wrong and why. Deterministic systems produce predictable outputs and allow reliable methods for correcting errors.
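
A minimal sketch of this property, assuming a hypothetical rule-based decision function with made-up thresholds: because the function is pure and deterministic, replaying a logged input during an incident review reproduces the contested output exactly, and the rule that fired is plain to see.

```python
# Sketch of replaying a logged input through a deterministic decision function.

def decide(reading: dict) -> str:
    """Pure function: the same reading always yields the same decision."""
    if reading["pressure_kpa"] > 110.0:
        return "shut_down"
    if reading["temperature_c"] > 70.0:
        return "throttle"
    return "normal"

logged_input = {"pressure_kpa": 112.4, "temperature_c": 55.0}  # from an incident log

# Replaying the logged input reproduces the exact decision under review.
assert decide(logged_input) == decide(logged_input) == "shut_down"
print("Replay matches the logged decision:", decide(logged_input))
```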
_____

Solutions like neural nets are “black boxes” precisely because of this tight coupling of the three elements, which obscures even an expert’s ability to understand how the system came to a conclusion. Neural nets are great solutions for low-risk systems like games, social media, or placing a cat’s whiskers on your face.

The flip side is systems like decision trees, where every branch is a legible, discrete evaluation that can be followed to a leaf, i.e., a decision. Decision trees couple algorithm and memory: the evaluation criteria (i.e., which branch to select) and their outputs (i.e., what to do next) are embedded in these two elements. Decision trees may be built manually by people who model the use case, or automatically by a separate algorithm (e.g., Random Forests) that uses data.
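
A minimal sketch of a manually built decision tree for a fictional ticket-routing use case (all features and thresholds are illustrative assumptions); the automatic route mentioned above would instead learn such branches from data.

```python
# Sketch of a hand-built decision tree: every branch is a legible test,
# every leaf is a decision, and the path taken can be traced after the fact.

def route_ticket(ticket: dict) -> str:
    if ticket["severity"] >= 4:                    # branch: high severity?
        if ticket["customer_tier"] == "enterprise":  # branch: enterprise customer?
            return "page_on_call"                  # leaf: decision
        return "escalate"                          # leaf: decision
    if ticket["age_hours"] > 48:                   # branch: gone stale?
        return "escalate"                          # leaf: decision
    return "standard_queue"                        # leaf: decision

print(route_ticket({"severity": 5, "customer_tier": "standard", "age_hours": 2}))
```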

Additionally, there are technologies that work only with specific types of data. For example, neural nets work only on numeric data, forcing all other types to be converted into a numeric representation. Workarounds include software like “Word2Vec,” which converts strings (i.e., words) into numeric vectors. Other systems, such as traditional “symbolic AI” or “rule-based production” systems, work only with strings; the vast amount of sensor data collected by the world today is lost to them. Some groups have attempted hybrid systems, but these seem to combine the deficiencies of both rather than their advantages. Still other solutions exist that work with all types of data, and this evaluation model works for them, too.
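
A minimal sketch of the string-to-number conversion described above, using toy one-hot vectors rather than Word2Vec itself; learned embeddings are denser and more informative, but the principle, strings in and numbers out, is the same. The tiny vocabulary is an illustrative assumption.

```python
# Sketch: words must become numeric vectors before a neural net can use them.

vocabulary = ["valve", "pressure", "normal", "alarm"]
word_to_index = {word: i for i, word in enumerate(vocabulary)}

def one_hot(word: str) -> list:
    """Return a numeric vector with a 1 in the position assigned to the word."""
    vector = [0] * len(vocabulary)
    vector[word_to_index[word]] = 1
    return vector

print(one_hot("pressure"))  # [0, 1, 0, 0] -- now usable as numeric input
```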

Users should have clear guidelines when evaluating any given AI technology, especially for use in safety-critical applications that can affect life, death, and well-being. Decision-makers should be able to evaluate an AI’s data, memory, and algorithmic structures individually, and any AI used for high-stakes decision-making should adhere to the ExCITE methodology for safety. Unpredictability is inevitable in the real world, and mistakes will happen. It is incumbent on us to ensure that our AI/ML systems have all the key elements necessary to reduce unintended behavior and to definitively guarantee that the same error will not occur twice.