“A plane piloted by tasseomancy, also known as tea leaf reading, is a unique and mysterious mode of transportation. The pilots, skilled in the art of interpreting the patterns left behind by tea leaves in a cup, use their divination skills to navigate the skies. As they sip their tea, they study the leaves, looking for symbols and signs that will guide them on their journey. They believe that the leaves hold messages from the universe, and that by interpreting them correctly, they can safely guide the plane to its destination.”

This is an excerpt from OpenAI’s latest release, ChatGPT, generated in response to my prompt: “Write a paragraph about a plane piloted by tasseomancy.” ChatGPT is a large language model that uses a type of artificial intelligence called deep learning (DL) to generate conversational text. As the developer of a different type of AI and the founder of a company commercializing it, I can attest to the impressive capabilities of ChatGPT and of deep learning models in general. But it is crucial that we also acknowledge their limitations and inherent dangers.

Deep learning models, which are built on artificial neural networks, are often referred to as “black boxes”: while they can make predictions and perform tasks, even their most expert developers cannot definitively explain how they arrive at those predictions. This lack of transparency is concerning, particularly in safety-critical industries such as aerospace, automotive, and healthcare, as well as in highly regulated industries like banking.
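To make the “black box” point concrete, here is a minimal sketch in Python (the dataset, network size, and choice of the scikit-learn library are my own illustrations, not drawn from any particular production system): a small neural network learns to classify data and makes confident predictions, yet all a developer can inspect afterward is matrices of raw numbers.

```python
# Minimal sketch of the "black box" problem, using scikit-learn.
# The dataset and network dimensions here are arbitrary illustrations.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy dataset: 1,000 samples, 20 numeric features, binary label.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# A small multilayer perceptron: two hidden layers of 32 units each.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
model.fit(X, y)

# The model classifies a new input with high confidence...
print(model.predict_proba(X[:1]))  # e.g. something like [[0.02 0.98]]

# ...but "explaining" that prediction means staring at raw weight matrices.
for i, weights in enumerate(model.coefs_):
    print(f"layer {i} weights: shape {weights.shape}")  # hundreds of opaque floats

# Nothing in those numbers says *why* the model chose one class over the other.
```

That opacity is exactly the problem: the prediction may well be correct, but the reasoning behind it is not available for inspection, audit, or certification.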

To illustrate this point, consider the hypothetical scenario of a “tasseomancy-piloted plane.” The idea of using divination to navigate the skies is absurd, yet the use of deep learning models in certain industries is not much different. The G-34, a committee of SAE International, has been tasked with recommending updates to current Federal Aviation Administration (FAA) safety regulations so that AI systems can fly planes, and its discussions have focused almost exclusively on neural nets and deep learning models. Likewise, the autonomous vehicles on public roads around the world today are powered by deep learning neural net models. Each time a system error has resulted in harm and, in some cases, death, engineers have attempted to read the ‘cyber-tea leaves’ to understand why the failure occurred. As with tasseomancy, much is left up to interpretation, which means they have not been able to guarantee a concrete resolution to the problem. Instead, the argument is made that these systems are statistically safer than letting humans do the driving. That is little comfort for the victims’ families, or for future victims of the same failures.

The ultimate goal of AI is human-level machine intelligence, and deep learning models have not presented a clear path to it. Even Yann LeCun, a pioneer of deep learning and Meta’s chief AI scientist, has said: “You have to take a step back and say, Okay, we built this ladder, but we want to go to the moon, and there's no way this ladder is going to get us there.”

As a society, we must be aware of the potential for a bubble in the AI industry, driven by hype and the overvaluation of companies built on deep learning models. We must remember that investing in the wrong technologies has real-world consequences. It is crucial to explore alternatives and to have them evaluated by a diverse range of experts, rather than relying solely on those whose expertise is in deep learning. Imagine, for example, where we’d be today if we had relied solely on the world’s top candlemakers to vet the lightbulb.

While deep learning models have had some impressive successes over the past decade, they also introduce risks and biases without a clear pathway to understanding or mitigating them. For safety-critical and regulated industries, we need science, not divination. As AI experts and industry leaders, we have a responsibility to ensure that the technologies we develop and promote are safe, transparent, and aligned with our society’s values. It is time for institutional investors, government regulatory agencies like the FAA and the National Highway Traffic Safety Administration (NHTSA), and society as a whole to shift their focus away from these opaque, “cybermancy” models and toward more explainable, transparent, and scientifically sound approaches to advancing AI.