Something that explorers, scientists, and entrepreneurs all understand is that reward is tied to risk. The greater the potential reward, the greater the risk involved. If we as a species had decided never to take risks, we would very likely be extinct. We would never have left our primordial nursery to spread across the planet, a journey that at some point became a necessity in the search for food and safe shelter. Survival itself is a great reward for taking such great risks.

Look at the other endeavors we undertake. Space travel, for example, carries tremendous risks, and many lives have been lost in its pursuit. Today is the 33rd anniversary of the Space Shuttle Challenger disaster. I remember watching it in school and the teachers suddenly shutting off the TV as they began to understand what we had just witnessed. It was horrific. Hearts broke around the world. The space shuttle program paused to assess what went wrong and to find a way to prevent it from happening again. Then it continued.

Some of the risks involved with AI/AGI are real. Others are pure fantasy. They make for fun movies, but the suspension of disbelief these stories demand, beginning with the creation of true machine intelligence, is high. So many things would need to go absolutely wrong. So many checks and balances designed to keep even people from destroying humanity would need to break down. Long before all of those safety nets are torn away, people typically pause, figure out what went wrong, fix it, and then move on.

Yes, there are risks. Real risks. But these risks are mitigated. They are monitored. If and when something goes wrong, they are reassessed, and the solutions are iterated to reduce the chances of it happening again. So the answer to your question is in your question: people keep trying to make AI better precisely to reduce the risks.