Former Director on Operation Warp Speed Provides Strategic Guidance for Artificial Intelligence Company
Intelligent Artifacts' Plug-and-Play Certifiable AI Solution Set to Demo in November
Users should have clear guidelines when evaluating any AI technology, especially for use in safety-critical applications where outcomes affect life, death, and well-being. Decision-makers should be able to evaluate an AI’s data, memory, and algorithmic structures individually.
AI/ML to Modernize the F-35 Lightning II Platform
Considerations for building an AI/ML system.
Retired Air Force Pilot Brings 30+ Years of Defense Industry Experience to AI Startup
The "Blue Sky" opportunity for airborne Machine Learning that meets strict technical standards for safety in aviation is one step closer.
Leading the industry to FAA certifiable Machine Learning
Regulations for AI systems already exist. This trend will continue. Unfortunately, our society tends to be reactive instead of proactive.
I think what will change are the economics of work. Instead of forcing yourself to go into a job that you don’t like just for a paycheck, with the right systems in place, people will be able to do the work they find interesting or important.
There are limits to current automation. Would you like to completely automate your grocery deliveries to your house? During this COVID-19 pandemic, it would be more than a convenience.
There’s no way around it. Emotions are a prerequisite for autonomy.
If the ultimate goal of robotics and AI is to make machines that behave like animals (including humans), then this will require both embedded and learned behavior. What’s most important, though, is a sense of complex emotions; without it, machine minds cannot become as sophisticated as animal minds.
Doctors are human. They have egos, insecurities, limited patience, limited capacity, limited energy, limited cognitive abilities, limited experience. They are not able to easily absorb information from different specialties. They don’t have the time to learn both medicine and engineering. Their decisions are biased by earlier decisions and past successes/failures, even if unrelated. Machines don’t suffer these shortcomings.
AIs can be quite helpful in government and politics without having to become our overlords.
AI, like any other technology, has the potential to improve lives, reduce injustice and inequality, and unburden people. Alternatively, it can be used to reduce the quality of life, increase injustice and inequality, and unfairly burden people. It’s all a matter of how people decide to use it. It’s never a matter of the technology itself.
I feel your pain. I’ve experienced this frustration, too. Over time, though, I’ve come to understand why this happens. I’ll share my thoughts with you so that you may have some sympathy for the next person who uses these terms incorrectly.
Thanks for the answer request. To understand this answer, you first need to know that there are many different AI techniques and technologies.
Something that explorers, scientists, and entrepreneurs all understand is that reward is often tied to risk.