Let’s include humans in that group, too. Animals are not “semi-automated”; they are fully autonomous. Whether the programming is hardcoded through genetics as instinct or acquired as learned behavior, the robot analogy works because it’s not really an analogy: the goal of robotics is to achieve systems on par with animals. Animals are complex machines. Our ersatz robots and AI software are comparatively simple facsimiles of these machines, which have evolved autonomic processes (e.g., the immune and digestive systems) that we never think about, yet which automatically do what they’re supposed to do. Inside you are systems that are very “robotic” in how they go about their work.
How about thoughts? Is there an analog between AI and animal thought? Programming computer minds requires a keyboard and, often, lots of data. Is there a way of programming biological minds? Yes. B.F. Skinner’s work on operant conditioning (B.F. Skinner | Operant Conditioning | Simply Psychology) is a method for programming the behavior of animals. The animal processes its environmental data with feedback from the trainer delivered via treats instead of a keyboard, then exhibits new behavior correlated with its training. This is how circus dogs are trained to perform long sequences of fun acrobatics.
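The parallel between operant conditioning and machine learning can be made concrete: the treat is a reward signal, and repetition strengthens whatever behavior the reward follows. Here is a minimal sketch of that loop in code. The trick names, reward values, and learning rate are all illustrative assumptions, not anything from Skinner’s work:

```python
# Illustrative sketch: operant conditioning as a reward-learning loop.
# Behaviors, reward values, and the learning rate are hypothetical.
actions = ["sit", "spin", "jump"]
preference = {a: 0.0 for a in actions}  # learned strength of each behavior
alpha = 0.5                             # how strongly each treat reinforces

def trainer_reward(action):
    """The trainer gives a treat (1.0) only for the target trick."""
    return 1.0 if action == "spin" else 0.0

for trial in range(30):
    if trial < len(actions):
        action = actions[trial]                    # early on, try each behavior once
    else:
        action = max(actions, key=preference.get)  # then repeat the most rewarded one
    treat = trainer_reward(action)
    # Reinforce: nudge the preference toward the reward actually received
    preference[action] += alpha * (treat - preference[action])

print(max(preference, key=preference.get))  # prints "spin", the conditioned behavior
```

The point of the sketch is that nothing in the loop “understands” the trick; repeated reward alone is enough to shape the behavior, which is exactly the programming-without-a-keyboard idea above.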
At each choice juncture, an animal has a decision to make. Are these decisions automatic, in that the animal always chooses to maximize its treats? Or are there other motives that make the decision less straightforward? If the only motivating factor were the animal’s self-interest, and its brain were undamaged, then every decision could be predicted fairly easily. But this is often not the case: complex emotions supply additional motives.
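The predictability point can be sketched as a small decision model: an agent scoring options on self-interest alone always makes the same choice, while weighing an additional motive can flip it. The option names, motives, and weights below are illustrative assumptions:

```python
# Illustrative sketch: a decision as a weighted sum of motives.
# Options, motive names, and weights are hypothetical.
options = {
    # each option scored on two motives: payoff to self vs. concern for a companion
    "grab_treat":  {"self_interest": 1.0, "empathy": 0.0},
    "share_treat": {"self_interest": 0.5, "empathy": 1.0},
}

def choose(weights):
    """Pick the option with the highest motive-weighted utility."""
    def utility(option):
        return sum(weights[m] * options[option][m] for m in weights)
    return max(options, key=utility)

# A purely self-interested agent is trivially predictable...
print(choose({"self_interest": 1.0, "empathy": 0.0}))  # grab_treat
# ...but adding an emotional motive can flip the decision.
print(choose({"self_interest": 1.0, "empathy": 0.8}))  # share_treat
```

An outside observer who assumes only the first weighting will be surprised by the second agent’s behavior, which is the sense in which extra motives make decisions harder to predict.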
It’s an unfortunate conclusion, held by Western scientists for the better part of the last two centuries, that humans have a monopoly on emotions like love, happiness, fear, sadness, envy, and pain. No doubt this stemmed from religious notions of a natural hierarchy. Only relatively recently have we come around to the view that animals may have the full spectrum of emotions, and not even our animal-rights laws have fully caught up. (See Sentient Rats: Their Cognitive, Emotional, and Moral Lives.)
Emotions are unobservable internal states that inform decisions, which in turn drive observable behavior. If an animal’s range of emotions can include interests beyond itself, then the decision process, and hence the behavior, becomes less predictable. Elephants and crows are known to mourn their dead. Koko the gorilla expressed sadness over the loss of a beloved kitten. Monkeys get pissed off when treated unfairly. Even a lion has been caught on video showing compassion to an injured fox. Yet theory of mind in animals remains controversial, and some scientists scoff at language like “mourn”, “sadness”, “loved”, “pissed off”, and “compassion” as anthropomorphizing these behaviors. I don’t see this as a valid argument: the charge of anthropomorphism is circular once we remember that we, too, are animals evolved from common ancestry, and it runs counter to evolutionary science for humans to assert a monopoly on any emotion. That we recognize the unpredictable nature of animals, yet fail to recognize that this unpredictability arises from their complex emotions, is baffling.
If the ultimate goal of robotics and AI is to make machines that behave like animals (humans included), then this will require both embedded and learned behavior. Most important, though, is a sense of complex emotions, without which machine minds cannot become as sophisticated as animal minds. Although animals are instinctive and autonomous, one shouldn’t conclude that they are in any way lesser than humans on the metrics we value: the capacity to love, empathize, trust, and care for others. Quite the opposite has been argued above! And when machines become advanced enough to exhibit these traits, they should be afforded the same respect and protections.