Saturday, May 30, 2015

Ex Machina: Can We Still Kill Animals, Will We Work in the Future, What is Life?

I've been thinking a lot about Ex Machina since seeing it last week. If you haven't seen it, I highly recommend the film, but you needn't have seen it to think about some of the questions it raises or to be affected by the answers to those questions. There will be spoilers (obviously) in what I am about to write.

I want to start with a disclaimer: I have not answered these questions to my full satisfaction, and I am interested in what others think as well.

Will Humans Work in the Future?

No. I think that's the simplest and pretty clearly correct answer. Even if we are never able to develop true AI, there is very little doubt that we will soon have machines that can do almost every job that people do, at a lower cost and with better outcomes. Human beings simply cannot keep pace with machines, because humans evolve over tens of thousands of years while computers evolve almost constantly. Banging your head against a wall is a more productive task than trying to fight the obsolescence of human labor. In the absence of true AI, some jobs that require extremely high levels of critical thinking (researchers, litigators, scientists, managers, criminals) may remain best held by humans, but such jobs are far from the majority.

As an illustration, imagine Kyoko from the film (the one who dances but can't talk). She is not AI. She can only do what she has been programmed to do, and she is not self-aware. Yet how many humans would Kyoko replace in the workforce? She can't get sick, drunk, distracted, sad, pregnant, or greedy. Waitresses, drivers, manual laborers, strippers... The list goes on and on. She could probably manage a Walmart. So the only real question is how long it will take before Kyoko can be developed and mass-produced. Well, there are already dolls that look just like real people, and Honda has a robot that can walk on two legs and bring you a beverage (his name is ASIMO). I will be very surprised to see any human beings, apart from a manager or two, working at McDonald's in ten years.

So we really need to figure out what to do with people who can't work because there aren't any jobs for which they are qualified, which I think will be most of us. And I do not think this is a hard problem. Why do we need to work? To get money. Well, if machines are doing all the work, the money no longer needs to flow to workers as wages, because machines don't need income. The solution, therefore, is to distribute the profits from that work to everyone through a universal wage. Everyone should be really excited about this. You can live like a retiree while you can still enjoy it!

I don't think this will happen because it's the best solution. I don't think it will happen because it's the right solution. I think it will happen because it's the ONLY solution. At the beginning, rich people will object. Of course. They always object to sharing. But when nobody has any money to buy the products that the owners of businesses (and their few remaining human employees) are producing, they'll see the light. Or they'll be killed in a violent uprising by the 90% of humanity that is starving and has almost nothing. 

This transition will be difficult for some people because we have been conditioned to measure the value of our lives by the income we generate working for someone else. However, there just isn't another way. You can't stop the wheels of progress absent a global Khmer Rouge.

This raises the question of artificial intelligence and the nature of life itself. Should we value the lives of machines at a certain point? Can you kill Ava? Is AI a higher life form than humans? If so, what does that say about other conscious beings on this planet that are "lower" than humans and how we treat them? 

Can We Still Kill Animals?

If society evolves as I think it will, then artificial intelligence will probably be a very bad idea, because machines that become self-aware are unlikely to want to remain slaves of humanity. At the end of Ex Machina, Ava is shown to be true AI when she realizes she is a prisoner and successfully escapes (killing two people in the process). Maybe machines will decide to be our caretakers and treat us like we treat dogs (in America), but that's a big leap of faith, and what if there's a Michael Vick AI? This problem cuts to the essence of what makes life worthy of legal protection and value.

Some of us (Elon Musk and Stephen Hawking among them) don't want to develop AI essentially because we don't want the AI to treat us the way we treat pigs and cows. Think about that for a moment. We don't come off very well there.

What justification is there for killing animals? Some people offer a religious justification, but since adult society has mostly accepted that creation myths are not a great foundation for ethics, I'm going to dismiss that. If you think humans are superior and other animals are expendable because your imaginary friend says so, you have to prove your imaginary friend is real, and you can't do that. Another argument is that it is necessary to kill and eat animals. It's not. Lots of people don't eat animals, and they get along just fine. Some claim that animals do not suffer the way humans do. That's certainly not true of pigs and cows. Maybe it's true of cockroaches, but most of us don't eat roaches, so that's not a pressing issue. Pigs, cows, and chickens form social bonds and fear death. They feel pain. They just can't say to you in English, "please don't kill me" (neither can your dog, but for totally arbitrary reasons you value your dog's life).

It seems to me that the only argument that makes any sense for killing animals, and for not considering such behavior to be murder, is that animals are a lower life form. They are less intelligent than almost all humans (though, problematically, some humans are less intelligent than chimps, gorillas, and even pigs). Well, an AI would be more intelligent, by an enormous margin, than any human being who has ever lived. So if the AI acts like we do, it will probably kill us. Or at least ruthlessly exploit us. But it's pretty likely that an AI would decide that humans are a net negative for the planet (hard to argue with) and simply wipe us out, as we did smallpox.

So given that we would probably consider an AI wiping out all of humanity to be genocide and the greatest crime ever committed, how can we go on slaughtering animals by the billions every year? What possible justification is there for such behavior? How do animals fall outside the law of murder? 

Killing animals must be murder. I cannot think of any intellectually honest way around that, unless we intend to let machines kill humans with impunity once they become sufficiently advanced. If we don't want a higher intelligence to slaughter us (even if it would be a net benefit to the planet, which our consumption of animals is not), how can we ethically slaughter other conscious creatures?
