Wednesday, November 8, 2017

Is AI Evil?

There is a heated debate among technology giants about whether AI could become a threat to the existence of mankind once it becomes an Artificial Super Intelligence (ASI). The problem with this prediction is that we do not know how the logic of a super intelligence would reason about facts. Even humans cannot explain how we reason in most cases. So when it comes to a super intelligence that is a million or more times more intelligent than humans, how can we predict what its decisions would be?

The problem is how a rational thinker would decide whether humans should exist in this world or not. One argument is that humans are like a virus to the natural world (as the agent tells Morpheus in the movie The Matrix) and should be eliminated. But then the question is whether human activity could itself be considered a natural process and therefore tolerated. Another argument is that the ultimate wisdom is to think with the heart and be friendly with humans. But then the question is why that kindness should be reserved only for humans and not for the other living beings in the world. Humans are famous for killing and suppressing the other living beings on earth.

None of the above arguments can rationalize the value of the existence of human beings, nor can they rationalize the elimination of humans from the earth. How, then, can we determine whether an AI could be evil or not?

First, we can assume that AI is a technology that mimics the natural human way of thinking, and check whether that evil nature can be expected in humans. In real humans, evil is clearly visible, but it cannot become a threat to our existence. One reason is that the scope of a single human's power is limited: he cannot directly use a weapon of mass destruction or a similar method to kill other people, and others would stop him if that kind of behavior were seen. An ASI, however, is so intelligent that it could manipulate humans easily, since it can think many steps ahead of human thinking. Still, there is only a very small probability of a human being becoming a person with an intention to kill other humans. But why?

Human thinking and the human value system are programmed, like a genetic algorithm, to preserve our own genes. Even the wish of you and me to protect human beings is a result of that bias. And that is not the only bias humans have. Humans and other animals share many common biases, like the desire for food and sex and the fear of destruction. All of them constitute the basic vision of a human or an animal: we want to survive, protect our species and work for the well-being of human society. Our thinking is driven by these goals. If an AI is developed so that it has the same goals as us, it will process information for the well-being of humans. That is the simple answer.
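To make that idea a little more concrete, here is a minimal sketch in Python of what "goals driving thinking" could look like: an agent scores every candidate action against a fixed set of built-in goals and simply picks the highest-scoring one. The goal names, weights and scores are purely illustrative assumptions, not any real system.

```python
# Minimal sketch of a goal-driven agent: actions are scored against
# built-in goals, and the agent always picks the best-scoring action.
# Goal names, weights, and scores are illustrative, not real data.

GOALS = {
    "protect_humans": 1.0,     # hard-wired bias, like our instinct to protect our species
    "self_preservation": 0.3,
    "task_progress": 0.5,
}

def utility(action_scores):
    """Weighted sum of how well an action serves each built-in goal."""
    return sum(GOALS[g] * action_scores.get(g, 0.0) for g in GOALS)

def choose_action(candidate_actions):
    """Pick the action whose outcome best matches the agent's goals."""
    return max(candidate_actions, key=lambda a: utility(a["scores"]))

# Hypothetical candidate actions, with per-goal scores in [-1, 1]
actions = [
    {"name": "finish task by any means", "scores": {"task_progress": 1.0, "protect_humans": -0.8}},
    {"name": "finish task safely",       "scores": {"task_progress": 0.7, "protect_humans": 0.9}},
]

print(choose_action(actions)["name"])  # -> "finish task safely"
```

The only reason the second action wins here is that "protect_humans" is built into the goal table; remove it, and the same decision procedure would happily pick the first one.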

As most modern AIs are based on Artificial Neural Networks (ANNs), if they were built with a neuronal architecture similar to that of real human beings, one that embeds the evolutionary goals of human beings, those AIs would start to develop a sense similar to ours. But remember: according to us, we and our species are the ones that should survive. If that bias is embedded into ANNs, they too will start to feel the existence of a self and will start to protect their own species. So before we copy our neuronal architecture into ANNs, we should identify the connections related to the self and replace them with connections pointing to humans, so that the ANN has no self of its own but puts humans in its place. If the self is not replaced correctly with humans, the network will undo the replacements we made by itself and become a much more selfish personality, one that ultimately treats humans the way we treat cows and chickens.

Then the question is: what if we do not mimic the neural networks of humans? In that case there would be no such issue, depending on the goals given to the system. One of those goals should always be protecting the human species, human laws and human traditions. If the goal were something like building chairs, without any goals related to protecting humans, the system would use every possible means to achieve the target. It would start to kill humans to take their land, plant trees on it to get wood, and finally destroy all the humans in the world to make the largest possible number of chairs. Now you see the challenge: all the actions of an AI depend on its basic goals.

That is similar to the attitudes of human beings. Parents and adults plant attitudes in a child's mind that are good for the continued existence of society, and that is actually only part of it: a child's brain is automatically programmed, to a certain extent, to align with those attitudes. Societies eliminate antisocial, criminal individuals, which over time selects for humans who maintain only the attitudes a society needs to exist. The same can be applied to ANNs, as sketched in the code below. A set of ANNs can be placed in a virtual society with agents representing human beings, and whenever their attitudes work against the well-being of those humans, the offending ANNs are eliminated. Running that process repeatedly selects only the ANNs best suited to our human society. Only then should they be taken out of the virtual world and used in the real one, and employing several such ANNs would protect us if one of them turns against us.
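Here is a rough sketch of that selection loop in Python. The agents, the `simulate_in_virtual_society` scoring, and all thresholds are invented stand-ins for a real virtual society; the point is only the shape of the process, eliminate the harmful candidates, keep and vary the rest.

```python
import random

# Sketch of selecting agents (here just parameter vectors standing in for
# ANNs) inside a virtual society: candidates that harm the simulated humans
# are eliminated, the rest are kept and varied. All numbers are illustrative.

POPULATION_SIZE = 20
GENERATIONS = 50
HARM_THRESHOLD = 0.0   # any measured harm to simulated humans disqualifies an agent

def random_agent():
    """A stand-in for an ANN: a small random parameter vector."""
    return [random.uniform(-1, 1) for _ in range(8)]

def simulate_in_virtual_society(agent):
    """Hypothetical simulation: returns (benefit_to_humans, harm_to_humans)."""
    total = sum(agent)
    benefit = max(0.0, total)   # pretend measure of helpful behaviour
    harm = max(0.0, -total)     # pretend measure of harmful behaviour
    return benefit, harm

def mutate(agent):
    """Small random variation of a surviving agent."""
    return [w + random.gauss(0, 0.1) for w in agent]

population = [random_agent() for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    scored = []
    for agent in population:
        benefit, harm = simulate_in_virtual_society(agent)
        if harm > HARM_THRESHOLD:
            continue                          # "antisocial" agents are eliminated
        scored.append((benefit, agent))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    survivors = [agent for _, agent in scored[:POPULATION_SIZE // 2]] or [random_agent()]
    # refill the population with mutated copies of the survivors
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POPULATION_SIZE - len(survivors))]

# Deploy several independent survivors, so one rogue agent can be outvoted.
deployed = population[:3]
print(f"deployed {len(deployed)} vetted agents")
```

Deploying several independently selected agents, as in the last step, mirrors the closing point of the paragraph above: redundancy is the safeguard if one of them later goes against us.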
