Will Computers Revolt?
Charles J. Simon
Consciousness and free will have long been regarded as hallmarks of human intelligence, distinguishing us from "lower" life forms. With the continuing development of artificial intelligence, the gap between biological and mechanical minds grows ever smaller. Although machines currently cannot express free will, Charles J. Simon takes up the question in his book "Will Computers Revolt?"
Charles J. Simon, BSEE, MSCS, is a recognized computer software and hardware expert. Mr. Simon's experience includes pioneering work in AI, neuroscience, and CAD. His technical experience includes the development of two unique artificial intelligence systems as well as software for EEGs and other neurological test equipment.
Will future computers be conscious entities? Will they have free will? Or will they merely simulate those abilities?
A popular argument that computers cannot think in a way analogous to humans goes like this:
We humans are conscious beings with free will, and these are essential to our thinking. Computers are mechanisms that run the same way every time and therefore cannot have free will. Computers are made of materials that cannot possess the essence of consciousness. Therefore computers, lacking free will and consciousness, will never be able to think.
With the discussion of free will and consciousness, we have reached the pinnacle of human mental processes, and also a point of philosophical contention.
For me, the question is whether or not you accept modern physics as a description of reality. While there are certainly areas of physics still to be discovered, the key point is that every physical system can be represented as information, and that information can be replicated in a computer. If what we observe in the brain's electrochemistry encompasses consciousness and free will, there is no reason a computer cannot possess these capabilities equally. If we believe instead that there is some unobserved, essential "magic," then we face a choice: either the magic is eventually observed, defined as part of physics, and incorporated into computers, or the magic lies outside the observable, meaning it is beyond any conceivable future physics.
My claim is that human thought is the sum of a number of general mental functions operating in parallel on an unimaginably vast scale. These functions were introduced in the previous chapters, and each can be described and understood. Some of these functions already work in computers at higher levels than in humans, and those that do not will be possible in the near future.
I claim that all the functions previously presented for AGI are individually necessary to any semblance of consciousness. Without the sensing of the world, and the modeling and imagination needed to understand it, no AI system could place its chess playing, mathematical proofs, or driving skills into context, and that context is a real environment.
I further claim that the functions presented so far are sufficient to give an AGI the appearance of consciousness. With these functions, a robot or computer with appropriate peripherals can sense its environment, remember past situations, and understand how different situations affect it. It can simulate several possible actions at any given time, selecting and executing the one it predicts is best.
We already have computers that speak and understand language to some extent. Combined with a robot-controlled "body" with vision, such a system could become acquainted with the objects in its environment and appear to reason. It would seem to make well-founded decisions and be able to explain its reasoning.
Because of its simulation abilities, your mind can consider multiple options, play each through in simulation, and then pick the one that leads to the most beneficial outcome for you.
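This decision-by-simulation idea can be sketched in a few lines of code. The following is a toy model, not anything from the book: the forward model, the scoring function, and the goal value are all illustrative assumptions.

```python
# Toy sketch of decision-by-simulation: imagine each option's outcome,
# score it, and act on the option with the best predicted result.
# The forward model and scoring below are illustrative assumptions.

def simulate(option, state):
    """Predict the resulting state if this option is taken (toy model)."""
    return state + option  # stand-in for a real predictive model

def desirability(state):
    """Score how beneficial a predicted state is (toy model)."""
    return -abs(10 - state)  # prefer outcomes near a goal value of 10

def choose(options, state):
    # Play each option through in simulation, one at a time,
    # and pick the one whose predicted outcome scores best.
    return max(options, key=lambda o: desirability(simulate(o, state)))

print(choose([1, 3, 7], state=4))  # prints 7: it brings state 4 nearest 10
```

The structure mirrors the text: options are examined individually (the `max` scan), each is played forward in imagination (`simulate`), and only the winner is actually executed.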
If you should try to prove that you have free will, you might repeatedly face situations that are as nearly identical as possible, sometimes making one choice and sometimes another. Unfortunately, you can never put yourself in a truly identical situation, because after the first time your experience, your choice, and its outcome become part of your mind, and the state of your mind is different on the next attempt. As a result, we have no way to measure whether free will actually exists, because we can never set up truly identical situations to determine whether we could have chosen otherwise.
Here is a demonstration. Lift your right index finger.
Did you lift it? If you did, was it just to go along with my demonstration? Or if you did not lift it, was it because you wanted to assert your "free will"? I claim that the decision you made was based entirely on your current state of mind, shaped by your experience with similar demonstrations.
OK, now lift your left index finger. Did you do the opposite of what you did for the previous paragraph? Did you assert your free will? In either case, did you at least think about raising your finger? I bet you did. I bet you could not help thinking about it as you read the text.
There is no way to prove or disprove free will in either case. Your first decision was based on your previous experience, and your second decision was based on that same experience plus your experience of the first decision.
When computers become learning systems, they too will incorporate the experience of earlier decisions into the making of current ones. So a future computer, left to its normal operation, might make a different choice when it encounters an apparently identical situation, just as you might.
On the other hand, we can create situations for computers that are truly identical. A computer can be restored to a previous backup point so that its intervening experience plays no part in its current operation. Restoring a backup can completely erase the experience of the first decision.
So if you back up the entire state of a computer, let the computer make a decision, reload it from the backup, put it in the same situation, and let it decide again, it will always make the same decision. If it did not, we would consider that a malfunction. In computer situations, it is straightforward to control all inputs (including access to real-time clocks) so that the situation is absolutely identical.
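The backup-and-restore experiment can be demonstrated directly. This is a minimal sketch, assuming a toy "learning computer" whose choices depend only on its internal state (including its random-number generator) and its inputs; the `Agent` class and its decision rule are invented for illustration.

```python
import copy
import random

# Toy "learning computer": its choices depend on its accumulated
# experience and its internal random-number generator, both of which
# are part of its state. (This class is illustrative, not from the book.)
class Agent:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)   # all randomness is part of state
        self.experience = []

    def decide(self, situation):
        choice = self.rng.choice(situation)
        self.experience.append((situation, choice))
        return choice

agent = Agent()
backup = copy.deepcopy(agent)        # back up the entire state

first = agent.decide(["left", "right"])

agent = copy.deepcopy(backup)        # restore from backup: the experience
                                     # of the first decision is erased
second = agent.decide(["left", "right"])

print(first == second)  # prints True: same state + same inputs = same choice
```

Because the restored agent is bit-for-bit identical to the original and receives identical inputs, any divergence between `first` and `second` would indeed indicate a malfunction.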
There are theories of free will and human consciousness that rely on complex mechanisms or on quantum mechanics, and these may turn out to be relevant. The simpler theory, however unpleasant it may be, is that human free will is just like that of the learning computer; it is only that we can never set up identical situations for ourselves and therefore cannot test whether the theory is right. We simply make the choice with the best expected outcome in each situation we encounter.
From the study of the brain and the functioning of neurons, there appears to be nothing in the brain that operates outside the laws of physics. The laws of physics are deterministic until you reach the level of (very small) quantum particles. You could argue that in a truly identical situation the human brain would make a different choice, because its computational mechanism is subject to unpredictable quantum mechanics and chaos theory. Depending on quantum effects, our synapses might release a few more or fewer molecules of neurotransmitter, so an identical situation could produce a different result. For me, this is an unsatisfying argument because it merely replaces deterministic decision-making not with "free will" but with the free will of a roulette wheel. It is disturbing enough to believe that your mind is a deterministic mechanism without also labeling it unreliable (and then claiming the mind's superiority on the basis of that unreliability).
Google Search always returns the best search results (ignoring sponsored entries). You may not agree with the algorithmic definition of "best," but Google's computers can only do what they are programmed to do. However, if users never click on the top search entry, that entry may be demoted and appear lower in the list. For a given search request, you could theoretically get a different result each time the search is issued, because Google's servers adapt to satisfy you with the "best" result. Google's computers incorporate the experience of whether putting a particular result at the top satisfied users into the ranking decisions for future searches.
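This feedback loop is easy to illustrate. What follows is a toy sketch of click-feedback ranking, not Google's actual algorithm: the page names, scores, and update amounts are all invented for the example.

```python
# Toy sketch of click-feedback ranking (not Google's real algorithm):
# clicked results gain score; skipped results above them drift downward.

scores = {"page_a": 1.0, "page_b": 1.0, "page_c": 1.0}

def ranked():
    """Current result order, best score first."""
    return sorted(scores, key=scores.get, reverse=True)

def record_feedback(clicked):
    # The clicked result gains weight; every result ranked above it
    # was shown first yet skipped, so each loses a little.
    for page in ranked():
        if page == clicked:
            scores[page] += 0.2
            break
        scores[page] -= 0.1

# Users repeatedly skip the top entries and click page_c instead.
for _ in range(5):
    record_feedback("page_c")

print(ranked()[0])  # prints page_c: past clicks changed future rankings
```

The system's "choice" of what to rank first changes over time for exactly the reason the text gives: the outcome of each earlier decision becomes an input to the next one.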
Do Google's search servers think they have free will? Of course not. Do they actually have free will in the same sense that humans do?
Think about it. Are your decisions different because you believe you have free will? Absolutely. An innate goal of your mind seems to be to assert your own individuality. Google's search engines do not consider presenting different results just to demonstrate their free will (as you may have done with your index fingers above). It seems clear that one of the inputs to every decision you make is your belief in your ability to make that decision: your belief in free will.
You would probably make different choices if you did not believe you had free will. If you truly did not believe you had free will, why would you bother thinking about a decision at all? You would be a purely reactive entity.
The reason for thinking about a decision (and the reason for believing in free will) is the likelihood of making a better decision by examining the consequences of various options. Your brain does not seem to be able to simulate different possibilities simultaneously, so it examines them one at a time. Examining various possibilities leads you to believe in free will, and that belief leads you to examine more possibilities.
But belief requires consciousness…
Will Computers Revolt? Preparing for the Future of Artificial Intelligence by Charles J. Simon. Copyright 2018 by Charles J. Simon. Published by Future AI. Used with permission of the publisher. All rights reserved.