Artificial Intelligence Consciousness is Impossible. But Why So?


AI may seem to be growing more self-aware, but attaining genuine artificial consciousness is impossible from both a hardware and a software point of view.

Artificial consciousness is impossible because it is a phenomenon that computers are unable to exhibit. What defines consciousness in humans is the ability to experience sensation, emotion, and thought, and to produce willful behavior in response to stimulation from the environment.

Many AI researchers and followers of AI research projects say they are working toward building a conscious machine, based on the idea that brain functions merely encode and process information from the different senses. To date, however, the study of brain functions has yet to offer a comprehensive understanding of how they actually work.

It is not surprising, then, that brain simulations on supercomputers are still on the drawing board; attempts to rebuild the brain in software have so far been futile. A European project on which billions of dollars were spent over several years is now considered by experts to have failed. That effort has since shifted to look more like a less ambitious project in the U.S., developing new software tools for researchers to study brain data rather than simulating a brain.

Some people think that simulating the human brain with a computer can succeed, because they don’t believe artificial consciousness is impossible. Others consider that a mistake. Our basic argument is that brains integrate and compress multiple components of an experience, including sight and smell, in a way that simply can’t be handled by how today’s computers sense, process and store data.

It’s often said that our brains operate like computers. The truth of the matter is that artificial consciousness is impossible precisely because our brains function quite differently from computers.

Living organisms store experiences in their brains by adapting neural connections through interaction with their environment. A computer, by contrast, records data in short-term and long-term memory blocks. That difference in how information is managed is another reason artificial consciousness is impossible.

The mind actively seeks out things in the environment that guide how we perform actions. Perception doesn’t always rely on the senses alone; sometimes it relies on memory, as when you look at an item from different angles without consciously interpreting the data, and within a few seconds you can see how the same pattern could be produced by alternate views of that item.

Another way to think about this is that even tasks often seen as mundane require engagement from several areas of the brain, some of which are quite large. Learning and expertise require physical changes in the brain: they strengthen or weaken, for example, the connections between neurons. It is impossible to replicate that kind of ongoing transformation within the fixed architecture of a computer.

 

Computation and awareness

Some additional reasons why artificial intelligence consciousness is impossible are as follows:

A conscious person is aware of their general state of awareness: they know when they enter or exit a particular train of thought, and they can recall having done so. The same cannot be said of computers, and this is one reason artificial intelligence consciousness is impossible. More than 80 years ago, British computer scientist Alan Turing proved that there is no general way to determine whether an arbitrary computer program will ever stop on its own – yet that kind of self-knowledge is central to consciousness.

Turing’s argument is a proof by contradiction, and it helps show why consciousness in AI is impossible to achieve. Suppose there were a general process – a stop-checker – that could tell whether any program it analyzed would eventually stop. That process would itself be a program, so it would be among the programs it can analyze. Now imagine that an engineer wrote a new program that included the stop-checking process, with one crucial addition: an instruction to keep the program running forever if the stop-checker’s answer was “yes, it will stop.”

Run the stop-checker on that new program and it cannot be right. If it answers “yes, it will stop,” the added instruction keeps the program running forever, even though it should have stopped. If it answers “no, it will never stop,” the program promptly stops. Either way the stop-checker is wrong, so no such general process can exist.
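The contradiction is easier to see sketched out in code. The snippet below is a minimal illustration in Python of Turing’s construction; the names halts and contrary, and the idea of passing a program’s source text as a string, are assumptions made for this example rather than a real API – the whole point is that no working halts function can ever be written.

```python
# Hypothetical "stop-checker": assumed to decide, for any program's source
# text, whether running that program would eventually stop.
def halts(program_source: str) -> bool:
    raise NotImplementedError("no general stop-checker can exist")


# Stand-in for the full source text of contrary() below (illustrative only).
CONTRARY_SOURCE = "..."


def contrary() -> None:
    # Ask the stop-checker about this very program, then do the opposite.
    if halts(CONTRARY_SOURCE):
        # Checker predicted "it will stop" -> loop forever instead.
        while True:
            pass
    # Checker predicted "it will never stop" -> stop immediately.
    return
```

Whatever answer halts gives about contrary, the program does the opposite, so the assumed stop-checker must be wrong about it – which is the contradiction showing that such a general checker cannot exist.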

It may seem strange, but the conclusion follows: because no program can be fully analyzed in this way, a computer can never have all the information needed to know whether its own processes will conclude. Without that awareness of its own trains of thought, it cannot have conscious thoughts – which is why artificial intelligence consciousness is impossible.