In Search of Better AI

“Bots will have memory and personality to behave just as humans” is one of the promises being floated by the companies rushing to deploy their own bots on platforms such as Facebook’s Messenger, in what has been called the biggest gold rush since the advent of the App Store. In what has been termed an “arms race”, Facebook, Google, and Baidu are hurtling to channel their resources into developing AI that resembles the human brain. They’re spending billions of dollars to create machines that may one day possess common sense, and to help create software that responds more naturally to users’ requests and requires less human intervention. They’re doing this by stacking brain-aping bits of software known as neural nets, some of which, inspired by the hippocampus in our brains, can store longer sequences of information.
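
The “memory” these announcements gesture at is usually a recurrent architecture such as an LSTM, a neural-net cell that carries state from one step of a sequence to the next. Below is a minimal NumPy sketch of a single LSTM cell, purely for illustration: the layer sizes, random weights, and toy input are my own assumptions, not anything taken from Facebook’s, Google’s, or Baidu’s actual systems.

```python
# Toy sketch of a recurrent "memory" cell (standard LSTM equations).
# All dimensions and inputs are illustrative, not from any vendor's system.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    x: current input vector; h_prev, c_prev: previous hidden and cell state;
    W, U, b: stacked parameters for the input, forget, output, and candidate gates.
    """
    z = W @ x + U @ h_prev + b            # compute all four gates at once
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c = f * c_prev + i * g                # cell state: the "long-term memory"
    h = o * np.tanh(c)                    # hidden state: what the cell exposes
    return h, c

# Illustrative sizes: 8-dimensional inputs, 16-dimensional memory.
rng = np.random.default_rng(0)
n_in, n_hid = 8, 16
W = rng.standard_normal((4 * n_hid, n_in)) * 0.1
U = rng.standard_normal((4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
sequence = rng.standard_normal((20, n_in))   # a toy 20-step input sequence
for x_t in sequence:
    h, c = lstm_step(x_t, h, c, W, U, b)     # state persists from step to step
print(h.shape, c.shape)                      # (16,) (16,)
```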

Query these bots and, 99 times out of 100, you won’t get the response you’re looking for. This is not to say the experiment has been a failure, but trying to model these systems on the human mind will ultimately lead to one. I’ve always thought of the human brain as, well, a human brain, because it’s impossible to find a suitable comparison for an organ so complex that it remains well beyond our own comprehension. So why are the major tech companies in an arms race to develop AI that mimics the human brain without first understanding how the mind works?

It’s not hard to understand why: even the world’s most influential thinkers, most famously Elon Musk, have made grand predictions about humanity’s future that rest on the assumption that the human brain works like a computer. Billions of dollars have been spent on brain research, based in some cases on faulty ideas and promises. Dr. Robert Epstein has argued that the information-processing metaphor, borrowed from digital computers, dominates how we think about human intelligence, both on the streets and in the sciences. It is this kind of thinking that will lead us to the AI future we have for so long feared.

One of the reasons the bots still fall short on functionality and still depend on human guidance is a lack of context. Context is what gives an AI the ability to make more intelligent decisions rather than relying solely on well-defined input instructions. But to build it that way would mean infusing it with the consciousness that makes us unique. We need not think of AI as a replacement for humans, but as an assistant that will help us understand ourselves better. We want our AIs to be smart, not intelligent, because unlike intelligence, smartness is focused, measurable, and specific. We don’t want a self-driving car that takes its focus off the road because it is replaying the argument it had with the parking lot that morning. We don’t want a medical diagnostic device that wonders why it didn’t major in law instead of maniacally trying to figure out the cause of the symptoms we’ve just fed it. Impossible as that sounds, this is what could happen if we try to mirror AI on the brain. Our main goal should be to extract value from the alien nature of artificial intelligence instead of obsessing over its speed or power.

AI will help the line between man and machine fade away by augmenting us individually as people (deepening our memory, speeding up our recognition) and collectively as a species. Humans will spend the next decade, or even century, in a permanent identity crisis, with the question “What are we for?” constantly lingering above our heads. Maybe then we will be able to draw meaningful parallels between the human brain and computers. Until then, we need to get over the assumption that our minds work like computers and do what we do best: try to understand ourselves. It’ll help us adjust better to the complement AI will be to our lives.