How close are we to achieving artificial general intelligence?

“Futurists are always, almost admirably, immune to realities on the ground.”

– Erik J. Larson

Have you been wondering whether supercomputers can achieve artificial general intelligence? Or whether an A.I. might stumble into general intelligence by chance? Guess what: the experts are far from being in the same boat on this subject.

Many experts have given it their best shot. For example, the mathematician I.J. Good conceived of a runaway “intelligence explosion”: a process whereby smarter-than-human machines iteratively improve their own intelligence (recursive self-improvement).

Year after year, new surveys ask researchers working in the AI field for their predictions of when we’ll achieve artificial general intelligence (AGI): machines as general-purpose and at least as intelligent as humans. Median estimates from these surveys give a 10% chance of AGI sometime in the 2020s, and a one-in-two chance of AGI between 2035 and 2050. Ray Kurzweil also stated his predictions to Futurism: “2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.”

Even the world-renowned physicist Stephen Hawking warned us about artificial intelligence before he died. “It would take off on its own, and re-design itself at an ever-increasing rate,” he said. He told the BBC, “The development of full artificial intelligence could spell the end of the human race.”

Elon Musk also joined the conversation, warning that A.I. is “our biggest existential threat.”

Nick Bostrom, author of “Superintelligence: Paths, Dangers, Strategies,” a New York Times bestseller back in 2016 that carried must-read recommendations from Bill Gates and Tesla’s Elon Musk, issued the same warning: a superintelligent and sentient machine is more dangerous than climate change.

The cover of Bostrom’s book is dominated by the frantic-eyed, pen-and-ink image of an owl. The owl is the subject of the book’s opening fable. A flock of sparrows is building their nests. “We are all so small and weak,” tweets one, feebly. “Imagine how easy life would be if we had an owl who could help us build our nests!”

There is general agreement among the sparrows: an owl could defend the sparrows! It could look after their old and their young! It could allow them to live a life of leisure and prosperity! With these visions in mind, the sparrows can hardly contain their excitement, and they fly off in search of the swivel-headed savior who will transform their lives.

There is just one voice of objection: “Scronkfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: ‘This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?’”

His warnings, inevitably, fall on deaf sparrow ears. Owl-taming would be complicated; why not get the owl first and work out the fine details later? Bostrom’s book, which is a shrill alarm call about the darker implications of artificial intelligence, is dedicated to Scronkfinkle.

Bostrom articulates his own warnings in a suitably fretful manner. In one part of the book, he talks about the “intelligence explosion” that will occur when machines much cleverer than us begin to design machines of their own. “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” he writes.

The book also describes “perverse instantiation”, “infrastructure profusion” and “mind crime” as possible failure modes. The so-called “control problem” remains unsolved, and Bostrom compares our position to that of a mouse trying to control a human being. Without a solution, introducing a superintelligence (SI) becomes a gamble, with a very high probability that a “savage” SI would wipe out humanity.

But others are more hopeful for artificial intelligence.

“I believe we will remain in charge of the technology for a decently long time and the potential of it to solve many of the world problems will be realized,” said Rollo Carpenter, creator of Cleverbot.

Most of these predictions, hopeful or fearful, rest on the assumption that computers will eventually surpass mankind.

But is this all true?

The Myth of Artificial Intelligence

Erik J. Larson, in his book “The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do,” argues: “The myth of artificial intelligence is that its arrival is inevitable, and only a matter of time — that we have already embarked on the path that will lead to human-level AI, and then superintelligence. We have not. The path exists only in our imaginations.”

Larson points out that current machine learning models are built on the principle of induction: inferring patterns from specific observations or, more generally, acquiring knowledge from experience. This partially explains the current focus on “big data”: the more observations, the better the model. We feed an algorithm thousands of labeled pictures of cats, or have it play millions of games of chess, and it works out which relationships among the inputs yield the best prediction accuracy. Some models are faster than others, or more sophisticated in their pattern recognition, but at bottom they are all doing the same thing: statistical generalization from observations.
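To make that concrete, here is a minimal sketch of induction in code. The library (scikit-learn) and the synthetic dataset are my own illustrative assumptions, not anything Larson uses in the book; the point is only that the model learns by statistically generalizing from labeled observations.

```python
# A minimal sketch of induction: statistical generalization from
# labeled observations. The library and synthetic data are
# illustrative assumptions, not Larson's own examples.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# "Observations": synthetic labeled examples standing in for
# thousands of labeled cat pictures or millions of chess games.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the model: correlate input features with labels.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The model predicts well on data drawn from the same distribution,
# but it has no notion of WHY a label holds -- only that certain
# features co-occur with it in the training set.
print("held-out accuracy:", model.score(X_test, y_test))
```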

In discussing Copernicus, Larson writes, “Only by first ignoring all the data or reconceptualizing it could Copernicus reject the geocentric model and infer a radical new structure to the solar system. (And note that this raises a question: How would ‘big data’ have helped? The data was all fit to the wrong model.)”

For Larson, computers can only calculate; the key missing ingredient in machine intelligence is the ability to appreciate context, perform analysis, and make appropriate inferences.

“Calculation is connecting known dots; applying the rules of algebra, say. Analysis is making sense of the dots, making a leap or guess that explains them — and then, given some insight, using a calculation to test it,” he writes. This is why it is so difficult for computers to identify whom “they” refers to in a sentence.

He provides example after example of sentences with ambiguous pronouns. Humans can look at the context of a sentence and instantly understand whom the pronoun refers to; computers lack this analytical, inference-making ability and get stuck. It is true that data is valuable, but it is not the data itself that makes information valuable: it is the meaning behind it, just as words made of random letters are gibberish. A computer can only carry out the tasks it was programmed to do in the first place, with human intervention. He even told his audience, “The ultimate irony is we need human innovation to figure out how to compute, how to make general intelligence on a computer if that’s what we want to do. But make no mistake, the AI we have today is inadequate for that task.”
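As a toy illustration of why this is hard (my own sketch, not an example from the book), consider the classic Winograd-schema pair below: swapping a single verb flips what “they” refers to, so any rule that looks only at the surface form of the sentence must get one of the two cases wrong.

```python
# A classic Winograd-schema pair: one changed verb flips the
# referent of "they", though the syntax is identical. The naive
# heuristic below is a hypothetical stand-in for any purely
# surface-level rule.
sentences = [
    ("The city councilmen refused the demonstrators a permit "
     "because they feared violence.", "the councilmen"),
    ("The city councilmen refused the demonstrators a permit "
     "because they advocated violence.", "the demonstrators"),
]

def nearest_noun_heuristic(sentence: str) -> str:
    # Bind the pronoun to the noun phrase appearing closest before
    # it -- which is "the demonstrators" in both sentences, so the
    # rule cannot be right twice.
    return "the demonstrators"

for text, correct in sentences:
    guess = nearest_noun_heuristic(text)
    verdict = "OK" if guess == correct else "WRONG"
    print(f"{verdict:5s} heuristic={guess!r} correct={correct!r}")
```

Resolving the pronoun correctly in both cases requires knowing something about councilmen, demonstrators, and violence that is nowhere in the sentence itself, which is exactly the contextual inference Larson says calculation alone cannot supply.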

Not much has changed since the 2000s on the central question of whether machines can think. Google kicked into high gear an idea that proved immensely useful and powerful for practical AI: that knowledge and human trust signals, such as HTML links to web pages, provide an ideal dataset for commercial AI development. The rest of the tech world, in Silicon Valley and everywhere else, followed the same pattern, ushering in a new era of practical AI. Older methods were retired (though traditional knowledge-based techniques are returning today in hybrid systems).

The futurists were still, by Larson’s lights, talking a sort of technobabble. And nothing coming out of Google, Facebook, the other big tech companies, research labs, or even the government really justified the hype.

We have long lived with computers, and even phones, that store more data and retrieve it faster than any human. These gadgets do not seem to pose much danger to us, so it is hard to see why there should be cause for concern.

In the future there will still be jobs for every man and woman, but by then we will clearly see the distinction between a human being and a machine, and recognize, in numerous ways, that there are simply things machines cannot do or duplicate.

Erik Larson helps remind us that humans remain exceptional. We are capable of the leaps of intuition that produce vaccines (most recently, Covid-19 vaccines), of creative insights that better our circumstances in a chaotic world, and of deep explanations that unravel the intricacies of our universe, all without quadrillions of data points and with comparatively slow neural hardware.

The pearl of all humanity!

Sources:
Larson, Erik J. The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Harvard University Press, 2021.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2016.
