Disclaimer

Many of my essays are quite old. They were, in effect, written by a person who no longer exists in that my views, beliefs, and overall philosophy have grown and evolved over the years. Consequently, if I were to write on the same topics again, the resulting essays might differ significantly from their current versions. Rather than edit my essays to remain contemporary with my views, I have chosen to preserve them as a record of my past inclinations and writing style. Thank you for understanding.

May 2008

Superhuman Intelligence, Super Absurd?

I posit that it might be impossible to be smarter than a human

Brief Description

Talk of the singularity, and of artificial intelligence in general, often assumes that computers or computer-brain combinations will eventually achieve superhuman intelligence. Any subsequent discussion speculates on when it will happen, whether it is a good thing, what will happen afterward, and so on...but the starting assumption, that there is such a thing as superhuman intelligence in the first place, is never questioned. I would like to put forth the idea that it might not be possible to possess superhuman intelligence. The very concept itself may be fundamentally flawed.

Full Description

Sections:
Introduction
Comparing Intelligence
Turing-Equivalence
The Challenge (why the term superhuman might make no sense)
Afterthought and Possible Sorta-Kinda-Solution

Introduction

As one reads my essays, one must conclude that I am the sort of person who embraces talk of the singularity, who believes it is a relatively accurate description of the future, and who welcomes it as a good thing for the fate of intelligence on cosmological timescales. I would like to state up front that this essay does not necessarily mark a departure from that philosophy. I still believe, based on my materialistic view of the human brain and mind, that we will certainly create computers with the same intellectual capability as humans, and furthermore, that we will greatly augment human neurology through the application of computational gadgetry...and I do believe these things will happen soon. To some extent they are already happening, and in this century, these areas of engineering and science will doubtlessly make tremendous advances.

However, I call into question the very premise of superhuman intelligence. Is the term, practically by definition, utterly absurd? Is there no such thing as superhuman intelligence? This is the question I address in this essay.

Comparing Intelligence

This idea has been kicking around in my human-level mind for several years but I have never put pen to paper (there's an antiquated metaphor) on it before. It is readily acknowledged that a spectrum of intelligence exists across Earth's biodiversity. For example, most people agree that insects are more intelligent than bacteria, that fish, amphibians, and reptiles are more intelligent than insects, birds and mammals more so than reptiles, apes more than most mammals, and humans more than other apes. Anyone can see the trend and extrapolate it, and in so doing imagine a point on the intelligence plot that surpasses our own.

To discuss this, I must establish some kind of usable definition of intelligence. Instead of measuring the outright intelligence of a single animal as some sort of score, I prefer to ask the following question: given two animals, is one notably more intelligent than the other? For example, perhaps it is fair to say that a dog is smarter than a cockroach without actually saying that a dog has a smartness score of 83 and a cockroach 45. This aids our discussion because it saves us from getting bogged down in measuring and otherwise quantifying a given animal's intelligence. So long as we can make a relative comparison to some other animal, we can contemplate this issue.

What is the measure of comparative intelligence? In other words, in what way is a dog smarter than a cockroach? I present two possibilities here (and suggest a third at the bottom of this essay). Either dogs can solve all problems that cockroaches can solve, plus a set of additional problems that cockroaches can't solve at all, or alternatively, dogs and cockroaches can solve all the same problems, but dogs are merely faster. Which of these statements best describes a dog's intelligence relative to a cockroach? The answer seems clear. Dogs aren't merely super-fast thinking cockroaches. They seem to be fundamentally superior.

Let us dispense with death and/or age-related mental deterioration as facts of life. Let's assume that animals could, in theory, live indefinitely and with full cognitive ability. This assumption removes limitations on intelligence due to limited lifespan or degrading mental ability. Given this assumption, can we imagine that a cockroach, given sufficient time, could learn to do anything a dog can do? Can a cockroach retrieve items for its owner (slippers, newspapers, tennis balls, whatnot)? Please ignore issues of mass and strength of course. Can a cockroach work in a pack like a pack of wolves to stalk, corner, and hunt prey? Can a cockroach learn classification problems the way many mammals and birds can be trained to choose colors and shapes to get rewards? Can a cockroach be trained to do tricks like rolling over and sitting? Can a cockroach learn a maze? Here's a slightly different one: can a cockroach learn to feel and exhibit strong emotional attachments to, and responses to the suffering of, other animals, even other species, such as a human owner? This is all somewhat speculative because we don't have immortal cockroaches to run these experiments on (and consider that similar animals like ants actually do work together, which was one of my questions above), but in most cases, it seems like dogs aren't merely fast-thinking cockroaches (their owners would leap at the opportunity to agree, I'm sure). Dogs seem to be capable of mental and emotional feats that cockroaches are fundamentally incapable of. In other words, there exist problems which cockroaches simply cannot solve regardless of time, but which dogs can.

If the previous example seemed a bit unconvincing, let's ramp up the challenge. Maybe cockroaches actually can be trained to retrieve items or classify stimuli or run mazes (although I strongly question their capability for rich emotional awareness). Maybe the difference between cockroaches and dogs is merely the duration of time required to train them. But what about another level up in demand on intelligence? Homo erectus doubtlessly resides in the upper echelons of the intelligence spectrum of Earth's history and was a truly clever dude. He could control fire and manufacture extremely precise stone tools. He was also the first species to conceive of wearing the skins of other animals as clothing. Now, would an immortal dog, given the ages of the universe in which to live its little doggy life, ever achieve a Homo erectus level of intellectualism [I must point out that while a very similar scenario is presented in Vernor Vinge's famous essay The Coming Technological Singularity, I had not read it at the time I wrote this essay]? Would a dog, given enough time, ever harness fire? Again, forgive their lack of thumbs and consequent inability to hold sticks and such. Can we conceive of the notion, even if they lack the physical implements? Would a dog ever build a tool as sophisticated as the stone tools of Homo erectus? We might visualize a dog scraping a stick or two to push things around a bit (chimpanzees, gorillas, and even crows have demonstrated such ability), but could a dog knap a piece of flint into a precise hand axe? Could a dog hit on the idea of carefully skinning its prey without shredding the hide and then drying it properly so as to produce a useful warm coat? I think the answers to these questions seem fairly clear (nothing is certain of course). Homo erectus was not just a fast-thinking dog. He was a smarter thing altogether. He could solve problems which a dog couldn't solve regardless of the time permitted. There exist whole suites of problems which dogs simply cannot solve, period.

Let's try again. Are humans just fast-thinking Homo erectuses? In this case we have some very clear evidence on the subject. Homo erectus persisted for nearly two million years by some estimates and appears to have truly and utterly stagnated. In fact, it is relatively well established that we didn't evolve from them. They existed in parallel with our ancestors (possibly even overlapping with humans) and eventually went extinct, most likely due to an inability to cope with the mental challenge of competing with our fast-rising ancestors. Homo erectus had every opportunity to be the ancestor of Earth's eventual inheritors (forgive the humancentrism, but really, it's true), and yet he failed to develop the necessary mental ability to outcompete our ancestors. The short of it is, it seems like Homo erectus was neurologically incapable of surpassing a certain level of performance -- no discredit to his phenomenal accomplishments, mind you. No sirree Bob, we aren't just fast-thinking Homo erectuses. We can actually solve problems that they couldn't solve no matter how much time they were allotted. No Homo erectus, no matter how much education provided, how much time permitted, or how rich and stimulating an environment inhabited, would ever write The Iliad or invent and construct a television or solve a calculus equation. They just weren't up to the task.

Okay, let's follow through. The question on the table is, do there exist problems which humans are fundamentally incapable of solving, no matter how long we are granted to work on them, but which a superhuman intelligence could theoretically solve? We can simplify the question by completely dismissing the second half of it for a moment. Let's dispense with speculation on artificial intelligence or computer-augmented brains or anything like that. Let's just ask the question: do there exist problems that humans can't solve? For every other animal in Earth's history, the answer seems to be yes, as suggested above, but is this true of humans, or are we truly different in some measurable, tangible way?

Turing-Equivalence

At this point I must get a little geeky on you; I apologize. In computer science there is a concrete, mathematically precise way of stating certain levels of computational ability: Turing-Completeness. This phrase describes any computer capable of solving all the problems labeled as Turing-Complete, or at the very least, capable of simulating a Universal Turing Machine (which can in turn solve all the Turing-Complete problems). I don't want to bore you with the definition of a Turing-Complete problem, but understand that this applies to all "normal" problems you can ask a computer to solve for you, even the hard ones like protein-folding and factoring large numbers. Every modern computer is Turing-Complete, for example. All Turing-Complete computers can solve all the problems that all other Turing-Complete computers can solve; there is no exception. All Macs can solve all problems that all Windows computers can solve, and both can solve all problems that Cray supercomputers can solve. We like to bicker about whose computer is better than whose, but as far as abstract computational theory is concerned, they are all equally capable.
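To make the notion of simulation concrete, here is a minimal sketch of a Turing machine simulator (in Python, purely illustrative; the rule format and the names run and flip are my own invention, not any standard library). Any system that can carry out this loop -- a computer, or a sufficiently patient person with pencil and paper -- can in principle run any Turing machine you hand it.

```python
# A minimal Turing machine simulator (illustrative sketch, not a library API).
# A machine is a dict mapping (state, symbol) -> (symbol_to_write, move, next_state).

def run(rules, tape, state="start", blank="_", halt="halt", max_steps=10_000):
    cells = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit of a binary string, then halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flip, "10110"))   # prints "01001_"
```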

However, things get a little dicey when the issue of Turing-Equivalence is brought up. This term applies to a computer that not only solves all Turing-Complete problems but can also simulate -- and be simulated by -- any other Turing-Complete computer. Turing-Equivalence is assumed to apply to all Turing-Complete computers, although this claim has not been rigorously proven.

Okay, that was confusing. Let's sum it up. All computers can solve all the problems that all other computers can solve. Some are faster, but that is the only distinction to be made. What of it?

Let me ask the following question: is the human brain/mind Turing-Complete? Can we solve any problem that any other Turing-Complete computer can solve, or can we at the very least simulate any computer that can solve that problem for us? The answer is the biggest duh of them all. In order to invent computers we had to simulate them, piece by piece, on the drawing board. We can simulate them with pencil and paper; we can simulate them by pushing marbles around on a table. We can, if we are so inclined and meticulous, trace the electrical signals zooming around inside a computer one by one until we understand exactly how the computer achieved a certain computational result. Mere tedium does not preclude Turing-Completeness. We could do this if we were thoroughly motivated. In fact, all electrical engineering, computer engineering, and computer science students do such exercises to some extent during their education. We, human brains, can simulate other Turing-Complete computers, no question about it...which means we ourselves are, by definition, Turing-Complete, since that is the only criterion of the definition. If computers are Turing-Complete (and they provably are) and we can simulate them (we provably can), then we are Turing-Complete, discussion closed.
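As a toy illustration of the kind of exercise I mean (a sketch of my own, not anyone's actual coursework), here is a one-bit full adder built from nothing but NAND gates, evaluated one signal at a time. This is exactly the sort of tracing a person could do with pencil and paper, only far more slowly.

```python
# A one-bit full adder built solely from NAND gates (illustrative sketch).
# Running it amounts to tracing signals through the circuit one gate at a time.

def nand(a, b):
    return 0 if (a and b) else 1

def xor(a, b):                       # XOR built from four NAND gates
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a, b, carry_in):
    partial = xor(a, b)
    total = xor(partial, carry_in)
    carry_out = nand(nand(a, b), nand(partial, carry_in))   # OR of the two AND terms
    return total, carry_out

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", full_adder(a, b, c))
```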

This raises the follow-up question: are we Turing-Equivalent, meaning can any other Turing-Complete computer simulate us? Well, any advocate of the singularity or of general belief in human-level artificial intelligence certainly thinks so. More interestingly, we currently know of no Turing-Complete system, physical or theoretical, which is not Turing-Equivalent. We have no good reason yet to believe that such a thing is possible. Take for example the suggestion that a computer could simulate a brain without even having to understand what the brain actually does at any high level of abstraction, just the way we can simulate a computer by tracing its electrical signals without understanding it at any higher level. A computer could simulate a brain as a physical model of neurons, or at the lower level of molecules, or at the lower level of atoms, and so on, without truly grokking human thought. Ultimately, theoretically, a traditional computer can simulate a human brain, and in so doing, compute and solve any problem that the human brain can solve. Thus, we are presumably Turing-Equivalent.
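As a cartoon of what simulating a brain "as a physical model of neurons" might look like, here is a sketch of a single leaky integrate-and-fire neuron, a standard textbook abstraction. The parameter values are arbitrary placeholders of my own choosing, not biological measurements; a whole-brain simulation would be this idea scaled up enormously, not a different idea.

```python
# Leaky integrate-and-fire neuron: a toy example of low-level brain simulation.
# All parameter values are arbitrary illustrative choices.

def simulate(input_current, steps=1000, dt=0.1, tau=10.0, threshold=1.0, reset=0.0):
    v = reset
    spike_times = []
    for step in range(steps):
        # The membrane potential leaks toward rest while integrating the input.
        v += dt * (-v + input_current) / tau
        if v >= threshold:
            spike_times.append(step * dt)
            v = reset
    return spike_times

print(simulate(input_current=1.5)[:5])   # times of the first few spikes
```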

The Challenge (why the term superhuman might make no sense)

This is the crux of my challenge. If we are Turing-Equivalent and all other computers are as well, then it is fundamentally impossible for a computer to be smarter than us in the way that we are smarter than other animals. To put it another way, if we are Turing-Equivalent, there is no such thing as a problem we cannot solve (within the realm of Turing-Complete problems). If we can solve all problems, then there is nothing left for a smarter computer to solve. We have them matched. Dogs don't seem to be Turing-Equivalent. Neither do Homo erectuses, but we do seem to have this quality. Under these conditions, I would conclude that it is impossible for superhuman intelligence to exist, truly and utterly impossible, at least not in the way we consider ourselves to be smarter than other animals.

But surely computers can do things we cannot. They already do things we can't, right?...such as model planetary weather systems, track millions of credit card transactions, and play unbeatable chess. They are smarter in small domains, and the claim is that they will be smarter in all domains eventually. On that day they will be superhuman, right? The answer is no. They are just faster. Speed is a noteworthy medal to pin to one's lapel, and I don't mean to disparage it, but it is a different kind of "smarter" than we usually associate with our stature over all other animals. We don't think we're better than animals based on speed alone; we think we are a definitively new kind of thing, and that is the way in which the term superhuman is frequently presented. I think that by the more fundamental definition of intelligence that I have provided here -- the ability to solve problems which are totally unsolvable by lesser intelligences -- superhuman intelligence might be a nonsensical term.

I find it rather telling that when advocates of superhuman intelligence discuss the possibility, they rarely provide more than the vaguest fog of a notion of what they mean. Consider this definition by Nick Bostrom in his essay How Long Before Superintelligence?:

"By a 'superintelligence' we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills".

Excuse me if this doesn't immediately beg the question of what Bostrom means by "smarter". Or consider I. J. Good's definition (he used a slightly different term, but follow along):

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever."

Disappointingly devoid of depth, in my opinion. His point, elaborated by the remainder of his quote, is that such machines may give rise to an intelligence explosion. Observe:

"Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."

Generally capable AI may indeed design a faster AI, and so on, in an explosive fashion, no argument there, but what will the quality of such ever-improving AI be? Once we design a computer as smart as a human, then even if that computer can design more computers, it can't, by definition, design a computer that outreaches our own (or its own) Turing-Equivalence. I have argued in this essay that we are already there. No one is more dismayed by this than I am. I am looking for the flaw in my reasoning. If you see it, please point it out to me.

Is speed the only metric by which we can judge future intelligences to be superhuman? We make a much stronger claim with respect to other animals. Is the best we can hope for in the future to simply be faster? Is that it? Billions of years in the future, when intelligence has evolved and spread across the cosmos, will it really just be a faster version of us? I am myself incredulous at so astounding a suggestion.

One might chastise me for dismissing the benefits of being faster, but what would such challengers say to the suggestion of a universe populated by super-fast thinking dogs? Simply being faster really isn't as impressive as having access to previously inaccessible realms of thought, is it? It would be nice to think that there are genuine domains of intellectualism that escape us at this time, but which will be realized in the future by superhumans, but there don't seem to be any areas of cognition that elude us. To rephrase the entire issue, we do, in fact, seem to be capable of thinking any thought that can be thought, of carrying out any computation that can be computed. Consider that it is easily proved that we can think any thought that can be expressed in a combination of words and phrases, since we can clearly enumerate all possible phrases if we are so inclined, and in so doing we can think every possible thought of a given length (the length of the written description of that thought). Such a claim isn't merely hopeful or whimsical; it is inherent in our Turing-Equivalence.
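The enumeration claim is trivially mechanizable. Here is a sketch (illustrative only, using a deliberately small alphabet): list every string of a given length; any thought whose written description fits within that length appears somewhere in the list, buried in an astronomical amount of gibberish.

```python
# Enumerate every string of a given length over a small alphabet.
# Any thought expressible in that many characters appears somewhere in the list.
from itertools import product

ALPHABET = "abcdefghijklmnopqrstuvwxyz .,"

def phrases(length):
    for letters in product(ALPHABET, repeat=length):
        yield "".join(letters)

# Print a few of the 29**5 five-character strings; the full list is astronomically long.
for _, phrase in zip(range(3), phrases(5)):
    print(repr(phrase))
```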

I put it to anyone who reads this to consider it for themselves. As I have stated, I am eager to find a flaw in my reasoning, but any response must be rigorous and precise. Otherwise it is just wishful thinking.

Afterthought and Possible Sorta-Kinda-Solution

One possible solution to this quandary is to observe that there may be other ways of measuring intelligence. There is no getting around the apparent Turing-Equivalence of the human brain. No superhuman intelligence can ever hope to best us in the Turing-Completeness game; we have already won (a tie for first place is a possibility of course), which means no superhuman can be better than us in the way that we are better than animals. But aside from speed, can superhumans be better than us in some other way?

There are other, slightly hazier notions of intelligence. For example, most humans can visualize one-dimensional, two-dimensional, and three-dimensional space, yet are quite flummoxed at the challenge to visualize four-dimensional space, much less 2539-dimensional space. I doubt this limitation is a physical property of the fabric of the universe, to be imposed on any theoretical computer or mind, but rather a consequence of the fact that our brains evolved in bodies that inhabit a three-dimensional world. Evolution solved the problem posed to it, and spent not a whit of selective energy selecting for four-dimensional visualization ability. But computers aren't evolved naturally, and even if evolutionary principles are used in their design, forward-thinking design can also play a role, as it always has in all engineering disciplines. We can, therefore, make computers that can visualize higher spatial dimensions by designing them that way.
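A small sketch of the point (a toy example of my own): the code that measures distances and angles between 2539-dimensional vectors is identical to the code for 3-dimensional ones. Nothing in the arithmetic privileges three dimensions; only our visual imagination does.

```python
# Distances and angles in any number of dimensions; the code does not care
# whether the vectors live in 3 dimensions or 2539.
import math
import random

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def angle_degrees(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / norms))

for dims in (3, 2539):
    u = [random.random() for _ in range(dims)]
    v = [random.random() for _ in range(dims)]
    print(dims, "dims:", round(distance(u, v), 3), round(angle_degrees(u, v), 1))
```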

As another example, visualize a point in a plane connected by lines to the five vertices of a surrounding pentagon, like a five-lobed asterisk. Pretty easy. Now, at each of those surrounding points, visualize its own surrounding pentagon and five more lines connecting to those vertices. That's harder. Take it another level up, visualizing the surrounding pentagon of points around every end point. Visualize the entire image. Can you do it? I intentionally chose pentagons because they don't tessellate on a plane. One could cheat at this task with triangles, squares, or hexagons by simply zooming out to a picture of a large triangular, square, or hexagonal lattice, but it isn't very easy with pentagons because there are lines crossing each other and points hanging free all over the place. A computer could probably do it though, if sufficiently programmed.
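A computer handles this recursive construction effortlessly because it only has to store the coordinates, not picture them. Here is a sketch of my own (no plotting library assumed) that generates the points and connecting segments of the recursive five-lobed asterisk to any depth:

```python
# Generate the recursive five-lobed asterisk: each point sprouts a pentagon of
# five children around it, down to the requested depth. Purely illustrative.
import math

def asterisk(center=(0.0, 0.0), radius=1.0, depth=2):
    segments = []                      # list of ((x1, y1), (x2, y2)) line segments
    def grow(point, r, level):
        if level == 0:
            return
        for k in range(5):
            theta = 2 * math.pi * k / 5
            child = (point[0] + r * math.cos(theta),
                     point[1] + r * math.sin(theta))
            segments.append((point, child))
            grow(child, r / 3.0, level - 1)   # shrink each generation
    grow(center, radius, depth)
    return segments

print(len(asterisk(depth=3)), "segments")    # 5 + 25 + 125 = 155
```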

Third example: consider our ability to hold in our heads a complex interconnected network of nodes and edges, along with a truly deep understanding of the network's structure and properties, like imagining the network of our friends' friends' friends with a deep realization of cliques of close friends, central individuals around which larger social circles orbit, peripheral introverts, and all the various exchanges of gifts, support, and occasional resentment. Picture all the various levels of overlapping and intertwined romantic relationships, and picture all of these properties steadily evolving over time. Any large network evades our ability to comprehend it, but computers already do this, although without a hint of conscious awareness. They could, of course, conceivably retain this ability even after we imbue them with conscious awareness in the future.
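Here is a sketch of the kind of bookkeeping a computer does trivially but a human cannot hold in mind for any large network (a toy adjacency-set representation of my own, not a claim about how real social-network software works): who the best-connected people are, and which triples of people form closed triangles of mutual friendship.

```python
# A toy friendship network as adjacency sets, with two structural queries a
# computer answers instantly even for networks far too large for a human mind.
friends = {
    "ann": {"bob", "cat", "dan"},
    "bob": {"ann", "cat"},
    "cat": {"ann", "bob", "eve"},
    "dan": {"ann"},
    "eve": {"cat"},
}

# Best-connected individuals, ranked by number of friends.
by_degree = sorted(friends, key=lambda person: len(friends[person]), reverse=True)
print("most connected:", by_degree[:3])

# Triangles: triples of people who are all mutual friends (tight three-person cliques).
people = sorted(friends)
triangles = [(a, b, c)
             for i, a in enumerate(people)
             for j, b in enumerate(people[i + 1:], i + 1)
             for c in people[j + 1:]
             if b in friends[a] and c in friends[a] and c in friends[b]]
print("triangles:", triangles)
```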

The previous examples are suggestions of some ways in which computers or augmented brains of the future might surpass our abilities in ways that do not translate literally to a measurement of speed or Turing-Equivalence. These kinds of descriptions of intelligence, the ability to understand increasingly complex arrangements of parts or the ability to visualize increasingly complex or extra-dimensional spaces, may hold the answer to my quandary. They do not represent an end-run around the Turing-Equivalence challenge, but they do represent a metric of performance (albeit a rather vague metric) that is different from speed or problem-domain-accessibility and that describes what the term superhuman intelligence might mean. Perhaps we can find a way to state this kind of comparative intelligence in concrete terms the way we can with speed and Turing-Equivalence, and in so doing resolve the challenge I have presented here...but until that day I remain underwhelmed by this suggestion as a final and conclusive answer to my original question -- it is just too vague. For the time being, whenever I hear someone speak about superhuman intelligence I will immediately wonder if they have given the concept more than a brief moment's worth of actual consideration.

I would really like to hear what people think of this. If you prefer private feedback, you can email me at kwiley@keithwiley.com. Alternatively, the comment section below is available.

Comments


Name:N. Harlan Hancock Date/Time:2021/04/17 15:17:23 GMT
To make my first point more succinct: Superintelligences might not be able to think of things that humans would NEVER think of, but they could think of things that humans would never think of in a million years, because a million years is a finite amount of time. This sort of issue with infinities is the crux of cryptography. Normal cryptographic algorithms CAN be solved, but it would take so much computation that it's EFFECTIVELY impossible.

Name:N. Harlan Hancock Date/Time:2021/04/17 15:10:02 GMT
1) My main problem with your argument is that it uses an infinity, and I don't think it properly addresses it. Sure, any computer or human could THEORETICALLY solve any problem given enough time, but immortal monkeys would also eventually type Hamlet and infinite power series can exactly match transcendental functions.

2) Your "comparing intelligence" section is not very convincing to me. You make lots of statements and guesses about animals based on normal human intuitions with very little evidence. I think they may have as much to do with human biases as with reality. You say the lack of advancement of Homo Erectus proves they fundamentally couldn't think like us, but maybe they just never invented language, so their information wasn't passed down as effectively, and they had to keep restarting in every generation in a way we didn't.

3) That skill of simulating complex systems in your "sorta-kinda-solution" section is called memory, as well as ease of accessing and altering that memory.

Name:Jean-Claude Kouassi Date/Time:2018/02/15 19:53:21 GMT
If I got you well, you are trying to explain that we humans have our specificity (innate abilities) that could not be beaten by machines. Or at least nothing ensures that at the latest event that will lead to the current definition of superintelligence, some of our qualities, unhandleable by machines, should not be left behind (due to increasingly complex or extra-dimensional spaces). So, a precious part of our humanity could be lost. As "computers aren't evolved naturally", "no superhuman can be better than us in the way that we are better than animals". So, in conclusion, machines have to perform in the way they perform better (extra-dimensional spaces), while we should find new concepts to integrate the lacking aspects of humanity as much as possible.

Now, in concordance with AI Safety, my point of view is something like a blackbox will for machines, different from a simple rule based system, to allow human orientation for them.
medium.com : the-reason-why-we-should-give-a-will-to-cognitive-robots

Name:Anonymous Date/Time:2014/06/04 15:56:54 GMT
Yes, we humans can solve anything, only that it takes time. I think what should be meant by AI is combining the qualities of both the speed of a computer and the intelligence of man. This means problems can be solved in a fraction of a second. Imagine an AI computer that could solve in milliseconds what took Einstein decades. We would be centuries more advanced, if not millennia.

Name:Keith Date/Time:2013/12/06 18:43:42 GMT
@Michael Brunnbauer

Thanks for the responses. I think you make some good points, with the exception of the following statement:

-- "You said that human level intelligence is a computable problem. So why did we not come up with a solution yet ?"

I disagree with the notion that anything that is possible must necessarily have already been achieved, and likewise, that anything that has not yet been achieved must necessarily be impossible. Therefore, I don't see any reason to take the current state of AI as informative about its theoretical plausibility.

Cheers!

Name:Michael Brunnbauer Date/Time:2013/12/06 18:28:26 GMT
On the other hand, every proof that a concrete algorithm does halt or not should be computable, and if there is no proof, no "superintelligence" can do anything about it.

So if one day automated theorem provers are better than any mathematician, we may have the same situation as with chess: We are beaten by computers in another discipline without them being really intelligent.

Name:Michael Brunnbauer Date/Time:2013/12/06 08:44:03 GMT
I think this argument misses the nature of intelligence, which is more about understanding problems than about solving them.

What about non-computable problems like the halting problem ? Humans can decide if an algorithm halts for some simple algorithms. Yet, there is a complexity limit where humans fail to make this decision.

I can easily imagine a complex algorithm where the halting problem cannot be decided by any human with any amount of time. Yet, a more intelligent being might just "see" that the algorithm halts or is stuck in an endless loop - while failing to see it for even more complex algorithms.

The same probably applies to computable problems. Humans might not be able to come up with an algorithm to solve some problem while a more intelligent being can do it.

You said that human level intelligence is a computable problem. So why did we not come up with a solution yet ?

Regards,

Michael Brunnbauer

Name:Keith Date/Time:2011/08/25
NOTE THAT ALL COMMENTS OLDER THAN THIS ONE PREDATE THE COMMENT SYSTEM. They originated as email feedback and have been retroactively converted to public comments to seed the new comment system. As such I have redacted them where appropriate for the purpose of preserving their anonymity.

Name:Anonymous Date/Time:2010/07/06
Hi,

I really enjoyed reading your article, and it sort of truncated my excitement about a super intelligent human arising in my lifetime. I was wondering if you would post some of the responses and criticisms this article received because I would be interested to see how you responded to them.

Thanks.