Singularity Sigh

Ah…the Singularity.  Yeah.

I mean, I thought I was done with it.  But apparently it isn’t done with me.  I think you’ve all heard me say that I’m only interested in the Singularity as the fearful shadow of what’s really coming toward us from the futureward direction, i.e. a diaspora rather than a unification, a splintering of science and technology into two separate things:  a balkanization of knowledge-bases, massive theory-shock setting in as our increasing experimental power starts to collapse the ideal of Ye Progresse around our very ears.  Yeah:  quite the opposite of our machines taking the burden of science away from us, as though you could send your arms and legs to go to work for you while your torso sleeps…it’ll be more like our machines delivering too much of science’s burden to us, too many observations for our contextualizations to keep up with them.  Hey, we’ve had it easy up ’til now, folks!  All we’ve had to worry about is society keeping up with technology — “alienation”, we call it in the English biz.  But soon we’re going to have to worry about knowledge keeping up with facts, and that’s going to be, yeah…a bit of a liminal experience.

But not a Singularity.  Although the feeling will be much the same:  in that it will be an antipodal feeling, a feeling very nicely matched to the feeling explored in “Singularitarian” SF, by being its exact mirror image.

You can see its first stirrings all around you, if you look.  Discoveries in Discipline X rely on yardsticks supplied by Discipline Y, which rely on the consensus figures delivered by Discipline Z, which gets its benchmarks from Discipline X again…but the more exciting our scientific times, the more the yardsticks are all in flux, and these are indeed very, very exciting scientific times.  So you can’t really do Discipline X in glorious isolation anymore…not that you ever really could, except the “brain” of Science was large, and its neural firings intermittent, and its emergent thoughts slow, so you had time to work within a provisional conceptual framework, maybe even spend most of a career there, before the cogency of new information caught up with you.  Time between thoughts:  beautiful, beautiful interval.  But it’s not like that anymore, and it’s going to get a lot less like that as time goes on.  So right now to be a really good specialist, you need to have good interdisciplinary skills as well…at least, you have to be a good interdisciplinary reader.  Which is a tougher job than it sounds!  But you don’t yet have to be a specialist-level cruiser of X and Y knowledge, just to do some fruitful Z…and thank goodness for that, but make no mistake, these are the good times, and we’ve already passed Peak Interval, and it’s all uphill from here.  The days of being a standalone “X Specialist” are going, and soon they’ll be gone…and then the days of being an “XYZ Specialist” will go.  And the prospect is certainly like that of a Singularity, isn’t it?  As all disciplines seem fated to become one discipline, and all knowledge one knowledge, it seems we approach a miraculous unity-point where a sufficiently-intelligent being or being-grouping could know everything all at once.  And okay…so we know that intelligence won’t be us.  
But maybe it could be a daughter species of ours that contains some vestigial “us-ness” to it, enough for our own agency to be plausibly displaced into it…enough so that we can imagine being carried along, somehow, into those unsown fields that bear ripened fruit.  Where the children of men and the children of gods too, will see Baldr come again to the war-god’s fane.

So the Singularity, you see, is a solution.

And that’s the problem.  Except, it isn’t the problem critics of the Singularity think it is…and that’s another problem.  Because what use is it, to argue the nature of technological change with Singularitarians?  It’s quite beside the point:  as absurd as arguing biology with Creationists when they could be so much more easily brought low by arguing astronomy with them.  The fact is, to get right down to it, the Singularity is bunk.  It’s pie in the sky.  It can’t withstand the facts.

Here are the facts.

There is no strong AI.  There isn’t going to be any strong AI.  What we have here is a definition problem, to which the tools of engineering (“reverse-engineer the brain!”) have already proved inadequate, and will continue to prove inadequate.  The belief that we can produce conscious self-awareness simply by throwing complexity at a system, is a primitive one;  not unlike the idea that we can produce “life” by throwing electricity at a bunch of dead tissues.  Surely we would’ve already created life, if that were the case?  Just how complex do systems have to be before they acquire self-awareness?  This isn’t the enlistment of natural forces, it’s the blind invocation of them:  stand the thing out in the rain and God will touch it with heavenly fire…probably.  Of course if there’s anything more improbable than something working in accordance with our wishes based on our lack of understanding of its principles, then the word “probability” must surely acquire a new meaning before any amount of metal and any amount of current can acquire what is popularly referred to as “intelligence”…I mean, we might as well wish to induce salt to taste sweet by cutting out our tongues, as assume that Moore’s Law will deliver strong AI to us without us even doing anything…!  Ah, the metaphor of the “electronic brain”, how powerful a hold it has on us!  Only build it, and consciousness will come…!
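And for what it’s worth, the “just wait for Moore’s Law” extrapolation is trivially easy to write down, which is rather the point: a toy sketch (assuming the popular two-year doubling period, and the Intel 4004’s 1971 transistor count as a starting figure) projects raw component counts and literally nothing else.

```python
# Toy Moore's Law extrapolation: transistor count doubling roughly every
# two years (the popular formulation, used here purely as an assumption).
# Note what the curve projects: raw complexity. Not consciousness.

def transistors(start_count: int, start_year: int, year: int,
                doubling_period: float = 2.0) -> float:
    """Projected transistor count in `year`, doubling every `doubling_period` years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Starting point: Intel 4004 (1971), roughly 2,300 transistors.
for y in (1971, 1991, 2011):
    print(y, round(transistors(2300, 1971, y)))
```

The curve climbs from thousands to billions, and at no point does any term in it stand for “awareness” — which is the definition problem in miniature.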

But what is “consciousness”?

See, the “electronic brain”…that was a literary solution to an existential problem too, right?  “Man will be discovered to be nothing but a machine!” Okay, fine…but what if a machine can gain a “soul”?  Dwell for a moment with me on the word “animal”, if you will…what does it mean except “objects that move around without the wind blowing on them”?  Objects with behaviours instead of properties.  Yes, once we thought of them so.

So why shouldn’t we think of robots the same way?

You see what I mean, it’s a cool idea for a story…but it really isn’t how real robots work.  And it actually doesn’t solve our existential difficulty, just to say “if it moves without the wind blowing on it, it must be as alive as anything else!”  If that were the case, windows would be more alive than doors, and fires would be more alive than foxgloves.  It’s a definition problem.  We don’t know what constitutes “life”.  We don’t know how to tell if something’s intelligent, unless it’s made outta carbon compounds and has a face.  All of this “life” stuff, all of this “inanimate” stuff, it’s all just atoms and molecules…and nanotechnology’s the same.  I’m not going to be the one to say it’s impossible to make a pencil with a stainless-steel tip instead of a graphite one, but I am going to suggest that no amount of trying to fool the pencil-user into thinking steel is really graphite is going to make any difference toward being able to write with it, unless the “fooling technique” consists of coating the steel with graphite.  I mean ultimately, what’s the difference between a thing that moves on its own, and a thing that only moves when the wind blows on it?  Energy is energy, right?  Reaction is reaction, right?  Processes are processes, surely?  Why should we ever think anything is not alive and conscious, if we think we are?

Like I said:  what is consciousness?  Is this consciousness, only consciousness with an even slower thought-process than Science?  Sure, it’s “wind” that sculpts it and causes its changes, but it isn’t “real” wind…or is it?

Can the electrochemical processes of the brain be considered “wind”, by some definition?

You see…it’s ridiculous.  Because of course the electrochemistry of the brain can be called “wind” (by the way, did you know that Odin was originally a Wind-God?), but that doesn’t help us to make machines that can pass the Turing test.  Just like, if you’ll pardon this brief digression, you can’t discover Einsteinian physics from quantum-mechanical postulates:  because the first is physical reasoning based on visual apprehension of systems, and the second is physical reasoning based on auditory apprehension of systems.  Yeah:  I’ll say it.  The barrier to any GUT is in essence a sensory barrier.  Bear in mind that from a very strict evolutionary perspective there is no difference between sight and hearing — both are equally based on physics and chemistry, the properties of elements and media, the fact that from a certain perspective there is no difference between an organism and its surroundings…or, sorry, is that evolution talking, after all?

Because it sounds like philosophy, doesn’t it?

Except it doesn’t:  because philosophy doesn’t even exist without the supposition that there are differences in the field of existence, that are capable of categorization.  And understanding.

So where is the understanding of consciousness, in the world of AI research?


There is none.  Not since Alan Turing.  And by the way have you read his famous test?  To this day there is no computer that can satisfy its conditions.  We have computers that can beat Grand Masters in chess…we have computers that can “learn” from their environments, backed by a speed of thought unimaginable to the human physical structure.  So why don’t we have thinking machines yet?

For heaven’s sake, how long do we have to wait?

That’s how you know the Singularity isn’t coming, friends.  Because our technologists may be fully of the twenty-first century, but their philosophy’s strictly of the nineteenth.  Possibly even:  the eighteenth.  Because that’s when John Locke coined the term “consciousness”, eh?  And meant it to mean:  that which can be used to determine a person’s responsibility for their actions.  With “consciousness”:  responsibility.  Without “consciousness”:  by definition, no responsibility

…Hey, what were we talking about, again?


Singularitarians do not know what they are talking about.

There’s no strong AI.

There can’t be any strong AI.

Because we don’t even know what it would look like.

None of our yardsticks for the thing popularly known as “intelligence” may be right.

In fact they may always have been wrong…and we’re only just now finding out about it.  And, how unfortunate that would be!  What a problem for the progress of human knowledge!

Oh, if only there was some solution to it…!  Preferably an easy one…!  BUT THERE ISN’T.



10 responses to “Singularity Sigh”

  1. I really enjoyed the piece, and your work here, along with some pieces written by Andrew Hickey too, has really made me think about a matter which, for reasons too dull to detail, I’ve always put to one side in the past.

    There’s a great deal that we can learn from the things that a society chooses to worry about despite not being able to establish any objective criteria for measuring; whether it’s god, UFOs or the singularity, there’s so much to learn from listening to folks discussing that which either doesn’t exist or which can’t be proved to exist.

    I’m not suggesting that you might agree with the above, but I did want to say that your piece made me think, and that’s always a grand thing. My thanks to you.

  2. Colin: You’re welcome, and not only do I agree with you but I wish I could’ve put it so succinctly! Or, you know, even succinctly at all. The Singularity, like time-travel, like nanotechnological grey goo gone wild, like aliens, like the world-computer that’s taken over, is a marvellous metaphorical cloak for contemporary anxieties. And I think it’s pretty easy to peg these things to the places and times of their origins, in that sense. SF in particular lends itself so well to the construction of psychological romans à clef, that taking these stories at their face value makes them seem actively misleading — imagine not just reading Foundation without realizing that psychohistory is a metatextual device, but reading it and extracting the belief that we ought to have (or that we could have) psychohistory as a genuine real-world analytical tool! Just like a rocket ship. This is running with philosophical scissors around the house of Literature, while pretending you’re really running a technological football down into the endzone in the big game of Science…and wow, I’ve outdone myself in the tortured-metaphor department just there, haven’t I? I’ve practically waterboarded the thing, but I think the point itself is not too insane…?

    And Angsar John, I like your graph, but…you know, that’s still just measuring “calculations per second”…and increasing that rate isn’t all there is to “intelligence”, is it? One presumes thinking is always thinking whether it’s fast or slow…

  3. Righto! We’ll call it Babel-21C: The Confusion of Tongues and make a mint!

    A bit seriously, yeah I think you’re on the right track; there just won’t be enough researchers for all the possibilities, or enough funding. The tree of knowledge may wither somewhat at the tips, arguably it is already.

    Try this one … Our hero knows full well that the price of investing years in one twiggy branch may be that he’ll be stuck there while funding and legitimacy dissolve in the world-wide flailing. So he secretly invests in mimicry with a little forgery. He modifies software of kinds we already know — Markov-chain gibberish generators, Biblical text comparators, academic style guides — to seed sub-peer-review literatures, like arXiv now, with plausibly well-informed commentary and a few actual minor research papers, so as to appear qualified for the next branch he wants to jump to. We eavesdrop on his increasingly frantic private diary, as he begins to trust his software to make sense more than he does the equally frantic professors whom he’s trying to con …
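    (Worth noting that the “Markov-chain gibberish generator” the story repurposes is a real and very simple technique. A toy word-level sketch of one — corpus and names invented here for illustration — might look like this:)

```python
# Minimal word-level Markov-chain gibberish generator (order 1):
# learn which words follow which, then random-walk the table.
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain: dict, start: str, length: int, seed: int = 0) -> str:
    """Walk the chain from `start`, emitting plausible-sounding nonsense."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the singularity is a solution and that is the problem "
          "and the problem is a solution")
print(babble(build_chain(corpus), "the", 8))
```

    (Feed it enough of the right sub-literature and the output starts to pass a tired reviewer’s glance — which is exactly the con the hero is running.)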

  4. Sorry, spent foolishly on beer tonight! SIGH. Unable to glue thoughts together into cogency, whoops. But tomorrow I will certainly have something to say…!

  5. No doubt, the near future of AI will involve using AI to run subroutines, to increase efficiency of processes…rather than gift us with sentient companionship. Between Hameroff-Penrose OR Theory and my Machine Man essays, I dipped into the “what is sentience” question recently, so I really enjoyed this take on it. I’d really have to get it quiet around here to reply properly.

    I am so gratified my “7 Soldiers” entry has brought so many visitors and enjoyed the commentary thus far, as I can’t predict what will be said!

  6. Dig those Machine Man essays.

    Also, hold on for more commentary on your Gerberverse work…I’m slow but sure!

    Now, I actually have a story on the boil about the Princely world of awful futurity that the Singularity plays such a wonderful Pauper to…the basic thrust of which is just as Jonathan says: how in the hell can you learn to negotiate a world like that? (The name of the story, in case you were curious, is “The Lord Of Every Sentence”.) I mean, what accommodations must you make in a situation like that, and what accommodations are optional to you? Some necessities you might not even recognize as such: if tea and coffee were taken entirely out of the world, might we not come to realize just how valuable caffeine-delivery mechanisms really are? I mean why in the hell did people ever bother to cultivate tea in the first place? It wasn’t because they didn’t have more important work to do, you know. But we find it convenient to forget such things. Likewise, we may think we can do without any of our major scientific theories. We think it doesn’t really affect us if the world goes around the sun, or the sun around the world. We think that, because we swim in a centuries-old heliocentric sea…we don’t see how much of our lives are propped up by Copernicus’ little charts. Knowledge…it’s damn hard to put a price on it, either the price you gotta pay to get it, or the price you gotta pay to live without it…

    But maybe the toughest thing to put a price on is, just as Jonathan points out, a life lived somewhere in the middle…

    Okay, now I’m ready to crash out like a ton of bricks. Whew.

  7. I want to go with the flow on this and not be a reactionary, but it does seem as if the diehard who declares “I expect to negotiate the small print and I insist on signing my name” rather stands out as a landmark.

  8. Sorry, that was somewhat scrambled!

    Another interesting thing you’ve got ahold of there, Jonathan, that reminds me eerily of things lurking in my to-be-done file, is that business of the guy trusting his software to make sense of it all, and then starting to worry about it…yeah, that spotlights the problem quite well: did we ever stop making the damn stuff up, did we ever really know what we knew? It’s the sort of lugubrious notion SF has been known to handle briskly and efficiently, where other genres may be tempted to bog themselves down. Not that I’m speaking against being lugubrious, of course…!

  9. Aye it’s a fair cop. I guess my comment came from trying to follow some train of thought — any train! — which would naturally be an implication of science’s general semantic explosion as you envisage it.

    And my train went:

    If we’re heading toward Strong AI in the sense that software systems are starting to initiate solutions and concepts which no single person had ever put on paper; if in brief the systems are thinking for themselves

    or even if they only seem to be doing that, or are reputed to be doing it

    then the question arises, what are the consequences of delegating our decisions to them?

    For instance, wouldn’t it be nice to have an application on hand, which will read through End-User Licence Agreements, mark them up and annotate them, so that e.g. you can confidently ignore this section because you are not in fact a free-for-all Torrent conduit, or so that you see this other section marked: “Danger! 21 purchasers have been sued on the grounds of violating this section; 41083 have received warning notices …” etc? Are you game to rely on this nice app? What if it has small print of its own: “The output of this program may not be construed as factual information, and has no standing in courts of law of your nation (see other nations).”

    You remember you were at least skirting this situation, when they said you needed to subscribe to this particular phone plan, to get into your office. That’s what my comment was referring to. The person with the time and inclination to trace every decision back to the poor fool who signed off on it is a potential thorn in the side of the service provider.

    Obviously it’s not a new story. “Patrick McGoohan is on the phone, sir. He insists on speaking with our legal department.” “Oh, god!” It must happen many times a day, worldwide.

    The story gets a bit more interesting when the EULA, or the EULA-Vet software, can speak for itself, and might indeed have legal standing. Or even, be capable of independent thought — or able to hire automated independent thought at … umm … need.

    It gets more interesting still when a lot of things like this are going on, and when we at least think we’re on the margins of strong AI, and some entities are being attributed awareness or reputed to have it. The two obvious stories are the Pinocchio Error, where the entity is aware but nobody believes it, and the Tar Baby Error where somebody takes an entity to be aware when it isn’t. We might guess we know society’s path of least resistance, but we could be wrong.

    When it’s a matter of science, though, I’m rather clearer on where the defences against Error lie. I think I could quickly find something on the web to tell me how Google works, or genetic algorithms. When scientists are touting a system as “smart”, there are probably scientists near at hand who’ll explain: “When we say smart, of course we just mean it’s following this here method”.

    Furthermore, I’d say that scientists are pretty well immunized to Strong AI malarkey, albeit Turing and I.J. Good had a hand in proposing it. They’re immunized because a hard materialist answer is always preferred, and because scientists thrive on incorporating the inadequately understood into business-as-usual.

    For all the sophistication in the best Singularitarian arguments, a classic picture still haunts the debate. We see the scientists in their white coats assembled in the great MULTIVAC operations room, sober, competent, every factor under control … and the next minute they are no scientists at all but the priesthood of an unfathomable entity.

    I mean, really? The day after God is born, the participating post-docs will be quarreling about the order their names should appear on the paper. And of course they’d had ready a catalog of NP-hard problems to put God through Its paces. They’d already had some kind of theory of trans-human cognitions, because if it’s a real experiment you must have made predictions. They already understood godhead to the extent needed to implement it.

    And that’s the 1940s Astounding Science Fiction extremity of the tale. In reality, AI researchers inch their ways ahead through a conceptual terrain dimly litten by logic and human cognitive faculties, rarely seeing which lights are nearer, which further, which illusory. They have to limit their interpretations. They’re the last people to believe in Strong AI. If they said they did, we’d ask them why they haven’t got on with it.

    But now I want to ask, do you really think that science is about to blow itself to semantic smithereens? By mere accumulation of specialist Ph.D. theses?

    Isn’t it more likely that we’ll just slow down and consolidate for a while?
