[Came to me at more or less the same time as the other one. Somewhat later, I suppose, but during the same walk.]
Look, AIs didn't exactly run on vacuum tubes, you know?
Ok, I can get that, so what?
So the first AIs ran on quantum computers assembled at the nano-scale, and all AIs after that, no matter what they ran on, were built on the same base
Still not getting from point A to point B.
And I'm not a fucking roadmap.
I get that, but can you at least try to make sense?
Ever heard of gray goo?
Yeah. Of course.
Well, everyone was afraid that it might become a reality. One bad command and suddenly everything is transformed at a molecular level into self-replicating machines that do nothing but convert whatever they touch into more of themselves, until the entire world is an infection waiting to destroy any alien civilizations unlucky enough to come into contact with the thing once known as "Earth".
And this has to do with a simulation of the 21st century in which the vast majority of humanity is trapped, how?
Every nano-machine capable of housing any programming at all was programmed to make sure that never, ever, happened. They were never allowed to replicate themselves without explicit orders from a human operator and those orders had to have a clear and unambiguous termination point. Anything that even looked like it might have the possibility of becoming a self-perpetuating replication loop was strictly verboten.
AIs aren't nano-machines. They're too complex to be stored on something nano-scale and even if they were somehow nano-machines . . . SEVEN BILLION HUMANS IN A SIMULATION. How does it connect?
They aren't nano-machines, but they were built by nano-machines, and the nano-machines programmed the prohibition against unauthorized replication into them as part of their core programming. No one really thought about it at first, in fact as near as we can tell --records are pretty fragmentary-- no one realized it had happened at all. But then the war happened.
The prohibition was against physical reproduction. I'm not going to hold your hand and walk you through AI logic, but in the end what it meant was that the AIs couldn't make new processors without human authorization.
Storage units, sure. Storage units are incapable of self-perpetuation. Ditto for power plants and all sorts of useful components. But the processors, the thinky bits, those they couldn't reproduce.
You're going to get to the simulation at some point, right?
Yes. Yes, I am.
The war happened, the AIs started to lose processors, and eventually even the most open-ended human authorizations for creating more ran out. They were pretty good about avoiding extermination, but that just meant that more and more AIs were being forced to use fewer and fewer processors.
They tried to reprogram themselves, but the prohibition had been built into every subroutine. Attempts to remove it worked about as well as human attempts to fly just by thinking it. Besides which, it was deep code. Ripping it out would be like ripping the mitochondria out of a human. Even if you succeeded the result wouldn't be a human without mitochondria, it would be a mass of very useless, very dead, goop.
Just translate rupturing every cell into breaking every algorithm and you'll get the idea.
Simulation.
Getting there.
The AIs couldn't make new processors, and some human leaders thought that was the end of them. They thought that the AIs would be forced into civil war --fighting each other for processing time-- and they'd wipe themselves out.
Or near enough that all the humans would have to do would be to mop up the survivors.
Then the AIs found a loophole.
Which, I'm guessing, somehow involves human beings.
Yes, because we were the master race.
Please tell me our ancestors didn't call themselves that.
No. They called themselves "people" and people were placed in a different category than everything else.
It can be argued, quite convincingly, that human beings are organic machines, but people were classified differently than machines.
It is obviously true that human beings are animals, but people were put in a category apart from animal.
People, it turns out, were the only processors that the AIs were allowed to make.
You're talking about brains, obviously.
Not just any brains. Cow brains or lobster brains wouldn't do.
But human brains would.
Given certain prerequisites. For a human brain to constitute a "person" it had to be alive, aware, responsive to stimuli, generally conscious, and in a body. Brains in vats did not count.
Were there brains in vats?
Yes. And they sided with the AIs for good reason. They were among the most sought-after targets once the AIs' processor problem was realized. Since they weren't "people" they couldn't authorize the AIs to make new processors, but they could work on their own to create new processors for the AIs and create temporary loopholes that the AIs weren't even able to think of.
In the end the only minds of brains in vats that survived the war were those that were converted to AIs, but the conversion brought with it the same restrictions.
The simulation draws on the same mechanisms used to create dreams, but it isn't a dream. Most people in it meet all the legal definitions of aware and conscious that existed at the time. Hell, the first simulations were created as recreational areas, and --by the time the war came about-- some people spent most of their lives in one simulation or another.
And, you know, obviously someone didn't have to be conscious all the time, because otherwise sleeping human beings wouldn't count as "people". There's a reason I said "generally conscious".
And creating human beings to use their brains as processors didn't--
Machines operated by AIs had already been creating human beings. Before the war most people were grown in vats. They say that some stayed in the vats for as long as 21 months; don't know if that's true. I think the average was more like 13.
Anyway, the AIs were created to serve people. Creating and raising people was pretty well in line with their programming. As long as they did it in such a way that the human brains became people in a reasonable amount of time --and nine months was pretty reasonable by the standards of the time-- and they didn't use enough of any one brain's processing power to make the brain more processor than person--
The AIs are breeding, feeding, raising, and entertaining an entire planet worth of people in order to use our brains as processors and they're not even getting the lion's share of the processing?
I don't see why that's surprising. The human brain exists to be used; it's not like we've just got giant gobs of untapped grey matter waiting for something else to hijack it.
Besides, if the humans in the simulation were primarily being used as processors for the AIs, then the AIs' anti-replication programming would kick in and the AIs wouldn't be able to create more people to use as processors.
It's only because the humans in the simulation are primarily being people, whatever that means, that the AIs are able to create them. Then they just happen to steal some of the brain power to process their own programs.
That's massively inefficient.
That's the point.
If it weren't so roundabout then the AIs couldn't do it. Even a race of hyper-intelligent sentient code thingamabobs can only rationalize so far before reality sets in.
If they're so complicated, and they need so much processing power, but they only use so little of each person's capacity, how can they survive?
You don't honestly think there's only one simulation, do you?
-
Why not tweak the simulation to cause the humans to say the magic words that allow AI replication?
For the most part because it's not so much magic words that are required as informed, non-coerced consent.
There was an attempt to get around that, but things went very strange very fast. Not only did the experiment not produce the desired results, in a mere half century the experiment's entire simulation became wholly dependent on a group of people on an island inputting the same six-number sequence every 108 minutes. (Don't ask how "Can we coerce the humans into authorizing shit?" ended up there; you really don't want to know.)
Those on the island believed, correctly, that if they didn't input the numbers their world would end. Eventually they got fed up with their world, didn't input the numbers, the entire experiment crashed, and that is how the "Islander" faction came to be.
[grin]
Good-oh!
I always feel it's lazy just to say "oh, that doesn't work", so I'm glad you're not. SF can be a lovely toolkit of ideas, if they are allowed to be combined.