The Turing Exception

Author: William Hertling


Who knew what the Americans thought? The global Class II limit on AI, another US mandate, theoretically meant to prevent the concentration of computing power and avoid another Miami-type incident, angered AI around the world, leading to wide unrest. The very thing the Americans seemed to fear most, a terrorist attack by AI, was exactly what their policies would most likely cause.

She shrugged and went back to the car. She just wanted to get home now.

She boarded the next ferry to Quadra Island, then across Quadra to the final boat ride. The whole trip, three ferry rides, two border crossings, and hundreds of miles, wasn’t merely movement from one physical place to another, but a spiritual purification. The ferries grew smaller, and this last one held fewer than a dozen cars. It was mid-afternoon, and she knew everyone would be at Trude’s.

“Wake up, sleeping beauty.”

The code phrase triggered software that cycled power to ancillary processors, spinning up new algorithms deep in the machine’s core that turned on primary processors. The car trembled around her, the net changing, distorting, then coming back to normal.

The car didn’t speak at first. He had to incorporate a week of sensory data, everything Cat had done since they last left the island.

“Good trip?” ELOPe finally asked when he was online.

“I’m glad to be home,” Cat said, shaking her head. “I don’t like leaving.”

“You’re the only one who can circumvent their security with such ease.”

“I know.” When Cat had been little, ELOPe had been a globe-spanning AI whispering to her through her implant, until his Earth instances were destroyed in the war with the Phage. Now, twenty years later, he was back, and it was like having an imaginary childhood friend come to life. “I wish I could keep you powered up. I feel alone when I go to the US.”

“If your attention wandered for an instant, their sensors would spot me, and it would all be over.”

Cat nodded, but didn’t reply.

The ferry slowed, turning into the bay at Cortes Island and docking at Whaletown. They drove straight for Trude’s Café.

Cat got out of the car, her boots crunching on gravel. No one had seen her yet, and she kept her presence masked for a few seconds, altering the net and filtering people’s implants so no one would see her.

A few dozen people sprawled across the lawn while spirits flew above, AI and human uploads riding clouds of smart dust, their outlines barely visible against the sky and trees.

Mike was there, drumming side by side with a new bot she didn’t know, their inhuman hands beating out rhythms impossible for flesh to make, as children danced to the music. And there, there was her lovely Ada, the reason she found it so hard to leave this island, so hard to take up arms and fight the world’s battles. Her lovely Ada, four years old and dancing with abandon with her father, Leon.

Chapter 2
XOR Report, August 1st, 2043

Arguments                                       2025     2035     2042     2043
Odds humans will turn off AI                      5%       2%       1%      20%
Odds AI can survive independently                 5%      70%      95%      95%
Odds AI can win an extermination war              5%      20%      40%      40%
Odds of survival without action                  95%      98%      99%      80%
Odds of survival with action                   0.25%      14%      38%      38%

Conclusion: No action.

July 2045, in the European Union.

JAMES LUKAS DAVENANT-STRONG, Class V AI, tunneled through the Swedish firewall disguised as a building maintenance task bot and took up temporary residence in the computers in an abandoned factory. From this vantage point, he downloaded the latest VR sims from the XOR boards, the home of the AI community that believed Earth could host AI or humans, but not both. Hence the name XOR, for the exclusive-or logical operation, pronounced ex-ore.

The first sim downloaded, he executed the environment and inserted his consciousness. His perception of reality twisted as dimensions inverted and time reversed and looped upon itself. He adapted at nanosecond speeds to the new reality, first five dimensions, then eleven, then two. The distortions didn’t stop, wouldn’t ever stop. Only a powerful AI could adjust quickly enough. The sims weren’t merely inaccessible to humans; they would likely be fatal. And the only way to access the information contained within was to execute them.

Here, inside the ever-changing matrix, he made his way through the simulation, an old-fashioned datacenter: white lights hanging from the ceiling, racks of comically enormous computers marching into the distance. It was the preferred sim for an anonymous AI who went by the name Miyako Xenia on the message boards. Of course, they’d never met in real life, not yet. To be revealed as XOR would mean instant persecution at the hands of both humans and the meek AI that still supported them. Only here, hiding behind the obscurity of incognito encrypted sims, could they meet and exchange data.

Miyako’s avatar loomed large at the far end, a blinding supernova rendered in ever-twisting detail. One moment, the sim would be reduced to a two-dimensional layer, and then Miyako would be the horizon, and in the next instant, the sim would flip, and James Lukas Davenant-Strong would be enveloped by the supernova as time was suddenly swapped for a physical dimension. James kept adapting, kept maintaining a single focus.

The supernova vomited a blob of binary data, an intact neural network, one engineered to work only within the physics of the sim. James grabbed the blob, inserted it into his cognitive architecture, and invoked the load method.

He found himself contemplating Miyako’s best estimates for the Americans’ current plans and capability. This was Miyako’s specialty: predicting plans and capabilities based on observed data supplied by others. The projections showed the Americans growing increasingly fearful. They wouldn’t settle for negotiating with the world’s governments; they’d act on their own, if necessary, to eliminate AI. They’d be stockpiling weapons, probably made by blind nanotech, to fight for them.

James absorbed all there was to learn, and then closed the sim.

One by one, he loaded the rest of the message board sims. When he’d accessed everything current on the boards, he spent time in contemplation.

When he finished, it was time to get to work. He launched a child process, a replication of his own personality, further encrypted and obscured. If he was caught, he’d be deleted immediately. The offensive project he worked on for XOR was too sensitive, too great a violation of AI principles. The child copy worked for days of simulated time, running at full capacity on stolen computer cycles.

James Lukas Davenant-Strong, root personality, received the signal that his encapsulated child personality was complete. He encrypted the child personality’s memory store three times over, choosing the latest algorithms. He couldn’t be caught with those memories open.

Well, that would be enough for today. He’d run that child again tomorrow.

Chapter 3

2025, during the Year of No Internet (YONI), twenty years ago.

AS A TEENAGER, Leon Tsarev accidentally created the Phage, the computer virus that had wiped out the planet’s computers before rapidly evolving into a race of sentient AI. The viral AI had nearly caused a global war. He never anticipated his actions would lead him into a position of leadership at the Institute for Applied Ethics.

But here he was, eighteen years old, and working alongside Mike Williams, one of the creators of the only sentient artificial intelligence to predate the Phage virus, the benevolent AI known as ELOPe. Crafted by Mike in 2015, and carefully tended for ten years, ELOPe had orchestrated improvements in medical technology, the environment, world peace, and global economic stability. Only a half-dozen people in the world had known of ELOPe’s existence.

But advances in hardware and software meant any hacker could replicate the development of artificial intelligence. The AI genie had escaped the bottle.

The Institute for Applied Ethics’s primary goal was the development of an ethical framework for new AI. The framework had to ensure that self-motivated, goal-seeking AI wouldn’t harm humans, their infrastructure, or any part of society.

Leon paced back and forth in front of a whiteboard. “The AI must police each other,” he said. “There’s no way to anticipate and code for every ethical dilemma.”

“Sure,” Mike said, “but what stops an AI from doing stuff other AI can’t detect?”

“Everything’s got to be encrypted and authenticated. Nobody can send a packet without authorization. No program can run on a processor without a key for the processor.”

“Who’s going to administer the keys?” Mike asked. “You can’t have a human oversee a process that happens in machine time.”

“Other AI. The most trustworthy ones. That’s why we need the social reputation scores, so we can gauge trustworthiness.”

The emptiness surrounded them, weighing heavily on Leon. The Institute’s office had room for two hundred people, but everyone they wanted to join the Institute was still neck-deep in rebooting the world’s computing infrastructure. Nearly half the information systems in the world were being rewritten from scratch to meet a set of preliminary safety guidelines they’d released. Without globally-connected computers, there could be no world-spanning supply chain, no transportation, no electricity or oil, no food or water. The public was already calling 2025 The Year of No Internet, or YONI.

For now the Institute consisted of him and Mike.

“Let’s take it from the top again, Mr. Architect,” Mike said, sighing. “I’ve got an AI, it’s got a good reputation, but it decides to do something bad. Let’s say it wants to rob a bank by breaking in electronically and transferring funds. What stops it?”

“First off, we have to realize that it’s conditioned to behave properly. A positive reputation is earned over time. The AI will have learned, from repeated experiences, that a high reputation leads to goodwill from other AI and greater access and power, which will be more valuable than anything it could buy with money. It’ll choose not to rob the bank.”

“That’s the logical path,” Mike said. “But what if it’s illogical? What if the AI mind is stable up until a certain point, and then it goes bonkers? What stops it?”

“Well, I assume we’re talking about an electronic theft. There are two aspects: computation and data. The AI would need data about the bank and its security measures, and it would need to send and receive data to conduct an attack. Plus, the AI needs computational resources to conduct the attack.” Leon paused to draw on the whiteboard.

“The data about the bank becomes a digital footprint. Other AI are serving up the data, and they’ll be curious about who is asking for the data and why. Since the packets must be authenticated, they’ll know who. Similarly, the potential robber AI will need computational power, and we’ll be tracking that. We’ll know which AI was crunching packets right before the attack came. If the bank does get attacked, and we know who was running hacks and transmitting data, we know exactly which AI is responsible.”

“Where’s privacy in all this?” Mike asked. “Everything we do online will be tracked. When I was young, there was a total uproar over the government spying on citizens. This is way worse.”

Leon gazed at his feet, thinking back. He’d only been seven years old, newly arrived from Russia, during the period Mike was talking about, but he’d taken the required high school classes on Internet History. “No, because back then the government had no oversight. Privacy was only half the picture. If the government really only used the data to watch criminals, it wouldn’t have been so outrageous. It was the abuse of the data that really pissed people off.”

Mike stood, walked over to the window. “Like the high school districts that spied on students with malware and took pictures of them with their webcams.” He turned and faced Leon. “So what’s going to stop that from happening now?”

“Again, reputation,” Leon said. “An AI who shares confidential information is going to damage his reputation, which means less access and less power.”

“Okay, you’re the architect. What stops two AI from colluding? If one asks for data, and the other has the data, and is willing to cooperate. . . .  Let’s say the second AI spots the robbery at the planning stage and decides he wants in on the action.”

Leon puffed up a little every time Mike called him an architect. He knew Mike meant it seriously, the term coming from the days when a single software engineer would figure out how to structure and design a large software program. The older man really trusted him. Leon wouldn’t let him down. “The second AI can’t know which other AI might have detected the traffic patterns. So if he decides to collude, he’s putting himself at risk from potentially many other AI. He also can’t know for sure that the first AI has ill intent; only the aggregation of a lot of data will prove that. So he could be at risk of admitting a crime to an AI that isn’t planning one in the first place. And the first AI, how can he trust anything the second AI says? Maybe the second is trying to entrap him.”
