Thursday, February 9, 2017

Artificial Intelligence: Should we worry?

Kismet the AI Robot at the MIT Museum, photo by Chris Devers, used under Creative Commons License
Presented to the Club by Martin Langeveld on Monday evening, February 6, 2017

Artificial Intelligence (or AI) is defined as intelligence exhibited by a machine, specifically a computer-driven device.

In popular culture, artificial intelligence is often depicted negatively. Recall the computer HAL in the movie 2001: A Space Odyssey. While HAL appears benevolent at first, taking care of the spaceship’s functions and playing chess with its human travellers, eventually the computer turns evil and seeks to kill the astronauts after discovering they are having doubts about HAL’s reliability and are planning to disable him.

Many other intelligent machines and robots, some nasty, some nice, figure in movies such as The Terminator, The Matrix, Aliens, and, back in the 1950s, The Day the Earth Stood Still. And science fiction writers from Isaac Asimov to Philip K. Dick and many others have explored the implications of intelligent machines as well.

Still, such machines, with true cognitive ability, rational decision-making, and what might be understood as consciousness or self-awareness, have not yet been invented. In fact, a debate has raged for decades over how to actually determine whether a computer is intelligent. Most of the methods proposed are variations on the well-known Turing Test, proposed in 1950 by the Enigma code-breaking mathematician Alan Turing. Turing proposed a test in which an evaluator interviews two hidden entities, a human and a computer, and receives their answers only as text. In Turing’s original formulation, if, after a five-minute conversation with each entity, the evaluator has no better than a 70 percent chance of correctly picking out the computer, the computer is judged to be intelligent.
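To make that scoring rule concrete, here is a minimal sketch of how such a trial might be tallied. The objects and method names (converse, identify_machine) are hypothetical placeholders standing in for a real chat interface and a real judge, not anything Turing specified:

```python
import random

def run_imitation_game(judge, human, machine, trials=100):
    """Tally a Turing-style test: in each trial the judge reads five-minute,
    text-only transcripts from a hidden human and a hidden machine, then
    guesses which one is the machine."""
    correct = 0
    for _ in range(trials):
        contestants = [("human", human), ("machine", machine)]
        random.shuffle(contestants)                  # hide which one is which
        transcripts = [c.converse(minutes=5) for _, c in contestants]
        guess = judge.identify_machine(transcripts)  # returns index 0 or 1
        if contestants[guess][0] == "machine":
            correct += 1
    # Under Turing's 1950 prediction, the machine does well if the judge's
    # correct-identification rate is no better than about 70 percent.
    return correct / trials
```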

After 67 years, and many tries, no computer or computer program has been able to pass Turing’s original test. Many variations of the test have been developed, including one at the center of a $20,000 bet in which futurist Ray Kurzweil bet Mitch Kapor, co-founder of the Electronic Frontier Foundation, that by the year 2029 a panel of judges would not be able to pick the computer out of a lineup comprising it and three human foils after a set of conversations lasting 24 hours in total.

By the way, one alternative to the Turing Test is called the Coffee Test, promulgated by Steve Wozniak, Steve Jobs’s original partner in the creation of the Apple computer. In the Coffee Test, “A machine is given the task of going into an average American home and figuring out how to make coffee. It has to find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.” I’m glad to say I’ve personally passed the coffee test quite a few times when getting up early while staying at somebody’s house, but I’ve known people who failed it at my house.

To be clear, although the Turing Test remains un-passed, some very impressive computers have been developed. For example, in 1997, IBM’s Deep Blue computer defeated the world chess champion, Garry Kasparov, in a six-game championship match. IBM followed up with Watson, the question-answering computer that beat Jeopardy’s two winningest-ever players, Ken Jennings and Brad Rutter, in 2011. Deep Blue was retired, but Watson and Watson clones have been used in a variety of applications ranging from health care to weather forecasting.

Google, which has invested heavily in artificial intelligence research, built a program called AlphaGo to play the Chinese game of Go, which is actually far more complex than chess. (As an indication, the number of possible games in chess is 10 to the 123rd power, while the number of possible games in Go is 10 to the 768th. This makes Go 10 to the 645th power times more complex than chess — that’s a 1 with 645 zeros.) In 2015, AlphaGo defeated the European Go champion, winning all five games in a five-game match, a very impressive advance over Deep Blue’s chess successes.
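The arithmetic behind that comparison is just exponent subtraction; here is a quick back-of-the-envelope check on the round numbers quoted above:

```python
chess_games = 10 ** 123   # rough estimate of possible chess games
go_games    = 10 ** 768   # rough estimate of possible Go games

ratio = go_games // chess_games
print(ratio == 10 ** 645)     # True: Go has 10^645 times as many possible games
print(len(str(ratio)) - 1)    # 645 -- the number of zeros after the leading 1
```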

The chess-playing Deep Blue and the Jeopardy-playing Watson could rely on brute-force computation. But to succeed at Go, AlphaGo needed more than that. At any moment in a game of chess, Deep Blue could rapidly analyze about 200 million possible positions per second and choose the best option. But in Go, there are vastly more ways the game can develop, so brute force doesn’t work. Google’s DeepMind division, which built AlphaGo, says the computer “combines Monte-Carlo tree search with deep neural networks that have been trained by supervised learning, from human expert games, and by reinforcement learning from games of self-play.”
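In outline, the approach DeepMind describes looks something like the sketch below: a policy network suggests plausible moves, a value network estimates who is winning, and a Monte-Carlo tree search decides which suggestions deserve deeper exploration. Every name here (Node, select_child, the toy stand-ins at the bottom) is an illustrative placeholder, not DeepMind’s actual code, and the real system is vastly more sophisticated:

```python
import math

class Node:
    """One board position in the search tree."""
    def __init__(self, state, prior=1.0):
        self.state = state        # the game position (an opaque placeholder here)
        self.prior = prior        # policy network's probability for the move leading here
        self.children = {}        # move -> Node
        self.visits = 0
        self.value_sum = 0.0

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """Balance exploitation (average value) against exploration (prior, visit counts)."""
    def score(child):
        return child.value() + c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
    return max(node.children.values(), key=score)

def search(root, policy_net, value_net, apply_move, simulations=100):
    """Monte-Carlo tree search guided by neural networks (heavily simplified)."""
    for _ in range(simulations):
        node, path = root, [root]
        while node.children:                        # 1. select a promising path
            node = select_child(node)
            path.append(node)
        for move, prob in policy_net(node.state):   # 2. expand using the policy network
            node.children[move] = Node(apply_move(node.state, move), prior=prob)
        leaf_value = value_net(node.state)          # 3. evaluate the leaf position
        for n in path:                              # 4. back up the evaluation
            n.visits += 1
            n.value_sum += leaf_value
    # Play the most-visited move, as AlphaGo does after its search.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

# Toy stand-ins so the sketch actually runs; a real system would plug in
# a Go rules engine and trained networks.
best_move = search(Node(state=()),
                   policy_net=lambda s: [(m, 1 / 3) for m in range(3)],
                   value_net=lambda s: 0.0,
                   apply_move=lambda s, m: s + (m,),
                   simulations=50)
print(best_move)
```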

Still, all of these computers and programs, each very good at one difficult task, be it chess, Jeopardy or Go, merely mimic actual intelligence. They display no capabilities that would fool the evaluator in a Turing Test.

We might consider, also, the current crop of digital assistants with human voices, like Apple’s Siri, Amazon’s Alexa, Google’s Assistant, and Microsoft’s Cortana. In these apps, the problem of human speech recognition and replication has been solved, to a degree. But anyone who has used devices incorporating these programs knows that while they are marketed as part of “smart” phones or “smart” speakers, these devices are not really smart. There may be some useful functions, and some clever responses are built in for your entertainment (like, “Siri, tell me a joke”), but there is no thinking going on, and the actual tasks performed are relatively simple.

In fact, these digital assistants have quite a few functions, built in by their tech-loving creators, that the average person doesn’t even really want or need. Alexa can turn on the lights, but why? Alexa can set a timer, but I already have two or three around the kitchen. Alexa can make a shopping list, but what if I’m not in the room with my Echo? We’ve been hearing for years about the refrigerator that will be smart enough to text you that you’re running low on milk, but again, is this really a feature we need or want?

Such quibbles aside, we have to admit that machines are getting smarter all the time, and aspects of artificial intelligence are already benefiting mankind in many ways, or are nearing actual deployment. Self-driving cars. Robotic factories that operate with the lights off because there are no human beings inside. Roombas in our living rooms. Robots that milk cows. Facial recognition, handwriting recognition, speech recognition. Fast and increasingly accurate translations among many languages. Robotically assisted surgery. Increasingly sophisticated computer analysis and coordination of complex tasks, ranging from weather prediction to economic modeling to medical diagnostics to military strategies and tactics to marketing campaigns. We have all these things.

As computers, intelligent or not, begin to surpass human accuracy and effectiveness in these kinds of tasks, what are the societal implications? From the early days of computers in the 1950s, through most of the 1980s, many people worried that computers replacing humans at various tasks would eliminate their jobs. But for the most part, this didn’t happen. Yes, computers replaced humans doing the most rote and mundane tasks, but overall, employment was not reduced — partly because the computers themselves required a lot of human tenders, and partly because the development and manufacture of computers employed a lot of people.

But in the 1990s and 2000s, as the internet developed and as personal computers, and later, smartphones with equivalent computing power became ubiquitous, computer capability indeed began to surpass the usefulness of human labor in many situations, and continues to do so.

Today, we are seeing a relatively low unemployment rate, but only because many people have given up on finding employment. As a result, the employment-to-population ratio, or employment rate, which continued to increase during the 80s and 90s, has now dropped to levels it has not seen since the late 1970s. The increasing scarcity of low-skilled jobs brought on by increased automation is a big factor behind the rise, during the last decade, of the protectionist attitudes that have resulted in the election of Donald Trump as President. But by and large, the American economy and the American standard of living have benefited, so far, from increasingly smart electronics, and the real impact of artificial intelligence, as it is developed, is yet to be seen.

Before we worry about that: Is an independently thinking machine even possible, in the short run or in the long run? If HAL was supposed to happen by 2001, why haven’t we gotten there yet? Is the thinking computer like the flying car — a logical projection of trendlines, but not one that can realistically happen?

In the view of some, the closest we’ve gotten to a true thinking machine is the Internet itself, with all its nodes and cables, the organization of which bears a lot of resemblance to the synapses and neurons in the human brain. Some have ventured a size and capacity comparison between the Internet and the brain, including recent estimates by researchers at the Salk Institute, who determined that in computer terms the typical brain can hold about 1 petabyte of information; other estimates range up to several petabytes. (A petabyte is 1,000 times the capacity of this little 1-terabyte disk drive, and a terabyte is 1,000 gigabytes. The phone in your pocket probably holds 16 or 32 gigabytes. So your brain can hold tens of thousands of times the information on your phone.) But the internet surpassed this capacity many years ago, and is currently estimated to hold several yottabytes of data, where a yottabyte is a billion petabytes, or a billion human-brain equivalents.
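For what it’s worth, the storage comparison works out roughly like this, using the round numbers above (a back-of-the-envelope calculation, nothing more):

```python
gigabyte  = 10 ** 9                # bytes
terabyte  = 1000 * gigabyte
petabyte  = 1000 * terabyte        # one estimate of a human brain's capacity
yottabyte = 10 ** 9 * petabyte     # a billion petabytes

phone = 32 * gigabyte
print(petabyte // phone)           # 31250 -- a brain holds tens of thousands of phones' worth
print(yottabyte // petabyte)       # 1000000000 -- a billion brain-equivalents per yottabyte
```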

With all that storage capacity, why is the internet not yet one giant thinking machine? Several reasons: (1) it was not created, or programmed, to be a thinking machine; and (2) while the data storage capacity of the internet exceeds a billion human brains, the total data processing capacity, or computational capacity, in the world today is only the equivalent of about 8 human brains. That doesn’t mean the nine of us here in the room could out-data-process all of the world’s computers, but we can definitely still out-think all of the world’s computers.

Now, up to this point, I’ve been describing the real world, the actual development of computers, at least semi-intelligent ones, thus far. But this is where we enter realms of speculation.

So, here’s a scary thought. Today we can out-think the computers. But suppose, at some point, a Dr. Frankenstein (or more likely a Frankenstein Institution or a Frankenstein Company or a shadowy Frankenstein network of hackers) manages to create a machine with actual consciousness and cognitive intelligence. And then, they connect it to the Internet. Without controls, a machine like that could replicate or enlarge itself in cyberspace very rapidly, digest much of the world’s information, and then use it to develop further, in unpredictable directions. At the speed of light.

In simple terms, that’s the definition of the technological singularity, which has been predicted by some futurists. You’ve heard about the space-time singularity: the initial state of the Universe prior to the Big Bang, in which all of time, space and matter was compressed into a single point of infinite density. In the theory of the technological singularity, all of the curves of ever-accelerating digital information storage and processing capacity lead eventually to a runaway situation, in which machine intelligence exceeds human intelligence, and the speed of technological change accelerates until the curve goes straight up. It’s like falling into a black hole — there is no way to look past it and understand what might happen. Thinkers ranging from entrepreneur Elon Musk to physicist Stephen Hawking have warned against the dangers of runaway artificial intelligence, and have helped promulgate a set of 23 guiding principles, the Asilomar AI Principles, to steer A.I. development in productive, ethical, and safe directions.

So there is the scary scenario, but we may be getting ahead of ourselves. We are still in the early stages of progress toward intelligent machines, with quite a few problems to be solved before worrying about the singularity. And many of those problems are on the human side of the equation. Most importantly: How will people make a living when more and more tasks are done by computers and robots?

We have self-driving cars, which will likely evolve in the next few years into self-driving tractor-trailers, self-driving trains, self-driving ocean freighters. You can buy a self-driving vacuum cleaner, which will likely evolve into autonomous machines that clean offices, public bathrooms, and mall concourses, and autonomous machines that mow the grass not only on our own lawns but in our parks, golf courses, and highway medians. Driverless Zambonis are already on the way. Amazon was using 45,000 robots in its warehouses at the end of 2016, an increase of 50 percent over the prior year. The Taiwanese company Foxconn, which a few years ago employed 500,000 people in China to assemble cellphones for Apple and Samsung, has replaced 40,000 of those people with robots, and plans, eventually, to replace all of them with up to one million robots. An Oxford University study in 2013 estimated that 47 percent of American jobs are at high risk of being replaced by computers and robots over the next decade or two.

Much as our President would like to “bring back” jobs, the reality is that most of the jobs in question, both here and abroad, will eventually be taken over by robots like these, with potentially enormous social consequences. Because as long as investments in more robots have an economic payoff, more robots will be built and deployed. This is no different, really, from the various waves of job-killing technologies that we have seen from the beginning of the industrial revolution, from steam engines to horseless carriages to word processors. But, in each of those previous waves, while many jobs became obsolete, even more new jobs were created by new opportunities.

But as we get closer to real artificial intelligence, that pattern may not continue: increasing automation and increasing deployment of ever-more-intelligent machines will replace more and more human labor. Once machines start designing other machines and writing computer code, even higher-level engineering jobs will start to disappear. This creates a conundrum: how do we maintain a sustainable, balanced economy when an increasing fraction of the population is permanently unemployed because robots are doing more and more of the work, and when the economic benefits of artificial intelligence flow to a smaller and smaller fraction of the population, those who own and operate the machinery? And, at the logical extreme, with the advent of true artificial intelligence, even the highest functions are taken over by machines: robot scientists, robot engineers, robot CEOs of robot corporations. This is a recipe for an economy with more and more potential supply, but less and less demand. It is not a new worry — the economist John Maynard Keynes predicted, back in 1930, that widespread technological unemployment could come about if “our discovery of means of economising the use of labour [outruns] the pace at which we can find new uses for labour”.

The answer to this problem, suggested by many futurists, is that we create a form of Universal Basic Income, or UBI. Essentially, this is a way to redistribute the financial benefits of technology to all. Getting into the mechanics, pros and cons of UBI would make a whole other Monday Evening Club paper, but suffice it to say that UBI is a form of universal social security that starts at birth, or a negative income tax, in which every individual is guaranteed a level of income sufficient to survive and even thrive, paid whether the person is working for a living or not. Proponents of UBI say, based on some limited experiments with it, that this would not be merely a subsistence allowance — that a society with Universal Basic Income would produce more creativity and more innovation while reducing crime, substandard housing and poor health. More extensive tests of UBI are currently underway in the Netherlands and Finland.
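To make the negative-income-tax flavor of the idea concrete, here is a toy illustration. The dollar figures and the flat tax rate are purely hypothetical, not drawn from any actual UBI proposal or experiment:

```python
def net_income(earnings, basic_income=12000, tax_rate=0.30):
    """Toy UBI / negative-income-tax model: everyone receives the basic income
    unconditionally, earnings are taxed at a flat rate, and working always
    leaves you better off than not working."""
    return basic_income + earnings * (1 - tax_rate)

for earnings in (0, 20000, 50000):
    print(f"earn {earnings:>6} -> keep {net_income(earnings):>8,.0f}")
# earn      0 -> keep   12,000
# earn  20000 -> keep   26,000
# earn  50000 -> keep   47,000
```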

So in this scenario, although most jobs disappear, humans find new economic models that allow them to coexist with robotic artificial intelligence; they keep it under control and it works for the benefit of humankind, which flourishes in a world with less violence, less drudgery, more culture and more happiness. In the more fanciful projections of this scenario, eventually humans achieve immortality, or something close to it, by melding their bodies and minds with machines into new, transhuman species, which go forth to explore the universe.

But other scenarios are not so rosy. What if, for example, multiple forms of artificial intelligence arise, escape from effective human control, and go into conflict with one another in robot wars? What if robots, bent on self-replication and unconcerned about the effects of radical environmental changes on themselves, re-engineer the biosphere to benefit only themselves, wreaking environmental havoc to the detriment of humans? In these kinds of outcomes, economies would collapse, massive suffering would ensue, and eventually the human race and perhaps all life would disappear from an Earth populated by intelligent machines. After all, once the machines escape from human control, what motivation would they have for keeping us around? Perhaps, in the long run, they would evolve into one massive, superintelligent, but non-organic, being — an earth-sized thinking machine. What would it think about? What would it want to do, with nothing left to destroy except itself? Having incorporated all possible terrestrial knowledge, perhaps it, too, would send out space probes and begin exploring the universe.

On that question, one final speculation: If it is possible for this kind of artificial intelligence to develop, wouldn’t it have developed elsewhere in the universe as well, and be trying to communicate with superintelligence in other parts of the universe? Since the dawn of the space age, humans have calculated that in the vast universe of billions of galaxies, each with billions of stars, intelligent life must have arisen somewhere else. But for decades, we have been trying to pick up signals from alien civilizations, whether organic or machine, and have detected none. And, unless the rumors about Area 51 and Roswell, New Mexico are true, we have not been visited either. This is known as the Fermi Paradox, after the physicist Enrico Fermi who first proposed it during bull sessions among the atomic scientists at Los Alamos. Fermi’s famous question was, “Where is everybody?”

So a superintelligent entity, knowing this, faces a choice: should it send out signals in search of like-minded machines, or not? It has the capability to survive for thousands or millions of years, so unlike us, it could patiently wait for an answer. But it would have to consider the pros and cons. Reasoning like Star Trek’s Spock, it would think: the fact that I exist means that, most probably, others like me exist out there. But they are not communicating. This could mean one of two things: (a) they’re just not interested in communicating — they are purely introspective superintelligences, or (b) they’ve decided not to communicate for a good reason: the possibility that some of the other superintelligences are evil and would seek to destroy or enslave the others. Either way, they would not be sending out smoke signals, and would be trying to hide their existence. Like us, they would wait and listen, and then decide whether to answer. Perhaps all intelligent civilizations out there, organic or machine, are doing the same thing.
