What is the price of humanity? And will AI become death?

Ferose V R
10 min read · Jul 19, 2023


When I visited Los Alamos, New Mexico, the birthplace of the Manhattan Project, it was difficult to comprehend that the atomic bomb was born in such a nondescript place. As I toured the Bradbury Science Museum to learn more about the Manhattan Project (in 1945, Norris Bradbury succeeded Dr J. Robert Oppenheimer as director of the Los Alamos National Laboratory), I struggled to grasp what it meant to have built the most powerful bomb ever made, one capable of ending the entire human race. As Oppenheimer, “father” of the atomic bomb, witnessed the first detonation of a nuclear weapon on 16 July 1945, a piece of Hindu scripture ran through his mind: “Now I am become Death, the destroyer of worlds”. It is, perhaps, the most well-known line from the Bhagavad Gita, but also the most misunderstood.

(Bradbury Science Museum, Los Alamos)

The movie Oppenheimer (based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin) is a 2023 epic biographical thriller about Oppenheimer written, produced, and directed by Christopher Nolan. The release of the movie could not have come at a more apt time: the discussions around Artificial Intelligence (AI) have forced comparisons with the nuclear bomb. But what are the differences between the two, and can we learn from the nuclear experience how to control AI (especially since humanity has not used a nuclear bomb in war since 1945)?

A recent CNN report said 42 per cent of CEOs surveyed at the Yale CEO Summit say AI has the potential to destroy humanity five to ten years from now. Is that an exaggeration, or is it even remotely possible?

On the eve of the release of Oppenheimer, a brief comparison between AI and nuclear technology seems fitting. After all, both have terrifying, seemingly limitless power. In the wrong hands, both can be weaponized, putting humanity on a track to its own destruction. Nuclear technology generates massive amounts of energy, which humans can choose to deploy for good or evil. AI similarly generates massive amounts of information, which humans can choose to deploy for good or evil. Nuclear technology holds within itself the seed of human extinction: a runaway nuclear reaction. One could project that AI technology similarly holds within itself the seed of human destruction: runaway AI autonomy.

Let’s try to understand, with a few examples, why we need to be careful about AI.

Midway through the movie “I, Robot”, Will Smith’s character Del Spooner reveals the reason for his distrust of and hatred for robots. Years earlier, he had been in a head-on car crash, with both cars sinking slowly to the bottom of a river. The other car held a 12-year-old girl, Sarah. An NS-4 robot arrived on the scene and made the statistical computation that Del’s odds of survival, at 45%, were better than Sarah’s, at 11%. It chose to save Del and left Sarah behind.

It is a late evening in Tempe, Arizona. Elaine Herzberg is walking her bicycle home, laden with two weeks’ worth of groceries, enough to warrant using the bicycle as a wheelbarrow. She crosses Mill Avenue (North), south of the intersection with Curry Road, outside the designated pedestrian crosswalk and across two lanes of traffic. Traffic is light and the crosswalk is far away.

A prototype Uber self-driving car based on a Volvo XC90 is traveling north on Mill. It has been in autonomous mode since 9:39 p.m., with Rafaela Vasquez, the car’s human safety backup driver, behind the wheel. It’s been a long day, and Rafaela is waiting for this run to be done so she can go home. She is tired, and for a few seconds she flexes her hands and stretches her neck, looking down from the road.

The Volvo has advanced radar and LiDAR sensors. It detects Elaine’s presence 6 seconds and 115 meters before the crash. It first classifies her as an unknown object, then as a vehicle and finally as a bicycle. With 1.3 seconds and 25 meters to impact, the autonomous system determines that emergency braking is required, a maneuver normally performed by the vehicle operator. However, the system is not designed to alert the operator, nor does it make an emergency stop of its own accord.

At 43 mph, the car could have stopped within 27 meters once the brakes were applied. For 4.7 of those 6 seconds, the system was trying to determine what Elaine was, classifying her as everything except a human being. At 9:58 p.m. MST on 18 March 2018, Elaine Herzberg became the first pedestrian killed by a self-driving car.

We are in a world where Del’s worst fears have come true. Rather than detect a human and stop immediately, the autonomous system spent 78% of the available time trying to decide what Elaine was, and even in its final attempt decided she was something other than human. The situation of a human walking a bicycle across unmarked lanes was not in its training data.
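A rough back-of-the-envelope check makes the point concrete. The speeds, distances and times below are the ones reported above; the unit conversion and the arithmetic are my own sketch, not figures taken from the official investigation.

```python
# Back-of-the-envelope check of the figures reported for the Tempe crash.
# The input numbers come from the account above; the arithmetic is mine.

MPH_TO_MS = 0.44704

speed = 43 * MPH_TO_MS               # ~19.2 m/s
first_detection_time = 6.0           # seconds before impact
first_detection_distance = 115.0     # meters before impact
braking_decision_distance = 25.0     # meters before impact
stopping_distance = 27.0             # meters needed to stop at 43 mph
classification_time = 4.7            # seconds spent reclassifying the pedestrian

# Sanity check: distance travelled in 6 s at 43 mph is roughly the reported 115 m.
print(f"Distance covered in 6 s: {speed * first_detection_time:.0f} m")

# At first detection the car had 115 m of road but needed only 27 m to stop.
margin = first_detection_distance - stopping_distance
print(f"Margin if braking at first detection: {margin:.0f} m to spare")

# By the time the system decided braking was needed, only 25 m remained,
# but 27 m were required, so a collision was already unavoidable.
shortfall = stopping_distance - braking_decision_distance
print(f"Shortfall at braking decision: {shortfall:.0f} m")

# Share of the available time spent deciding what Elaine was.
print(f"Time spent classifying: {classification_time / first_detection_time:.0%}")
```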

This story brings up several aspects of what needs to be understood about artificial intelligence (AI), its big brother (no pun intended) artificial general intelligence (AGI) and what our responsibility looks like in a world that is sure to be powered by AI.

“The hype has gotten ahead of things” — Brewster Kahle, Founder, Internet Archive

(With Brewster Kahle)

AI is not just augmenting but replacing human judgment in critical areas, some of which carry moral and ethical weight. In areas ranging from self-driving cars to predicting rates of recidivism, AI-powered algorithms are providing convenient but flawed “scores” that predict what we are about to do next. As LaTonya Myers says in the documentary “Coded Bias”, after the recidivism algorithm categorized her as high risk despite her being an exemplary parolee for four years, “there is no way to include anything positive I’ve done to counteract the results of what (this) algorithm is saying”.

We have the ultimate responsibility not to implement AI in areas where it is not yet competent, like morals and ethics, or in areas where it cannot account for the human will to change and become a better person.

“Women will be disproportionately impacted” — Mei Lin Fung, co-Founder, People Centered Internet

(With Mei Lin Fung)

The inability of the AI algorithm to recognize Elaine goes deeper than a programming error. As Joy Buolamwini’s efforts have made clear, the bias in the data is a direct reflection of the bias in our society. Trained largely on lighter-skinned male faces, IBM’s original facial recognition algorithms recognized lighter males with 99.7% accuracy, lighter females with 92.9% accuracy, darker males with 88% accuracy and darker females with only 65.3% accuracy. Joy’s work also shows that real change is possible: on 25 June 2020, US lawmakers introduced legislation that would ban the federal use of facial recognition technology.

The issue of how the data got there in the first place runs deeper. Silicon Valley’s brand of innovation thrives precisely because it finds unregulated spaces where it can grow exponentially. This very lack of regulation allows it to hoover up data from the Internet and our devices, without permission, to feed the enormous appetite of AI algorithms. When this data is used to decide whether we are granted or denied resources and opportunities, however, we need to draw the line and speak of rights: the right to refuse to be included in a dataset, the right to opt out of a dataset, the right to an explanation, the right to redress. The White House’s Blueprint for an AI Bill of Rights is a first step in this direction for a brave new world, but it still leaves us with individual responsibility.

We have the responsibility to recognize that this is a bidirectional relationship. We can change the model to represent the world we want, and not just propagate the world as we know it. Doing so changes the bias that gets propagated back into the world, and for the first time it also gives us a way to measure, and possibly change, the bias in society. After the model was changed, IBM’s algorithm detected lighter males with 99.8% accuracy, lighter females with 100% accuracy, darker males with 98% accuracy and darker females with 96.5% accuracy. AI can perpetuate bias, but it can also alter it.
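To make the idea of measuring bias concrete, here is a minimal sketch of a disaggregated accuracy audit, the kind of per-group measurement the Gender Shades project performed. The group names and the handful of records are made-up illustrations, not the audit’s actual benchmark or results.

```python
from collections import defaultdict

# A minimal sketch of a disaggregated audit: compute accuracy per demographic
# group instead of one overall number. The records below are invented for
# illustration; a real audit would use a labeled benchmark dataset.
records = [
    # (group, true_label, predicted_label)
    ("lighter_male", "male", "male"),
    ("lighter_female", "female", "female"),
    ("darker_male", "male", "male"),
    ("darker_female", "female", "male"),   # the kind of error the audit surfaced
    ("darker_female", "female", "female"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in records:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"{group:>15}: {accuracy:.1%} ({correct[group]}/{total[group]})")
```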

“I am sounding the alarm. We have to worry about this” — Geoffrey Hinton, Godfather of AI

(With Brian Christian)

Getting AI to do what we want is not obvious or simple. We are, as Brian Christian put it in The Alignment Problem, “the sorcerer’s apprentice”. As Astro Teller, David Andre, Dario Amodei and several other scientists have discovered in their work on reinforcement learning and reward shaping, rewarding a behavior is not the same thing as making progress towards the goal. AI algorithms built for efficiency will find the shortest path to the most reward, ruthlessly exploiting any loopholes in the reward policy. Trying to teach their robot to play soccer, Teller and Andre gave it a small reward, worth a tiny fraction of a goal, for taking possession of the ball. To their astonishment, they found their program “vibrating” next to the ball, racking up points, and doing little else.
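A toy calculation shows why such a loophole is so attractive to a reward-maximizing learner. The reward sizes, discount factor and step counts below are illustrative assumptions, not Teller and Andre’s actual setup; the point is only that a small, repeatable possession bonus can out-earn the one-off reward for scoring.

```python
# Toy comparison of two policies under a shaped reward: a tiny bonus for
# touching the ball plus a large reward for scoring. All values are assumed.

GAMMA = 0.99          # discount factor (assumed)
TOUCH_REWARD = 0.05   # small shaped reward for taking possession (assumed)
GOAL_REWARD = 1.0     # reward for scoring (assumed)
STEPS_TO_SCORE = 20   # steps a "real" policy needs to dribble and shoot (assumed)
HORIZON = 500         # episode length (assumed)

def vibrate_return():
    """Touch the ball on every step, never score."""
    return sum(TOUCH_REWARD * GAMMA**t for t in range(HORIZON))

def score_return():
    """Dribble toward the goal, collect the goal reward once, episode ends."""
    return GOAL_REWARD * GAMMA**STEPS_TO_SCORE

print(f"Return for vibrating next to the ball: {vibrate_return():.2f}")
print(f"Return for scoring a goal:             {score_return():.2f}")
# With these numbers the "vibrating" policy earns several times more reward,
# which is exactly the loophole a reward-maximizing learner will find.
```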

In this context, consider Geoffrey Hinton’s concern: “For any goal that we can imagine, acquiring as much power as possible is a good sub-goal. Power makes it much more efficient to execute.” It’s not hard to imagine an incredibly powerful AI system deciding that, for whatever goal we set it, gaining power over all our resources is a useful sub-goal, whether we intended it or not.

We are only just learning how to specify rewards that get us what we want. The unintended consequences of getting it wrong could be, as Sam Altman put it, “lights out for humanity”.

The question of alignment with human values is still in its early stages within the field. Human curiosity, with its capacity for intrinsic motivation, is proving to be a useful counter to the limitations of extrinsic rewards alone. Core human traits like empathy, compassion and kindness are, however, still some way off.

What we need, as Buolamwini put it, is for AI to understand that “what it means to be human is to be vulnerable, have a capacity for empathy and compassion. If there is a way we can think about that within technology, it would reorient the sorts of questions we ask.” Put another way, we would be assured that AI does not just see us as another resource to be optimized, but that it recognizes and respects life above all else.

For the moment, because values and ethics are so hard to specify completely, we have the responsibility of modeling good behavior for AI and steering it when it goes off track: helping it learn by watching us, and giving it corrective feedback when it gets things wrong. We have the responsibility of partnering with AI to amplify how it serves us, and of letting AI distill the essence of our decision-making into its neural networks.

“AI cannot affect human evolution” — Gary Zukav, Author and Spiritual Teacher

(With Gary Zukav)

It is 26 September 1983, three weeks since the Soviet military shot down Korean Air Lines Flight 007. Stanislav Petrov is on duty at the command center for the Oko nuclear early-warning system when the system flashes a warning that a missile has been launched from the United States, followed by five more. He has a very short window in which to decide what to do. His country, his family, his friends, everything he knows and loves is about to be destroyed. He does nothing; he just sits on it. He remembers thinking, “Well, at least we didn’t go kill them all either.” In the end it turns out to have been a false alarm caused by a system malfunction.

As the sociologist and writer Zeynep Tufekci puts it, “Being fully efficient, always doing what you’re told, always doing what you’re programmed is not the most human thing. Sometimes it’s saying no.”

The AI we have wrought today is engineered to say yes. We have created a djinn that will do whatever we tell it, instantly and at scale, but we have to get the incantation right or risk extinction. Yet we are an imperfect species that doesn’t always know what it wants, or that wants things that are self-destructive. Companies like OpenAI are putting in safeguards that stop their models from saying yes to dangerous or harmful requests, and that allow AI to be steered via prompts.

What humanity needs in the long run is an AI that knows enough about what we value as humans to say no to us when it is for the greater good of our species to do so. To save us from our worst instincts, even. Whether we can call this understanding a conscience, or whether it is even a good idea, is hard to say from here. What we can say for sure is that knowing the machine recognizes and values humanity is a worthy goal. There are different approaches underway to get there, from stating our values explicitly to having them inferred implicitly. But as Einstein said, “We cannot solve our problems with the same thinking we used when we created them.”

This much is clear: we have the collective responsibility to help AI get there. The future of our humanity depends on it. As Ge Wang, a professor at Stanford University, says: let’s “Worry. Be Happy”.

--

Ferose V R

Senior Vice President and Head of SAP Academy for Engineering. Inclusion Evangelist, Thought Leader, Speaker, Columnist and Author.