Oops – Human in the Loop | Charles Duelfer


There was a discussion at the Council on Foreign Relations last week with Henry Kissinger (former Secretary of State) and Eric Schmidt (former chairman of Google), moderated by Judy Woodruff (PBS NewsHour). The subject was their newly released book, “The Age of AI: And Our Human Future”. An unlikely pairing at first glance, but their book opens the grave subject of the pending consequences of artificial intelligence for international security. Schmidt certainly has a grasp of the magnitude of this technological leap for human activity (and, importantly, non-human activity). Kissinger was a prime mover in conceptualizing the implications of the onset of nuclear weapons for international security after World War II. Together they make a compelling case that there will be critical consequences as AI is incorporated into more decisions, information processing, and weapons systems, among other things. Indeed, they argue the world needs to begin thinking urgently about the strategic implications and rules of the road, as AI is advancing rapidly. Policy will have to catch up.

It was a huge challenge to maintain a global security balance as advancing nuclear weapons filled the arsenals of the US and Soviet Union. Deterrence theory, the mechanisms of delivering weapons, command and control, warning systems, targeting strategies, etc. all seemed to evolve together. Technology did not follow a strategic objective so much as strategy was constructed around technical capability. There were elements of great delicacy–for example, did we have sufficient sensor capabilities (satellites, radars) to detect and assess the intentions of a missile attack against the US? Could we do this in time to launch our forces before the missiles or bombers hit their targets? Could a president have sufficient data and time to make such a decision? Fortunately, we never really found out, but the questions were critical. There was serious attention given to the option of “launching under attack.” Possibly we could have done this, but certainly the Soviet side had to consider this response option should they contemplate attempting a “disarming first strike”. Many such scenarios were considered in the formulation of US and Soviet strategy.

Somehow both sides accumulated enough mutual understanding and evolved rules of the road for a (wobbly) strategic nuclear balance and, with a sizeable amount of luck, nuclear war was avoided during the Cold War.

Kissinger and Schmidt argue that we are on the threshold of a similar disruption of strategy and international balance. Among the many things AI presages, they point out, is that control and understanding of technical and potentially even policy decisions will be beyond the comprehension of mere humans. AI will be able to assimilate data beyond our capacity and to determine consequences and outcomes that the human mind simply cannot follow…but that may be logically correct. Where does that fit in the concept and use of force (military or financial market actions)? If we put a human in the loop, it may delay or even disrupt a winning outcome for our side. But if we don’t, do we trust Chinese or Russian AI-enabled forces to be similarly constrained? AI on one side is not likely to negotiate with AI on the other…or would it? As Kissinger and Schmidt point out, we need to think through these problems sooner rather than later. And eventually some exchange of concepts with allies and competitors will be necessary.

It’s unnerving to consider automatic responses based on sensors and computers, such as a pre-delegated decision by the president to implement a launch-under-attack option.

However, at the Council on Foreign Relations meeting, I inadvertently made the case for the computers. Members connected via Zoom, and the toolbar beneath the screen had two adjacent buttons: one for “chat” (which provided a list of participants) and a second for “raise hand”, meaning you wanted to ask a question. Well, when Woodruff opened the discussion, she said, “Our first question will be from Charles Duelfer”. I had no intention of asking a question and simply mumbled that it was a mistake. I had hit the “raise hand” button in addition to the “chat” button without noticing it. Had I been quicker-thinking, I would have pointed out that my error was an argument for the downside of having a human in the loop (or at least not me in the loop).

Indisputably, Kissinger and Schmidt raise a looming problem where technology is again way ahead of international policy and politics.
