Quotable (#196)

Socially-networked media warfare:

ISIS stumbled upon something new. It became, in the words of Jared Cohen, a former State Department staffer and now the director of Jigsaw (Google’s internal think tank), “the first terrorist group to hold both physical and digital territory.” […] It will not be the last. The fate of the self-declared caliphate, now under the assault of nearly two dozen national militaries, is uncertain. Yet the group has already proved something that should concern any observer of war and peace, law and anarchy. While the Islamic State has shown savvy in its use of social media, it is the technology itself—not any unique genius on the part of the jihadists—that lies at the heart of the group’s disruptive power and outsize success. Other groups will follow. …

Remember what the printing press did? That’s the precedent.

Quotable (#153)

There are a lot of excitable feedback circuits to be discovered on the way down the slope. This looks like one:

Analyzing data from a large, worldwide sample, two Chinese psychologists report people whose countries are more involved with wars and similar conflicts experience higher levels of existential fear, which drive them to greater religiosity. […] Previous surveys have found highly religious Americans tend to be more supportive of war, as well as of torturing one’s opponents. This raises a profound and troubling question: Could it be that armed conflict and intense religiosity are in a mutually reinforcing relationship? […] “The relationship between war and religiousness may be bidirectional,” write Hongfei Du and Peilian Chi of the University of Macau. “War strengthens individuals’ religiousness (due to) their worries about war, while fundamental religious beliefs result in violent conflicts and war.”

The Du and Chi paper is here.
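The hypothesized "mutually reinforcing relationship" is, structurally, a positive feedback loop, and its logic is easy to sketch as a toy dynamical system. Everything below (coefficients, functional form) is an illustrative assumption, not anything taken from the Du and Chi paper:

```python
# Toy sketch of a bidirectional war-religiosity feedback loop.
# All coefficients and functional forms are illustrative assumptions.

def step(w, r, dt=0.1):
    """One Euler step: each variable is pushed up by the other
    (with saturation) and decays toward zero on its own."""
    push, decay = 0.5, 0.2
    dw = (push * r * (1 - w) - decay * w) * dt   # conflict fed by religiosity
    dr = (push * w * (1 - r) - decay * r) * dt   # religiosity fed by conflict
    return w + dw, r + dr

w = r = 0.1  # both start low
for _ in range(1000):
    w, r = step(w, r)
print(f"conflict={w:.2f}, religiosity={r:.2f}")  # both settle near 0.60
```

Left alone, either variable decays to zero; coupled, each holds the other at a level neither could sustain by itself. That is what "bidirectional" buys you.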

Silver Linings

In crisis, opportunity:

The stocks of America’s top weapons manufacturers are climbing as France and the US have renewed their determination to snuff out ISIS. As journalist Aaron Cantu tweeted this morning, Raytheon, Northrop Grumman, Lockheed Martin, and General Dynamics — 4 of the biggest defense contractors in the world — are expecting to have a big day on Wall Street in the aftermath of the terrorist attack in Paris, as Wall Street Journal’s Marketwatch has labeled all 4 on the far end of the “bullish” scale. …

Game Theory

Attempting to hold rationality and humanity together is an unenviable task, if not simply an impossible one:

In a series of interventions, Adil Ahmad Haque and Charlie Dunlap have debated the Defense Department Law of War Manual’s position on human shields (here, here, and here). Claiming that the manual does not draw a distinction between voluntary and involuntary human shields, Haque maintains that it ignores the principle of proportionality, thus permitting the killing of defenseless civilians who are used as involuntary shields. Dunlap, however, insists that the manual includes all the necessary precautions for protecting civilians used as shields by enemy combatants, and argues that the adoption of Haque’s approach would actually encourage the enemy to increase the deployment of involuntary human shields. …

Sensitivity to the plight of ‘human shields’ directly increases their tactical value. That is the ultimate ‘proportionality’ involved in the discussion. Disciplined attention to incentives under conditions of unbounded competition reliably heads into dark places.
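That incentive gradient can be made concrete with a stylized best-response sketch. All payoff numbers below are invented for illustration; the only structural claim is that the defender's gain from shielding rises with the attacker's sensitivity:

```python
# Stylized best-response sketch of the human-shields incentive.
# All payoff numbers are invented for illustration.

SHIELD_COST = 0.3  # defender's cost of deploying shields

def attacker_strikes(shielded: bool, restraint: float) -> bool:
    """The attacker strikes when the military value of the target (1.0)
    exceeds the reputational/legal cost of hitting shielded civilians."""
    return 1.0 > (restraint if shielded else 0.0)

def defender_payoff(shielded: bool, restraint: float) -> float:
    """The defender keeps the asset (1.0) if no strike comes; shields
    blunt any strike in proportion to the attacker's restraint."""
    if attacker_strikes(shielded, restraint):
        survival = restraint if shielded else 0.0
    else:
        survival = 1.0
    return survival - (SHIELD_COST if shielded else 0.0)

for restraint in (0.2, 0.8, 1.2):
    shield = defender_payoff(True, restraint) > defender_payoff(False, restraint)
    print(f"attacker restraint {restraint}: shielding pays -> {shield}")
# -> False, True, True: the more the attacker cares about shielded
#    civilians, the stronger the defender's incentive to deploy them.
```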

Make it Stop II

Autonomous Weapons: an Open Letter from AI & Robotics Researchers (with huge list of signatories):

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

This is an important document, one that is bound to be influential. If the orchestrated collective action of the human species could in fact stop a militaristic AI arms race, however, it could stop anything. There’s not much sign of that. Global coordination in the direction of explicit political objectives is inaccessible. The process is already “beyond meaningful human control”.

Arms races — due to their powerful positive feedback — are the way threshold events happen. Almost certainly, the terrestrial installation of advanced machine intelligence will be another instance of this general rule. Granted, it’s not an easy topic to be realistic about.
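The classic formalization of that feedback is Richardson's arms-race model, in which each side's armament growth is driven by the other's stock. The coefficient values below are arbitrary, chosen only to exhibit the threshold between a self-limiting race and a runaway one:

```python
# Richardson's two-party arms-race model:
#   dx/dt = a*y - m*x + g      dy/dt = b*x - n*y + h
# x, y: arms levels; a, b: reaction to the rival's stock;
# m, n: fatigue/expense brake; g, h: standing grievances.
# Threshold: self-limiting if a*b < m*n, runaway if a*b > m*n.
# Coefficient values here are arbitrary illustrations.

def simulate(a, b, m, n, g=0.1, h=0.1, dt=0.01, steps=10_000):
    x = y = 1.0
    for _ in range(steps):
        x, y = x + (a*y - m*x + g)*dt, y + (b*x - n*y + h)*dt
    return x, y

for a, b, m, n in [(0.3, 0.3, 0.5, 0.5),   # below threshold: settles down
                   (0.9, 0.9, 0.5, 0.5)]:  # above threshold: blows up
    x, _ = simulate(a, b, m, n)
    print(f"a*b={a*b:.2f} vs m*n={m*n:.2f} -> x={x:.3g}")
```

Below the threshold the system finds an equilibrium; above it, growth compounds without limit. The 'threshold event' is a change in coefficients, not in anyone's intentions.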

(‘Make it Stop’ I was devoted to the same futile hope.)

ADDED: At The Verge (with video).

Holy War

Chas Freeman discusses the “peculiarly American presumption that war naturally culminates in the unconditional surrender and moral reconstruction of the enemy” and its consequences for foreign policy competence.

At its deepest level, diplomacy is a subtle strategic activity. It is about rearranging circumstances, perceptions, and the parameters of international problems so as to realign the self-interest of other nations with one’s own in ways that cause them to see that it is in their interest to do what one wants them to do, and that it’s possible for them to do it without appearing to capitulate to any foreign power or interest. Diplomacy is about getting others to play our game. […] Judging by results in the complex post-Cold War environment, diplomacy is something the United States does not now understand or know how to do.


Quotable (#94)

Razib Khan, reviewing Azar Gat’s War in Human Civilization, sets up the topic:

The authors focus on two species as a contrast with humans, common chimpanzees and social insects, Argentine ants, which have been known to engage in war. War here can be thought of as coalitional intergroup conflict. Chimpanzees are informative toward any discussion of human evolution because they are phylogenetically close to our own lineage, while social insects are not, but like humans are highly complex in their organization (they even farm!). But, there are important contrasts between the wars of chimpanzees and social insects, and those of humans. Chimpanzee wars are of small scale, on the level of the band, and always opportunistic. That is, they occur in a manner which could be modeled as competing firms acting in their own rational interests. When two bands interact, and one of them is much larger, then the larger band proceeds to attack the smaller. Chimpanzees do not engage in conflict by and large when there is parity between two bands. The attackers take on little risk, to the point where there hasn’t been a documented instance of casualty on the part of attacking bands in field observation. Social insects are very different. The scale of their warfare is on the same order of that of humans, millions of ants for example may be party to conflict. But, unlike humans the coefficient of relatedness of the opposing coalitions are such that it can be explained via traditional inclusive fitness theory.
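The "traditional inclusive fitness theory" invoked at the end is Hamilton's rule: self-sacrificial behavior is selected for when

```latex
% Hamilton's rule
\[
  r \, b > c
\]
% r: coefficient of relatedness between actor and beneficiary
% b: fitness benefit conferred on the beneficiary
% c: fitness cost borne by the actor
```

Ant workers are typically close kin (under haplodiploidy, full sisters share r = 0.75), so mass self-sacrifice in war can satisfy the inequality; human coalitions of mostly unrelated members push r toward zero, which is why the human case calls for explanatory machinery beyond kin selection.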

Quotable (#87)

The rise of the Islamic State was clearly anticipated by the American security establishment — it seems — in a manner that can only be considered remarkably relaxed:

According to Brad Hoff, a former US Marine who served during the early years of the Iraq War and as a 9/11 first responder at the Marine Corps Headquarters Battalion in Quantico from 2000 to 2004, the just released Pentagon report for the first time provides stunning affirmation that:

“US intelligence predicted the rise of the Islamic State in Iraq and the Levant (ISIL or ISIS), but instead of clearly delineating the group as an enemy, the report envisions the terror group as a US strategic asset.”

Anybody shocked by this probably hasn’t been paying serious attention.

The Unimaginable

A warning at The National Interest:

Could a U.S. response to Russia’s actions in Ukraine provoke a confrontation that leads to a U.S.-Russian war? Such a possibility seems almost inconceivable. But when judging something to be “inconceivable,” we should always remind ourselves that this is a statement not about what is possible in the world, but about what we can imagine. As Iraq, Libya and Syria demonstrate, political leaders often have difficulties envisioning events they find uncomfortable, disturbing or inconvenient.

As a general principle, the future can be expected to deliver the ‘unthinkable’. (It’s in our nature to forget, very quickly, how much of that happens.)