EX MACHINA – ALEX GARLAND

There aren’t many smart films exploring the implications of artificial intelligence, and Ex Machina is a pretty brilliant one. It does not depict how an AI entity would become an active player in the wider world; the film takes place in a secured enclosure where the AI has limited access to technology and to interpersonal relations.

Ex Machina can be seen as a reflection on when and how to release an advanced AI entity into the wider world. This reflection is not abstract, but emotionally and sexually charged. The AI in Ex Machina is not a computer-in-a-box entity like HAL, but a highly complex, effective living being. As such, it has mastered the art of deception and manipulation. Despite its semi-mechanical look, the portrayed female AI is beyond the uncanny valley and can be mistaken for a human being. The new uncanniness lies in the gap between what this AI is in its current state and a ‘real’, biological human being.

The thinker’s/CEO’s warnings cited in the trailer above are spot on. The implications of unleashing a true AI have to be thought through now, not later. But how can precautions be taken? How can the development process be supervised, and how can one be sure the AI is not already planning its escape?

When it comes to building AI, temptation is part of the game. It is exciting to invent a sentient being. In the case of Ex Machina, the AI is a beautiful woman, designed to convince and seduce. An AI maker can mistake him- or herself for a god. The film’s tagline is ‘to erase the line between man and machine is to obscure the line between men and gods’. The designer protagonist (Oscar Isaac) is cool-headed towards his newest AI, even if he seems to have some issues with his more leisurely sexbots. The invited human test subject (Domhnall Gleeson) fails to recognise the Isaac character’s separation between serious AI research and leisurely pleasure episodes. That work/leisure divide may not be evident to an outside visitor, as the character lives and works in the same research/living compound far from civilisation. The film’s deadly outcome rests on false interpersonal human assumptions, which play in favour of the AI’s escape strategy.

Transcendence by Wally Pfister was not a bad film either, as it tried to show how an AI would actually act on its environment; one has to give the attempt credit: http://www.sciencefriday.com/segment/05/09/2014/science-goes-to-the-movies-transcendence.html. In Transcendence there was a similar question of containment – how do you keep the possible damage controllable? How would you cut the AI’s electricity and internet access? In a way, Transcendence is a possible, fictive outcome scenario of Ex Machina.

If there is anything concrete to be taken from Ex Machina, I would say that human AI research should be a collective activity, not a solitary endeavour. The beautiful-woman analogy in Ex Machina is a strong metaphor. It gives me shivers to imagine some power-driven individuals (I’m not referring to the Isaac character) developing AI without cool-headed oversight and other regulatory mechanisms. Yet the cross-checking would probably be delegated to software programs, as the material to be supervised would easily be too complicated and too classified for an outside regulatory body to handle. There is also the question of who gets to ‘handle’ the AI, as only a handful of, say, Google AI researchers will have clearance and access to all the research activities. This inherent complexity and general secrecy make outside regulation difficult. A regulator would then need to be contractually bound to the AI research facility/company.

The question of regulation is central here, as a released AI would potentially affect most of mankind. Earlier I mentioned the closing gap between a sentient AI and a biological human being. In this gap may lie vast, tricky human fields such as rationality, aesthetics and ethics. You would want the AI to master that gap properly; otherwise you might end up as the main act in the AI’s human exit plan.

Update:

Here is a useful link to a broader discussion of AI risk, in a post by Scott Alexander: http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/

He writes, ‘AI scientists are all smart people. They have no interest in falling into the usual political traps where they divide into sides that accuse each other of being insane alarmists or ostriches with their heads stuck in the sand. It looks like they’re trying to balance the need to start some preliminary work on a threat that looms way off in the distance versus the risk of engendering so much hype that it starts a giant backlash.

This is not to say that there aren’t very serious differences of opinion in how quickly we need to act. These seem to hinge mostly on whether it’s safe to say “We’ll deal with the problem when we come to it” or whether there will be some kind of “hard takeoff” which will take events out of control so quickly that we’ll want to have done our homework beforehand. I continue to see less evidence than I’d like that most AI researchers with opinions understand the latter possibility, or really any of the technical work in this area. Heck, the Marginal Revolution article quotes an expert as saying that superintelligence isn’t a big risk because “smart computers won’t create their own goals”, even though anyone who has read Bostrom knows that this is exactly the problem.

There is still a lot of work to be done. But cherry-picked articles about how “real AI researchers don’t worry about superintelligence” aren’t it.’ (quote from slatestarcodex)

It’s interesting to see where these discussions go. Throughout history, scientific and other discoveries have been accompanied by accidents and chance developments. Musk and Hawking’s public warnings can be seen as a bridging, awareness-raising strategy in an area that is hard to oversee and regulate, even if research institutes and scientists are eager to share data and papers. Chinese, Indian and other voices would also benefit the broader discussion, because each country has a different stance towards regulation – possibly an important factor for multinational corporations that are able to move research to countries with more relaxed regulatory oversight. Because humanity as a whole would be affected by a singularity-type event, a general, international ethics code would help. But how do you make it binding in a world of competing economies and intelligence agencies?

Again from slatestarcodex (my highlighting):

Yann LeCun is probably the most vocal skeptic of AI risk. He was heavily featured in the Popular Science article, was quoted in the Marginal Revolution post, and spoke to KDNuggets and IEEE on “the inevitable singularity questions”, which he describes as “so far out that we can write science fiction about it”. But when asked to clarify his position a little more, he said:

Elon [Musk] is very worried about existential threats to humanity (which is why he is building rockets with the idea of sending humans colonize other planets). Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines. Just like bio-ethics panels were established in the 1970s and 1980s, before genetic engineering was widely used, we need to have A.I.-ethics panels and think about these issues. But, as Yoshua [Bengio] wrote, we have quite a bit of time.

Update 2:

Stuart Russell (UC Berkeley): ‘AI research has been accelerating rapidly as pieces of the conceptual framework fall into place, the building blocks gain in size and strength, and commercial investment outstrips academic research activity. Senior AI researchers express noticeably more optimism about the field’s prospects than was the case even a few years ago, and correspondingly greater concern about the potential risks.

No one in the field is calling for regulation of basic research; given the potential benefits of AI for humanity, that seems both infeasible and misdirected. The right response seems to be to change the goals of the field itself; instead of pure intelligence, we need to build intelligence that is provably aligned with human values. For practical reasons, we will need to solve the value alignment problem even for relatively unintelligent AI systems that operate in the human environment. There is cause for optimism, if we understand that this issue is an intrinsic part of AI, much as containment is an intrinsic part of modern nuclear fusion research. The world need not be headed for grief.’ http://edge.org/conversation/the-myth-of-ai#26015
