Virtually Human

11 August 2015 | IBM Watson, New York City, USA

Virtual Futures presents a panel on artificial intelligence, intelligence augmentation and mind clones at IBM Watson, New York.

Artificial intelligence (AI) is already part of our daily lives. Humans increasingly delegate tasks and outsource their thinking to artificial agents that are able to make decisions and perform routine tasks on their behalf. Some of the most ubiquitous agents perform medical diagnostics, execute stock market trades, and make decisions as to our eligibility for car insurance or bank loans. Indeed, IBM’s Watson is one such cognitive computing platform, helping humans inform their decision-making processes.

The pervasive way in which AI has entered the public consciousness has driven an increasing fascination with the semi-autonomous, semi-intelligent software agents that will come to form the next wave of computing, inform the future of business, and perhaps shape the next incarnation of human experience.

This panel has been curated to address an urgent need to explore some of the latest perspectives on the relationship(s) between humans and non-human machine agents. The panel will explore whether we might ever create machines that match, or even surpass, the distinctive traits and achievements that make us human. It will also address the need to understand where our fascination with machines that exhibit human-like capabilities and intelligences originates.

Most importantly, this panel will address the fundamental differences between artificial intelligence, cognitive computing, software agents and machine intelligences – terms that are often used interchangeably.

It will be increasingly important to make these distinctions if we are to better navigate a world where non-human/non-biological systems act intelligently and begin to exhibit life-like behaviours.

Panelists

Martine Rothblatt, CEO of United Therapeutics, Author of "Virtually Human: The Promise - and the Peril - of Digital Immortality"

Prof. Steve Fuller, Professor of Sociology at the University of Warwick, founder of Social Epistemology

Dr. Dan O'Hara, Senior Lecturer at New College of the Humanities

Michael Karasick, Vice President Innovations, IBM Watson Group

Moderated by: Douglas Rushkoff, Media Theorist

Team Human Podcast

An edited version of this discussion was released on Douglas Rushkoff’s Team Human Podcast in September 2017, with additional insights from the host.

Transcript

Douglas Rushkoff: Let's start at the beginning. Michael, I'm still trying to understand: what is Watson? Maybe it's my hubris as a human, but my natural tendency is to keep trying to put Watson in a very particular box. Right now, that box is a very interactive form of Google: a Google that can talk back and refine. I want to know, really: What do you guys mean by a cognitive machine? What do you mean when you say, "Watson reads books" or "Watson has confidence"? Is this drama to help us understand processes that are going on, or does Watson read books? Does Watson have confidence? You know - what is going on in there besides the database?

Michael Karasick: Alright. Five minutes, huh? Okay, so short story. I guess IBM Research has been working on AI technologies for a very long time - probably 40 or 50 years. So people talk about Watson playing Jeopardy in 2011 as sort of new work, and I laugh. We had a team in IBM Research working on an open problem in AI called 'Deep Question and Answer'. The point was: can you build a system that can answer domain questions? Ask a question; get an answer; have the answer be useful. We built a system that answered a particular kind of question that we'd call a factoid-based question. We trained the system by reading Wikipedia and by ingesting enormous numbers of documents, doing some natural language processing, teaching it to understand the kinds of questions that are asked in the case of Jeopardy, and being able to answer one, two, three, four of the questions in a very highly redundant corpus. So that was where we started.

The confidence point is that statistical machine learning or deep neural network based learning - none of them is precise - so the genius of what the team that built it did was to do a pretty good job with hundreds of different scoring functions, and with algorithmics to combine those scoring functions to provide a ranked set of answers. Along with that, an aggregation step produces a sense of how good each answer is. That's where the confidence comes from.
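[A rough illustration of the kind of answer ranking Karasick describes - many scoring functions combined, then aggregated into a single confidence per candidate. The scorers, weights and logistic squashing below are invented for the sketch; they are not IBM's DeepQA implementation.]

```python
# Toy ensemble answer scoring: several scoring functions are combined into a
# single confidence per candidate answer, and candidates are returned ranked.
from math import exp

def length_scorer(question: str, candidate: str) -> float:
    # Crude invented feature: shorter candidates score slightly higher.
    return 1.0 / (1.0 + len(candidate.split()))

def overlap_scorer(question: str, candidate: str) -> float:
    # Fraction of question words that also appear in the candidate.
    q_words = set(question.lower().split())
    c_words = set(candidate.lower().split())
    return len(q_words & c_words) / max(len(q_words), 1)

SCORERS = [length_scorer, overlap_scorer]   # a real system would have hundreds
WEIGHTS = [0.3, 0.7]                        # in practice, learned from training data

def confidence(question: str, candidate: str) -> float:
    # Weighted combination of scorer outputs, squashed into (0, 1).
    z = sum(w * s(question, candidate) for w, s in zip(WEIGHTS, SCORERS))
    return 1.0 / (1.0 + exp(-4.0 * (z - 0.5)))

def rank_answers(question: str, candidates: list[str]) -> list[tuple[str, float]]:
    scored = [(c, confidence(question, c)) for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    q = "Which research division built the Jeopardy-playing Watson system?"
    for answer, conf in rank_answers(q, ["IBM Research", "A very large search engine"]):
        print(f"{answer}: confidence {conf:.2f}")
```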

Fast forward a couple of years: what we've been doing is decomposing the original Watson system and augmenting it with many other engines, which we call cognitive - and I'll define that in a moment - to build a platform. If you fast forward to where we are now, we're building a set of cloud-delivered services that do interesting things. Everything from answering questions to analysing the personality of a writer by analysing their words - not OCR, but literally the text that they produce. The idea is to build a system that derives insight from signals, extracting the signal from the noise, if you like. Did that get all your questions?

Rushkoff: Yes and no. The core question is: do you consider using the phrase, "Watson has confidence" problematic, as compared to saying, "Watson reports probability of correctness"?

Karasick: That's fine.

Rushkoff: But do you see...

Karasick: I'm not hung up on the word 'confidence'.

Rushkoff: I'm not saying that, no. I'm hung up on the word 'has'.

Karasick: Oh, you're hung up on the word 'has'?

Rushkoff: "Watson has confidence" is very different from, "Watson can report the probability of an answers' correctness."

Karasick: I'm not hung up on the word 'has' either.

Rushkoff: But...alright. But that's good to know, because it seems that the Watson team has meticulously worked to present Watson in the shape of a character. Not just anthropomorphised, but personified to the extent that people think this is going to be a less threatening partner in your exploration, because now you've got this thing that has confidence. There's a choice there that's made, and there's a black box because it's all proprietary. A lot of us in the outside, regular little world think: Oh Watson has confidence. But Watson doesn't have confidence, right?

Karasick: Yeah, so Watson is a computer programme with a lot of algorithms that a number of us know about or don't know about. At the end of the day, Watson does a lot of processing and training in particular domains. Depending on the algorithm you use, you end up with a probability - 'confidence' is a fine word to use - of the answer being correct. But we talk internally about standard notions of precision and recall. It's not magic.

Rushkoff: Right, and there's no confidence, as it were? You know, so it's not a being with confidence? Just like a Google result might say, "Our system predicts that this is 46% relevant to your search", the Watson algorithms have predicted that this answer is 99% certain.

Karasick: 99 would be good.

Rushkoff: Yeah - for example. Great, okay, because I think what a lot of our conversation is going to be about is public perception of AI and where this is going, and trying to bring more rigour about the way we think about this stuff. If you've got a public that thinks that Watson has confidence, you can see where they get real Chappie, real fast.

Karasick: One of the things that's been interesting to me is when you build a system that demonstrates traits that we associate with intelligence - learning; having confidence; whatever we want to call it - you're right, people do anthropomorphise a little bit. Our view on what Watson is used for is to help people. We don't expect to see little Watsons all over the place doing what people do now, but we like to look for use cases that amplify someone's intelligence, giving them the ability to scale. We don't say 'cuddly'...although one of the companies I've been working with has a really cool little dinosaur available in three colours - powered by Watson - for kids. But you know, it's in some sense gratifying to see that the system has been treated and accepted in the way that it has.

Rushkoff: Okay, and we're going to come back to a lot of this.

Martine, I'm really interested to know what you mean by 'mind clones' and 'cyber consciousness', and what, from the real world, informs these predictions and visions of where things are going. In other words, with Philip K. Dick, we know what informed those visions of where things were going. It wasn't necessarily science or technology that was going on - it was medication. I'm interested - because you are the CEO of various technology companies, and a technology researcher - if you can explain these ideas and then what you brought to bear to assemble them.

Martine Rothblatt: Sure. So the book 'Virtually Human' is basically a book of ethical explorations. I'm an ethicist and not a pure scientist. I take the premise for the book as what is pretty much a thought experiment. Suppose there's a piece of software that one or more people consider to be humanly conscious - meaning they believe that software has the interior feelings, thoughts, fears, etcetera, that another person would have. If one or more people feel that way about some software, what sort of ethical rights and obligations apply to other flesh-and-blood people with respect to that software, and apply to that software itself?

So if that software - including an associated database - is deemed by one or more people to be identical in their core consciousness to that of another human they know - their best friend, themself, their mother, what have you - that's what I call in the book a 'mind clone'. The idea is that one or more people feel: wow, this software is me, just outside of my body. Or my mother, but outside of my body; my friend, but outside of my body. So what kind of ethical rights and obligations apply to other people in treating that software? What kind of rights and obligations apply to that software itself? That's what the 'Virtually Human' book spends its 350 pages or so exploring.

Rushkoff: The cyber consciousness; the mind clones - are you thinking of them as conscious or as good-as-conscious, as far as the interactor?

Rothblatt: Well, as mentioned, there's a very broad continuum of possibilities. As I see it, it depends on what one or more people feel. One person's conscious cat is another person's empty box in a soft, feline sort of shape. What I point out in the book is that there's an infinity of different perspectives that one could have with regard to whether or not any particular software is conscious or is a mind clone. I suggest some different methods for developing a larger social consensus as to whether this is cyber consciousness or just a fancy puppet, or an interactive journal or what-have-you. I point out that probably until the end of time, there will be debates and arguments over whether or not what one person thinks is cyber consciousness is, in fact, cyber consciousness.

Just like after probably 10,000 years of humans and dogs hanging out together, there is still to this day a huge diversity of points of view. Somebody throws a dog out of the car window on a freeway, somebody hunts Cecil the lion in Zimbabwe, and loads of other people are horrified. Sometimes the consensus reaches the level that it's a felony crime in most states of the United States to throw a dog out of your car window and obviously, I agree with that point of view. Already in Zimbabwe and already around the world, there's a diversity of points of view on whether or not the dentist who shot Cecil the lion was a conservationist in drag or what-have-you. It's really fun as an ethicist, because the subject of the ethical discussion is itself as fluid as honey.

Rushkoff: It feels as if these kinds of conversations are new. You're alluding to the fact that especially with regard to things other than technology, they're old conversations. 

Rothblatt: Old, old, old.

Rushkoff: I'm wondering - and this is where Dan can really come in handy - I mean, how old are these conversations? Are we in a unique conversation now or is this something you've seen before?

Dan O'Hara: I think we've seen it repeatedly. There's a limit to documented history. We can only know as far as human documentation - or perhaps even archaeology - can go back and tell us. Maybe I can give you concrete examples of ones that are directly proximal. I like the way, Martine, you're emphasising the degree to which a society might project a view of consciousness onto something. If we take the most recent point in time when we had something a little bit like the debate we have now about AI - great trepidation on one hand, but also a massive transcendental optimism about something on the other - it was about 20 years ago. It was in about 1994, when people were first starting to hear of this Internet thing and not a lot of people really thought it would catch on. On the other hand, there were people who were proselytisers for a vision of cyberspace. The virtual was going to be a place into which we could penetrate; another realm. That's an analogous situation, but now, when we're talking about what Michael described - a machine that does something that is perhaps like thinking - if we subtract all the metaphors such as reading and understanding and learning, then yeah. We've had lots of those.

My favourite is a guy called Ramon Llull - a 13th century mystical philosopher who lived on Majorca. The Argentinian writer, Borges, writes about him, interestingly. He had a thing which he called a 'thinking machine'. It was basically a machine made out of a series of concentric circles, with terms written around the side. Things like 'wisdom', and 'truth' and 'eternity', with all of these crossed cables in the middle. You could rotate any of the discs so that the terms would correspond and deliver you a message. He had a whole load of disciples who followed this thing called the 'Ars Magna' using these machines. When Borges analyses these machines, he points out that Llull and Llull's disciples knew the machine didn't think. It didn't work and they knew it was broken, but that really didn't bother them. They said, "That's fine. It's fine that we've got the machine that doesn't work. We'll just combine it with another, and another, and another, and another." By this recurrent, sequential process of comparing one against the other, you would eventually rectify the errors of the first, and the second, and the third. What you'd end up with would be of course - not thinking, not thinking in that sense - what you would end up with, because he was a mystical philosopher, would be the word of God.

Now when we were in the Immersion Center and I was looking at Watson, I was thinking that this current neural network type of process is very similar to what Ramon Llull's disciples thought they were doing back in the 13th century. They ascribed to it a kind of consciousness. Does that, kind of...just to begin. I mean there are lots more examples.

Rushkoff: Yeah, when you talk about it like that, it's what my ancestors thought they were doing when they were writing and rewriting Torah again and again and again. It was an iterative loop through which something extra happens.

O'Hara: That's a really good example, because it actually brings us to the ethical questions and ethical obligations that Martine was bringing up a little bit earlier. What technologies in history have we granted absolute autonomy and executive power? Have we ever done that before? We're talking at the moment about the possibility of strong AI or of autonomous weapon systems that can make their own decisions independent of human agency. Has that ever happened before with any piece of technology? Yes, of course it has. As long as you take a broad definition of technology that isn't just to do with tools and gadgets, but say that technology is an ology like any other: like biology; archaeology - it's the logic of the study of how to do things. The logic of techne. Yeah, books are technologies, and we have granted absolute autonomy, absolute executive power, and absolute transcendence to the Torah, the Bible, the Quran - with varying degrees of complicated, real world effects.

Fuller: You left out the US Constitution. 

Rushkoff: I was going to say, yeah.

Karasick: Okay, why don't we start with the Magna Carta.

Rushkoff: Steve. So it feels like a lot of the time we go into this conversation about AI and then we end up talking about tools that are really good. We end up talking about some kind of intelligence augmentation thing and everyone backs away from Chappie. Until we're back in the movies and then we're thinking: oh my God, mummy, mummy! Running around, looking for its mother. It's like: oh no, poor thing. How do we bring rigour to this discussion? How do we develop an informed, rigorous discourse about this, and where's that happening?

Fuller: Okay, well look. What I'm going to do in this brief period is lay out some benchmarks here. First of all, I think it's fair to say that this whole business of trying to create artificial intelligence that can potentially surpass whatever human beings can do is actually something that is quite recognisable from the philosophical tradition. In other words, there's nothing weird about this. In fact, most of our philosophical theories - especially in the modern period, about what it means to do science, how you get at the truth...even our notions of ethics, whether we're talking about Kant or Bentham - actually presuppose agents that are much more powerful, cognitively, than ordinary human agents are. Part of what we call postmodernism is a retreat from that. Postmodernism is largely a retreat from that, saying, "Look. Human beings aren't these super utilitarians. They aren't these infinitely principled creatures. They aren't these creatures who can just amass all possible evidence and come up with the optimal solution to a problem." That's what postmodernism has basically been about. It's been about scaling down what it is that human beings should expect as 'reasonable'. There may be biological constraints with regard to human beings, Homo sapiens, that inevitably make sense of that idea. Nevertheless, it seems to me that insofar as we do identify with the philosophical tradition and its ideals of rationality and optimal goodness and all the rest of it, there is a sense in which artificial intelligence actually does respond to that. I think that always needs to be kept in view. It's not a strange or weird thing. So that's the first point to make.

I do think, however, that as we get into a position of improving artificial intelligence and all the rest of it - and I'm quite comfortable with the idea that we are going to do this - then we're going to have some interesting Turing test style problems. Basically, what we're talking about here is using a kind of sophisticated version of the Turing test as a kind of citizenship criterion. In other words, you're trying to figure out: okay, this being that has been artificially constructed and that can do all of these wonderful, amazing things that we think are very valuable in our society and so forth...are they sufficiently X in order to count them as one of us, and to give them citizenship rights?

Rushkoff: Like corporations or something.

Fuller: No no, but the interesting point about this, of course, is that if you look at human history and the various struggles that members of Homo sapiens have had to fight in order to get citizenship - we're talking about women, or ethnic minorities, or whatever - part of what has had to go on in that process is that one has had to think about the notion of belonging and citizenship in a somewhat more abstract way. In a sense, you don't start off at the get-go being prejudiced because the thing isn't the way you expect someone to be in their physical composition. I think it's taken a long time for that to be overcome, and in fact, it's still a struggle today, in terms of the way in which we interpret differently bodied beings. We have Martine here, but of course we could even just talk about women in the biological sense, or about ethnic minorities. There's a sense in which, in order to ascribe a level of equality where all of these beings can be part of a common citizenship and common society, there has been an enormous struggle within Homo sapiens, per se. We should expect that it will be a tricky issue. We might end up in scenarios like Blade Runner - where you have an actually very sophisticated notion of the Turing test going on, which gets the interrogator going into the psychoanalysis of your relationship to your mother and things like that as a way of trying to work out whether these beings are really human. We should expect something like that to happen, but that itself is not any greater problem than the problems we've been facing in the past. I don't see this as an in-principle problem. I just think it's part of the problem of increasing enfranchisement.

One of the things that's very good about the mind clones book - Martine's book - is that it gets into some of the legal stuff and the criteria issues about what you're looking for and how you judge these things. That's what it's going to boil down to. It's going to boil down to judgement calls that can end up receiving a sufficient amount of social consensus, and then it gets accepted by everyone as the new normal. 

Rushkoff: It's interesting to me that the conversation that we're having is basically about Watson's rights - before Watson or any AI actually becomes an AI, which may just be a fictional thing anyway; we have no evidence that there's cognition or experience that's going to happen. It seems to me that that whole sci-fi scenario is a distraction from the very real impact that intelligent agents can have right now, as Kevin has shown us. An algorithm following its instructions can crash the stock market, destroy wealth, and lead to real problems. We're sitting here, because we're so ethical, worrying about whether this algorithm is going to feel it if we turn the fucking thing off, when it's extracting value from all of us because it's programmed to do what giant Fortune 500 companies can do.

Fuller: Yeah but that's just the problem with capitalism, come on. That, in a way, is showing how capitalism is scaling up...

Rushkoff: Through artificial intelligence. 

Fuller: Sure, yes exactly - but that...

Rushkoff: I'm not, it is.

Fuller: I understand that. 

Karasick: So we had Philip K. Dick twice and now capitalism. This is awesome. Keep going. 

Rushkoff: It is, and IBM!

Fuller: No, no, no, no, no, no - because I do think the kind of issue you're talking about of how surplus value is going to be extracted in the future and how it might just be extracted by our mouse clicks as we're making choices on things on the Internet - I think that's a serious issue. That's an issue that you might say is part of the problem of capitalism. I think in a sense, we still have to address the issue of whether these machines have agency.

Rushkoff: Yeah, and that's part of that announcement that Hawking and those folks made. The development of this technology dovetails very conveniently with traditional corporate capitalism in a way that seems more than coincidental, more than a side-effect. It's investment that's going into the most promising work in this field, and if that investment is looking for a certain kind of return - yes, we will teach doctors how to heal patients, but what we're really doing is also helping doctors cope with big, industrial medical problems.

Fuller: Sure, sure, no, but this is where we need sophisticated social and political theory for the future.

Rushkoff: That's what we're here for. That's us. We are the human intervention, right now. This is the moment right here.

Fuller: Yeah, yeah, yeah, yeah, yeah. No, no. But I just want to make sure that you're not saying to pull the plug on this.

Rushkoff: If we have to, sure. I'm not saying whether to pull the plug or not. What I am saying is that humans should intervene in the process. 

Fuller: Well of course. 

Rushkoff: If you think intervention requires pulling out the plug, then we'll pull out the plug. I bet there's ways to intervene that don't...

Karasick: So there is a journey here. You've jumped there - there's a whole bunch of intermediate steps.

Rushkoff: That's what I'm saying, we've jumped to artificially thinking things.

Karasick: Please, let me tell you what I worry about. As you pointed out, Watson does some fairly sophisticated things and we're using it for some fairly sophisticated tasks. We are writing systems to help people deal with scale and complexity. The first domain we looked at after the game was oncology. Every day I come in thinking about what an incredible responsibility that is. I worry that people forget - back to your discussion earlier - that these things are computers and they do what they do. I worry that people ascribe notions of intelligence or cognition, or free will - thank you, nobody, for bringing that up - to these computing systems, which are designed to help people solve problems. I very much worry that people forget what they are.

Fuller: Can I ask you a question on this point?

Karasick: Yeah, please.

Fuller: I don't know if you know but there is this concept in library and information science called 'undiscovered public knowledge'. You know about this, right? Because when we had the little Watson Immersion thing earlier before this event, it was about there being all this literature out there and no individual scientist can read everything, and so the idea is to bring all of this stuff together. The way it was presented - in the demonstration at least - was kind of Watson operating as a tool, in the sense that you'd already have a medical researcher or somebody who's looking to solve some kind of problem and then you'd deploy Watson to bring together the relevant literature to try to come up with some kind of...

Karasick: Tap you on the shoulder and say, "What about that?" 

Fuller: Sure. Now the question is: is there any anticipation or any expectation that Watson could do this for itself? As we know, the vast majority of academic literature goes unread. There's a potential...

Rushkoff: Someone's going to read my stuff! 

Fuller: There's a serious issue here, because undiscovered public knowledge is, in a sense, a new raw material being generated by the academic community that no one's really looking at very seriously to see how the stuff may be brought together. When we talk about research grants - especially in the sciences - the level at which you have to show that previous research has been done in the field you wish to get funding in is relatively minimal and superficial. Whereas in fact, if you had very sophisticated Watson-style things on the lookout all the time, trying to mine stuff, they could actually come up with solutions to problems where ordinary humans might think: Oh my God, I've got to do a new piece of original research - because they never joined the dots between the different articles in different fields.
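[The idea Fuller is drawing on has a classic formalisation: Swanson-style 'ABC' literature-based discovery, in which papers in one field link A to B, papers in another field link B to C, and the unstated A-C connection becomes a candidate hypothesis. A minimal sketch, with hand-written example links standing in for what a real system would extract from millions of abstracts:]

```python
# Toy Swanson-style "ABC" literature-based discovery over hand-written links.
# A real system would extract these term pairs from large document corpora.

field_one_links = {            # e.g. clinical literature: A -> B
    "fish oil": {"blood viscosity", "platelet aggregation"},
}
field_two_links = {            # e.g. basic-science literature: B -> C
    "blood viscosity": {"Raynaud's syndrome"},
    "platelet aggregation": {"Raynaud's syndrome", "migraine"},
}

def candidate_hypotheses(a_to_b, b_to_c):
    """Propose A-C links that no single article states directly."""
    hypotheses = {}
    for a, b_terms in a_to_b.items():
        for b in b_terms:
            for c in b_to_c.get(b, set()):
                hypotheses.setdefault((a, c), set()).add(b)
    return hypotheses

for (a, c), bridges in candidate_hypotheses(field_one_links, field_two_links).items():
    print(f"Possible link: {a} -> {c} (via {', '.join(sorted(bridges))})")
```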

Rushkoff: Right, well this is why Al Gore built the Internet, if you remember. 

Fuller: Ah! That's the man who did it.

Rushkoff: That was his original conversation. That was what he was arguing.

Fuller: Well no but the point is, given that you're the guy running the thing...[pointing at Karasick]

Karasick: So we've deployed systems that do that in a directed way.

Fuller: Can you programme Watson to do it spontaneously? 

Karasick: I ask questions like that. My team is a mix of engineers and researchers and we have a Braintrust research team at IBM as well. I always ask them about programming the ability to do strategic thinking. To me that seems like the holy grail so to speak, here. They look at me and say, "You know, that's a pretty hard problem." I go, "Yes it is." But it's an ongoing question. I don't know if anybody here has seen people attempt to do this kind of thing, but that is what you're asking. 

Fuller: Yeah, yeah. Because you know, we're totally insane in academia. We're constantly churning out publications that nobody reads; it's a waste of everyone's time and money. I'm saying this for the camera here. But there's a sense in which if you had some kind of being - whether it be human or Watson - actually reading stuff and trying to put stuff together, you might actually make a lot more progress a lot more quickly than just commissioning new research from scratch.

Rushkoff: Yeah. I mean, there's one other thread I want to explore before we have everyone engage with us, and I'll start with Martine on this. There's what feels like a West Coast singularity trend, right? Where people seem to be arguing - or actually are arguing, at me - the idea that the history of our cosmos is information striving for complexity. It uses matter to become atoms, to become molecules, to become cells, to become organisms, to become cultures, to develop a civilisation. Now, we have machines and silicon. As silicon and our machines become more complex than us, they are essentially the next level of evolution, and then we are really only important in that journey insofar as we can keep the machine going. When I argue, "No, no. Humans matter", they say, "Oh, you only say that because you're a human" - as if it's some kind of hubris. That concerns me. But as an ethicist, is this a kind of hubris, or a solipsistic understanding of consciousness and what matters?

Rothblatt: First of all, I live on the East Coast, so I can't speak for the people on the West Coast. But I completely agree with what you're alluding to. I think it's totally hubristic. From my own ethical view, I'm a very big believer in what I would call 'The School of Awe and Wonder'. To me, just appreciating the plants outside, the beautiful design in here, each individual as an individual person - this is enough purpose for the whole universe, right there. We can't help ourselves creating greater and greater degrees of order and complexity, and we can't help the fact that so often, that order and complexity crashes down on us. The road to hell is paved with good intentions, sort of thing. I would be really sceptical, myself, about an ethical paradigm based on some kind of Nietzschean view that we're on this railroad to Ayn Rand-ism. Instead, I would be much more comfortable with an ethical worldview based on cherishing the beauty and the importance of all the life that's around us.

Karasick: That was poetic.

Rushkoff: That was.

Rothblatt: That's what I do. No, no, no! I only just got rights a couple of years ago as a transgendered person, so I'm not going to go running for Office. 

Rushkoff: You could join Larry Lessig. That's another story for another day - he's running for President.

Fuller: He is!? Which party?

Rushkoff: Because the PAC failed, so now he's running. I was going to jump to Dan for a minute. There's this natural urge for an omega point. I don't know if that's male and Christian, but it's certainly old. This notion that this singularity or flip happens; that there's this moment, after which Ray Kurzweil's dad talks to us. Do you know what I mean? There's this moment where we're transcended. Where does that come from? Is that alchemy or something, or is this new? That we somehow evolve beyond ourselves; that the next thing happens; that we have another species: homo digitalus.

Fuller: Ooh, I like it.

O'Hara: The desire to produce duplicate simulations that have some kind of transcendent aspect to them is, in fact, profoundly not a scientific endeavour at all - but an artistic endeavour. One of the things that is happening here is that something that is an artistic inquiry is being talked about in very scientific terms, and I think that causes a great deal of confusion. There is always the desire to transcend, or to have some transcendent quality, but there's something else going on here as well. There's a thing in nature called emergence - or in computing terms, when we're talking about AI systems or autonomous weapons, we call it unintended or unforeseen consequences - sometimes good and sometimes bad. You combine certain elements, functions or processes, and you don't know that you're going to produce something new.

A very basic example would be the oldest bit of technology known to man: the fire drill. The stick for making fire where you rub the stick and keep rubbing it on the tinder. Eventually, what happens with one iteration of the stick where it's got a string around it and you're pulling the string is that it cuts a spiral into the stick. This is thought to be, by archaeologists, where the threaded screw came from. You've got an object that's then the threaded screw, but you didn't set out to create a threaded screw. You set out to create fire, and in the process, totally unintentionally with no agency at all - or human agency - you discovered a form inherent in nature: the helix, and you discovered a human use for it. That's a very positive example, generally taken, but it's also perfectly possible that we can produce unforeseen consequences that are not so positive for us. I think those anxieties are worth dealing with.

Rushkoff: Yeah, and emergence has become this sort of religious excuse of science, on a certain level. It's where they shove all of that mystical stuff. It's going to change state, and then consciousness is just emergent. It's become a hand-wave on certain levels, too. It's real and mythological.

You all are smarter than us - or certainly smarter than me. What are you all thinking? What did you come for? What would you like to know?

Audience participant 1: I have some points that, I guess, bring it back down to the here and now. I think that there's still a lot of confusion around the terminology and what marketing technology companies are doing now. How they're utilising AI to sell; to influence; to convince. I'm interested to know, in a sense, what's happening now? To your question, when we were doing the tour: where does the obligation begin in terms of informing the consumer/user?

Rushkoff: Yeah, maybe I can start on some of this disclosure. It certainly feels like our phones are getting smarter as we get dumber. The more we use mythological language to describe these things, the less informed people actually are about the fact that Watson is not thinking at me, he's doing whatever he does. 

Fuller: Yeah. I have to say that the thing that you're alluding to that took place in the tour - I was a little surprised. Again, this is maybe a question about where IBM sees its own responsibility with regard to these matters. As something like Watson gets smarter and smarter and is able to amass more data, to learn from that data, and to inform various sorts of insurance underwriters in a more strategic way - which is what the question I asked was about - to what extent are end users entitled to know what is going on? The answer that I received was basically: it'll vary from insurance company to insurance company, and IBM doesn't feel any kind of particular obligation with regard to what it might want to prescribe as appropriate levels of information that users ought to have, so that they can make informed decisions etcetera etcetera. There is a kind of practical question here about where you think your responsibility ends with regard to this.

Karasick: Wow, so this is an old, old, old problem. The most recent one that comes to mind is the internet safe harbour discussion - the same thing. What's your editorial responsibility? The answer you were given was my take as well. I don't want Watson to have the responsibility to have to do that; I want a person to do it.

Rushkoff: Right, but then how can IBM present these technologies in ways that continuously imply the transparency benefits? That's the main thing I'm looking at tonight, now. IBM is responsible for the way Watson is being characterised. If Watson's being characterised as - and I know it's not you personally, the company is doing it - characterised as a kind of person...

Karasick: I do work for them. 

Rushkoff: I know, but you know what I mean. I'm not saying that you're responsible for the personification of Watson, or for trying to lead people to feel like Watson is confident or not confident - but that's the way he's being presented. Likewise, Watson's attributes are being presented as the ability to empower insurance companies to do actuarial analysis on our Facebook data or God knows what else is out there - rather than saying, "Look what Watson can do" in a way that informs all these people about what kind of data is being used, so that they know how to change their lifestyle in order to be better insurance risks. It's a matter of presentation.

Audience Member 2: So there's all this stuff about the atmospherics of agency, which we've experienced: this kind of sense of intentionality; this sense of friendliness. But there are situations where the question of responsibility is much more concrete. I'd like to ask the panel how you go about programming a self-driving car. In a crash situation, should it be programmed to maximise the safety of the person inside the car at the risk of hurting people outside? Or should there be some sort of utilitarian principle, where the car makes a calculation that ploughing over the ten people at the bus stop is worse than killing you? There's a life and death situation which can be programmed into a very imminent technology, and I'd like to know how we're going to handle that ethical question.

Rushkoff: How do we handle that? Someone?

O'Hara: Not very well. I know the problem to which you're referring, and it's being debated repeatedly in basically just two ways. The two ways you outline: a calculation of utility versus privileging the user. Traditionally - I'll go back to Ramon Llull, and the way Borges talks about thinking machines. He says that thinking machines, mythologically, have always been and can be defined as 'the desire to create a methodical application of chance to solving a problem'. The problem you're posing here is: what to do in a life or death situation when there are multiple lives involved. The answer of a thinking machine like Llull's would be that you need to have a much more complex system. You need to be able to calculate many, many, many more variables in order, simply, to minimise the number of deaths.

Audience Member 2: But that's not good enough though, is it?

O'Hara: It isn't good enough.

Audience Member 2: In the Watson model, you've got the various factors that are used to score things and then you have a set of algorithms that are used to rank the importance which those scores are given. Somebody's got to make a decision here. There has to be some responsibility somewhere - legally and ethically.

Fuller: Can I take a shot at this?

Rushkoff: Please.

Fuller: In terms of these self-driving cars, I think what will happen - this is my guess, assuming these things come into existence - is that there will be a kind of plastic aspect to this. In other words, you as the person who buys this thing will have an opportunity to programme it to a certain extent. If the car ends up doing horrible things, it will be as a result of the specific programming that you added beyond what the manufacturer added. The manufacturer will provide some basic things that will enable the car to be self-driving. In terms of how one calculates risks and when the car will speed up and all this kind of stuff: that will be left to the person buying it, to actually programme those final bits. If it then leads to some kind of catastrophe, you are responsible.

Audience Member 3: The car can tweak your preferences.

Fuller: Well that might be useful, too, as you're driving along. That could be the next step of GPS or something. I think that that's kind of the way to go on this, right? In the sense that I think manufacturers are kind of smart enough, that part of the self-driving thing is also to enable a certain amount of discretion with regard to the car owner. After all, for people who own cars, discretion is part of their whole idea of the freedom of the road and all this kind of bullshit.

Audience Member 2: But you can't have total discretion because there will have to be legal parameters. I can't programme my car to go into blood donor mode all the time and kill everybody else on the way.

Fuller: No, no. There will be limits as to how you can programme, and those will be legal things. But then, it's up to you to decide. If you end up deciding to programme your car in such a way that it does lead to a disaster, it's pretty clear that it's as a result of your programming.
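[A minimal sketch of the division of responsibility Fuller describes: the manufacturer ships a default crash policy, the owner may adjust a preference only within legally fixed bounds, and the chosen setting is recorded so responsibility can be traced afterwards. The weights and thresholds are invented for illustration, not any real manufacturer's system.]

```python
# Toy crash-policy model: manufacturer default, owner preference clamped to
# legal limits, and provenance recorded for later questions of liability.
from dataclasses import dataclass

LEGAL_MIN_OCCUPANT_WEIGHT = 0.3   # hypothetical regulatory floor
LEGAL_MAX_OCCUPANT_WEIGHT = 0.7   # hypothetical regulatory ceiling

@dataclass
class CrashPolicy:
    occupant_weight: float = 0.5      # manufacturer default: weigh everyone equally
    set_by_owner: bool = False        # provenance, for the liability question

    def set_owner_preference(self, weight: float) -> None:
        # The owner's choice is clamped to the legally permitted range.
        self.occupant_weight = max(LEGAL_MIN_OCCUPANT_WEIGHT,
                                   min(LEGAL_MAX_OCCUPANT_WEIGHT, weight))
        self.set_by_owner = True

    def choose(self, risk_to_occupants: float, risk_to_bystanders: float) -> str:
        # Weight each side's expected harm and steer to avoid the worse one.
        occupant_harm = self.occupant_weight * risk_to_occupants
        bystander_harm = (1.0 - self.occupant_weight) * risk_to_bystanders
        return "protect occupants" if occupant_harm >= bystander_harm else "protect bystanders"

policy = CrashPolicy()
policy.set_owner_preference(0.95)     # an extreme owner setting gets clamped to 0.7
print(policy.occupant_weight, policy.choose(risk_to_occupants=0.4, risk_to_bystanders=0.6))
```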

Karasick: We're hitting on something that we talk about a lot. We're hitting on two issues, here. One of them is liability: presumably, when you buy a car, whether it's self-driving or not, you assume the liability of driving and assume that the car behaves as intended. This brings you to the second piece here, which is going back to the unintended consequences point: how does one validate that the car is doing what it's supposed to do? Whose responsibility is it for the car functioning? In fact, can you even assert what a self-driving car is supposed to do?

Fuller: Yeah, but if you're able to programme it in a certain kind of way, you've got that kind of discretion - and if that is in fact what ends up explaining the cause of the crash, then it is your fault.

Karasick: So is it his fault if he takes the car right off the line in default mode?

Fuller: In default mode, no. That's a different matter.

Rushkoff: So then how do we make default? I guess in default, you just distribute injury as evenly as possible, and have an algorithm for that.

Fuller: No, I don't think so. It'll be kind of like what we do now. I would think that the default mode is kind of what we do now.

Rushkoff: Self preservation.

Karasick: That was my point on liability, exactly. 

Mark Stahlman: I'd like to try to turn this around a little bit. I'm Mark Stahlman, and I'm president of the Center for the Study of Digital Life, which, among other things, is an attempt to recreate what Marshall McLuhan did in the 1960s with his Centre for Culture and Technology in Toronto - which, by the way, was funded by IBM.

Karasick: Canadian [raises hand].

Stahlman: IBM Canada, correct.

Karasick: I'm Canadian.

Stahlman: Oh good, well then we should talk. The point I want to make here, and the question I'd like to ask, is: what do these technologies do to us? I want to be more specific about that. I was a computer architect in the 1970s, and everybody who's a computer architect knows that computers don't compute - they remember. The structure of all computers, including Watson, isn't about how fast can I do this? but: can I find the stuff I need to find and move things around? Human memory was radically transformed by the alphabet. We now have a company called 'Alphabet'.

Fuller: As of yesterday or something. 

Stahlman: What will these technologies do to human memory?

Rushkoff: Help us forget. It's interesting, what will they do? While we're at it, I'm interested in what the bias of the technology is. Does it make a doctor more compatible with big-pharma, say - rather than more compatible with some vitalistic sciences or something else? 

Stahlman: No answers about memory?

Fuller: Well, I do have an answer to the way you opened the thing, about what these machines do to us - because I think that is an interesting conversation. Here, I would bring up this concept from social psychology of adaptive preference formation. Namely, the more that you deal with machines and computers as a basis for getting along in life, the more that becomes a normal way of being, and you will then recalibrate what you think the human is in terms of what those machines can do. I think that's one of the things, to be honest with you - even though people talk about that as part of the dehumanisation process, or becoming more machine-like - that might end up being one of the means by which we actually incorporate the machines into our lifeworld as fully fledged agents, in the long term. Namely, we don't see such a big difference between them and us. I do think that's one of the things that one does need to watch for. Sherry Turkle, who's an ethnographer at MIT, studies the way in which children interact with computers, and how they personify them and so forth. I think that's part of the issue here. The more that we interact with all this stuff, the more it normalises it. We see the difference between machine and human blurring from the standpoint of sentiment, emotional attachment, whatever.

Rushkoff: But even specific to his memory question - I mean, maybe this is a question for you [pointing at Karasick]. When we got pencil and paper, we started to make lists. That's the first thing we really did with them. We make lists so we don't have to remember. Now we have an externalised list, so our memory is freed up. In the list-and-print world, we're still the ones who are responsible for putting information together in order to make associations and decisions. Now we're saying Watson has this ability to take the two pieces and put them together. It's no longer just memory, but assembly. The comparison of two pieces; an intersection of data in order to draw conclusions. If we're depending on machines to do that, what is the higher order thinking that people are still doing?

Karasick: This is a pretty old one, you should ask Dan.

Fuller: Anything old is yours. [referring to O'Hara]

Karasick: No offence.

Rushkoff: Does it diminish our compositional ability?

Karasick: I always ask a couple of dumb questions. When we were learning, we all bought calculators - or at least I did - with a little square root button on them, and I don't think it made me less capable. It let me focus on other things. When I bought a cell phone, same thing. I actually hope that systems like this give us a little more time to be human, rather than less.

Rushkoff: Right, but being human now involves comparing pieces of data in order to blah, blah, blah. Being human after we have machines to do that is going to be a little something different. That's sort of what he's arguing [pointing at audience participant 4]. What it means to be human - our primary functions - will change. 

Karasick: That's okay.

Rushkoff: I'm not saying it's bad, but it's interesting to try to figure out how. Kevin?

Kevin Slavin: So there's this story of this thing that happened in 2005 that probably half the people in this room know really well, and the other half don't know, and who knows. Anyway, it's around chess, which was basically the first target for understanding the threshold of domination in terms of man versus machine intelligence. Basically, chess was considered a solved problem: there will never be a human born, from this point on, who will be able to beat even a mid-level set of chess algos. In 2005, I played chess. I did an online tournament that was freestyle. They said, "Anybody can bring whatever they want to the table. You wanna bring strong algos? You want to bring 100 people? Whatever you want to bring, bring." The way that this story gets told is that Kasparov is reviewing that match. Kasparov says, "Of course, two very predictable things happened. First of all, of course, the best algo beat the best chess player there. And of course, the best algo was easily dominated by a mid-level grandmaster playing together with a machine." Which is not an "of course" for most people - it's an "of course" for Kasparov. The thing that surprised Kasparov is that the ones who won were two American amateurs with nine laptops. Kasparov talks about the fact that what actually dominated was the ability to coordinate complex and heterogeneous ideas among the humans and among the machines.

The question is: What is that? How do we explain that outcome given the premise of, let's say, this building, or everything that we're talking about today? Do we understand that as a threshold problem - that maybe five years from now, two amateurs with nine laptops won't work anymore? Or, is there something essential about the quality of human intelligence and this other set of things that I'm not going to call intelligence, but these other processes that does in fact dominate in a contest? How do we explain it?

O'Hara: I think it's a little bit like all the work that you did on high frequency trading algorithms. The assumption was that because it's all happening on Wall Street, on the stock market - the same stock market that people are trying to trade on at normal speeds - it all happens within the same domain. Generally when this kind of thing happens - let's say somebody brings along a couple of very, very high speed algos to help them do something - there is some kind of complaint that, "Hey, that's cheating." So you start to establish rules for different domains of behaviour, whether it be for machines, or for humans, or for humans and machines, such as we see with Pistorius. Pistorius is not the poster boy for human enhancement anymore, for very good reasons. You establish rules and divisions between the supported and the unsupported. Therefore it's not a threshold. Rather, you post facto erect different domains in which things can happen. Of course, you can bring along something. If you can sneak in a machine - like those machines they banned in Nevada that you can sneak into the casino to cheat - it's perfectly possible to game the system. But then, our whole obligation is to work to try and stop the system being gamed. It's not a threshold question - it's a domain one.

Slavin: I'd actually like to continue a little bit with your question, because I don't think it was really answered. What it actually comes down to is: do you think that there are domains, that there are things that will always remain something that humans will somehow dominate? That if you put people together in some way, say you work with a number of people together and some machines, that's going to dominate over what you could build into a machine. Is there something that de facto somehow cannot be made AI? 

Rushkoff: So can people and some machinery beat Skynet? 

Slavin: Yeah, basically. 

Luke Robert Mason: The way this has been curated is that everybody in the audience is just as smart as the people on the panel. Please introduce who you are to everybody. 

Carla Gannis: I'm Carla Gannis, the Assistant Chair of Digital Arts at Pratt Institute. This is Peter Patchen, Chair of Digital Arts at Pratt Institute. We've been working with Luke for quite some time, and I'm a big fan of yours, and thank you so much. I'm speaking from the perspective of an artist. There's a Jeff Koons in the lobby. My question is, one: we haven't even been able to reverse engineer our own brains, and we can't understand our own programming on some level. What is your perspective on the machine that isn't just functioning, isn't just analysing data? What about the machine that's doing something non-utilitarian, like the artist? A lot of artists, like myself, are working in the digital domain. I rely on the machine and its algorithms to produce my art. I have started to develop a symbiotic relationship with my machines. How do you feel about the machine who isn't thinking in a utilitarian way - or maybe I shouldn't use the word thinking - but isn't functioning in that way, and what are the possibilities in that domain?

Karasick: Do you want to go? [pointing at O'Hara] This is kind of your area.

Fuller: There's some sort of Plato's Republic answer to that, probably.

Rushkoff: In some ways, we've been talking along this very Jeremy Bentham sort of perfection of society thing, and not about the quirkiness and strangeness of human experience, and how computers aid our consciousness and experience expansion rather than just our cognitive ability.

Fuller: But there are two issues, it seems to me, that you're raising, which I take to be a little different. One issue is this symbiotic relationship you have with the machine. Of course, we can talk about cyborgs. You're becoming a cyborg, basically, in your artistic mode. There's this issue of cyborg agency and, as it were, respecting what that is and giving it the credit it's due - that's one thing. Then there's this other issue that you're raising with regard to the machine's capacity to perform art by itself. I'm just wondering: why wouldn't you want that to be subject to something like a Turing test? Namely, whether you can tell if a machine did it or a human did it - and you'd have some fairly rigorous way of doing the Turing test; it wouldn't be one or two questions, it would be a very sophisticated type of thing.

O'Hara: Would this Turing test be the question: is it art?

Gannis: It's really interesting, because it seems like, throughout the 20th century, artists were trying to redefine themselves in relation to machines. Picasso in relation to photography, and in relation to physics. Later on, the conceptual artists of the 60s who started to work in this way were distanced from emotion and were about a set of instructions: the artist gives a set of instructions to a team of assistants. It's really interesting that we've started to emulate the machine. What happens in terms of the artwork that's produced by a machine? Is it going to be in the lobby of a big agency or something, like the Jeff Koons downstairs? Jeff Koons didn't make the art, a machine made it.

Fuller: If you look at the trends at the moment in the philosophy and the sociology of art, there is a tendency to be very radically conventionalist about what art is. Art is basically what people regard as art. The point is, in principle, this opens it up to the Turing test question. Namely, if people take it to be art and treat it as art because it's passed certain kinds of criteria - and you can set the criteria as you will - then it is art. That's in fact how we deal with it as human beings, at the moment. The substance doesn't matter here. With human beings, we already do a kind of Turing test for art.

Rushkoff: But hang on a sec, because we may have the medium and the message reversed here just a bit, depending on what you think is the purpose of art. If a machine can make a great piece of art, and it's great, and I don't know that it's not a human, and it affects my little neurons and all of that, art has served a purpose of aesthetic manipulation of my senses and made me feel a certain way. But, if the purpose of art is to serve as a medium between me and another human being, then we have to think about it differently. 

Fuller: That's a different issue, you're right. That is a different issue.

Gannis: There are artists now who are starting to develop or produce works where they're thinking about computer vision. They're developing works for computers instead of for humans.

Fuller: There you go, there you go. Exactly. You're right on the money there, that's right.

Gannis: Right, but does the computer want it? 

Rushkoff: Right.

Conor Russomanno: Hi everyone. My name is Conor Russomanno. I'm co-founder and CEO of OpenBCI, which is an open-source company dedicated to, essentially, understanding the brain. I also teach at Parsons so I have a little bit of that art essence in me. Carla stole about half of my question but I'm going to bring it back up. It seems as though there's somewhat of a consensus here that Watson is not a thinking machine, but rather a highly specialised machine for answering complex questions. I want to ask the question, is this truly artificial intelligence? Are we considering that artificial intelligence? It's curious to me that we haven't talked more about the notion of reverse engineering human intelligence, and instilling that into artificial intelligence. This question is mainly directed at Michael, but I'd like to also hear from Martine as well. What is the value in reverse engineering human intelligence before embarking on this grand challenge of building, or creating, or designing artificial intelligence? Do you think that's necessary? Can the ultimate AI be created without fully understanding human intelligence?

Karasick: Yes, yes you can. 

Russomanno: Are you, at Watson, putting research into reverse engineering the human brain, before...

Karasick: No you don't, you don't need to do it. I hate questions like that. Let me tell you why. Maybe I'm just a little too pragmatic. I don't get wrapped around the axle of whether it's an artificial intelligence or not. What I tell people is that we built a computer programme - a very sophisticated one - using a lot of technologies associated with the field of AI that does something useful, and interesting, and valuable. There is a highly charged debate in the field, as per your question. I just find that question distracting, sorry. Now Martine, say something interesting. 

Rothblatt: One thing that your question brings to mind for me - and continuing on the Canadian theme; everybody's saying something Canadian every few minutes - my good friend Steve Mann, who is a Canadian, has come up with a very helpful model for dealing with human interaction with technology. It's this model called sousveillance - spinning off of the concept of surveillance. His idea is that a somewhat egalitarian society is based on the fact that in whatever way I can observe you, you can observe me. From the tribe to the nation, we've respected that ethic in the vast majority of instances. That's why the "peeping Tom" is such a bad person - because one person can observe and the other person doesn't know it.

Then you begin to get into all of the surveillance technology that's out right now. Professor Mann's point of view is that there's nothing wrong with surveillance per se, so long as whoever is being surveilled has the right to surveil the surveiller, which is what he calls 'sousveillance'. If the city of New York wants to have cameras all over the place, no problem, but anybody who is a resident of the city of New York, or happens to be in the city of New York, should then have a right to go into the surveillance centre and surveil the people who are surveilling us.

Now what does this have to do with your question? It seems to me that all of this Watson type of technology - all of this AI type of technology - is actually a microscope or telescope into all of us. That idea is made very explicit with the concept of big data, which lets people see that somebody is looking at all of us. Not looking at us with an optical lens, but looking at us with a digital lens, if you will. I think the sousveillance point of view - which would be my point of view - is that whatever is looking into us only has the right to do that subject to an obligation that those being looked at get to look back into it. If somebody wants to use my data, I have the right to look at all of the data that's being acquired, in a way that means something.

Now, finally coming around to your question. I think it makes a huge difference whether a - no offence - dumb AI technology like Watson is looking at me. I would like to know that. I would like to have the ability to see, in a way that's understandable to me, how Watson is looking at me. Or whether a much more souped-up future Watson based on reverse engineering of the human brain - something that, even if it's far away from consciousness, is nevertheless tuned like a human brain - is looking at me. If this reverse brain engineering effort gets into computers - and I personally think that's all but a foregone conclusion - then it's important that it be labelled as such. That's much more important than a GMO label: a 'human-based observation organism', or digitalism. All of us should know that, and have a right and an absolute empowerment to see everything that's being observed about us - to understand exactly what parts of the human brain have been reverse engineered and embedded into that technology.

Fuller: Let me answer this in a slightly different way. I don't actually think reverse engineering of the brain is really necessary for artificial intelligence to make progress. If we ever regard artificial intelligences as agents and so forth, it'll probably have nothing to do with whether or not they have anything like our brain composition or organisation. I'm okay with that, but I do think there is an interesting aspect of the human brain: its energy efficiency relative to the computational power that it has. That, it seems to me, is a really interesting problem - namely, the efficiency with which it does what it does - because when computers try to replicate some of this, it takes an enormous amount of energy. There is a kind of interesting core problem about the brain with regard to energy efficiency. Especially if we're concerned that we're going to have this whole world powered by computers and we're worried about the environmental crisis with energy and so forth...brains are very efficient. If we can ever figure out how that happens, that would be a really interesting thing to solve. That's the way I would look at the brain issue.

Rushkoff: It's a good point, and we have to end actually. It's a good answer to end on, and a question to be left with. Which is the more efficient and productive path: to create smart machines or to try to make people smarter?
