Radical Technologies – with Adam Greenfield

27 July 2017 | LIBRARY London, London, UK

Virtual Futures presents leading technology thinker Adam Greenfield on his field manual to the Radical Technologies that are transforming our lives.

We’re told that innovations—from augmented-reality interfaces and virtual assistants to autonomous delivery drones and self-driving cars—will make life easier, more convenient and more productive. 3D printing promises unprecedented control over the form and distribution of matter, while the blockchain stands to revolutionize everything from the recording and exchange of value to the way we organize the mundane realities of the day to day. And, all the while, fiendishly complex algorithms are operating quietly in the background, reshaping the economy, transforming the fundamental terms of our politics and even redefining what it means to be human.

Having successfully colonized everyday life, these radical technologies are now conditioning the choices available to us in the years to come. How do they work? What challenges do they present to us, as individuals and societies? Who benefits from their adoption? In answering these questions, Greenfield’s timely guide clarifies the scale and nature of the crisis we now confront—and offers ways to reclaim our stake in the future.

In conversation with Luke Robert Mason (Director, Virtual Futures).


Transcript

Luke Robert Mason: For those of you who are here for the first time, the Virtual Futures Conference occurred at the University of Warwick in the mid-90s, and, to quote its cofounder, it arose at a tipping point in the technologization of first-world cultures.

Now, whilst it was most often portrayed as a techno-positivist festival of accelerationism towards a posthuman future, the “Glastonbury of cyberculture” as The Guardian put it, its actual aim hidden behind the brushed steel, the silicon, the jargon, the designer drugs and the charismatic prophets was much more sober and much more urgent. What Virtual Futures did was try to cast a critical eye over how humans and nonhumans engage with emerging scientific theory and technological development.

This salon series—and it has been a series, we’ve been running it for about two and a half years now—completes the conference’s aim to bury the 20th century and begin work on the 21st. So, let’s begin.

Luke Robert Mason: For many in this crowd Adam Greenfield needs no introduction. He spent over a decade working in the design and development of networked digital information technology, and his new book from Verso, Radical Technologies, a field guide to the technologies that are transforming our lives, tackles almost every buzzword that’s been forced down our throats by the so-called digital gurus and innovation directors over the last six months.

But unlike those evangelists, Adam confronts these problematic promises with a fresh and critical voice. From the smartphone to the Internet of Things, augmented reality, digital fabrication, cryptocurrency, blockchain, automation, machine learning, and artificial intelligence, every technology is deconstructed to reveal the colonization of everyday life by information processing. This book is one step in revealing the hidden processes that occur when the intentions of designers are mutated by the agency of capital. And anybody who’s joined us for our event with Douglas Rushkoff and Richard Barbrook knows that this may to some degree be a continuation of that discussion.

So in an age where our engagement with technology is one of unquestioning awe and wonder, when we find out about each new advanced tool through the language structured by the PR team, and where the commercial news outlets have to sell us the future, this book is an essential read. So to help us better navigate the future, please put your hands together and join me in welcoming Adam Greenfield to the Virtual Futures stage.

So Adam, what are the radical technologies? What do you define as the radical technologies and why did you select this particular set of technologies?

Adam Greenfield: That’s a great question. So, do you know who Verso is in general? Do you have a sense of who Verso is? Yeah, I figured you probably did. No, I see one shaking head. Verso likes to represent themselves to the world as the premier radical publisher in the English language. So they’re forthrightly left wing. They think of themselves as a publishing house of the left. And you know, for all of the different perspectives and tensions that are bound up in the left I think they do a pretty good job of representing that tradition.

So in the first instance it makes a fair amount of sense if you’re going to confront a title called Radical Technologies from an avowedly left wing publishing house, you would be forgiven for assuming perhaps that the intent of the author is to insinuate that these technologies have liberatory, progressive, or emancipatory effects when deployed in the world.

And I don’t actually mean anything of the sort. I mean that these are radical in the truer sense, in the “original” sense. In, if you will, the root sense of the word “radical,” which is that these are technologies which confront us at the very root of our being. They’re not merely add-ons. They’re not merely things which kind of get layered over everyday life. They’re things which fundamentally transform the relationship between ourselves and the social, political, economic, and psychic environment through which we move.

And it wasn’t very hard to identify the specific “technologies” that I wanted to engage in the book because you know, as we’ve already established these are the ones that are first and foremost in the popular culture, in the media right now—literally. And this is a torment and a torture for somebody who’s working on a book that’s intended to be kind of a synoptic overview of something which is evolving in real time. Literally every day as I was working on the book, I would open up my laptop and there would be The Guardian, there would be The New York Times, there would be the BBC with oh you know, cutting-edge new applications of the blockchain beyond Bitcoin. Or driverless cars are being tested in Pittsburgh. Or indeed somebody whose Tesla was equipped with an autonomous piloting device was actually killed in a crash.

So I am profoundly envious of people who get to write about settled domains or sort of settled states of affairs in human events. For me, I was dealing with a set of technologies which are either recently emerged or still in the process of emerging. And so it was a continual Red Queen’s race to keep up with these things as they announce themselves to us and try and wrap my head around them, understand what it was that they were proposing, understand what their effects were when deployed in the world.

And the additional challenge there is that I’m kind of an empiricist. I mean, one of the points of this book is to not take anything on faith. Do not take the promises of the promoters and the vendors and the people who have a financial stake in these technologies on faith. And neither should you take the prognostications of people who’re inclined towards the doomy end of the spectrum on faith. Do not assume anything. Look instead to the actual deployments of these technologies in actual human communities and situations, and see what you can derive from an inspection of those circumstances. And the trouble is that we don’t have a lot of that to go on. So that’s the mission of the book.

Mason: So the thing that to a degree unites all of those technologies, all the things you speak about in the book, is something that you’ve called the drive for computation to be embedded into every single aspect of the environment. You also call it the colonization of everyday life by information processing. Could you just explain that core thesis?

Greenfield: Yeah, sure. I guess in order to do that concretely and properly I have to go back to about 2002. I was working as a consultant in Tokyo. I was working at a shop called Razorfish. And Razorfish’s whole pitch to the world was “everything that can become digital will.” That was literally their tagline. Very arrogant shop to work in. Everybody was just suffused with the excitement of the millennial period and we all thought that we were like, so far ahead of the curve and so awesome for living in Tokyo.

And frankly, after September 11th of 2001 I was bored to death in my job and I was really frustrated with it. Because that was a moment in time in which everybody I knew kind of asked ourselves well, what is it that we’re doing? Is it really that important? It was a real gut check moment. Everybody I knew including myself, we all asked ourselves you know, we live in times where everything that we aspire to, everything we dream about, everything that we hope for, everything we want to see realized in the world, could end in a flash of light—in a heartbeat. So, we should damn well make sure that what it is that we’re doing on a day-to-day basis is something meaningful and something true.

And at that time I was mostly involved in the design of the navigational and structural aspects of enterprise-scale web sites and I had done about fifty of them for like Fortune 500 clients. And I hated the work and I hated myself for doing the work.

And so I asked myself what comes next after these web sites. Surely this cannot be the end state for the human encounter with networked information technologies. And I asked the smartest people around me you know, “What’s next after the Web? What’s next after the ecommerce sites that we’re doing?”

And given that it was 2002 in Tokyo, everybody said mobile. Everybody held up their little i‑mode devices and they said, “This green screen with the four lines of type on it, that’s the future.” And I couldn’t quite believe that we were going to force everyday life with all of its texture and variability and wild heterogeneity, that we were going to force all of that and boil all of that down to the point that it was going to be squeezed to us through this aperture of this little green screen with its four or five lines of text.

And I was just not particularly satisfied with the answers I was getting. And one person said something different, a woman named Anne Galloway. She said to me, “Actually, there’s this thing called ubiquitous computing. And as it happens there’s a conference on ubiquitous computing in Gothenburg in Sweden in about three weeks’ time. And it’s a little bit late but why don’t you see if your company will pay for you to go there and fly there and check it out and see what’s going on.” And so I trusted her and I said you know, she’s onto something here. This ubiquitous computing project feels like the future.

Now, what was ubiquitous computing? It was the name for the Internet of Things before the Internet of Things. It was essentially the attempt to literally embed sensing, transmission, display, storage, and processing devices into every fabric, every physical component, every situation of everyday life. All of the buildings, all of the vehicles, all of the clothing, all of the bodies, all of the social circumstances. It was a very aggressive vision.

It was predicated on Moore’s Law. It was basically the idea that these computing devices are getting so cheap that we can essentially scatter them through the world like grass seed. We can treat them promiscuously. It doesn’t matter if some percentage of them fails, because they’re so cheap. We’re gonna put processing into everything. And we’re going to derive knowledge about the world, and we’re going to instill analytics on top of this knowledge, and we’re going to figure out how to make our lives finally more efficient. We’re going to realize all of our hopes and dreams by capturing the signals of the activities of our own body, of the dynamics of the city, of the wills and desires of human beings. And by interpreting and analyzing those desires, we’re finally going to bring harmony and sense to bear in the realm of human affairs. That was ubiquitous computing circa 2002.

Mason: But then the reality was we didn’t discover shit. All we found was that this ubiquitous data collection was being used against us. We were the form of media that was being consumed, almost.

Greenfield: You anticipate me. That’s absolutely correct. You know, we were the product, it turned out. But that wasn’t clear for another couple of years yet. It didn’t really get— I mean, maybe I’m just very stupid and maybe it took me longer to figure out what I ought to have.

But that didn’t actually become clear to me until around 2008, right. 2010, even. There was something else that happened in the interim, which was kind of the last moment of hope that I myself personally remember having around information technologies. It was June 29th, 2007. It was the launch of the original Apple iPhone. And this single converged device, I thought, was the realization of an awful lot of ambition about making information processing human. I was still…I still believed that in those days, as recently as 2007. So as recently as ten years ago I still believed that.

And I went to work at Nokia in Finland to realize a competitor to that device. I was so inspired by that that I thought you know, that’s great for the first world. That’s great for the Global North. But Apple is really only speaking to a very limited audience of people in the relatively wealthy part of the world. Nokia is where the future is. Nokia at that point had 70% of the Chinese market share in mobile devices, 80% of the Indian market share in mobile devices. And I thought this is where we’re going to take all of these ambitions and force them to justify themselves against the actual circumstances of the lives and conditions that most people on Earth experience. I had a lot of hope about that. And as it turns out, that’s not what happened.

We were told that fishermen in East Africa would use their mobile devices to find out about market conditions and the latest available spot prices for the fish that they were about to dredge up out of the sea before they went to market. We were told that canonically, women would use this to learn about family planning and take control of the circumstances of their own fertility and increase their agency vis-à-vis their own communities. We were told that the canonical queer kid in Kansas was going to find other people like themselves and not feel so isolated anymore, and not feel like they were just one in a million that was arrayed against them—that they were going to find solidarity and life and voices that resembled them.

And it is possible that all of those things happened, anecdotally, on a small scale. But something else happened in the meantime. Which was the capture of all of these technologies and all these ambitions by capital.

Mason: Well that was going to be my next question. If 2008 was the day the Internet died I mean, what was driving the obsession up to that point? What was driving the obsession to collect this data, to make everything ubiquitous? The obsession to model the world. I mean, were these done with very kind of egalitarian viewpoints and just, capital happened to get involved and cause the mess that we’ve had over the last sort of six years?

Greenfield: In retrospect I want to say that those were the last years of the Enlightenment. I really do. It’s a pretty big claim but I think that the technologies that we attempted to bring to bear in those years were sort of the last gasp of Enlightenment thought. I mean think about it for a second, right. The idea that with this device that each one of you I assume has in your pocket or your hand right now, it gives you essentially the entirety of human knowledge, instantaneously, more or less for free, on demand, wherever you go. And you can do with it whatever you will. How is that not a realization of all of the ambitions that are inscribed in the Enlightenment project? It’s really something pretty utopian to me. And a fact, right. It exists now.

But we forgot to disentangle some things. I mean you know, much of this was done with again the best intentions. If you look back at John Perry Barlow and his Declaration of the Independence of Cyberspace. If you look back at— Again, the Californian ideology that suffused the early years of the Web and web development. The move towards openness, the move toward standardization. All of these things were done with the deepest dedication to the democratization of access to information. And if you think about for example the slogan of the Whole Earth Catalog you know, “access to tools and information,” again this was something that was realized in the smartphone project, and delivered to people by the hundreds of millions.

The trouble is that as I say in my presentations, something else happened. And it wasn’t the thing that those of us who were invested in making this happen imagined or actually believed would happen. It wasn’t any kind of emancipation except perhaps the kind that Marcuse would’ve called “repressive desublimation,” where all of these things that people had thought were unsayable in public were suddenly validated by their peer groups or suddenly validated in their echo chambers. And all of a sudden the most antidemocratic, the most reactionary sentiments became expressible in public. So in a sense we got what we asked for, but it wasn’t what we expected that it was.

Mason: Do you think there’s a degree of mid-90s retrieval in technologies such as blockchain? I mean these guys, the evangelists of blockchain say that they’re going to build Web 3.0 and it’s almost as if they forgot that was John Perry Barlow’s original mission, the decentralized Web. And these guys want to build a decentralized web but 50% of them are very young kids—my peers—getting into cryptocurrency trading and actually forgetting what that underlying technology could potentially do, or do you think we’ve already lost when it comes to blockchain?

Greenfield: Well… [laughs] I don’t think there’s 90s retrieval going on in the blockchain so much as a direct line of continuity from a 1980s project. People in this crowd— I’m reasonably sure the people in this audience will— Raise a hand, everybody who’s ever heard of the Extropians. Oh my goodness, none. No!

Mason: My first interaction with an Extropian was Max More.

Greenfield: [laughs]

Mason: So he was the transhumanist philosopher and I met him at 18 years old in a hotel room in London.

Greenfield: I’m so sorry.

Mason: And he told me that I could ask him any question apart from about the time they cryogenically froze his best friend’s mother. So this was the Extropian philosophy, and a lot of those guys went and became CEOs of cryonics companies and wanted to live forever. I mean, there was that corruption within what happened. The philosophy never matches the execution and I wonder why?

Greenfield: Except, except, in the blockchain. So let me let me explain to you who I think the Extropians are. This is a beautiful vignette that illustrates something about it. These were technolibertarians in, but not primarily of, the Bay Area in the 1980s. They were hardcore Randians. They were hardcore believers in individual sovereignty. They thought of the state as an absolutely unacceptable intrusion on the affairs of free, sovereign individuals. They thought that the only valid relations that ought to exist in the world were relationships of contract between free, willing, consenting adults.

And like other libertar— Are there any libertarians in the audience that I’m going to offend terribly by making fun of? No? Good. Okay. Because I think this is fundamentally an adolescent and specifically an adolescent male worldview. It’s a view that suggests that I’m gonna do whatever I want and Mommy and Daddy can’t tell me that I can’t. And there’s something kind of like, pissy about it.

But these were people who would swan around the Bay Area in ankle-length leather trenchcoats. They gave themselves names like Max More, because they were all about the positive future and you know, our positive aspirations in that future. They believed in the absolute unlimited ambit of human progress. And they would give themselves… You know, they had acronyms like SMILE which was… What was SMILE? I’m forgetting this. But it was something about life extension was the “LE,” right. Ah! Yeah, smart drugs, intelligence amplification, and life extension. And they thought they were going to live forever. They literally thought they were going to live forever and of the ways—

Mason: They still do.

Greenfield: Yeah. Yeah. And one of the ways that they thought they were going to do this was by cryonically freezing themselves when they thought they were about to die, until nanotechnology had advanced to the point that their bodies could be resurrected, their personalities could be downloaded into the newly-revivified bodies, and they were going to go on and live immortal lives in the paradise to come that was realized through technology. These people really believe this stuff. And they were mostly, and rightfully, forgotten. Because this philosophy— You’ll forgive me, I personally believe this philosophy is a joke.

Except a couple of them went more or less underground and set about building a part of this vision. Not the cryonic part. Not the smart drugs part. Not the infinite intelligence expansion, or the bush robots, or the Dyson spheres around the sun. Or the Computronium. They set about building the financial infrastructure that would be required by a universe that was populated by sovereign individual immortal entities.

And that’s how we get the blockchain. We literally get the whole infrastructure of the smart contract, and smart property, and the calculational establishment of trust and the whole trustless architecture and infrastructure of the blockchain from people who didn’t believe that the state had—or any central authority—had any rightful business interfering with our affairs. So they built an infrastructure to substantiate the way of life that they believed in. And it worked.

Mason: The crazy thing is I don’t think the cryonicists are there just yet. I don’t think they’ve even discovered blockchain. The funny thing about a lot of the Extropy folks you talk about is, they’ve got a chip on their shoulder about the fact that they didn’t make a bunch of money in what happened in the 90s. And Kurzweil then took their Singularity term and made it marketable, and now Elon’s running around and Peter Thiel’s running around doing a lot of the stuff that they prophesied but they don’t get the credit for it. And they’ve got a weird sort of chip on their shoulder. There are a lot of quiet blogs in the dark corners of the Internet where they go, “We said that in the 80s but you know, these guys are building it. Screw them.”

Greenfield: And honestly if I were Max More and Natasha Vita-More, his partner, I would feel the same way. They were. They were saying these things before Peter Thiel thought to infuse his veins with virgin blood. They were saying these things before…yeah, before Elon came around to say that he got verbal government approval for a vacuum-evacuated tube beneath Washington DC. Yeah, they were. Whether it’s credit or blame that they’re looking for they deserve it.

Mason: Well we’ll leave it at that. I do want to go back to blockchain, though. So do you think it’s a get-rich-quick scheme at the moment for cryptocurrency traders? Or do you think perhaps just maybe there’s something more hopeful there? Can we build the decentralized web that John Perry Barlow had— I mean [inaudible] the blockchain folks, and the pain of speaking to them is that they so desperately want to be taken seriously like the Web 2.0 folks. Well, they call it Web 3.0 but they borrow the language from Web 2.0. So they call their apps DAPs—

Greenfield: DAPs.

Mason: Decentralized Apps, which is the most fucking stupid term I’ve ever heard. Like, “Yeah, we’ve got a DAP!” I’m like what the fuck is a DAP? It’s a decentralized app. They’re trying to make it look, sound, and market it like Web 2.0…

Greenfield: You know, I didn’t know when I came here that I was going to be in such you know…comfortable— This is like you know, we’re having— I hope somebody in the crowd really radically disagrees with the opinions that we’re expressing up here. Because that’s the only way this can ultimately be of value for anybody.

Mason: Alright, sorry.

Greenfield: Because if we agree with each other on—

Mason: So I can make a lot of money off of Ether! What’s wrong with Ethereum? It went up $120 after the crash last week, but…

Greenfield: Okay, so the thing about Ponzi schemes is that the people who’re invested in them believe in them, right? It’s entirely legitimate from their perspective. Any multi-level marketing organization relies after the first couple of people on people who are true believers. And they propagate the value framework of the multi-level marketing organization or the Ponzi scheme out into the world. And they’re very— You know, like any other religion, we get invested in things. I mean I’ve probably got things that I’m invested in that you could confront me with objective evidence that I was wrong, and it would only reinforce me in my insistence that I was correct. Because that’s the way the human psyche appears to work. We now know this. You can’t use logic or reason to argue people out of a position that they haven’t gotten into by way of logic or reason. And the secret is that most of the things we believe we didn’t arrive at rationally.

So a lot of the enthusiasm for blockchain is being propagated by people who are invested in it. And to me the interesting question is why are they invested in it? What vision of the future are they trying to get to? There are… The most heartbreaking thing for me is the people on the horizontalist left who are really invested in blockchain psychically because they think it will realize the kind of utopian left anarchist future. Which is a future that I personally… You know, my politics are…you know, libertarian socialist. Or you know democratic confederalist. Whatever you wanna call it, it’s horizontalist, you know, all that stuff.

So yeah, do I want to believe that blockchain can make that happen? Of course I would love to believe that. But I’ve done just enough digging to find out that the odds of that happening are not terribly great. And if you want to achieve those goals… Goals of confederalism or municipalism or horizontalism or participatory democracy. Much better off trying to realize them directly rather than automating the achievement of that goal by embracing blockchain technology.

Mason: So what do you mean realize them directly?

Greenfield: It’s not going to be nearly as sexy. But I mean having neighborhood councils, neighborhood committees. Affinity groups that you work in. The most amazing thing to me at this time is to look at the real-world examples of confederalists and municipalists who are making headway in the world, who aren’t basing their actions and their efforts on utopian technologies but are actually going out and doing the hard work of organizing people. Almost as if it were the 1930s, right.

Of course, are they using their smartphones? Yes. Are they using you know, Telegram? Yeah, of course they are. Are they using text messages and Google Docs? Are they using cloud-based applications to suture people and communities together? Of course they are, because we’re not in the 1930s and we do have tools that we didn’t use to have.

But the real hard work is the work of retail politics. It’s the work of engaging people eye to eye, directly, and accounting for their humanity, their reality, their grievances, their hopes, their desires. That is not something as yet that I can see being instantiated on any infrastructure, blockchain or otherwise, and having the same kind of impact in the world.

Mason: Firstly, you’ve written a lot about the city and I want to go back to IoT, the Internet of Things. So you said you were seeing it in 2008. I was seeing it about 2012: the excitement over smart fridges, which seems to repeat itself every three years ad infinitum. And we never got it. And yet there’s still a drive towards this thing called the smart city. But with things that are happening in the UK, specifically with the NHS hack I mean, are we thinking about the cybersecurity implications of networking an entire city?

Greenfield: No, we’re not. And the reason is, as I say in the book, as I argue in the book, it’s an artifact of business model. And here again is why it distresses me specifically that capital captured the Internet of Things. When you go to… Oh what’s the name of the big British chain, Cellphone Warehouse or whatever. The one that’s on you know, Tottenham Court Road.

Mason: Carphone Warehouse.

Greenfield: Fine. Yeah. Okay. You go in there and you buy a webcam, right. And that webcam may be ten quid at this point. The fact that it was engineered so that it could be delivered to you at ten quid and the manufacturer and the vendor are still going to be able to make a profit on it means absolutely no provision for security could be incorporated into that device. It’s simply cutting into somebody’s profit margin—it will not happen. And so the technical capability exists to provide each one of these devices with some kind of buffer against the worst sort of eventualities. But for reasons of profit, that hasn’t been done.

And so you can go there, and you can buy a webcam, and you can slap it up in your nursery or in your living room or in your garage. And odds are that unless you’re very thoughtful, very knowledgeable, you know what you’re doing, you read the manual and you configure the thing properly… You know guess what, there are search engines that are going to automatically search the Internet for open ports for cameras that are speaking to the Internet through that port and that don’t have a password or have the default password securing that feed. And you know, literally somebody 8,000 miles away can search for open webcams and find them. And we’re talking about webcams that are looking onto babies’ cribs. We’re talking about webcams that are looking onto weed grow ops. We’re talking about the back offices of fast food restaurants. You name it. It’s out there.

And the reason that you can see all of these things from the safety and comfort of your room is that the manufacturer—probably in Shenzhen—you know, they’re making two or three pennies on each one of these cameras sold. If they had bothered to actually engineer it so that it could be secured, that profit would have evaporated.

And it’s the same thing… You know, there’s always this motive. Wherever you look in the Internet of Things you crop up against this. And frankly I’ll be very honest with you, I wish this weren’t so. It is actually boring for me at this point to open up the paper and see the latest example— You know, everybody over the last couple of days, everybody’s probably seen the thing about Roombas. Have you all seen the thing about Roombas now? You know what Roombas are doing?

Everybody loves Roombas because they’re seen as being these harmless robots that kind of humbly vacuum your home. It turns out that Roombas by definition and in order to do what they do have the ability to map your home in high resolution. And now, in search of another revenue stream, the vendor of Roombas is selling that information or is—excuse me—contemplating selling that information to the highest bidder.

You didn’t know when you put that little hockey puck thing down to vacuum up the cat hair in your house that you were mapping every contour of your existence in high resolution and selling that to somebody. And oh by the way not deriving any financial advantage from that yourself but giving up that financial advantage to the vendor and the third party. You had no idea. You were never asked for your consent. You were never notified. But that’s what’s happening.

And I promise you it is no fun at this point to be the anticapitalist Cassandra who sits up here and says, “Guess what you guys. This is what’s going on.” Because people are like, “Ah, God, you again. You again. You’re so… You’re no fun. Why won’t you let us have our robots? What’s wrong with having a webcam in the house?” And I’m like, fine. If you don’t mind the idea of a hacker in Kazakhstan looking into your kid’s playroom at will, be my guest. But, I wouldn’t do that.

Mason: There’s the micro scale of the home, but there’s the macro scale of the city itself, and there’s a lot of excitement around autonomous vehicles and self-driving cars. And some of the most troubling stuff that I’ve seen written is, when all of these cars are connected, because whether it’s driven by a human or it’s driven by a machine every single one will have to have a beacon (at least under current UK policy) to identify where it is on the road, the ability to take control of those cars is opened up. And we won’t have just one London Bridge event where we have someone careening a truck that they hired into a bunch of people. We could have sixteen simultaneously, done by trucks whose drivers had no agency over the fact that they were going to go and kill people.

My issue is cybersecurity on a wide scale: why are we not there yet? Why are we not just running petrified from a lot of this IoT stuff going, “Are you fucking kidding me?”

Greenfield: Because you’ve already answered the question. I mean, as a matter of fact it would be easier to do it by powers of two simultaneously, right. It would be easier to do sixty-four trucks simultaneously, or 256 trucks simultaneously. Because they’re all the same standard model and they all have the same security package, right. You can capture multiple cameras at once because they don’t have security on them. I promise you that there’s going to be a vendor of automobile networking that is going to have a similar lack of attention to detail, and it will simply be easier to do it all at once.

Why are we not running screaming from these things? Well…we believe in the future. And we believe that the future is going to be better. And we believe… I mean…putting the question of terrorism to the side, why is it that we never talk about autonomous public transport? Why is it that when we imagine the driverless car, the autonomous vehicle, we always imagine it as simply the car that people own now but without a steering wheel?

Mason: Because of the manufacturers. Nissan are fucking terrified that nobody’s going to buy cars.

Greenfield: And they’re right.

Mason: Yeah. And the insurance companies are even more petrified. If you can prove you’re never going to have a crash, bye-bye Aviva.

Greenfield: No, you’re right. You’re right. So again, this is kind of a drumbeat that I’m sure gets tiring for people. Capitalism is the problem, right. Capitalism is the ultimate framework in which our imaginary is embedded. And we have a really really hard time seeing outside that framework and saying well, maybe these things could be collective goods. Maybe these things could be municipally owned. Maybe these things don’t have to replicate all of the mistakes that we’ve made over the last hundred years. Wouldn’t that be amazing?

The trouble is that— You know, it’s the most enormous cliché on the left. It is easier to imagine the end of the world than it is to imagine the end of capitalism. Like this is such a cliché that it’s like one of these inspirational quotes on Facebook. Nobody’s quite sure who said it originally and there are multiple people who’ve— You know, Abraham Lincoln probably said it. And we need to begin urgently imagining what that looks like. Because if we don’t we’re never going to be able to imagine a place for these technologies in our lives that responds to the most basic considerations of human decency and the kind of world that we want to live in. It’s that simple.

And if you don’t already agree with me, I certainly don’t expect to convince you tonight. This is simply my opinion. But it is…you’ll forgive me, it’s an opinion that is bolstered by a depressingly consistent litany of evidence over what is now fifteen or twenty years. Every single fucking time we seize on a technology that looks as though it might be used for something interesting, something outside the envelope of everything that we expect, everything that we’re accustomed to, it gets captured and turned back—and in amazingly short periods of time. Like, one of you is going to have to do better. You’re going to have to go out there and rip this envelope of constraints to shreds and imagine something that doesn’t look like everything that we’ve already been offered. Because otherwise it’s just going to be more of the same over and over and over again. And you know, I’m old now, right. I don’t want to live the declining years of my life in an environment where I’ve seen this all before and it’s all— You know, somebody come at me with something profoundly new and different and I will be the very first person to applaud you.

Mason: I just wanted… From the floor I mean, who still believes in the future? Welcome—

Greenfield: One hand. Yay!

Mason: Welcome to Virtual Futures. We found the others. And we’re all on God knows what. I mean, we spoke a lot about depression in the last one.

Greenfield: You want to kick the mic into the crowd and see what happens?

Mason: I do, but before we do that I have one other question which I think—and let’s jump— Should we just embrace accelerationist thought? Should we just go, you know what? If capital is the thing that’s driving this all, let’s just accept it. Let’s run for it. Let’s accept that humans are just here to train the machines to take over when we finally are killed off by them or we no longer have the biology to survive the environments we’re in because we fucked it up? And it would be okay for some of the humans because those would be the guys who fly off to Mars and have their own little species—their subspeciation planets there. I just wonder, should we embrace the accelerationist viewpoint, and should we allow some humans to just subspeciate, or aspeciate?

Greenfield: Uh, well…you’re all welcome to but I can’t, and I couldn’t bear myself if I did. Because honestly? Accelerationism feels to me like a remarkably privileged position. It’s something that people who are already safe and comfortable can say: “throw caution to the winds; let it all fly,” right. You can say that if you’ve got a roof over your head and food in your belly and healthcare for the rest of your life. It’s easy to say that.

If you’re any closer to the edge than that— If you have any real amount of what we now call precarity, fear, in your life. If you have fear in your belly because you’ve watched the people around you struggle with their health, or their mental health. If you’ve been touched in any way by the economic downturn that’s kind of taken up residence in our lives since the introduction of austerity. If you perceive yourself to in any way not have been advantaged by the past forty years of neoliberal hegemony across the Western world, it’s impossible to embrace accelerationism if you have a beating heart and anything resembling a soul. It’s my own personal opinion. I hope I’m not insulting any of you. But that is— You know, accelerationism to me is an abdication of responsibility for the other human beings you share the planet with, and also by the way the nonhuman beings and sentiences that you share the planet with.

Luke Robert Mason: So on that note, whether you believe in the future or not we are going to throw out to audience questions. We’re gonna see if this might work. So we’re going to hand this mic around. We’re so understaffed it’s incredible, so if anybody wants to run our mic that would be great. Or we could work as a collaborative unit and pass this mic between folk—

Adam Greenfield: We could make it happen. I’m sure we can make it work.

Mason: Or sometimes we have to just grab mics off of people. By the way, a question has an intonation at the end. So if you have any questions…

Greenfield: Oh right. Yeah no, that’s a really really good point. I do a lot of talks where people make reflections. I’m sure you’ve all got fascinating things to say but I would love to hear those things afterwards over a beer? And right now it’s literally for questions that we will attempt to answer. If you have a reflection to make, maybe the time for that is later on.

Mason: Wonderful. Any questions?

Matthew Chalmers: Hello.

Greenfield: Howdy. What’s your name, man?

Chalmers: My name is Matthew Chalmers. I’m an academic from the University of Glasgow.

Greenfield: There you go.

Chalmers: There you go. And I just came—I just walked out now from a meeting at Her Majesty’s Treasury where there are people from government trying to find out about distributed ledger technologies and what they might do about them. They’re skeptical but interested, and they’re being hit by this wave of hype. And I was one of the people there throwing rocks. Because I think the hype is just going to become totally overblown.

Greenfield: You’ve been throwing rocks since I’ve known you.

Chalmers: Why change the habit of a lifetime? So I wonder whether Adam and the others would like to… What would their message be to the people from the Justice Department, and the Treasury, and the banks I just talked to? Because it was really freaky.

Greenfield: I would love to pick your brain over a beer as to what that meeting looked like, to the degree that you’re comfortable sharing it. You said they were skeptical, and that’s fascinating to me. Like, I assume… My default assumption is that those people are not stupid. And they have a certain ability to know when they’re being pushed into a corner. But they don’t always have the tools to resist that. And so my question to them would be what is it that people are asking of them? Why is it that distributed ledger… Which is not identical with blockchain—we need to be very careful with the terminology here. But what is it that they hope to achieve with a distributed ledger? And are there not possibly other ways of achieving those ends that don’t involve the transition to an entirely new and unproven technology? That would be— I mean, yeah seriously. I mean like, I’m jealous that you got to be in that room; I’m grateful that it was you in that room.

Chalmers: I wasn’t the only one.

Greenfield: I’m sure. But I think that… You know, dare I hope that— I’m knocking, you can see knocking the chair here instead of knocking on— Here’s wood; knocking on wood. Dare I hope that we have been burned enough at this point and we have plenty of case studies to point to where some multi-billion or tens of billions of pounds of investment was made in the technology, and the technology vendor turned out to not have the best interests of the public entirely… Dare I hope? I don’t know. It’s an amazing circumstance to think of and I would love to catch up with you more afterwards and find out what that conversation went like.

Mason: Any other questions?

Greenfield: Say your name, please.

Mason: Also, if anyone wants to earn themself a beer, I really need someone to run that mic. So if anybody can help, that would be great. Sorry.

Audience 2: My name is Jaya and I am writing a PhD on blockchain technology. And I would also love to hear more about what happened in that meeting. My question is not about blockchain, though.

Greenfield: Thank you.

Audience 2: I’m more curious about… The conversation that the two of you were having was very much kind of focused on accidents and potential security problems with digital technologies. And usually when that framing happens, it kind of turns the problem into just another problem for technology to solve as in okay, there’s a security problem. Let’s get some cryptographers involved, let’s get some— You know, it’s another problem to be solved by more technology.

So I was wondering if there’s a different kind of angle or some other kind of aspects of the critique. I mean, you mentioned a kind of general critique of capitalism, which sounds fantastic—

Greenfield: Pretty broad.

Audience 2: —and makes sense. But I was wondering like, some of the more specific angles that you cover in the book.

Greenfield: I do wonder, you know… In the 1960s, and I’m going to forget and not be able to cite this appropriately. But there was a body of thought in what was then called human factors research about normal accidents. And you can look this up right now and you can find the canonical paper on normal accidents. But the idea was that in any of the complex processes that—and I think that the canonical example here was a nuclear power plant. That any of the complex processes that we’ve installed at the heart of the ways in which we do life under late capitalism at this point in time…accidents aren’t accidents. We can expect that our processes are inherently braided enough and complicated enough and thorny enough and counterintuitive enough that errors will arise at predictable intervals, or at least you know, predictably.

And I thought in the seed of that was something profound and not merely amenable to technical resolution. Because as I understood it, the point of that argument was to say not to slap a quick technical fix on a system that you know is going to throw errors at intervals. But in a sense to redefine processes around what we understand about who we are and what we do and how we approach problems. It isn’t simply to build backups and cascading redundancies into complicated systems, it’s to accept that we make mistakes.

And I think it’s that acceptance of human frailty that I found particularly radical and particularly refreshing. That ultimately any of our institutions are going to be marked by…you know, it’s no longer done to say “human nature” so I won’t say human nature. But anything that we invent, anything that we devise, anything that we come up with, is going to be marked by our humanness. And instead of running from that, it might be best to try and wrap our heads around what that implies for ourselves, and to cut ourselves a goddamn break, you know, and to not ask that we be these sort of high-performance machines that are simply made out of flesh and blood but that are slotted into other networks of machines that don’t happen to be made out of flesh and blood.

I thought that there was a hopeful moment in there that could have been retrieved and developed. And I think frankly that there still could be. I think that most of what gives me hope at this point are processes which are not at all sexily high technology but are precisely about understanding how people arrive at decisions under situations of limited information and pressure. And I think that’s why I got involved in what was then called human factors in the first place, was because the world is complicated, and it is heterogeneous, and there’s not going to be any critical path to a golden key solution to any of this. We have to work at it together, and it’s a process that is painstaking and involved and frustrating—oh my God is it frustrating. And to my mind, the more that we understand that, and the more our technologies inscribe that lesson for us in ways that we can’t possibly miss, the better off we are. Is that…a reasonable answer? Groovy.

Mason: My flip side to that is we can’t prepare for it, and we need the catastrophe to occur. So philosophers have been arguing about the trolley problem with regards to self-driving cars for God knows how long. We won’t give two shits until a car actually kills someone, and so blood is actually spilled. And with the critique of the Extropy folks, they thought they were going to get their living-forever futures without anybody dying. If you’re going to experiment with certain types of medical technology on individuals to help them live longer, then you’re going to have to experiment on human individuals eventually, and there will be mistakes. The history of science shows us that.

Now Professor Steve Fuller, who we’ve had here a lot at Virtual Futures, has argued that maybe the only way to actually make some of these crazy visions possible is that we sign up for our humanity. In the same way in the 1930s you signed up for queen and country to go to war, you’d sign up your humanity and you’d go and get your weird biotech experiment to see if it made you live longer, because if it did you would be a pioneer for the future of humanity. And if you died, well you died in the service of the future of the human race…whether we’d ever get there or not.

Greenfield: I think you make a really good point, though, which is that when the Extropians did have, literally, their heads cut off and frozen in liquid nitrogen and they entrusted their heads to these repositories that they thought were going to last for 10,000 years, the holding company went bankrupt and defaulted on their electric bill. The electric bill on the coolers wasn’t paid. The coolers were shut off by the electric company. The facility reached room temperature. The coolant leaked out of the vessels and the heads rotted.

Mason: You know their solution for that?

Greenfield: No pun intended, go ahead.

Mason: Yeah. They want to send them to space.

Greenfield: [laughs] ‘Course they do.

Mason: They need more space to bury dead people, so the coldest vacuum is space, so why don’t you just have them orbiting—

Greenfield: So, but…

Mason: —ad infinitum? I’m fucking—I’m serious.

Greenfield: I believe you. I completely believe you. But the point is that human institutions you know, they’re not transhuman, they’re not posthuman, we’re all too human, right. We go bankrupt and we don’t pay the power bill. And then the power company cuts off the po—this is what happens. The space launch system you know, somebody transposes something that was in metric to Imperial, and the capsule that was supposed to orbit in a comfortably tolerable environment and keep your head frozen for ten million years is launched into the sun. Who knows?

The people who believe these things believe in the perfectibility of things which have never in our history ever once been perfect before, and they’re betting everything on that perfection. And I find it touchingly naïve and childlike. But as a political program it’s culpably naïve, and to be fought with every fiber of my being.

Mason: Is the other thing that unites those technologies in your book the drive for optimization? Whether it’s the city, the human, or anything else in between.

Greenfield: I hope again I’m not insulting any of you. None of us in this room are optimal. Like I’m not optimal—I’ll never be optimal. I’ll never be anything close to optimal and I’m not sure I would want to be optimal. You may have different ambitions and I wish you the best of luck. But I think it’s going to be a rough road.

Mason: I found someone who’s kind enough to run this mic, thank you ever so much.

Greenfield: You’re not yourself asking a question?

Mason: Thank you.

Audience 3: Well…

Greenfield: Say your name.

Audience 3: My name’s Tara. Hello. On that note with optimization, could you not say that it’s somehow linked to capitalism? That you’re always chasing this goal that you can never achieve, and we’re now bringing that to ourselves physically. You could say the same thing about the sort of gym craze that everyone seems to be going through in search of this optimal being.

Greenfield: Yeah. I think that capitalism is almost too easy a bugbear, though. Because the desire to optimize or to perfect is older than capitalism. And it’s almost as if it has vampirized capitalism to extend itself. That logic of wanting to perfect ourselves, to measure ourselves against the gods you know, it’s not new. And it’s not shallow, either. I understand why it exists.

But the fact of the matter is that when we go to the gym—I go to the gym. You know, I will spend ninety minutes tomorrow on an elliptical machine. Why will I spend ninety minutes on an elliptical machine? Well, because I want to be fitter. Why do I want to be fitter? I want to look better in my clothes. I want people to think that I’m more attractive. I want people to think that I’m more attractive so that they are more likely to want to invite me to things because my financial future depends on me being invited to things.

I mean, all of these… You know, these things are not innocent. And the motivations and the desires that we recognize in ourselves aren’t there by accident. And I’m not going to say that they’re always 100% there because of you know, capitalism—that’s kinda shallow. But they’re invidious. And what I would ask is that we each have the courage to ask of ourselves why it is that we feel that we need to be like some gung-ho NASA astronaut of the 1960s, “kick the tires and light the fires.” Why is it that we feel called upon to operate in these high-performance regimes when we’re after all simply human.

Mason: I think [alternative?] failure of the Extropians… So the morphological freedom thesis was that we’re going to be stronger, better, faster, more optimized. The thing they forgot is that in actual fact that doesn’t make us better as an entire species. The thing that we should do is embrace difference.

Greenfield: Yes.

Mason: It wasn’t survival of the fittest, it was survival of the mutant: the individual and the animal that actually survived the weird ways in which the environment would manipulate them. It wasn’t the fittest ones that survived. And I wonder if we embrace difference instead of driving towards optimization whether we’d have a more interesting experience. Or, will it go fully the other way and we subspeciate, and we will have those guys who go off-planet and the rest of us will be left here.

Greenfield: Yeah, I think you’re hitting on something true and real and interesting. Before Boing Boing was a web site it was a fanzine. And I think its tagline was something like “Happy facts for happy mutants” or something like that. And the happy mutants part was important, right. It was the idea that we weren’t going to be constrained by the human body plan. And that we were going to invent or discover or explore new spaces. Like not merely new expressions of self, new genders, new identities, new personas, new ways of being human, new ways of being alive.

And that was startlingly liberatory. It really was, um… You know, in 1985 or so that felt like something worth investing in, and something worth betting on. And I think it is sort of a failure of the collective imagination that we now interpret freedom to mean essentially the freedom to oppress and exploit other human beings, and the nonhuman population of this planet. Because it did at one point mean… Every single time I see somebody who still like, they’re body hacking or they’re putting a chip into their wrist or something like that, I have mixed feelings. Because on the one hand I see the last surviving note that somebody’s hitting of something that was much bigger and more hopeful, and I also see the totality of the ways in which that’s been captured and turned against the original ambition. That’s a melancholy and a complicated feeling. But you’re right. I mean maybe there’s something in that to be retrieved and brought forward to the present moment.

Mason: We can only hope. Another question.

Audience 4: Hey, my name is Henri. I’m French but I’m sure you’ve already heard that. Anyway—

Greenfield: [laughs]

Mason: What a wonderful opening.

Greenfield: Well done, yeah.

Audience 4: So we know robots are taking more and more jobs everywhere. And there’s a belief that creativity’s one of the only sectors that won’t be touched by automation. But do you think that robots can be creative? And if yes does that mean we’ve reached a kind of singularity?

Greenfield: So I don’t believe in singularities, right. Bang. So let’s dispense with that.

Weirdly enough, though, there’s some tension between the two parts of my answer. I think the Singularity is a human ideology. I think it doesn’t correspond to the nature of nonhuman intelligence. I do think nonhuman intelligences are capable of being creative.

And let me not, for a second, talk specifically about machinic intelligences. I think that we know, by analogy to other forms of nonhuman intelligence that are capable of creating…using the world as an expressive medium, that you cannot tell me that the informational content of whalesong is all that it’s about. You cannot tell me that birdsong is simply about conveying information. It is a presentation of self. It is an embroidery on the available communication channel, and there is pleasure that is taken in that act. So I would interpret that—birdsong, whalesong, communications of animals in general—as expressive and creative acts. Right here, right now, without even having to think about machinic intelligence.

So, do I believe that we will—relatively soon—arrive at a place in which algorithmic systems are generating semantic structures, communicative structures, expressive structures for their own pleasure? Or something indistinguishable from pleasure—yeah, I do. I absolutely do. I do not think that creativity is the last refuge of the human. I think for all that I am in many ways a humanist in the old-fashioned way, it’s very difficult for me to draw any line at any point and say this is the unique thing about humans that nothing else in the universe is capable of.

And as a matter of fact, what converted me to this position was in fact an attempt to do just that: the attempt to find something uniquely and distinctively human. And you know, if you have any intellectual integrity at all, if you go down this path you find pretty quickly there’s nothing that we do that other species don’t do. There’s nothing that we do that other complex systems in the universe don’t do. Very, very, very little, it turns out, is distinctively human.

So yes I do believe in relatively short order we will be confront— If in fact they don’t already exist and we’re just simply not perceiving them, in the way that an ant doesn’t perceive a superhighway that’s rushing past its anthill, right. It is possible that these expressive and communicative structures are already in existence at a scale or at a level of reality that we do not perceive.

But even putting that possibility to the side, yes I think that we will invent and create machinic systems which will to all intents and purposes realize things which we can only understand as art or as creative or as expressive. And then the question becomes what rights does our law provide for those sentient beings—because they will be sentient. What space do we make for them that is anything but slavery? And how do we treat them that is in any way different than the way that we treat people at present?

You know, Norbert Wiener, in, I’m going to say, 1949 (somebody will Google this and tell me that I’m wrong), wrote one of his first works of thought in cybernetic theory, called The Human Use of Human Beings. And I come back to that framing a lot. It is about the use of things that are regarded as objects, and not things which are accorded their own subjectivity, their own interiority, their own perso—their own being. And I think that we’re going to have to confront that in our law, in our culture, and in our ways of interacting with one another sooner rather than later.

Mason: Any other questions?

Audience 5: Hey, I’m Matt from Scotland.

Greenfield: Hey. What’s up?

Audience 5: You mentioned you want to see the end of capitalism, and I’m all for it. I actually want to work on that. Do you have any ideas for me?

Greenfield: Yeah, I do. Öcalan, the founder of the PKK in Turkey, wrote a book called Democratic Confederalism. Go read it. Great book.

Audience 6: Hi, I’m Simon from Brighton.

Greenfield: Hi, Simon.

Audience 6: What’s your view on how employment’s going to be affected over the next twenty years by all of these changes we’ve been talking about?

Greenfield: Yeah, oh God.

Audience 6: Sort of how the Extropians were going to have their futures without having to die, will we get our futures and still get to keep our jobs?

Greenfield: I think we need to accept that our language around this stuff is braided and interwoven with assumptions which are no longer tenable. So, what is a job? A job is a thing that we do during the hours of our days that is remunerative to us and that generates value for the economy. And that somehow most of us are expected to have as a consequence of being adults, in a culture that expects full employment or something close to full employment. And in which a metric of the healthy functioning of the economy is that there is something close to full employment of human beings.

And I think that all of those assumptions are becoming subject to challenge, if they haven’t been challenged already. So the notion that a job is a thing that you go to has already, you know, been exploded and disassembled by the past thirty or forty years of experience. Like, we have tasks now rather than jobs. We no longer—

Audience 6: [inaudible]

Greenfield: Gig economy, absolutely. That was the first assault on these ideas. But then comes the idea that there are tasks which automated systems can perform at much lower cost than human beings. And particularly if we accept the thesis that I’ve just argued to the gentleman who asked the previous question but one, that there are very few tasks in the economy that cannot ultimately be performed by machinic systems, right.

Like, I used to make this argument to like ad agency people. And they would say, “Oh you know, a guy who puts together cars on an assembly line yeah, that can be automated away. And a nurse. Well, the job of a nurse can be automated away. We’ll find people to wipe the butts of people in nursing homes and robots will do that and algorithms will do the rest. But I’m the creative director of an ad agency, and you’ll never automate away the things that I do. The spark of creative fire that I bring.”

And I’m like, dude, do you understand what a Markov chain is? And do you understand how I could take the whole corpus of 20th-century advertising and generate entirely new campaigns out of what worked in the past? So there’s very little that I see, again, as being beyond the ability to be automated. And I think that when that happens, we really, really have to wrestle with the idea that the econometric assumptions by which a healthy economy is assessed are misguided. That the whole notion of economic growth, the whole notion that the wise stewardship of a nation-state is coextensive with economic growth, expressed in something close to full employment: we need to devise systems that replace all of that, because it’s all on its way out.
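[As a rough illustration of the Markov-chain idea invoked above: the sketch below, in Python, shows the general principle of learning which words follow which from a corpus and then random-walking those statistics to produce new text. The tiny corpus and the function names here are hypothetical stand-ins, not anything from the talk or the book; the same mechanism, trained on a large body of past advertising copy, is what would generate "new campaigns out of what worked in the past."]

```python
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    """Map each n-gram of `order` words to the words observed to follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20):
    """Random-walk the chain: sample each next word from what followed the current n-gram."""
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:  # dead end: this n-gram never appeared mid-text
            break
        out.append(random.choice(followers))
        key = tuple(out[-len(key):])
    return " ".join(out)

# Hypothetical stand-in for "the whole corpus of 20th-century advertising".
corpus = "just do it because you're worth it think different think small just do it"
print(generate(build_chain(corpus, order=1), length=12))
```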

At this point most people talk about UBI. They say the universal basic income is going to save us all. And I say well that’s great. I love the UBI. But surely you’re talking also about the UBI in a context of universal healthcare, and the right to housing, and you know, the right to shelter, aren’t you? Because if you’re not, the UBI will wind up getting siphoned back off of people in the form of user fees for services which used to be provided by the public and are now suddenly privatized. If we simply have the UBI in the usual neoliberal context, we haven’t really gotten anything at all.

So, jobs, the economy, employment…hobbies. You know…craft. I mean, all of these terms have been defined in a context in which all of the assumptions that govern that context are no longer tenable. How do we begin to be human in a time when none of these things are any longer true? I have some ideas but I don’t have any answers. All I have are my own instincts and the things that I’ve learned. And all you have are your own instincts and the things that you’ve learned. And together all that we have is our collective sense of what we’ve seen happen when automation happens. As I say elsewhere—not in the book—we’re entering a post-human economy, and a post-human economy implies and requires a post-human politics. And we now have to discover what a post-human politics looks like.

Mason: I want to quickly return to UBI. So there are two key ideas that I’ve heard which are quite attractive with regards to how UBI would actually work. One is the ability to sell our own data. So…I hate to return to blockchain, but the idea is that you’re about to produce a whole bunch of new data that shouldn’t be taken by the stacks. You can produce genetic data and neuro data. And what do platforms want? Well, they want attention data, and that’s neuro data. So if you can store that data locally and then sell it, or micro-sell it, back to the platforms that make the money off us in the first place, that’s a way to basically bring in a small amount of income every time you’re sitting there searching through Facebook, i.e. the advertisers pay us to watch their thirty-second bits of rubbish.

Or the second one is that we just need to come to terms with the employees of the sorts of companies who actually advocate the UBI, such as Google. You see the very young employees going, “Yes, UBI’s a great idea!” But they forget they work for a company that’s not paying its taxes in this country, and that tax money would be how you make UBI happen in the first place. So should they not be more accountable, actually turning around to Amazon or Google or wherever the hell they work and going, “Jesus, I’m gonna need this UBI. Fucking pay your taxes.”

Greenfield: If corporations paid their taxes we wouldn’t be talking about UBI, period end of sentence.

Mason: Yeah.

Greenfield: Yes, yes. Absolutely.

Mason: So the second one’s more tenable.

Greenfield: Yeah. I mean, let’s dispense with that first, dystopian vision.

Mason: Right.

Greenfield: Let’s simply say that in the United States at least, if corporations paid their fair share of the taxes, you could afford the welfare state and a whole lot more. You could afford basic infrastructure. You could afford decent quality of life for every single human in the country and a whole lot more besides, and that’s just the United States. Corporations should pay their damn taxes.

Mason: That didn’t get a round of applause. I’m slightly concerned now. Any other questions?

Greenfield: Let’s have this be the last question, if that’s okay.

Audience 7: A lot of pressure. Thank you.

Greenfield: A lot of pressure on my bladder, particularly.

Audience 7: Oh, okay. Then I’ll make it short. I want to talk a little bit (I’m Pete, by the way) about the market, and maybe the taste of people who use technology. I’m thinking particularly about augmented reality. Let’s take Pokémon GO as an example, which everyone’s a little embarrassed about now. Candy Crush has outlived Pokémon GO. And maybe, in terms of the market, there isn’t the taste for the future that transhumanists want. People just want to play Candy Crush and wait for death. And because of that, we’re actually defended against certain types of dystopia.

Greenfield: Bless you. I’m so glad somebody used the word taste. So one of the things that happened when we did successfully democratize access to these tools, services, networks, ways of being in the world, was that we lost control of taste, right. I mean, when you had a concentrated decision nexus in the 60s, you could essentially impose high corporate modernism on the world, because there was a very, very concentrated number of people making the decisions that governed the ways in which everyday life was to be designed.

And I gotta tell you, me personally, I think high corporate modernism was the high point of human aspiration. Like, Helvetica to me is the most beautiful thing that’s ever been created. And the International Style, and monochrome, you know…everything is to me the epitome of taste. But it turns out that 99.8% of the people on the planet disagree with me. And that they would rather reality be brightly colorful, animated, kawaii, happy, fun, you know…literally animated pieces of shit talking. And that they express themselves to one another by sending animated images of pieces of shit with eyes stuck into them. This is just a neutral and uninflected description of 2017, right.

Audience 7: [inaudible]

Greenfield: Well, okay. But you know, the thing is that um…

Audience 7: It makes people happy.

Greenfield: It makes people happy, and who am I— There’s not a damn thing wrong with that. That’s ultimately where I’m going: it turns out that if what I’m arguing for is a radical acceptance of what it is to be human, it turns out that we like Dan Brown novels. And it turns out that we like anime porn. And it turns out that we spend a lot of time in spreadsheets, right. This is what humanity is. It’s not what I would like to believe that we are, but that is what humanity is. And if I’m arguing for radical acceptance of that, and a radical democratization of things, I have no…I have no choice but to accept that.

Now, what I can do is ask, and I think it’s fair to ask, why people want those things. And why people think that these things are funny. And why people think that these are expressions of their own personality. How is it that we got there—this is the cultural studies student in me. How is it that these things became hegemonic? Why is it that we internalize those desires? Why is it that we interpret this as some kind of um…why do we think that these are expressions of our individuality when literally seven billion other people are doing the same thing? And for that matter, why do I think of the way that I’m dressed as an expression of my individuality when there are one million people who are doing the same thing.

These are deep questions. But I think that the only ethically tenable thing is to accept that taste is a production of cultural capital and that the taste that I particularly appreciate and enjoy was never anything but an infliction of a kind of elitism on people who neither wanted nor needed it.

I love brutalism. We see what happens when brutalism is the law of the land. I love Helvetica. You know, I love that stuff. I do. It makes me…it makes my heart sing. But it’s not what humanity wanted. I guess…

I am literally bursting. So do you think we could end it there? Will you forgive me if we do? Nobody does this.

Mason: Before I return this audience to their spreadsheets of anime porn and put you out of your misery, I was going to ask you, if you can do it really really quickly…really quickly, how do we build a desirable future? Or should we just wait for the imminent collapse?

Greenfield: No, no. I think we get into the streets. I think we do. I think we get political. I think we get involved in a way that it’s no— I would say up until about three, four years ago, I would have said that it was no longer fashionable. Thankfully it’s becoming fashionable again to be involved in this way.

I really do think that the emergence in liberated Kurdistan of the YPJ, the YPG, these people are the most realized human beings of our time. They are doing things which are amazing and they’re doing so on the basis of feminism and democratic confederalism. It’s fucking awesome and inspires me every day of my life, and if they can do that under the insane pressures that they operate under, we can do that in the Global North in the comfort of our own homes.

Mason: Great. So, Adam doesn’t have an optimized bladder, so we’re gonna finish here. Radical Technologies is now available pretty much everywhere. It’s available through Amazon, but I recommend you buy it from Verso.

Greenfield: If you buy it from Verso, literally it’s 90% off today. They’re having a promotion. Ninety percent. Go pay like 10p for it, today. It’s awesome. It’s a good book. Buy from Verso.

Mason: So, very quickly I want to thank the Library Club for hosting us. To Graeme, to the gentleman here…I don’t even know him. To Dan on our audio, and everybody who makes Virtual Futures possible.

Greenfield: And Sophia with the microphone.

Mason: Sophia, thank you for the microphone, for actually… We’re a skeletal team, and we don’t make any money. So if you like what we do, please support us on Patreon and find out about us at “Virtual Futures” pretty much anywhere.

And I want to end with this, and it’s with a warning, and it’s the same warning I end every single Virtual Futures with—and it is short, don’t worry. And it’s this: the future is always virtual, and some things that may seem imminent or inevitable never actually happen. Fortunately, our ability to survive the future is not predicated on our capacity for prediction. Although, on those much rarer occasions, something remarkable comes of staring the future deep in the eyes and challenging everything that it seems to promise. I hope you feel you’ve done that this evening. The bar is now open. Please join me in thanking Adam Greenfield.

Greenfield: Thanks, Luke. That was awesome. Thank you. It was awesome. Cheers.

Transcript provided by OpenTranscripts. Please support their work on Patreon.
