Narrator: We find ourselves in a moment of extreme uncertainty. Misinformation fueled by fake news and deepfakes threatens the foundations of our democracy. In many ways, it feels as though we're living in an episode of Black Mirror, that the future is dystopian and there's nothing we can do about it. But what if we could use visions of the future as an opportunity to inform, inspire, and empower citizens? Project Immerse is a digital literacy initiative that takes the form of an immersive anthology series. Told through pervasive web-based tools, the experience drops participants into a number of playable stories designed to demystify the technology behind fake news and deepfakes.
Douglas Arellanes: We're here with Lance Weiler. Lance is a longtime innovator in the area of interactive storytelling. Lance, you're based in New York, right?
Lance Weiler: Yes, I'm at Columbia University in the city here.
Douglas: Excellent. And what is your title again at Columbia? You're head of a lab, right?
Lance: Yeah, I'm the founding director of the Columbia University School of the Arts Digital Storytelling Lab. And I'm also a professor of practice in the School of the Arts, appointed in film and theater.
Douglas: Excellent. And your work on your own projects is in these areas of interactive storytelling and using new technologies for narrative, right?
Lance: Yeah. The work that we do explores new forms and functions of storytelling: forms being things that are, you know, using emerging technologies like AI, VR, AR, and the Internet of things, and functions being using storytelling for healing and learning and mobilization and entertainment.
Douglas: And what is it… How did you get started in this? I mean, you've been working on it for quite a while, right?
Lance: Yeah. I started way back when I did the first all-digital release of a motion picture, delivered to geosynchronous satellites and then downloaded onto hard drives, and that was in 1998. So that became the first all-digital release of a motion picture, and that started me on a path of thinking more about ways that I could interject technology into what I was doing, you know, and ways that I could intersect code with my work in regards to, you know, just pushing at the edges of what was possible. I became more and more interested in this notion of how stories could spill off screens and into the real world and back again.
Douglas: Excellent. And you've been working most recently with AI as one possible area of exploration. Can you talk a little bit about what you've been working on recently?
Lance: Sure. Well, I've been working with AI for a number of years. One of the first projects that we did that made use of Artificial Intelligence was a project called Frankenstein AI, which came out in 2018, celebrating the 200th anniversary of Mary Shelley's Frankenstein. And we felt that Frankenstein would be an interesting metaphor for Artificial Intelligence in the sense that, you know, the core themes of that piece are really about creating something and then having it grow out of your control, right? And so we thought it would be interesting to explore our relationship to AI. And that piece was really about this notion of how do we have more inclusive design practices around Artificial Intelligence, and how do we deal with bias within algorithms? And so we ended up creating this piece; it went on to Sundance that year, in 2018, and then also to IDFA DocLab. In its first form it was an immersive theater performance piece in three acts, and at IDFA it became an AI dinner party.
Douglas: Your new piece, Project Immerse, is also how we came to cross paths, because of my work with the Mozilla Festival. Project Immerse is something that you've been working on for much of 2020, right? I mean, how long have you been working on that project?
Lance: We started Project Immerse probably back in May, and Project Immerse is really interesting… It's almost a kind of Black Mirror anthology, in a sense, but instead of being ultimately dystopian and kind of depressing, like Black Mirror is, which is a wonderful program, but it definitely takes you to, you know, a very… an interesting perspective on emerging technology, we thought it would be interesting to build out an anthology series that also looked at digital literacy and at apophenia, this notion of how we look for patterns in things, and through those patterns we start to shape stories. The underpinnings of the piece involve some research around QAnon, looking at radicalization through that lens, and it was fascinating to train the AI on a variety of different data sets this time. This time it's a paranoid thriller, a deepfake thriller, I guess you could say. So we trained it on seventies paranoid thrillers. We trained it on Wikileaks and a number of other available data sets that we could find that were thematically aligned with what we wanted to explore.
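The kind of training Lance describes, pointing a model at thematically aligned corpora, can be sketched very roughly as fine-tuning a small language model on plain-text files. The snippet below is only a hedged illustration using Hugging Face transformers and GPT-2; the file names, model choice, and training settings are assumptions, not details of how Project Immerse was actually built.

```python
# A minimal, hypothetical sketch of fine-tuning a small language model on
# thematically aligned text corpora (file paths and settings are assumed).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Plain-text corpora, one file per theme (hypothetical paths).
dataset = load_dataset("text", data_files={"train": ["thrillers.txt", "leaks.txt"]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="immerse-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```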
Lance: And what it does is we use pervasive web technology. So we built an experience that you go through within Zoom and Miro, which is a whiteboarding platform that allows you to collaborate with others. So you'll be in breakout rooms with different people, and you'll be navigating this canvas where about 90% of the items that you come across, the collages that are in it, are made with AI. So it's deepfake video, audio, text, music, you know. So you're interacting with these things and you're trying to figure out what happened. The narrative of the piece is about a teenager who buys a body cam off the Internet and then realizes that there's still compromising footage on it, and somebody or something comes for it, right? And so that's like the pilot episode. And so folks realize they're a high school student; they find themselves in a quarantine classroom, interacting with other students, which are actually… Everything that you're interacting with, for the most part, is kind of a deepfake. So you're interacting with bots, and then you find yourself in Zoom breakout rooms, sometimes with real people, other times with deepfakes or shallow fakes. And so it becomes this amazing kind of thing where you start to question what reality is, and you start to try to shape with those other people a theory of what you think was on the body cam, what you think happened to this teenager. And so that's a project that we're doing in conjunction with the Columbia University School of the Arts Digital Storytelling Lab, with SIPA, which is the School of International and Public Affairs, and also Teachers College and the Brown Institute for Media Innovation.
Douglas: What I wanted to ask next was how you see this developing. Because you mentioned Zoom and Miro as being sort of common tools that you're using, in addition to really sophisticated things like, you know, training your own AI on various corpora. But how do you see this developing? How do you… where is it going?
Lance: Well, I think there are a couple of interesting things to look at. I think one is you find AI intersecting with the arts in interesting ways. So I think in one sense it's augmenting the creative process, and that could be anywhere in your workflow, you know, from previsualization to automating different things that are happening when you're actually on set, in terms of shooting, but then also in the post-production process. What's interesting in terms of the way that we're using it is we're actually writing with AI, so it's almost like a William S. Burroughs cut-up exercise where we see what the machine spits out. We look at it, we prune that data set, and then we push it back in, you know, put it back in, let it bake for a bit, have it spit something out, look at it and say, okay, well, that makes sense or that doesn't, and we just have an iterative process.
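The cut-up workflow Lance describes, generate, prune, feed back in, can be pictured as a simple loop around a text-generation model. The sketch below is a hypothetical illustration with a local GPT-2 pipeline; the seed line, the crude keep/discard filter standing in for human curation, and the number of rounds are all assumptions.

```python
# A minimal sketch of an iterative "cut-up" loop: generate, prune, push back in.
# Assumes a local GPT-2 model via Hugging Face transformers; all prompts and
# filtering rules are hypothetical stand-ins for human curation.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Seed material (hypothetical).
corpus = ["A body cam bought off the internet still holds someone's secrets."]

for round_number in range(3):
    # 1. Let the machine "spit something out" from the current corpus.
    prompt = " ".join(corpus[-3:])
    outputs = generator(prompt, max_new_tokens=60, num_return_sequences=5,
                        do_sample=True, temperature=0.9)

    # 2. Prune: keep only fragments that "make sense" (here, a crude length check
    #    standing in for a human editor reading and selecting).
    for out in outputs:
        fragment = out["generated_text"][len(prompt):].strip()
        if len(fragment.split()) > 10:
            corpus.append(fragment)

    # 3. The kept fragments feed the next round's prompt, and the loop repeats.
    print(f"round {round_number}: corpus now has {len(corpus)} fragments")
```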
Lance: It's changed the way that I'm thinking about the construction of narrative. It's much more nonlinear in the case of what we're doing with Project Immerse, but it raises interesting questions in terms of how I'm thinking about productions and what I do, because I work in film and television as well, thinking about ways that I can weave artificial intelligence into the workflow that I have, from when I'm thinking about writing all the way through to when I'm releasing something. And then I also think there are interesting things happening within Project Immerse in particular. We're using pervasive web technologies in this particular instance, but in a certain regard, eventually we could create what's known as a state machine, so people could come in and it could automate the whole process. So when people were in a certain part of the site, it recognized that there was a concentration of people there and it released something. It released a video, released audio. It changed colors. It faded to black. We could look at the system clock and determine how long the experience had been progressing and then trigger certain things at different times. We could also pull metadata from what people were entering into the experience and bring that back and weave it into the storytelling. So I think there are a lot of interesting opportunities in the way in which harnessing artificial intelligence can enrich what you're making. I think it's early days, but I think that's exciting, because I don't think there's necessarily a full grammar for it yet. But I do really believe that some of the great works of the 21st century will come from forms that embrace these emergent technologies, not only in the dissemination of the work, but also in the way we're thinking about and challenging ourselves to create it.
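The state machine Lance mentions can be imagined as a loop that watches where participants are concentrated and how long the session has run, then advances through story states that fire events. The sketch below is a hypothetical outline; the state names, thresholds, and the trigger_event and get_participant_zones stand-ins are assumptions rather than anything from the actual production.

```python
# A minimal, hypothetical sketch of a story "state machine": it watches where
# participants gather and how long the session has run, and advances through
# states that trigger events (release a video, fade to black, and so on).
import time

def trigger_event(name: str) -> None:
    # Stand-in for whatever would actually release a video, play audio,
    # change colors, or fade the canvas to black.
    print(f"[event] {name}")

def run_show(get_participant_zones, session_minutes: float = 30.0) -> None:
    state = "gathering"                       # current story state
    start = time.monotonic()                  # the "system clock" he mentions

    while state != "done":
        elapsed = time.monotonic() - start
        zones = get_participant_zones()       # e.g. {"archive": 7, "classroom": 2}

        if state == "gathering":
            # Concentration trigger: enough people in one part of the canvas.
            crowded = [zone for zone, count in zones.items() if count >= 5]
            if crowded:
                trigger_event(f"release deepfake video in {crowded[0]}")
                state = "reveal"

        elif state == "reveal":
            # Time trigger: past the halfway mark, shift the mood.
            if elapsed > session_minutes * 60 / 2:
                trigger_event("fade to black, release audio clue")
                state = "finale"

        elif state == "finale":
            if elapsed > session_minutes * 60:
                trigger_event("roll credits")
                state = "done"

        time.sleep(5)                         # poll every few seconds
```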
Douglas: Excellent. Last question for you. We've got a group assembled here coming from national broadcasters in the European Union and elsewhere in the European Broadcasting Union. What would your advice be for them as they start to adapt different workflows relating to AI? How would you suggest that they go about it?
Lance: Well, I think what's interesting is just experimentation, dabbling with it. One of the reasons we wanted to make Project Immerse was that there are so many new tools available that are easily accessible, which in the case of deepfakes creates challenges, right? It means that it's very easy to create a deepfake; I could make one in a matter of minutes while we're here, you know, off my mobile phone, right? There are apps that allow you to make deepfakes. The moment that there was a face-swapping app, you know, Pandora was out of the box, per se, right? So in the sense of thinking about it, I think just experiment with it, and there are ways to do it without even having to code. Runway ML, for instance, is a tool for artists who want to manipulate images and video, and do a whole bunch of other things with text using GPT-2, and soon GPT-3, within that core application. But I think for anybody that wants to experiment with it, the key is experimenting with it, right?
Lance: The other thing to note is, you know, those formerly known as the audience. I think there's a major shift happening where a whole new generation is coming up very familiar with the fluidity of screens and how they interface with them and how they actually make things too, you know? So this idea that those formerly known as the audience are actually becoming collaborators and storytellers is something that I constantly keep in mind when I'm making work. It's a factor that I consider, right? I consider what it means when there are whole generations for whom it's very much either lean in or lean back, but with some of the new generations it's in between all the time, fluctuating between a variety of different screens. It's challenging, so I wonder how AI will be part of that process, not only in the potential creation of the work, but also in the personalization of the work and the way in which it could be delivered. And how does that combine with the idea of the Metaverse, this notion that everything will start to become more fluid in terms of how we interact with media, how we interact with screens? So whether that's through a VR headset, or through augmented reality while we're out in the world, or through a gaming engine in some way. I think you can look at early tests of it. No Man's Sky was an interesting example, a generative world that was constantly being generated, and people complained because they were like, I never saw anybody. I was out in space and I was, you know, exploring, but when I put my flag on a planet, I don't know if anybody will ever see it, and I never saw anyone else. So it was probably very realistic, the initial release, in terms of what space travel is probably like, but people wanted connection.
Lance: So I think what's interesting about what you'll see with AI is how it accelerates the way in which we're discovering. You know, there's already a lot of work in that area, in terms of algorithms augmenting the discovery process. You know, I would argue that a lot of large technology companies are in fact AI companies; I would say Facebook, Apple, Google, and so forth and so on. So I think for broadcasters, just try to wrap your heads around it, dabble in it, spend a little time thinking about how it could enter your workflow and what that might mean. Sometimes, you know, I'll give presentations and I'll say, okay, I'll have a box on the screen, right? And then I'll have a little dot that's all the way over on the other side of the screen, and I'll point to the dot and I'll say we're gonna be over here, right? So a lot of what I'm talking about might seem to a general audience like it's kind of ahead, or it might seem foreign, but, you know, Marshall McLuhan famously said that we march into the future and shape our business models through the rear-view mirror, right? And so I think the more that we can actually dabble and experiment with the technology and be open to how we use it, the better position we'll be in moving forward. And that creates not only an opportunity for improving the work that we make, but it also, you know, ultimately gives a competitive edge to what you're doing in terms of the programming that you're making.
Douglas: Fantastic. Lance Weiler, thank you so much for spending some time with us here. Very much looking forward to seeing the premiere of Project Immerse at the Mozilla Festival. Thanks again!
Lance: Thank you, Doug. Nice to be here.