Pre-MozFest Interviews – Ian Forrester – February 2021

Jump straight to ...

  • The Future of Podcasting?
  • Deejaying and Stems
  • Binaural and shifting sound fields
  • Variables
  • Adaptive Podcasting
  • Data Ethics
  • AutoScroll

    Catch Ian Forrester's Session in the Creative AI Space at MozFest in March – Secure your Ticket!

    Douglas Arellanes: We're here with Ian Forrester of the BBC, and he's gonna be one of the presenters at this year's MozFest in the Creative AI space. Ian, thank you so much for coming by or agreeing to this interview virtually. I do appreciate it a lot.

    Ian Forrester: No problem, thank you for having me.

    Douglas: You're in Manchester, right?

    Ian: I am in Manchester UK. Yes.

    Douglas: Okay. And that's where the BBC has an R&D center, right?

    Ian: Yes, R&D is split between London and Manchester, we do have other little bits, but those are the main centers.

    Douglas: Okay. And you're gonna be presenting a really interesting session at MozFest this year on the future of podcasting. Maybe you can explain some of the stuff that you're working on, because when I came across it originally, I was just absolutely blown away by the idea of what you're working on. Maybe you can explain it for the audience.

    Ian: Yeah, I always do quite a bad job when I listen back to the way I explain it. But I would say there are two ways to look at it. So technically, what we're able to do is this: with a podcast right now, you download the file and it just plays back. But the devices that we're playing these podcasts back on have got so much smarter. A device has so many sensors, so much data, so much computational power that it could do a lot more, but we're not using it that way. So what we've been doing is going, "Right, okay, let's deliver it as a podcast. But rather than a completed, flattened piece of audio, how about we deliver the objects – the audio objects – in pieces, and then an instruction set so that the device can pull it together?" So it's more like HTML on the web: you deliver the web page, and then the web page pulls in the JavaScript or pulls in the images and stuff like that. And because you're able to do this live on the device, you can do other things. Like, this device is owned by a Pashto speaker, so the Pashto speaker can use the soundtrack which is in Pashto instead of English, for example. There's a lot more you could do.

    But the important thing for us is, you know, what can you do from a storytelling point of view? And that's where the second point of view comes in. If you look at the way that stories have developed over time, we basically moved to the published world, and then we went to broadcast, and we're telling this single story to everyone. And actually, if you look way back in the past, we used to be able to tell stories which adapt and change based on context, based on, you know, all these different things. So, one of the things we do: if you're talking to young people, you most likely wouldn't swear, but if you're talking to adults, you might swear to emphasize certain things. These are adaptations that we make, and they're very human.
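    To make the "objects plus instruction set" idea concrete, here is a minimal sketch in Kotlin. The manifest format and names below are entirely hypothetical, not the BBC's actual schema; the point is just that the client receives separate audio objects and assembles them on the device, for instance picking the narration track that matches the device language.

        // Hypothetical model of a podcast delivered as audio objects plus
        // instructions, rather than one flattened file.
        data class AudioObject(val id: String, val url: String, val language: String)

        data class Manifest(val objects: List<AudioObject>)

        // Pick the narration matching the device language, falling back to English.
        fun selectNarration(manifest: Manifest, deviceLanguage: String): AudioObject =
            manifest.objects.firstOrNull { it.language == deviceLanguage }
                ?: manifest.objects.first { it.language == "en" }

        fun main() {
            val manifest = Manifest(listOf(
                AudioObject("narration-en", "https://example.org/ep1/en.mp3", "en"),
                AudioObject("narration-ps", "https://example.org/ep1/ps.mp3", "ps") // Pashto
            ))
            // On a Pashto speaker's device, the Pashto soundtrack is chosen.
            println(selectNarration(manifest, "ps").url)
        }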

    Douglas: You know, in some ways, it sounds almost like in deejaying, where you have stem files. You know, multitracks where you can just jump in between the tracks on the fly. Is that a rough idea of what it's like?

    Ian: It's so funny you say that. Stems are something that I've been very interested in. I also DJ quite a lot – I'm holding a thing called a Pacemaker, which I DJ on, and I've deejayed at MozFest quite a few times, probably about six or seven times. So the thing about stems is really interesting, because stems are, you know, kind of these different tracks, almost. One of the things you could do is use it like stems, or you could go one step further, which is that these are almost objects, and they can get called independently at any time, and you can do things to those objects. You don't have to play it through. You could play it at a slower speed, or layer them up, you know, loads of them side by side. So, for example, we've got a demo where it will play up to 160 audio objects at the exact same time. And you can do things to those objects as well: say this object plays two seconds after this other object has started, or five seconds after this one. So the stems mentality is actually really interesting, and it's a good way to think about it, but we can do a little bit more than that!
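    As a rough sketch of that timing idea – again in Kotlin, with invented names rather than the engine's real API – cues like "this object starts two seconds after that one" can be resolved into absolute start times by walking each cue's dependency chain:

        // Each cue optionally starts relative to another object's start time.
        data class Cue(val objectId: String, val startsAfter: String?, val offsetSeconds: Double)

        // Resolve every cue to an absolute start time.
        fun resolveStartTimes(cues: List<Cue>): Map<String, Double> {
            val byId = cues.associateBy { it.objectId }
            val resolved = mutableMapOf<String, Double>()
            fun startOf(id: String): Double = resolved.getOrPut(id) {
                val cue = byId.getValue(id)
                (cue.startsAfter?.let { startOf(it) } ?: 0.0) + cue.offsetSeconds
            }
            cues.forEach { startOf(it.objectId) }
            return resolved
        }

        fun main() {
            val cues = listOf(
                Cue("bed", startsAfter = null, offsetSeconds = 0.0),
                Cue("narration", startsAfter = "bed", offsetSeconds = 2.0),   // 2s after the bed
                Cue("foley", startsAfter = "narration", offsetSeconds = 5.0)  // 5s after narration
            )
            println(resolveStartTimes(cues)) // {bed=0.0, narration=2.0, foley=7.0}
        }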

    Douglas: One of the things that you were doing in the skill-share that you had in December for the MozFest organizers was that you were also doing things that were location-dependent.

    Ian: Yes.

    Douglas: So maybe that's one of those things, right?

    Ian: Yeah. So this is what's interesting about all of this, right? You've got this whole engine, which is like a stem engine, where you're adapting and changing the audio on the fly. And also we can do binaural audio. Not in real time, but if you've got the recording in binaural, we can play it back in binaural, and you can start to put other layers on top of that. We've not tried putting binaural on top of binaural to see what happens, but we could do that – that'd be easy enough to do – and hear the shifting sound field that's happening.

    But the other thing is, you've got the elements based on time of day, based on location, based on: is the person's phone in a pocket, or is the phone out in the open, is the phone on speakerphone? All those kinds of things can suddenly adapt and change what you're hearing. So you've got kind of unlimited possibilities, and that's quite liberating. But also it's quite like... where do I even start? You know, the location thing is something that people always talk about. It's kind of like a road trip: you're on your way somewhere, and it knows where you are, so placing certain things at certain times on that route – that's easily possible. But I think the important thing is to try and craft it all together, so it doesn't sound like: all right, you just walked past the statue, the statue goes di-di-di-da-nu-nu, you know, oh, then on to the next one. It's kind of like, how could you craft it all together so it seems seamless? The art of storytelling.
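    A minimal sketch of that context-driven branching, in Kotlin with stubbed sensor values (on Android these would come from the platform's location, proximity and audio APIs; the variant names are made up for illustration):

        import java.time.LocalTime

        data class ListeningContext(
            val time: LocalTime,
            val phoneInPocket: Boolean,  // e.g. inferred from the proximity sensor
            val onSpeakerphone: Boolean
        )

        // Choose a mix variant for the current listening context.
        fun chooseVariant(ctx: ListeningContext): String = when {
            ctx.onSpeakerphone -> "room-mix"       // flatter mix for a shared speaker
            ctx.phoneInPocket -> "narration-only"  // drop spatial layers nobody can hear
            ctx.time.hour >= 22 -> "night-mix"     // quieter bed late at night
            else -> "full-binaural"
        }

        fun main() {
            val ctx = ListeningContext(LocalTime.of(23, 15), phoneInPocket = false, onSpeakerphone = false)
            println(chooseVariant(ctx)) // night-mix
        }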

    Douglas: Yeah, well, one of the use cases, though, that you were talking about was like a radio drama, right?

    Ian: Yeah.

    Douglas: And so… Go ahead. Sorry.

    Ian: Oh, so there are a few use cases. Radio drama is one that we've been thinking about – we're not going to do this, but, you know, if you look at the original War of the Worlds radio drama…

    Douglas: Orson Welles…

    Ian: Yep. That scared a lot of people, and you can imagine something where not only is it playing this audio, telling this story, but it's also making references to things that are in your area or on your streets. You know, or based on the time of day as well, because you can also combine these things. Then you've got the kind of: it's dark, and they're walking down this road, and, you know, this is what's happening on this road, for example. Stuff like that.

    Douglas: But wouldn't someone have to record all of that stuff in advance? I mean, that seems like a tremendous workload for a person in the studio.

    Ian: So, yes. So one of the things that it also does – I'm glad you reminded me of this; it seems so obvious now, so it's kind of easy – is speech… I'm sorry, text-to-speech. So you could literally write text in the podcast, and it will say it out loud. And depending on the speech engine on your phone, it can sound very realistic, or it can sound like Speech JS, you know? So, yeah, you could get something a bit more like a Siri or Google Now voice or something like that – something that is kind of somewhat believable. And so no recording needed.
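    On Android, that text-to-speech path can lean on the platform's own engine. A minimal sketch, assuming an Android Context and using the standard android.speech.tts.TextToSpeech API (the class below is illustrative, not the app's actual code):

        import android.content.Context
        import android.speech.tts.TextToSpeech
        import java.util.Locale

        class ScriptSpeaker(context: Context) : TextToSpeech.OnInitListener {
            private var ready = false
            private val tts = TextToSpeech(context, this)

            override fun onInit(status: Int) {
                if (status == TextToSpeech.SUCCESS) {
                    tts.setLanguage(Locale.UK)
                    ready = true
                }
            }

            // Speak a line written directly into the podcast's script.
            fun say(line: String) {
                if (ready) tts.speak(line, TextToSpeech.QUEUE_ADD, null, "line-${line.hashCode()}")
            }
        }

    How realistic it sounds depends entirely on the speech engine installed on the listener's device, exactly as Ian notes.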

    Douglas: Yeah, that would save a lot of time then. And, I mean, it would also open up a lot of different possibilities. Things like live feeds, you know, with the weather or any number of different inputs.

    Ian: Yes.

    Douglas: So what is the software called, or what's the name of the project? I mean…

    Ian: Yeah. So it's called… it used to be called Perceptive Podcast – there's a whole story why it's called Perceptive – but we now call it Adaptive Podcasting. It's still in its early stages; I was literally just talking to a developer who is working on a version which will then go on the Play Store as a beta. So yeah, it's still kind of making its way into the mainstream, but the key thing – and I think this is an important part – is that all the stuff that we're doing, we're making open source. We're not going to keep it to ourselves and have everyone licensing it from us or buying it from us. Part of the BBC's agenda as a public service broadcaster is to enable all people to create and explore their creativity. So one of the things that we're doing is working with young people, and, for example, one of the things that we're interested in is the sort of use cases they would think about. So, for example, a pirate radio station. Pirate radio stations exist in a certain area – I used to listen to one when I was younger. You can imagine something where you listen to a podcast, and during the day it sounds like a normal radio station, and then at night, and in a certain location, that's when it really opens up and you really explore what's going on in this podcast, you know? So stuff like that, you could do.
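    The pirate radio example boils down to gating content on time of day plus location. A toy sketch in Kotlin – the bounding box, hours and mix names are invented for illustration:

        import java.time.LocalTime

        // Unlock the hidden night show only after dark and inside the station's area.
        fun mixFor(now: LocalTime, lat: Double, lon: Double): String {
            val afterDark = now.hour >= 21 || now.hour < 5
            // Hypothetical "home turf": a rough bounding box around central Manchester.
            val onTurf = lat in 53.45..53.51 && lon in -2.28..-2.20
            return if (afterDark && onTurf) "hidden-night-show" else "daytime-mix"
        }

        fun main() {
            println(mixFor(LocalTime.of(23, 0), 53.48, -2.24)) // hidden-night-show
        }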

    Douglas: It sounds like that's where AI… or some kind of machine learning could really step in as well.

    Ian: Yes, yes, absolutely.

    Douglas: So the name of the session at MozFest is The Future of Podcasting is Adaptive, Open and Data Ethical. What's the data ethical part? What's that about?

    Ian: So it's really important to us at the BBC – we've got a bunch of projects, a whole work stream, which is very much about providing a more ethical stance on the use of data. So if you've got an application which can use your location, use the fact your phone is in your pocket, use the fact that you are walking, the fact that you're going on a certain bus – all those kinds of things – that's not something that you want to share with the BBC. And you should never have to, unless you want to. So by default, the application uses this data, but it stays on the phone. It never leaves the phone. It's really important to us that if you are using this locational data, or using other kinds of data, then all that data stays on the phone and never gets shared.

    There are some limitations to this, right? So, we've got a use case where you are on your journey to work, and it would know where your work is and know where you are. It knows that you're walking, or you're in the car, on the train or whatever. It knows how close you are to work. To do that – we've got it, we've got the code for it – it needs to ping the Google geolocation services and kind of say, "I'm here, how far until I get there?" Now, for most people that's not a big deal, but that's something that we've chosen not to include. When we open source it, we'll go, "Here is the code for that". But we've never included it in the application, because we want to protect the users. They're not giving away this information, which is quite private – and I know people find some things more private than others – but it's something that we've had to take a stance on. We want to focus on the storytelling and not on uses of data, because, for example, if you were an advertiser, that stuff is really gold.
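    As a sketch of what "stays on the phone" can mean in practice: on Android, the remaining distance to a destination the user has stored locally can be computed entirely on the device with Location.distanceBetween, with nothing sent to a remote service. The Destination class and its use are hypothetical:

        import android.location.Location

        // A destination the user saved on the device, e.g. their workplace.
        data class Destination(val lat: Double, val lon: Double)

        // Distance in metres from the current position to the destination,
        // computed locally – no network call, so the location never leaves the phone.
        fun metresRemaining(currentLat: Double, currentLon: Double, dest: Destination): Float {
            val results = FloatArray(1)
            Location.distanceBetween(currentLat, currentLon, dest.lat, dest.lon, results)
            return results[0]
        }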

    Douglas: Yeah, it's great, though, that you've built that into the architecture from the beginning, because that's something that others could possibly build on as well, to enhance privacy and trust rather than trying to fix it after the fact.

    Ian: Yes, I think that's the thing we've always tried to do. Rather than "Let's make this thing… oh, we should think about privacy", it's like, "No, no, this is built in from the very beginning". Some of the experiments we've done before started to unpick some of this, and it became very clear that if you're going to install the application on someone's phone, you really need to not be using that data and sharing it elsewhere. I've got the app on my phone – on my own personal phone and my work phone. If I knew it was using data and sending it somewhere – even sending it back to the BBC – I wouldn't be that happy with it. I would want to restrict that. And by default, the restrictions are that it won't do that. That's really important for us.

    Douglas: Excellent. So once again, the name of the session is The Future of Podcasting is Adaptive, Open and Data Ethical. Ian Forrester of BBC R&D, thank you so much for this. This is absolutely wonderful. I'm really looking forward to the session, and to seeing what the participants can possibly also do with it. But thanks again for this, and we'll see you at MozFest.

    Ian: Thank you very much!

    Read more: Creative AI at MozFest: Creatively Collaborate With Machines.

    Built with Hyperaudio Lite – part of the Hyperaudio Project.