Sessions (Creative AI) – Ian Forrester – Adaptive Podcasting - March 2021


    Ian Forrester: I was just about to... I was just about to ask if everyone was happy to be recorded. Uh, but yes, if you're not happy to be recorded, don't put your camera on. And obviously, if you say anything, it will be recorded as well. So just for you to know.

    Ian: Yes, I am Ian Forrester. I am he/him. I am coming from... it's now 8:15 in the evening in Manchester. And I am coming from my house, like most people, because it's still the third lockdown in the UK.

    Ian: Okay, so, thank you very much for coming to this. I see there's quite a few of you, so I will post it again. Okay. So one of the things I wanted to do is really to go through the slides. I want to give you some demos, and I want you to ask some questions. If you feel comfortable asking questions out loud, put your hand up and I can then search for you in the grid and unmute you, or put the questions in the chat. Once we do that I'll also do the demos and stuff like that. I'll probably just finish up, and then I want you to do a little exercise in Miro. This is why I shared the Miro board. I'm sure some of you are already looking at it. Don't mess with it too much, I spent some time preparing that. But yes, let's go. I'll just post the link one more time. Right. Okay, so I'm going to share my screen, make sure it's the right one.

    Ian: It is the right one. Okay. Yes, that's what I want. Okay.

    Ian: So thank you again for coming. This is The Future of Podcasting is Adaptive, Open and Data Ethical. Oh, I was meant to do subtitles for non-English speakers. And I forgot how to do that. Sorry. I'm just gonna try and find how you do it. Do you happen to know off the top of your head, anybody? Okay, well, I'm gonna do it in Google Slides, then.

    Ian: So bear with me one second, because it actually works quite well.

    Participant: I believe in Zoom it's the live captioning at the bottom of the bar, if the option is there.

    Ian: Yeah, I couldn't see it. Let me just try again. Sorry. Oh yeah, closed caption on recording.

    Participant: Oh did it show up?

    Ian: It's on there now. Sorry.

    Participant: Nice!

    Ian: Okay, so I think that should work now. I don't know if you're able to see it or not. I'm not going to sign in. I'm just gonna do closed captioning and leave it at that, I think. Okay, I'm gonna start going. Otherwise we're gonna be here forever. And I want to make sure you get lots of time to ask questions. So bear with me. So slides... um, is this one, isn't it? No, that's the wrong one. Sorry. You'd have thought that... Oh, you're joking. What's going on? Oh, no, no, no, no, no, no. Has it crashed? Oh no, it's not crashed. Oh, yeah. Oh, okay. Right. Sorry, my machine's gone all funny.

    Wrangler: Do you want me to pause the recording?

    Ian: Yes, please.

    Ian: Okay. So while I wait for my machine to start again or to restart.

    Ian: So this is Adaptive Podcasting and one of the things that we're trying to do is we're trying to...

    Ian: So I work for BBC R&D in the UK and one of the things we're trying to do is we're trying to change the state of podcasting, and I use podcasting loosely because it doesn't necessarily have to mean one type of podcasting. I mean podcasting as an idea.

    Ian: We're trying to change it to make it more... well, to keep it open, so anyone could do it, but also adaptive, and data ethical, because one of the things that we see is that lots of organizations are starting to make podcasting proprietary to their own platform. So let's say, for example, Spotify have podcasts that are only available via Spotify. They're not available in any other way. We feel that's a real shame.

    Ian: Another part of this is that a lot...

    Ian: The way that podcasts tend to work is they tend to be a piece of audio; you download the audio to your device and then you listen to it. And the only way they can understand if you actually are a listener is by the download itself, because most of the time you can download the audio and then play it back in any player you want. But now, for example, let's take Spotify or Apple iTunes or Apple Podcasts or even the Google one. Um, forgot what the Google one's called now. One of the things they are able to do, because the player is also owned by the company providing the podcast, is track how far you've got in the podcast, how much you're listening, all those kinds of things.

    Ian: (I'm still waiting for my machine to come back, but it's not coming back yet.) Okay. Can you still hear me okay?

    Participant: Yeah we can hear really well.

    Ian: Okay. Great.

    Participant: And that background is super interesting.

    Ian: Okay, wait till we get to the bit where we do demos and it's like, it's still crashed. Okay, so what we're trying to do is we want to move podcasting on, but we want to do it in an ethical manner.

    Ian: Now, one of the things that a lot of the big companies are doing is that they are trying to inject adverts into podcasts. And so if you listen to a lot of the podcasts I listen to at least, they will use a thing called Acast, which automatically injects the advertising into the podcast audio at the server side. So when you download it, then you get the adverts automatically. And that can kind of work. You know, it reasonably works.

    Ian: But obviously, like most of the internet, they want to learn much more about us, so they want to not only kind of go: oh, well, you're roughly in the UK. They want to know exactly where in the UK you are. And so one of the things that we want to do is to make sure that the user stays in control of their own data and is not being followed by advertisers.

    Ian: I'm hoping my machine comes back together in a minute. So annoying.

    Ian: Okay, so that's kind of where we're coming from. Now, in BBC R&D we've been working on this thing called Object Based Media. Now, Object Based Media basically allows you to adapt media on the fly. And there's a great little picture that I can show you. Actually, if you look at the slides, you can then see where I'm at, because I can see the slides in front of me.

    Ian: So one of the things that we do is we're taking the actual media, which I kind of represent as Lego bricks. So if I remember correctly, there are four Lego bricks, um, and they're different colors. And so they have a certain shape. But imagine those are the media. And the important thing is the metadata.

    Ian: The metadata is the shape of the object, the color of the object, all those kinds of things, and that metadata wrapped around the media makes an object. So basically, in this sense, media plus metadata equals object.
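    The "media plus metadata equals object" idea can be sketched in a few lines. This is purely illustrative: the class and field names below are made up for the example, not the BBC's actual schema.

    ```python
    from dataclasses import dataclass, field

    # One "Lego brick": a piece of audio plus the metadata wrapped around it.
    # Field names here are hypothetical, not the real Adaptive Podcasting format.
    @dataclass
    class AudioObject:
        media_uri: str                                 # the raw audio file
        metadata: dict = field(default_factory=dict)   # shape/colour: duration, tags, conditions

    brick = AudioObject("lullaby-intro.mp3", {"duration_s": 30, "mood": "calm"})
    ```

    A player that understands the metadata half of the object is what makes the reassembly tricks later in the talk possible.
    
    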

    Ian: Now, if you can deliver that object to a player that understands what to do with that metadata, you can do some more interesting things.

    Ian: I'm hoping the machine comes back. I don't know what's going on. It's very annoying. It usually comes back by now.

    Ian: So what we've created is a player which understands what these audio objects are, can read the metadata, and can reassemble the objects on the fly.

    Ian: So, for example, if you had four objects, kind of like object one, object two, object three, object four: maybe during the day it's play one, two, three, four, but at night, play object four first, then objects one, two and three. And that's the kind of thing you can do when you have control over the player and you have these audio objects which have the right metadata, or have metadata which is exposed to the player. That's kind of the shorthand of it. Now, the interesting thing about the player, or the app in this sense, is that it can do more, because the app runs on your phone. It can tie into the operating system of the phone, and because we're using Android...
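    That day/night reordering can be sketched as player logic. The sunrise and sunset times below are hardcoded placeholders; on the real app the phone itself classifies day and night, as Ian describes later.

    ```python
    from datetime import time

    # Hypothetical player logic: reorder four audio objects depending on
    # whether it is currently "day" or "night" on the listener's device.
    def playback_order(objects, now, sunrise=time(6, 0), sunset=time(20, 0)):
        is_night = not (sunrise <= now < sunset)
        if is_night:
            # at night: play object four first, then one, two, three
            return [objects[3]] + list(objects[:3])
        return list(objects)

    objects = ["one", "two", "three", "four"]
    ```

    Nothing here leaves the device: the decision is made entirely from the local clock, which is the design point of the whole project.
    
    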

    Ian: hopefully you can still hear me?

    Participant: Yeah, you're good.

    Ian: Okay, cool. I'm just checking, like literally, it's gone. There's just nothing there for me. So I'm like checking.

    Ian: because... it's completely... completely off the top of my head...

    Ian: Because it's tied into the phone, it can tie into the APIs of the phone, and it can also tie into the data on the phone. And so, for example, like the example I just used, I can put an instruction set in the podcast saying: if it's night. And I can classify night as... or your phone will classify night as from when the sun has set until the sun rises. Then it would do that change, because I specified that this change would work only at night. And during the day, which is, you know, from when the sun's up until the sun drops, then do something else.

    Ian: Now you can specify all types of different things based on different aspects of data and sensors and...

    Ian: Grr I wish I had my slides here and I could show you.

    Ian: There are lots of data points and lots of sensors which you can use, so time is one that's very easy. Time also tends to be one that's less sensitive than others, but you can do things based on where you are, so your location, and the location is set by the user.

    Ian: So if you set fine-grained control, it could find out which street you're on, or if you set it kind of very fuzzy, it could be just that you are in this area of the country, for example. So the user is in complete control of what data gets used. But the important part about all of this is that the data and the sensors and all that stuff is all happening on the phone only; nothing actually leaves the phone. There's no kind of callbacks to the BBC or callbacks to anywhere else. We're just delivering the experience. Then the experience is using that data on the phone and changing and adapting. And then nothing leaves the phone.
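    One simple way to picture user-controlled location granularity is coordinate rounding: the podcast only ever sees coordinates at the precision the user allowed, and the rounding happens on the device. The granularity names and decimal counts below are assumptions for illustration.

    ```python
    # Sketch: fuzz a location on-device before any podcast logic sees it.
    # Granularity levels and their precision are hypothetical.
    def fuzz_location(lat, lon, granularity):
        decimals = {"street": 4, "area": 2, "region": 0}[granularity]
        return (round(lat, decimals), round(lon, decimals))
    ```

    With "street" the story could react to the road you're on; with "region" it only knows roughly which part of the country you're in.
    
    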

    Ian: That also has restricted some of the things that we can do. So, for example, one of the things that people keep on asking us for is: I would like a podcast which fits within my commute to work. So it would need to know where your home is, where your workplace is. And that's quite easy because, at least on Android phones, Google understands that the place where you spend most of your time is home, and the place where you spend the second most amount of time is work. Now, I know that's not right for everyone, but that's what Google have determined. And that's what they'll use unless you put it in yourself.

    Ian: So ultimately it would know where your home and work are, but then it also needs to know where you are at the moment, so if you're traveling to work. And also, sorry, there's a really nice API called the Activity API, which allows you to understand if a person is traveling by foot, by bike, by car, by public transport, or by other means as well.

    Ian: I don't think horseback's one of them, but it can tell. Don't ask me how it tells; it's probably one of the proprietary, magical things that Google knows. But because we're using a Google phone, it can do that automatically. And so, as the creator of the podcast, all I have to do is say: okay, if the person is walking to work, then do X or change Y. If the person is in the car, then change this or change that. Okay, so in the case of going to work, the one thing you need to do to understand where you are is ping a Google service just to say: you are here right now, how long until I get to this other point?
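    From the author's side, the "if walking do X, if in the car do Y" rule is just a lookup from a detected activity to a variant of the episode. The activity strings below echo the categories Android's activity recognition reports (walking, cycling, in-vehicle), but the variant names are invented for this sketch.

    ```python
    # Hypothetical author-side mapping from a detected activity to a
    # podcast variant. The variant names are illustrative only.
    ACTIVITY_VARIANTS = {
        "walking": "walking-mix",
        "on_bicycle": "cycling-mix",
        "in_vehicle": "vehicle-mix",
    }

    def variant_for(activity, default="default-mix"):
        # unknown activities (horseback, say) fall back to the default
        return ACTIVITY_VARIANTS.get(activity, default)
    ```

    The detection itself stays on the phone; only the choice of which pre-downloaded variant to play changes.
    
    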

    Ian: Now, because of that, we, you know, can't basically do that example. We've got the code for it, but we've not put it into the application, because we really want to stick to this kind of ethical route.

    Ian: I wish this was working. Really wish this was working.

    Participant: Can you see the images from us on Zoom? Because there might be for one of us to do the oh, if you can't see... because...

    Ian: I can't even change my slides or anything. I can move windows around, but the windows are all like stuck. So I can't, actually. So even... even my slide deck, I can't even move my slide deck, which is annoying. I can't see a chat, and I can't see anyone moving. Okay, so I'll carry on.

    Ian: So ultimately, we have got a platform, an application, which allows you to send audio objects to an application, for the application to then adapt and change things. Now, one of the things that application also does is use text-to-speech, so it can also read stuff out. So in some of the examples, which hopefully I might be able to show you, it will talk to you.

    Ian: It will use whatever the default is on your phone. So on my work phone, it sounds really robotic. On my Pixel it sounds very nice. It sounds more like Google Voice. It's kind of very human-like.

    Ian: What else was I going to say? Oh, so, yes, we're using a technology called SMIL, which is an old W3C standard which, if you remember RealPlayer and QuickTime, was the standard in those, and so you can layer audio on top of each other.

    Ian: You can also play things in a sequence, so, like a playlist. But you can also decide exactly how long things play for. We've also just added the ability to slow things down, so you can really slow things down to like really slow speeds. You can also do a loop as well. So if you have a background soundtrack you just want to loop, instead of playing a really long audio file, it can play the audio file over and over again as long as you like.
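    The layering, sequencing and looping described above map onto SMIL's `<par>` (play together), `<seq>` (play in order) and `repeatCount` constructs. The snippet below builds such a document with Python's standard XML library; the element and attribute names follow SMIL conventions, but the exact subset the Adaptive Podcasting player supports is an assumption here.

    ```python
    import xml.etree.ElementTree as ET

    # Build a SMIL-style document: a looping background bed layered
    # under a two-part sequence. File names are illustrative.
    smil = ET.Element("smil")
    body = ET.SubElement(smil, "body")
    par = ET.SubElement(body, "par")                     # children play together
    ET.SubElement(par, "audio", src="bed.ogg", repeatCount="indefinite")  # looped bed
    seq = ET.SubElement(par, "seq")                      # children play in order
    ET.SubElement(seq, "audio", src="intro.mp3")
    ET.SubElement(seq, "audio", src="main.opus")

    doc = ET.tostring(smil, encoding="unicode")
    ```

    As Ian says later, writing this by hand is closer to writing HTML tags than to programming, which is why the planned editor generates it for you.
    
    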

    Ian: It also supports all audio formats, from MP3, Ogg, WAV, Opus, M4A... you know, it will play all of them. Anything that your Android device will play, it will play. I'm trying to think what else there is. I think... okay, so this is what it can do right now.

    Ian: Now, the interesting part for us is what can you do with that? You know, there's a lot of levers and a lot of things you can do. And this is where I get into the origins of storytelling. And this is very much about trying to move away from that broadcast mentality of: it's one story, it goes out to everyone, everyone listens to the same thing.

    Ian: So we've been working with artists. And we're trying to work with young people as well to see what they would do. So, for example, one of the artists is working on these things called lullabies. She lives in a city called Bristol, which I'm originally from, and that city has a dock and lots of people kind of come through there. So lots of different nations have lived there, and they've kind of made a home there. And so one of the things that she's been doing is recording lullabies of different people, of different cultures.

    Ian: And one of the things that she would like to do is for you to be able to wander around the streets of Bristol, it's a bit hard now with covid going on, and, based on where you are, the lullabies would change to fit which culture is kind of most predominant there, for example. But one of the things that she's able to do also is that, based on the language of the phone, you can automatically switch the audio.

    Ian: For example: so it's a bit like transcripts. Not transcripts, like subtitles, in that you can automatically switch it based on what the default is on the phone. So you don't have to kind of select and change these things. And that's also very important, which I just remembered: all the things that we're talking about are things that just automatically happen. You know, they don't...
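    The automatic language switch amounts to picking the track whose language matches the device locale, with a fallback. A minimal sketch, assuming a simple mapping from language code to audio file (the file names and fallback choice are invented here):

    ```python
    # Illustrative: pick the lullaby matching the phone's default locale.
    LULLABIES = {"en": "lullaby-en.mp3", "es": "lullaby-es.mp3", "pt": "lullaby-pt.mp3"}

    def pick_track(device_locale, tracks, fallback="en"):
        lang = device_locale.split("-")[0].lower()   # "pt-BR" -> "pt"
        return tracks.get(lang, tracks[fallback])
    ```

    The listener never makes a choice; the player reads the locale already set on the phone, which is exactly the "it just happens" behaviour Ian describes.
    
    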

    Ian: It's not like you have to go and select to do things or there's a choice and you have to go: Um, okay which is my choice? You know, which one do I want to do left or right? You know, it all automatically happens because the idea that you are especially when you listen to podcasts you just got it on the background. And you're going about your day and listening to it, you're not interfering with your phone. Or if you're on your phone, you're on WhatsApp or hopefully not, hopefully on Signal or something like that. You know, talking to your friends or whatever else you're doing, you're not, you're not kind of like messing with the podcast, which I know some of the providers are starting to play with the idea that you interact with the podcast.

    Ian: And I don't believe... I might be wrong in this. It's gonna be counted against me forever more now. I don't believe that people will want to interact with the podcast. They would want to have it playing and things to happen. So they can just carry on about their lives as they want.

    Ian: So normally, I'd play a demo. I can do a demo, but it's going to be off my phone now, which we just tried. Uh, yeah. Actually, let me ask. I can hear you. So if you have questions, please unmute and ask me, because I cannot see the chat, and I cannot see anything else except for a poor screen. So anyone who would like to stop me and kind of ask me a question, please do. Anybody?

    Participant: So... Hello?

    Ian: Hello I can hear you.

    Participant: Hello? I'm Evi, I'm she/her, and it's great to be here. So I would like to refer back to this example that you mentioned about the lullaby podcast, that it detects the default language that is set on the phone. But does this application offer the user some sort of means of configuring this experience? So, for example, if I wanted to learn a new language and I wanted to practice it, I would not want a podcast in that language to automatically get translated into the one my phone is set to. So does the user have this kind of choice there?

    Ian: Okay, so right now, no. And it sounds like what you're trying to do is something slightly different, or quite a bit different, from what we're trying to achieve. I think that there would be the ability to then go: okay, it's using the default language on my phone; I'm going to switch my language, or I'm going to do something different. But right now... and this is where I'm going to get to in a minute.

    Ian: What we've built is very beta, and it will be released as a beta. We don't... We're not trying to create a finished product because, if you create a finished product, everyone kind of wants to to do different things. And I know that Jay is on the zoom call. Somewhere I can't see him. But I know that he wants to do something different from other people. And so we ultimately just want to basically create a framework so that anyone can do what they want to do. Rather than try and create each one of these individual bits and pieces.

    Ian: We're also going to open source the whole lot, which is... which makes a lot of sense, because if you want to do what you're about to what you're suggesting to do, that you may want to have controls to say I want to override the defaults. So does that answer your question?

    Participant: Absolutely. I'm all clear now. Thank you.

    Ian: Okay. Cool. Anyone else?

    Participant: My head goes to sound installations that require no infrastructure. Like creating interactive sound experiences that are meant to be in sync with your environment, are meant to take you on a journey which you control by moving around, but no infrastructure. Oh, my God, that's amazing.

    Ian: Yeah, it is. I mean, one of the things that we always get is people, when they first see it and hear about it, usually go: I want to build walks where, as I'm walking, it would kind of know how I'm walking, how fast I'm walking, and would adjust based on my walking.

    Ian: It may ask me to stop at certain points, you know? Say, "Oh, there's a fountain nearby. If you just stop for a minute, let's just sit and breathe, let's just enjoy the kind of free time, other people walking around, stuff like that." And it's something that can be done. This is exactly why we want to put it in the hands of as many people as possible, because the example that we have and the examples that other people have will be quite different. And we really want people to kind of push the boundaries.

    Ian: But also kind of share it with their friends, you know. So you can imagine creating a walk or a space that you only share with your close friends, rather than having to share it on a website and publish the URL, and everyone gets it and everyone's like: "I don't hear anything." It's made for your local community, for example.

    Ian: You know, I mean, one of the examples that we are... well, I've been thinking about, is you can imagine something like a pirate radio station, where during the day it plays kind of like the pop stuff, and then at night, at a certain place, it plays the real underground music, because the only way you can hear that is you have to go there, you have to go at a certain time at night. That's kind of the stuff that I think is really, really fun. But I would like to get it in the hands of young people so they can determine what is fun and what's not fun, rather than myself.

    Ian: Any other questions?

    Ian: Any questions – anyone would like to unmute? If not, can you still hear me?

    Participant: Yeah.

    Ian: Okay. I just... I just tried to unplug my external monitor to see if it made any difference, but it made things even worse.

    Ian: So hopefully... I posted in the chat, and if someone could help me, because I can't see the chat: there's a link to a Miro board, and I'm going completely from memory now. In the Miro board, if you open it, there's a lot of stuff, but at the bottom there are some kind of cone-shaped things, and in the middle you'll see the different types of sensors that we can access, the different types of data we can access, and kind of other features, like being able to layer audio. One other thing, and this probably speaks volumes for the last person, is we can do binaural audio. You have to record it in binaural, but when it plays back, it plays back in binaural. Um, so it kind of really creates that immersive...

    Participant: That's amazing!

    Ian: Thank you. So, yeah, you can create these kinds of amazing immersive experiences where people are able to, uh, walk around and it literally sounds like it's coming from right behind them, for example.

    Ian: So that's the kind of thing we can do. So one of the things I wanted you to do, and I can't do this either, is put you into breakout groups. And then I wanted you to take a copy of the sensors, the data and any of the extra functionality, copy them to the cones, and come up with a story or an idea for a podcast. Because one of the things that I unfortunately cannot show you, and I think is really important, is that the things this can do are amazing, but it all comes back to storytelling, and it's no good if you're just telling the exact same kind of stories. With this new technology, one of the things that we are really interested in is: what kind of new stories can you tell? Or what kind of stories, going back to how we used to tell stories, can you tell?

    Ian: So I'll just stop at that moment, because I remember I missed that bit. We used to tell stories around the campfire. It would be someone who'd tell the story, and that story would be adaptive. It would change based on people's reactions, based on whether people were falling asleep, based on whether the fire was getting a little bit low, maybe there's alcohol, maybe whatever there is: the story would adapt and change. So there's a video which I recorded for the EBU, the European Broadcasting Union, where I talk about how we used to tell stories around campfires and how, based on all these things, we would do things like tell ghost stories.

    Ian: And so the ghost story would... if you were all my friends, you know, we all kind of grew up in Bristol, so I would make a story about how there was this one place we used to walk past every day on the way to school. And it was always a little bit eerie, and we were kind of a little bit unsure about it. And then one day we walked past, there was a noise, and we didn't know what it was, you know? And then we walked past at night, and that noise started again. And, you know, we started to make this kind of really immersive story. And this is the kind of thing you can do when you have a platform which is adaptive, where the objects can change and we can do things based on location, based on time of day, based on: is the phone in your pocket, for example? You know, that makes a big difference. Is it being played out on speakerphone versus on headphones? Those kinds of things can all adapt and change the story.

    Ian: So I was hoping... I forgot the name of my co-host. I'm glad I made you co-host. Can you hear me okay?

    Wrangler: I can.

    Ian: Okay, it might be worth, if you can, making sure that the Miro link is in the chat. And then if you can split people... I don't know how many people there are; as far as I can tell there's probably about four people on the call. But if you can split the group randomly into about three or four groups, breakout rooms, and then go off and think about it, make sure you all have the Miro boards. There's an example on the left-hand side. Sorry, go on.

    Participant: No, yeah, the example. I was just looking at the part of the Miro board for the activity; the example is super clear. The instructions were really clear. It sounds like a fun activity to just try. Yeah.

    Ian: Okay. So if you can all kind of come together and do that, at least I can try and restart my machine or something like that and try and come back in, and we can kind of see what you guys have come up with. And we can have a little chat, and hopefully then we can have a bit more Q&A before the end of the session. How's that sound?

    Wrangler: Okay. Ian, can you make me host before you do that?

    Ian: I can't. I can't do anything right now.

    Wrangler: Oh, OK.

    Ian: I can't see anything.

    Wrangler: If you leave the room, I will quickly ask the staff for help and I'll be back.

    Ian: Alright, if anyone else has any questions, please do... please do say now because, um, yeah. Oh, boy.

    Participant: I think I have a question. My name's Stella from Pittsburgh, Pennsylvania. Thanks for the awesome presentation so far.

    Ian: Thank you.

    Participant: I want to ask about, like, data in transit. So let's say someone was going to go out for a walk and maybe they wanted to download a podcast on their WiFi, so they didn't use data while they were travelling around. How might the system account for some of the sensors and variables while they were out? If they didn't have data turned on, would there be a default layer that they would experience as a story? Or would it cache in chunks, and what would the device store locally if they didn't have a data connection at the time?

    Ian: Right. Okay, so yeah, sorry, I maybe didn't make this very clear. And this is one of the questions that people always ask. When you download the podcast, or download the objects, that's it. So it's like a podcast. It doesn't matter about data.

    Participant: That's really interesting. So that yeah, I was kind of curious about that. Thank you.

    Participant: I was just going to ask, as a follow-up to Philip's question, whether it can function properly while the episode isn't fully downloaded, you know?

    Ian: No. You have to... it's like a podcast. You have to download the whole thing. There's no streaming involved, because if it was streamed, and this comes back to the question before, then there'd be a way, from a broadcaster's or service provider's point of view, of going: we're streaming to this device, this device's IP address has changed, or it's connected to this WiFi.

    Ian: Or... it could be a way of... there's ways to track users via those... those means. And we do not want to track the user. We just want to provide the experience and then fully... the user to have the experience and enjoy it. And that could be anywhere. They don't have to be connected to the WiFi. They don't have to be connected to... They can turn off their... all their data, you know, it'll still work.

    Participant: Hi Ian. I've found the presentation really interesting so far.

    Ian: Not as interesting as I found it!

    Participant: Even with the technical difficulties, like you're doing an amazing job so far, mate. Honestly. I was just wondering, how do you envision the roadmap for this project going forward? Like, do you have any sort of estimated timelines or anything of that sort right now?

    Ian: Yeah. So okay, I'll say it in quarters, because it's easier. This project has been... I think if you look at the Miro board, you'll see this kind of radio thing? That's called the Perceptive Radio. This project is called Perceptive Podcast. And we built a second radio, which is basically a PC in a box, but it basically extended the sensors of the phone. And so the podcast app only runs on Android right now. I'm sorry for you iPhone users, but I can explain exactly why if someone wants to ask me.

    Ian: Then we're currently looking at releasing the beta or one of the betas on the play store as a beater app. Probably the next quarter. So, you know, in... from spring into summer. Um, the other part, which is important, is that for a lot of people, they don't know how to code. To write this is quite easy. I say) It's scripting if you can write HTML pages, not JavaScript, just just HTML, you know, just writing tags and all that. Then you can write the SMIL because it's basically XML. For a lot of people they don't know how to write that. They don't understand that.

    Ian: So, we are basically building an editor, and the editor will also have a previewer so you can kind of start making your sequences and your parallels and making very complicated things and then play it back on your browser and go: Oh Yeah, that's... this works. That's great. And then you can... one of the features that we've just added to the beta is that you can automatically kind of take this... the audio and the the SMIL file and the JSON file, which is just a metadata, and then basically, zip it up and then put it on the phone and listen to it on your phone.

    Ian: You can just make sure it definitely sounds how you want it to sound. So maybe, like, make sure that the... it's definitely doing the things that you you want it to do, and then ultimately, once you've done that, you can put that zip file on your own site or anywhere you want. You can put it on GitHub or whatever and create an RSS feed. And then you can tell people just to subscribe to the RSS feed.

    Ian: So I guess we're looking at the editor, probably late summer, the open source in park we are looking at probably winter. There's some... it takes some time for... the whole thing is built for open sourcing, but it still takes some time for... Imagine a big broadcaster like ourselves, agreeing and get all that stuff sorted. But yeah, we do want to. That's the end goal is to open source a lot, so that people can start to integrate into their own platforms or build their own stuff. So, for example, the person who asked about overrides you know, that's something that we wouldn't put in, but someone else can put in quite easily. Does that answer your question? Help with your question?

    Participant: Yeah, sure. That's giving me a brief overview of everything, cheers.

    Ian: Most of it will be this year. One thing I would say to everyone is that we're not going to build an iOS app. If you want to ask, I can tell you exactly why we use Android and not iOS. But we might make a progressive web app, which might work on iOS in the future. That's possible but unlikely. Okay?

    Ian: Any other questions?

    Ian: My laptop is literally just sitting there doing nothing. I am very sorry about my laptop. Um, it was all working before. I don't know what happened.

    Participant: I'll... Sorry, it's Ali again from the previous question. I'll bite: why... why wouldn't it be able to work on iOS?

    Ian: Okay, so it's not that it wouldn't work on iOS. The problem we've got with iOS is... So, in short, we needed a platform where we could develop it in a very agile way. We started off with a student from Manchester Met, MMU, in the UK. We had the idea, and we got him in to do a summer internship. So he started it. Then it went to developers within the BBC, and then it got passed to a freelancer; it's been very, very nimble how we did it. We've been using GitHub, moving very quickly. But the thing that's important, and I think this is the big difference with what we can't do on other platforms like iOS, is that every time we built it, we basically made an APK, which is an Android application package. And we were able to share that APK with people; they just have to sideload it onto their phone. If you don't know what sideloading is, it allows you to put the application on the phone without having to go via an official app store. And for us, that was reasonably cheap, very cheap even, and it also meant that we could test things and try things that we couldn't easily do on iOS. I know you can use TestFlight, but that costs money and also limits how many people can use it. So one of the things I was going to do in the slide deck, sorry, was to give you access to the APK so you could install it on your phone and start playing with it right now, start building stuff for it. Unfortunately, because I can't see anything, um, it's been pretty hard to do. But, you know, that's one of the reasons why we've gone for Android.

    Ian: Also, for us, one of the aims was to make this really available to everybody, no matter how few resources they had. You can buy a very cheap Android phone, at least in the UK, for like £40, and it will still work. This application will work on really old phones and tablets; it doesn't need the latest stuff, it will all just work. Whereas obviously iPhone devices are a lot more money, um, and also developing for them costs quite a bit more, and that kind of lock-in, where you have to use TestFlight and there's no other way to do it, you can't just share it with people, um, is problematic for us. So one of the things that we're looking at is that if we can't get on the Play Store, we'll put it on one of the many other app stores that are on Android. Like... I forget the names, but there are quite a few of them. So that's the reason. Does that answer the question a little bit?

    Participant: Yeah, I definitely got more of an understanding of the nuances and sort of limitations of trying to build on iOS versus Android.

    Ian: Yeah, I think because we're building it for... we're trying to build it for open sourcing, it just becomes really problematic. iOS is great if you're building a very polished app and you're limiting the amount of people that can access it to your test users, and then you go: great, this is great, and we just put it on the Apple App Store. Whereas with the Google way, you could do that, but you can also go: we're going to build this thing and just make it work, see how it works. It will have lots of bugs, but it will work. And we're going to do it really openly. And that worked in favour of what we were trying to achieve.

    Participant: Right, yeah, I get what you're saying. Completely.

    Ian: Cool. Okay. So any other questions?

    Ian: Oh, this is really, really shameful.

    Participant: It's not a question. It's a comment.

    Ian: Oh thank you.

    Participant: I think we've crossed paths on the Storytellers United slack.

    Ian: Ah!

    Participant: I have a comment about the workshop itself. And I just wanted to say that actually, I appreciated that we're having a workshop about podcasting, and we're actually doing it in this sort of audio form. And I just wanted to say that actually, it's fun, and I like it. So, yeah, I mean, I think all of us are audio people. So actually, I think it's a very interesting medium. So I just wanted to comment on this.

    Ian: Oh, so you can see my video because the camera is on.

    Participant: Yeah, yeah, but still, I mean...

    Ian: Oh okay.

    Participant: Listening to the thing. So, yeah, I just wanted to say this because you keep apologizing for the situation, but I personally enjoy it.

    Ian: OK. Well, thank you. Thank you very much. I mean, honestly, one of the things I would like to do, if I could, is play some of the audio as a demo, and I just remembered I can do it another way. I've got my work phone here, because it helps to understand how it works. I was going to try and play it on the laptop, but one of the things I can do is just play it through the microphone that I'm using. So if you bear with me, hopefully you'll be able to hear it. So this is... we called it the Starter Demo. These are just real tech demos that I've made; there's no kind of editorial stuff to them. So I'm going to play a few of these. And if you want to ask any questions, please chime in, because I cannot see anything. Okay?

    Ian's phone: Hello, Ian. It's 21:07. Welcome to Adaptive Podcasting.

    Ian: Can you hear it? Okay.

    Ian's phone: The difference between today's podcasting...

    Wrangler: Yes loud and clear.

    Participant: Yes.

    Ian: Thank you.

    Ian's phone: For example, this text is dynamically being read out by me your smartphone. You might have noticed I called you by your name. Yes, Ian, that is your name, Ian.

    Ian's phone: Okay, you get the picture. There are a number of things I can do in addition to playing audio files. I can jump around audio files, only playing for a short while, or adjust the balance to get the right levels. Think of me as a master remixer.

    Ian: Okay, I'm not going to play the rest of it, because it just goes on. But I'll play you a more musical example. So one of the things about this is, depending on the phone... on my Pixel, it can play up to 320 audio objects at the exact same time. On my Nokia it's fewer, but it's still 160. So you can really layer stuff up and do some amazing things. I'll play it just to give you an idea of how it works. Here we go. So this is actually from YouTube, but we were able to use it.
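    In SMIL terms, the layering Ian describes maps onto the `par` element: everything inside a `par` plays at the same time, so each `audio` child is one layer of the mix. A small sketch, using standard SMIL with made-up file names:

```xml
<!-- Layering sketch: a <par> plays all of its children simultaneously,
     so each <audio> below is one layer of a four-layer mix. -->
<smil>
  <body>
    <par>
      <audio src="pad.mp3"/>
      <audio src="bass.mp3"/>
      <audio src="drums.mp3"/>
      <audio src="vocals.mp3"/>
    </par>
  </body>
</smil>
```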

    Ian's phone: Layered audio number test: four layers.

    Ian: It's subtle at first. Sorry, I'll stop talking.

    Ian's phone: Eight layers, 12 layers, 16 layers, 20 layers...

    Ian: So, you know, if you do any kind of audio editing, you usually have all these layers of audio making this beautiful sound, and you have to squeeze it down into a stereo track. You don't have to do that with this. You can do all that stuff live on the device. Um, just some other quick demos, some real techie ones. So here's one for you.

    Ian's phone: It is 21:10. It is getting dark now.

    Ian: There you go. So, yeah, it's 21:10 in the UK. It is dark. And obviously, if you play it in the daytime, it will say something different. You know... Oh, this one always makes me laugh, always makes other people laugh.

    Ian's phone: Hello, Ian. You have 2244 contacts in your phone book. Also, your battery is at 80%.

    Ian: There you go. I don't know what you would do with that, but someone did once say to me: based on the battery, you could reduce the amount of audio that's being played, so that it's not zapping someone's battery, for example.

    Ian: And I'll play the binaural one, but you won't be able to hear it properly... well, you'll hear it, but it won't sound binaural. Just to give you an idea.

    Ian's phone: The first time I saw it, it appeared as if from nowhere staring at me.

    Ian: Okay, so if you have stereo headphones, it sounds like it's surround sound. It sounds amazing.

    Participant: Yeah, it's behind you. And I have another quick question.

    Ian: Yes.

    Participant: When you were showing how it can play back the multitrack... I'm assuming this can also adapt, you know, be responsive to the sensors and so on, right?

    Ian: Yes, exactly.

    Participant: What kind of capability does it have on the audio side? Like, does it just do cross-fading between tracks? Does it only allow things like rebalancing the volumes? What kind of other manipulations can be done on the audio side with that data, you know?

    Ian: So currently, and this is one of the things that we've done... I think you're right, I would love to be able to do other manipulations to the audio, but right now it will only do... It will set the volume left and right, so it can adjust a track so it sounds more left or more right. And it can literally go... let's use that example: you can have a track where the first two seconds are someone kind of yelling, then 3 to 4 seconds it's something different, and then 5 to 10 seconds something different again. And you can literally go: okay, only play the 3 to 4 seconds, and it stops dead. And obviously, because it's a computer, it can go: right, play exactly from here to here, and then move on to the next track. And you can layer that on top of other things, so you can suddenly take a track apart and do different things in there. We thought about doing different effects; we haven't done fades. We've also now added loop, and we've also added slowdown.
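    The manipulations Ian lists (clipping out seconds 3 to 4, rebalancing volume, looping) all have counterparts in standard SMIL attributes, so a sketch might look like the following. This is hedged: `clipBegin`, `clipEnd`, `soundLevel` and `repeatCount` are W3C SMIL attributes, but whether the app supports each one in exactly this form is an assumption, and per-channel left/right panning has no standard SMIL attribute, so that part is presumably project-specific:

```xml
<!-- Clip, volume and loop sketch using standard SMIL attributes. -->
<seq>
  <!-- Play only seconds 3 to 4 of the track, then stop dead. -->
  <audio src="voices.mp3" clipBegin="3s" clipEnd="4s"/>
  <!-- A quieter layer, repeated four times. -->
  <audio src="texture.mp3" soundLevel="40%" repeatCount="4"/>
</seq>
```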

    Ian: So I'm actually... Oh, just quickly, I'll give you... this is brand new, you know, completely new, right? So this is the speed one.

    Ian: Where's the volume?

    Ian: Can you hear that?

    Ian's phone: So desperately want...

    Ian: No, why is this so low?

    Ian's phone: Trying to get us to abandon that terminology.

    Ian: Okay, well, so basically, this is the new beta that we are building right now. So, yeah, it does that as well. It does looped stuff. But yeah, one of the things we'd like, when we open source it, is for people to just go: I want to add this filter to it, I want to be able to do that to it. And because it's open source, nothing will stop anyone from doing it.

    Ian: The final thing, I know I've got a minute left, is, um, we really want to build a community. So in the, um... I forgot what it's called now...

    Ian: In the Miro, on the right-hand side, there's a link to this community called Storytellers United (SU). If you click on that link and join (it's a Slack group), we've got a channel called Adaptive Podcasting, um, or Adaptive Podcast. Please join, and you can ask all types of questions. There are also other people in there who are experimenting with Adaptive Podcasting. I think that is it for me, unless someone has any burning questions. I see that it's quarter past. So if any of you have burning questions, say so now.

    Participant: I have a really quick one, Ian, if you'll entertain it. Are you familiar with the IAB and podcast measurement? They deal with analytics and things like that. You know, I fully get where the group is coming from as far as being ethical and maybe keeping targeted ad insertion and things like that out of it. But what if the creator wanted to be able to monetize their creation? Does the framework support the ability to meet IAB compliance or ad insertion, should the person want to do that, or to make their own APK out of the open source?

    Ian: Yes. Yeah. Once it's open source, you can add all the kind of functionality you want. So, for example, we have this thing called the Merchants, and the Merchants are basically the ability for the application to talk to a sensor, and that sensor could be a web service or an API that we haven't built in. So, for example, the geolocation stuff. It's up to you what you want; we don't want to restrict what you could do. Once it's open source, it's up to you what you want to do. We want to make sure that there's an ethical, safe version that anyone could use. We've been talking to some big organizations, which I'm not going to name because it's being recorded, who want to use this to do things like ad insertion, but also, because you know which bits of audio are being played, to be able to give a really clear view: this person is listening to this, they've heard this one because it's night, they've heard this advert, for example. That's something that we're not interested in doing. But if someone else wants to do it, once it's open source, you can go crazy.

    Participant: Cool. Thank you very much.

    Ian: Okay. I think I've got room for one more question, if it's quick; otherwise I'm going to restart my machine. Unless there's something else I need to do, uh, to try and get Zoom back. I'm hoping that works.

    Participant: Thank you. This was so interesting.

    Participant: Thank you so much this was really an experience.

    Ian: No, it's been an experience for me. I'm sorry if anyone hasn't got the presentation; um, if someone can copy it into the chat? I'll stick around for a little bit just so that you can get the presentation.

    Ian: And then on the Miro board there are links to some videos, and there's the Storytellers United link. Just, you know, knock yourself out. And if you ever want to get hold of me, join Storytellers United and just ask questions in there, okay?

    Ian: I think that's it. I'm going to stick around, and thank you very much for entertaining my laptop. It's a brand new laptop; it's been perfect except for today. It has crashed before, and it usually takes about two minutes to get Zoom back, but this time it's just, like, stuck.

    Participant: Ian, Ian! I'm from Espoo, Finland. We met when you were at the MyData conference. I wish to tell you: thank you very much. And you gave me an example of how to behave in a stressful situation. You're very good. You did it perfectly. Don't be nervous or anything. You did it great. Yeah.

    Participant: Yeah, I second that, I second that.

    Ian: Thank you.

    Participant: And you're also getting some comments in the chat as well?

    Ian: I'd love to see them.

    Participant: I think we can export them... maybe someone could export them for you and send them over, you know, to your Slack once you have your screen back.

    Ian: Okay. Thank you. Thank you very much. Um, I can't remember who is co-hosting. I forgot your name.

    Wrangler: Me. Gracielle.

    Ian: Is there anything that I should be doing? Because I cannot see Zoom. I can see some other bits and pieces, but I can't see Zoom; it's all, like, greyed out. So is there anything I should do before I restart my machine?

    Wrangler: Uh, no, I was just going to say that... first of all, congratulations, this was a great session. Um, and I have a suggestion, if you want. We have the emergent sessions every weekday at 3 CET.

    Ian: Yes.

    Wrangler: So if you want to run your activities there, we can invite everyone that is here to join you.

    Ian: Okay, that's a good idea. That's a very good idea, actually. Um, if that's possible, that would be useful. Because, um, yeah, I literally had to do it all off the top of my head and try to remember bits of what was in my slides. I would love to. One thing I really wanted to do is the task, because I really am interested in what people would do with all of this capability. So, yeah.

    Wrangler: I agree, it would be awesome. So whenever you want, you can contact us at the Creative AI Space and we can try to publicize everything; everyone is invited already.

    Ian: Okay, great. And if you can make a copy of the chat, because I'm just, like, intrigued.

    Wrangler: We usually can't, but I'm trying to, because the recording goes to the cloud with the chat, but...

    Ian: Okay. All right. So I'm going to...

    Wrangler: I'll stop the recording and then you can reboot.

    Ian: Okay, I'm going to restart. Is there anything else I need to do just before I restart? Because I don't want to disappear and then everything's gone, and you're like...

    Read more: Creative AI at MozFest: Creatively Collaborate With Machines.

    Built with Hyperaudio Lite – part of the Hyperaudio Project.