
    online video

    Explore "online video" with insightful episodes like "Becoming An Authority On YouTube With Mark Yates From Big Man In The Woods", "14 Livestreaming Tips for Business", "Online-Videos: Wohin geht die Reise?", "MPEG Through the Eyes of Its Chairman, Leonardo Charliogne." and "Microservices – Good on a Bad Day with Dom Robinson & Adrian Roe from id3as." from podcasts like ""YouTube Creators Hub", "Online Video Made Easy", "Der Longtail Media Podcast", "The Video Insiders" and "The Video Insiders"" and more!

    Episodes (12)

    Becoming An Authority On YouTube With Mark Yates From Big Man In The Woods

    In this episode, we sit down with Mark Yates, the man behind the wildly popular YouTube channel "Big Man in the Woods," to discuss what it takes to become an authority on YouTube. Mark shares his insights on how he built his channel from scratch, the challenges he faced along the way, and his strategies for engaging and growing an audience. Whether you're a seasoned YouTube creator or just starting, you won't want to miss this deep dive into the world of YouTube content creation. Join us for an insightful conversation on how to take your YouTube channel to the next level.

    About Mark Yates

    I'm a Scout Leader from London.

    My channel is all about helping Scout Leaders be the best leader they can be.

    I give out game ideas, gear reviews, and advice on how to deal with specific situations.

    “Unlock Your YouTube Success with TubeBuddy: Skyrocket Your Channel Growth Effortlessly! 🚀 Get Your FREE 30-Day Trial NOW! 👉 Click Here!”

    🎙️ Be Our Star Guest! Share Your YouTube Journey on Our Podcast! 🌟 Click HERE to Submit Your Channel and Join the Conversation!

    🔥 Boost Your YouTube Game with Exclusive Perks! Join Our Patreon Family NOW & Unlock Access to Our Private Discord Server, Monthly Mastermind Group, and MORE! 💪 Support the Show & Elevate Your Journey!

    🎯 Transform Your YouTube Channel with Personalized Coaching & Consulting! 🚀 Unleash Your Full Potential and Skyrocket Your Success! ✨ Don’t Wait, Book Your Session NOW!

    Connect With Mark Here:

    YouTube Channel /// Website /// Instagram

    Links Discussed In This Episode

    Uscreen - With Uscreen, video creators build branded, accessible, and engaging memberships that earn sustainable revenue.

    Fiverr – Hire the right people for the jobs you need to make your YouTube life and workflow easier!

    Bluehost – If you need a website use this link to get a Free Domain Name and a great deal on hosting

    14 Livestreaming Tips for Business

    #018 - In this episode, Kerry covers 14 livestreaming tips that businesses can use to reach more clients and customers.

    These include:

    Introduce new products or services in the most exciting way possible - with a livestream show.

    Host regular Q&A sessions.

    Offer exclusive deals and discounts to your audience.

    Share behind-the-scenes footage of your business.

    Collaborate with an influencer or complementary business/service.

    Host a virtual event such as a workshop, webinar, or conference.

    Offer personalized consultations and help your customers make informed purchasing decisions.

    Share customer testimonials.

    Showcase your products or services in action.

    Share industry news and updates.

    Livestream a charity event or a community project to show your audience how you give back to the community.

    Host a virtual tour of your physical location.

    Showcase your team and introduce your team members to your audience.

    Host a virtual open house, perhaps tied to an actual open house, and give your audience an inside look at your business.

    Where To Find Kerry Online

    You can always find Kerry's smartphone tech gear recommendations right here.

    Follow Kerry Shearer "The Livestream Expert" on Instagram

    Kerry's website is KerryShearer.com

    Microservices – Good on a Bad Day with Dom Robinson & Adrian Roe from id3as.

    E07: The Video Insiders talk with a pioneering software development company that is at the center of the microservices trend in modern video workflows. Featuring Dom Robinson & Adrian Roe from id3as.

    Beamr blog: https://blog.beamr.com/2019/02/04/microservices-good-on-a-bad-day-podcast/

    Following is an unedited transcript of the episode. Enjoy, but please look past mistakes. Mark & Dror

    Intro: 00:00 The Video Insiders is the show that makes sense of all that is happening in the world of online video as seen through the eyes of a second-generation codec nerd and a marketing guy who knows what I-frames and macroblocks are. And, here are your hosts, Mark Donnigan and Dror Gill.

    Mark Donnigan: 00:22 Well, welcome back to the Video Insiders. It's so great to be here. Dror, how are you doing?

    Dror Gill: 00:29 I'm doing great and I'm really excited to do another episode of the Video Insiders. I would say this is probably the best part of my day now doing the Podcast. Although, watching video all day isn't bad at all.

    Mark Donnigan: 00:45 That's not a bad job. I mean, hey, what do you tell your kids?

    Dror Gill: 00:49 So, exactly, this is [crosstalk 00:00:52]. I work part-time out of my home office and my daughter comes in after school and she sees me watching those videos and she says, "Dad, what are you doing?" So, I said, I'm watching videos, it's part of my work. I'm checking quality, stuff like that. Then she says, "What? That's your work? You mean they pay you to do that? Where can I get a job like that? You get paid to watch TV."

    Dror Gill: 01:18 Now, of course, I'm not laid back on a sofa with some popcorn watching a full-length movie, no. I'm watching the same boring video clip again and again, the same 20- or 30-second segments, and I'm watching it with our player tool, Beamr View, typically with one half flipped over in butterfly mode. And then, you're pausing on a frame and you're looking for these tiny differences in artifacts. So, it's not exactly like watching TV in the evening, but you get to see stuff, you get to watch content, it's nice but could get tiring after a while. But, I don't think I'll ever get tired of this podcast, Mark.

    Mark Donnigan: 02:04 No, no. I know I won't. And, I think going back to what you do in your day job watching video, I think our listeners can relate to. It's a little bit of a curse, because here you are on a Friday night, you want to relax, you just want to enjoy the movie, and what do you see? All of the freaking artifacts and all the ... And, you're thinking that ABR recipe sure could have been better because I can see it just switched and it shouldn't have, anyway, I think we can all relate to that. Enough about us, let's launch into this episode, and I know that we're both super excited. I was thinking about the intro here, and one of the challenges is all of our guests are awesome, and yet it feels like each guest is like this is the best yet.

    Dror Gill: 02:56 Yeah. Really today we have two of really the leading experts on video delivery. I've been running into these guys at various industry events and conferences, they also organize conferences and moderate panels and chair sessions, and really lead the industry over the top delivery and CDNs and all of that. So, it's a real pleasure for me to welcome to today's Podcast Dom and Adrian from id3as, hi there?

    Adrian Roe: 03:26 Hey, thank you very much.

    Dom Robinson: 03:27 Hey guys.

    Adrian Roe: 03:27 It's great to be on.

    Dom Robinson: 03:28 How are you doing?

    Dror Gill: 03:29 Okay. So, can you tell us a little bit about id3as and stuff you do there?

    Adrian Roe: 03:34 Sure. So, id3as is a specialist media workflow creation company. We build large scale media systems almost always dealing with live video, so live events, be that sporting events or financial service type announcements, and we specialize in doing so on a very, very large scale and with extremely high service levels. And, both of those I guess are really crucial in a live arena. You only get one shot at doing a live announcement of any sort, so if you missed the goal because the stream glitched temporarily at that point, that's something that's pretty hard to recover from.

    Adrian Roe: 04:14 We've passionate about the climate and how that can help you build some interesting workflows and deliver some interesting levels of scale and we're primary constructors. Yeah, we're a software company first and foremost, a couple of the founders have a software background. Dom is one of the original streamers ever, so Dom knows everything there is to know about streaming and the rest of us hang on his coattails, but have some of the skills to turn that into one's a note, so work for our customers.

    Dror Gill: 04:46 Really Dom, so how far back do you go in your streaming history?

    Dom Robinson: 04:50 Well, anecdotally I sometimes like to count myself among the second or third webcasters in Europe. And interestingly, actually one of the people who's slightly ahead of me in the queue is Steve Clee, who works with you guys. So, I did the dance around Steve Clee in the mid '90s. So, yeah, it's a good 20, 23 years now I've been streaming [inaudible 00:05:12].

    Dror Gill: 05:11 Actually, I mean, we've come a long way and probably we'll talk a bit about this in today's episode. But first, there's something that really puzzles me is your tagline. The tagline of id3as is, good on a bad day. So, can you tell us a bit more about this? What do you mean by good on a bad day?

    Adrian Roe: 05:33 We think it's probably the most important single facet of how your systems behave, especially again in a live context. There are hundreds or possibly even thousands of companies out there who can do perfectly good A to B video encoding and transcoding and delivery when they're running in the lab. And, there's some great tools, open source tools, to enable you to do that, things like FFmpeg and so on. What differentiates a great service from a merely good service though is what happens when things go wrong. And especially when you're working at scale, we think it's really important to embrace the fact that things will go wrong. If you have a thousand servers running your x hundred events at any one particular time, every now and then, yeah, one of those servers is going to go up in a puff of smoke. Your network's going to fail, or a power supply is going to blow up, or whatever else it may be.

    Adrian Roe: 06:31 And so, what we think differentiates a great service from a merely good one is how well it behaves when things are going wrong or awry, and partly because of the technology we use and partly because of the background we come from. Technically, when we entered the media space, so as a company that was about eight years ago, obviously Dom's been in the space forever, but as a company it's been eight years or so, we came to it from exactly that angle of how can we ... So, our first customer was Nasdaq delivering financial announcements on a purely cloud based system, and they needed to be able to deliver SLAs to their customers that were vastly higher than the SLAs you could get for any one particular cloud service or cloud server. And so, how you can deliver a fantastic end to end user experience even when things inside your infrastructure are going wrong, we think is much more important than merely, can you do an A to B media chain?
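
    The point about delivering SLAs higher than any single cloud server's comes down to simple probability: if failures are independent, redundancy compounds availability. A minimal sketch with made-up numbers (illustrative only, not Nasdaq's or any provider's real figures):

```python
# Sketch: combined availability of N redundant, independently failing servers.
# The 99% single-server figure is invented for illustration.

def combined_availability(single: float, replicas: int) -> float:
    """Probability that at least one replica is up, assuming independent failures."""
    return 1.0 - (1.0 - single) ** replicas

single = 0.99  # one cloud server: 99% availability ("two nines")
for n in (1, 2, 3):
    print(n, round(combined_availability(single, n), 6))
```

    With two replicas the combined figure is already about 99.99%, which is why a fleet of individually unreliable servers can back an SLA no single server could meet.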

    Mark Donnigan: 07:27 That's interesting Adrian. I know you guys are really focused on microservices, and maybe you can comment about what you've built and why you're so invested in that architecture.

    Adrian Roe: 07:39 With both things, there's nothing new in technology. So, Microservices as a phrase, I guess has been particularly hot the last, I don't know, three, four years.

    Mark Donnigan: 07:49 Oh, it's the buzzy, it's the buzzy word. Dror loves buzzy words.

    Dror Gill: 07:54 Micro services, buzz, buzz.

    Mark Donnigan: 07:54 There we go. I'm afraid you have to hear the rap, you have to hear his rap. I'm telling you it's going to be number one on the radio, number one on the charts. It's going to be a hit, it's going to be viral, it's going to be [inaudible 00:08:08].

    Adrian Roe: 08:09 So, our approach to Microservices I'm afraid is grounded in the 1980s, so if we're going to do a rap at that point, I'd need to have a big bouffant hair or something in order to do my Microservices-

    Mark Donnigan: 08:18 And new eyes.

    Dom Robinson: 08:21 You left your flares in my house dude.

    Adrian Roe: 08:23 Oh, no, my spare pairs are on, it's okay. Actually, a lot of that thinking comes from the Telco space where when we were starting to get into ... In a past life I used to build online banks and big scale systems like that, but one of the things that was interesting when we came to media is actually if you've got 500 live events running, that's a big system. The amount of data flowing through that with all the different bit rates and so on and so forth is extremely high. Those 500 events might be running on a thousand servers plus in order to give you a full scale redundancy and so on and so forth, and those servers might well be spread across three, four, five different data centers in three, four, five different continents.

    Adrian Roe: 09:14 And, there are some properly difficult problems to solve in the wider space rather than specifically in the narrow space of a particular single element of that workflow. And, we did some research a while back, we said actually other people must have faced some of these challenges before. And, in particular the Telco space has faced some of these challenges for a long time, and people get so used to just being able to pick up the phone and have the call go from A to B, and the technology by and large works so well that you don't really notice it's there, which is actually another good strapline I think: technology so good you ignore it. That's what we aspire to.

    Adrian Roe: 09:51 So, we came across a technology called Erlang, which takes a whole approach to how you build systems that's different to the traditional one. As I say, in itself it's not a new technology, and that's one of the things we like about it. Basically, the problems that Erlang was trying to solve when it was created back in the '80s were specifically for things like mobile phones, where a mobile phone switch would be a whole bunch of proprietary boards, each of which could handle maybe, I don't know, five or 10 calls or something, and they'd be stuck together in a great big rack with some kind of backplane joining them all together. And, the boards themselves were not very reliable, and in order for the Telcos to be able to deliver a reliable service using this kind of infrastructure, if any one particular board blew up, the service itself had to continue, and it was really important that other calls weren't impacted and so on and so forth.

    Adrian Roe: 10:48 So, this language Erlang was invented specifically to try and solve that class of problem. Now, what was interesting is if you then wind the clock forward 20, 30 years from that particular point and you consider something like the cloud, the cloud is lots and lots of individual computers that on their own aren't particularly powerful and on their own aren't particularly reliable, but they're probably connected together with some kind of LAN or WAN that actually is in pretty good shape.

    Adrian Roe: 11:17 And, the challenges that back then were really particular to the mobile and network space suddenly become incredibly good patterns of behavior for how you can build high scale cloud systems and super reliable cloud systems. And, as is always the case with these new shiny technologies, Erlang, for example, had its moment in the sun about a year or so back when WhatsApp was bought by Facebook for $18,000,000,000 or whatever it was. I believe that WhatsApp had a total of 30 technical staff, of which only 10 were developers, and they built all of their systems on top of Erlang and got some major advantage from that.

    Adrian Roe: 11:57 And so, when we came into the whole media space, we thought that there were some very interesting opportunities that would be presented by adopting those kinds of strategies. And, what's nice is where microservices come into that: in Erlang, or the way we build systems, you have lots of single-responsibility, small bits of function, and you gather those bits of function together to make bigger, more complex bits of function, and then you gather those together to make progressively larger-scale and more complex workflows. And, that's really nice as a strategy, because people are increasingly comfortable with using microservices, where I'll have this to do my packaging and this to do my encoding, and then I'll plug these together and so on and so forth.
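
    The composition idea described above, gathering small single-responsibility bits of function into progressively larger workflows, can be sketched in a few lines. This is an illustrative Python analogy, not id3as code; the step names (ingest, encode, package) are invented:

```python
# Sketch: compose small single-responsibility steps into a larger workflow.
from functools import reduce

def ingest(frame):   return {"raw": frame}
def encode(job):     return {**job, "encoded": f"h264({job['raw']})"}
def package(job):    return {**job, "packaged": f"hls({job['encoded']})"}

def pipeline(*steps):
    """Compose steps left-to-right into one workflow function."""
    return lambda x: reduce(lambda acc, step: step(acc), steps, x)

workflow = pipeline(ingest, encode, package)
print(workflow("frame-001")["packaged"])  # hls(h264(frame-001))
```

    The appeal is that a "workflow" is just another function, so the same plugging-together applies at every level of scale.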

    Adrian Roe: 12:46 But, when your language itself is built in those kinds of terms, it gives you a very consistent way of describing about the user experience all the way through your stack. And, the sorts of strategies you have for dealing with challenges or problems that are very low level are exactly the same as the strategies you have for dealing with server outages, and so on and so forth. So, it gives you a very consistent way that you can think about the kind of problems you're trying to solve and how to go about them.

    Dror Gill: 13:10 Yeah, that's really fascinating. So basically, we're talking about building a very reliable system out of components where not all of these components are reliable all the time, and those components themselves are made out of further subcomponents, which may fail.

    Adrian Roe: 13:28 Correct, yeah.

    Dror Gill: 13:29 And then, when you employ a strategy of handling those failures and failing over to different components, you can apply that strategy at all levels of your system from the very small components to the large servers that do large chunks of work.

    Adrian Roe: 13:45 I could not have put it better myself, that is exactly right. And, you get some secondary benefits. One is I am strongly of the opinion that when you have systems as large and as complex as the media workflows that we all deal in, there will be issues. Things will go wrong, either because of physical infrastructure failure or just because of the straight complexity of the kinds of challenges you're looking to meet. So, Erlang takes an approach that says let's treat errors as a first class citizen, let's not try and pretend they're never going to happen, but let's instead have a very, very clear pattern of behavior about how you go about dealing with them, so you can deal with them in a very systematic way. And, if those errors are at a very, very micro level, then the system will probably replace the thing that's gone bad, and do so in well under a fraction of a millisecond. So, you literally don't notice.
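
    The "errors as first-class citizens" pattern, where a supervisor replaces a crashed handler while longer-lived state survives, can be sketched outside Erlang too. This is a hypothetical Python analogy (Erlang/OTP supervisors do this natively and far more robustly); the names and messages are invented:

```python
# Sketch: a supervisor restarts a crashing handler, while the long-lived
# "connection" (a stand-in for the TCP connection in the anecdote) survives.

class Handler:
    def handle(self, msg):
        if msg == "junk":
            raise ValueError("protocol violation")  # handler crashes on bad input
        return f"ok:{msg}"

def supervise(messages, connection):
    handler, restarts, out = Handler(), 0, []
    for msg in messages:
        try:
            out.append(handler.handle(msg))
        except ValueError:
            handler = Handler()  # replace only the failed handler...
            restarts += 1        # ...the connection object lives on untouched
    return out, restarts, connection

out, restarts, conn = supervise(["a", "junk", "b"], connection="tcp-1")
print(out, restarts, conn)  # ['ok:a', 'ok:b'] 1 tcp-1
```

    The point of the pattern is the scope of the restart: only the crashed part is replaced, so the junk messages cost one handler restart each, not the connection.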

    Adrian Roe: 14:41 We had one particular customer where they had a component that allowed them to patch audio into a live media workflow, and they upgraded their end of that particular system without telling us or going through a test cycle or something, which was kind of disappointing. And, a week or so after their upgrade, we were looking at just some logs from an event somewhere, and they seemed a bit noisier than usual. We couldn't work out why, and the event had been perfect, nothing had gone wrong, and we discovered that they had started to send us messages that broke part of the protocol. They were just incorrectly sending us messages as part of this audio integration that they'd done; they were just sending us junk.

    Adrian Roe: 15:24 And, the handler forwarded our end was doing what it ought to do in those particular cases that was crashing and getting itself replaced. But, because we designed the system really well, the handler and the logic for it got replaced. The actual underlying TCP connection, for example, stayed up and there wasn't a problem. And, actually we're having to restart the handler several times a second on a live two way audio connection and you literally couldn't hear that it was happening.

    Mark Donnigan: 15:49 Wow.

    Adrian Roe: 15:49 Yeah. So yeah, you can get ... But, what's nice is that exactly the same strategy and way of thinking about things works right at the other level, where I've got seven data centers and 1,000 or 1,500 servers running and so on and so forth, and it gives you a common and consistent strategy for how you reason about how you're going to behave in order to deliver a service that just keeps on running and running and running even when things go bad. I will give one example, then I'll probably let Dom share some of his views for a second. There was a reasonably famous incident a few years back when Amazon in US East just disappeared off the map for about four days, and a number of very large companies had some really big challenges with that; frankly, they were just offline for four days.

    Adrian Roe: 16:36 We had 168 servers running in US East at the time for Nasdaq, one of our customers, and we did not get a support call. All of the events that were running on there failed over to other servers that were running, typically, in US West. About five minutes later we were back in a fully resilient setup, because we'd created infrastructure in Tokyo and Dublin and various other data centers, so we'd have been covered had US West disappeared off the face of the earth as well. Again, we might've got a support call the second time around, but we literally read about it in the papers the next day.
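
    The failover behaviour described, where events in a failed region are rehomed to surviving regions, amounts to a placement function over region health. A simplified sketch with invented region names and health data (not id3as's or Nasdaq's actual topology):

```python
# Sketch: reassign events from a failed region to the first healthy alternative.

def place_events(events, regions, healthy):
    """Map each event to its preferred region, falling back to the first
    healthy region (in priority order) when the preferred one is down."""
    placement = {}
    for event, preferred in events.items():
        if healthy.get(preferred):
            placement[event] = preferred
        else:
            placement[event] = next(r for r in regions if healthy.get(r))
    return placement

regions = ["us-east", "us-west", "tokyo", "dublin"]
healthy = {"us-east": False, "us-west": True, "tokyo": True, "dublin": True}
print(place_events({"earnings-call": "us-east"}, regions, healthy))
# {'earnings-call': 'us-west'}
```

    Running the same function again after marking us-west unhealthy would shift the event to Tokyo, which is the "second time around" resilience mentioned above.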

    Mark Donnigan: 17:06 That's pretty incredible. Are there any other video systems platforms that are architected on Erlang, or are you guys the only ones?

    Adrian Roe: 17:15 The only other one I am aware of out of the box is a company that specializes more in the CDN and final content delivery side of things, so we're not quite unique, but we are certainly highly, highly unusual.

    Mark Donnigan: 17:28 Yeah. Yeah. I did want to go to Dom, and Dom, with your experience in the industry, I'm curious what you're seeing in terms of how companies are architecting their workflows. Are you getting involved in, I guess, evolutionary projects, that is, you're extending existing platforms and in some cases probably shoehorning legacy approaches, solutions, technologies, et cetera, to try and maybe bring them to the cloud or provide some sort of scale or redundancy that they need? Or, are people just re-architecting and building from the ground up? What are people doing out there and what specifically are your clients doing in terms of-

    Dom Robinson: 18:20 So, it's interesting. I did a big review of the microservices space for Streaming Media Magazine, which came out I think in the October edition this year, and that generated quite a lot of conversations and panel sessions and so on. We've been approached by broadcasters who have established working workflows, and they're sometimes quite testy because they've spent a lot of time and they're emotionally quite invested in what they might have spent a decade building and so on. So, they often come with quite testy challenges: what advantages would this bring me? And quite often, there's very little advantage in just making the change for the sake of making the change. The value really comes when you're trying to scale up or take benefit from scaling down. So, with a lot of our financial news clients the cycle of webcasts is, if you like, strongly quarterly; it's all about financial reporting at the end of financial quarters. So, they often want to scale down their infrastructure during the quiet weeks or quiet months because it saves them costs.

    Dom Robinson: 19:25 Now, if you're doing 24/7 linear broadcasting, the opportunity to scale down may simply never present itself, you just don't have the opportunity to scale down. Scaling up is a different question, but if it's 24/7, there's no real advantage to scaling down, and this is true of cloud as much as it is of microservices specifically. But, when people come to us and say, right, we really want to make that migration, they sometimes start with the premise that they'd like to take tiny little pieces of the workflow and just migrate those in tiny incremental steps. In some cases we may do that, but we tend to try to convince them to actually build a microservice or virtualized architecture to run in parallel. So, quite often we might start with the client by proposing that they look at a virtualized disaster recovery strategy in the first instance. And then, what happens is after the first disaster, they never go back to their old infrastructure.

    Mark Donnigan: 20:21 I'm sure, yeah.

    Dom Robinson: 20:22 And after that, they suddenly see they have all the benefits: it is reliable, and despite the fact that they have no idea where on earth this is physically happening, it's working and it works really reliably. And, when it goes wrong, they can conjure up another one in a matter of seconds or minutes. These benefits are not apparent until the broadcaster actually puts them into use. I spent 20 years trying to convince the broadcast industry that IP was going to be a thing, and then overnight they suddenly embraced it fully. With these things people do have epiphanies, and they suddenly understand the value.

    Dom Robinson: 20:56 Disaster recovery has been a nice way to make people feel comfortable, because it's not a suggestion that one day we're going to turn off your trusted, familiar, nailed-down tin and move it all into something where you have no idea where it is, what it's running on, how it's running and so on. People are naturally risk averse in taking that type of leap of faith, but once they've done it, they almost invariably see the benefits and so on. So, it's about waiting for the culture in the larger broadcasters to actually place that confidence in, if you like, the internet era. To be cynical, I used to make testy comments on panel sessions about the over-50s or over-60s, I don't know where you want to put your peg in there. Once those guys finally let internet natives take control, that's when the migration happens.

    Mark Donnigan: 21:48 Yeah, that's interesting. I can remember going back, oh, 10 years or more, and sitting in certain sessions at the Cable Show, which no longer exists, where Cisco was presenting virtualized network functions. The room would always be packed, and sitting in these sessions you'd have a sense like this is really happening. This is, wow, this is really happening, and all the biggest MSOs were there, all the people were there, right? And then, you'd come back the next year, it'd be the same talk, the same people in the room, then come back the next year after that and nobody was [crosstalk 00:22:25], because it's the future.

    Dom Robinson: 22:23 Yeah, absolutely.

    Dror Gill: 22:28 It was always the future, that's what I was making fun of.

    Mark Donnigan: 22:30 Now, the switch has absolutely flipped and we're seeing that even on the codecs side, because there was a time where unless you were internet native as you said, you needed a full solution, a black box. It had to go on a rack, it had to ... That's what they bought. And so, selling a codec alone was a little bit of a challenge, but now they can't use black boxes, and they're ... So.

    Dom Robinson: 22:58 Sometimes I liken it to the era of hi-fi as digital audio and MP3 started to arrive; I was quite involved in MP3 as it emerged in the mid '90s. And, over the last two decades I have flip-flopped from being the musician's worst enemy to best friend to worst enemy to best friend, depending on the mood of the day. I was reflecting, and this is a bit of a tangent, but I was reflecting when you guys were talking about watching for artifacts in videos. I've spent so long watching 56K blocky video that Adrian, Nick and Steven, the rest of the team, never ever let me give any opinion on the quality of video, because I'm quite happy watching a 56K video projected on my wall three meters wide and it doesn't bother me, but I'm sure Dror would be banging his head against the wall if he [inaudible 00:23:47] videos.

    Dror Gill: 23:49 No, I also started with 56K video and RealVideo, and all of those players back in the '90s, but I managed to upgrade myself to SD and then to HD, and now if it's not HDR, it's difficult to view. But in any case, if we look at this transition that is happening, there are several levels to it. I mean, first of all, you make the transition from hardware to software, then from the software to the cloud, and then from regular software running in the cloud in VMs to this kind of microservices architecture with Docker. And, when I talk to customers they say, yeah, we need it as a Docker, we're going to do everything as a Docker. But then, as Mark said, you're not always sure if they're talking about the far future, the near future, or the present, and of course it changes if you're talking to the R&D department or the people who are actually doing the day to day production.

    Adrian Roe: 24:51 There were some interesting ... And, this may be a slightly unpopular thing to say, but I think Docker is fantastic, and yeah, we use it on a daily basis in development. It's great: on my laptop I can simulate a cluster of eight servers doing stuff and failing over between them and so on and so forth, and it's fantastic. And, we've had Docker-based solutions in production for four years, five years, certainly a long time, and yet actually we were starting to move away from Docker as a delivery platform.

    Dror Gill: 25:22 Really? That's interesting. So, you were in the post Docker era?

    Adrian Roe: 25:26 Yes, I think just as other people are getting very excited that their software can run on Docker, which I always get confused with announcements like that, because Docker is essentially another layer of virtualization, and strangely enough people first all got excited because their software would run not on a machine but on a virtual machine and it takes quite a strange software requirement before the software can really even tell the difference between those. And then, you move from a virtual machine to a Docker type environment.

    Adrian Roe: 25:52 Yeah. Docker of course is conceptually nothing new; yeah, it's a wrapper around something the Linux kernel has been able to do for 10 years or so. And, it gives you certain guarantees about isolation, and that one sandbox isn't going to interfere with another sandbox and so on and so forth. And, if those things are useful to you, then absolutely use Docker to solve those business problems.

    Adrian Roe: 26:13 And another thing that Docker can do that again solves a business problem for me when I'm developing is I can spin up a machine, I can instantiate a whole bunch of stuff, I can create virtual networks between them, and then when I rip it all down my laptop's back in pretty much the same state as it was before I started, and I have some guarantees around that. But especially in a cloud environment where I've got a premium job coming in of some sort, I'll spin up a server to do that, and probably two servers in different locations to be able to do that. And, they'll do whatever they need to do, and yeah, there'll be some complex network flows and so on and so forth to deliver that.

    Adrian Roe: 26:48 And then, when that event's finished, what I do is I throw that server in the bin. And so, actually Docker there typically is just adding an extra abstraction layer, and that abstraction layer comes at a cost, in particular in terms of disk I/O and network I/O, that for high quality video workflows you want to go into with your eyes open. And so, when it's solving a business problem for you, I think Docker is a fantastic technology, and some very clever people are involved and so on and so forth. But I think there's a massive amount of Kool-Aid being drunk around Docker where it's actually adding complexity and essentially no value.

    Dror Gill: 27:25 So, I would say that if you have a business problem, for example you have Linux and Windows servers, it's a given, you can't change that infrastructure, and you want to deploy a certain task to certain servers and you want it to work across them seamlessly with those standard interfaces that you mentioned, then Docker could be a good solution. On the other hand, what you're saying is that if I know that my cluster is fully Linux, a certain version of Ubuntu or whatever, because that's how I set it up, there's no advantage in using Docker, because I can plan the workload on each one of those servers, and at the level of cloud instances launch and terminate them, and then I don't need Docker. And on the issue of overhead, we haven't seen a very large overhead for Docker when we compare it to running natively. However, we did find that if your software is structured in a certain way, it can increase the overhead of Docker beyond the average.

    Dom Robinson: 28:31 Something important came up in some of the panels at Streaming Media West and Content Delivery World recently on this topic: at the moment people talk about Microservices and Docker synonymously, and that's not true. Just because something's running in Docker does not mean you're running a Microservices architecture. In fact if you dig under the ... All too often-

    Dror Gill: 28:50 Right, it could be one huge, thick server that's just running on Docker.

    Dom Robinson: 28:54 Exactly. All too often people have simply dropped their monolith into a Docker container and called it a Microservice, and that's a ... Well, I won't say it on your Podcast, but that's not true. And, I think that's very important, hence we very much describe our own Erlang based architecture as a Microservices architecture. Docker, as Adrian was explaining, is nice to have in certain circumstances, in some it's essential, but in other circumstances it's just not relevant to us. So, it is important to understand that Docker is a type of virtualization and has nothing to do with Microservices architecture; it's a very different thing. So, well, Adrian might kick me under the virtual table.

    Adrian Roe: 29:27 No, no, that's all ... Yeah, there's a lot of people who will say if you take a monolithic application and Microservicize it, what you have is a monolithic application that's now distributed. So, you've taken a hard problem and made it slightly harder.

    Dom Robinson: 29:44 Exactly.

    Adrian Roe: 29:45 So, what's probably more important is that you have good tools and skills and understanding to deal with the kinds of challenges you get in distributed environments. And, actually understanding your own limitations is interesting there. I think if you look at how you coordinate stuff within a particular application, then Microservices are a great way of structuring individual applications; they can cooperate, they're all in the same space, you can replace bits of them, and that's cool. And then, if you look at one particular server, again, your Microservices architecture there might go, okay, this component is in an unhealthy state, I'm going to replace it with a clean version, and you can do that in very, very quick time and that's all fantastic.

    Adrian Roe: 30:33 And then, maybe even if I'm running in some kind of local cluster, I can make similar decisions. But as soon as I'm running in some kind of cluster, you have to ask the question: what happens if the network fails? What's the probability of the network failing? And if it does, what impact is that going to have on my service? Because it's typically just as bad to have two servers trying to deliver the same instance of the same live service as it is to have none, because there'll probably be network floods and all sorts of bad things can happen as a result.

    Adrian Roe: 31:08 And then, if you look at a system that's distributed over more than one data center, then just going, oh, I can't see that other Microservice that's part of my overall delivery, and making decisions based on that, is something you need to do extremely carefully. There's an awful lot of academic work done around consensus algorithms in the presence of network splits and so on and so forth, and it's not until you understand the problem quite well that you actually understand how damned hard the problem is. Your naive understanding of it is, oh, how hard can it be just to have three servers agree on which of them should currently be doing x, y, z job? It turns out it's really, really, really hard, so stand on the shoulders of giants: there's some amazing work done by the academic community over the last few decades, go and leverage the kind of solutions they've put together to help facilitate that.
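
The "three servers agreeing on which of them should be doing x, y, z job" problem is, in practice, usually solved by taking a time-bounded lease from a strongly consistent store (etcd, ZooKeeper, Consul) rather than hand-rolling consensus. Here is a toy Python sketch of the lease idea; all names and numbers are invented for illustration, and the genuinely hard part that Paxos and Raft address (keeping the store itself consistent across a network partition) is deliberately faked as a single in-process object:

```python
import time

class LeaseStore:
    """Toy stand-in for a consistent store (etcd/ZooKeeper) granting a
    leadership lease. A real deployment replaces this object with a
    replicated store running a consensus protocol."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self.holder = None
        self.expires_at = 0.0

    def try_acquire(self, node_id):
        now = self.clock()
        # Grant if the lease is free, expired, or a renewal by the holder.
        if self.holder is None or now >= self.expires_at or self.holder == node_id:
            self.holder = node_id
            self.expires_at = now + self.ttl
            return True
        return False

# Simulate three servers racing to own the same live-encode job.
fake_time = [0.0]
store = LeaseStore(ttl_seconds=5.0, clock=lambda: fake_time[0])

winners = [n for n in ("srv-a", "srv-b", "srv-c") if store.try_acquire(n)]
assert winners == ["srv-a"]  # exactly one server holds the job

fake_time[0] = 6.0           # lease expires; srv-a presumed dead
assert store.try_acquire("srv-b")
```

The TTL is what prevents the "two servers delivering the same live service" failure mode Adrian describes: a presumed-dead leader stops acting once its lease expires, before a replacement is granted the job.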

    Dom Robinson: 31:59 I think one of the upsides of Docker, though, is it has subtly changed how dev teams are thinking, because it represents the ability to build these isolated processes and think about passing data between processes rather than just sharing data the way a monolith might have done. I think that started people architecting in a Microservices way. People think that that's a Docker thing, but it's not; Docker is more of a catalyst than the thing actually bringing about the Microservices architecture.

    Mark Donnigan: 32:33 That's interesting, Dom. I was literally just about to make that point, or ask the question even. I wonder if Docker is the first step towards a truly Microservices architecture for a lot of these organizations. I think Adrian did a great job of breaking down the fact that a lot of what is getting sold or assumed to be Microservices really isn't, but in reality it's kind of that next step towards a Microservices architecture. And, it sounds like you agree with that.

    Dom Robinson: 33:09 Yeah, yeah, yeah. I think it's part of the path, but it's a-

    Mark Donnigan: 33:12 That's right.

    Dom Robinson: 33:13 Going back to my original statement Doc-

    Adrian Roe: 33:13 I'm not sure I'd even put it that strongly; it's an available tool in this space.

    Mark Donnigan: 33:18 It's an available tool, yeah.

    Adrian Roe: 33:18 You can absolutely build Microservices without Docker being anywhere near them. Yeah.

    Mark Donnigan: 33:24 Sure. Absolutely. Yeah. I wasn't saying that Docker's a part of that, but I'm saying if you come from this completely black box environment where everything's in a rack, it's in a physical location, the leap to a truly Microservices architecture is massive. I mean, it's disruptive on every level.

    Adrian Roe: 33:46 And, it's a great tool, it's part of that journey. I completely do agree with that.

    Mark Donnigan: 33:48 Yeah, exactly. Exactly. Well, this leads into a topic that's really hot in the industry right now, and that's low latency. I was chuckling as I walked around Streaming Media West just a couple of weeks ago: I don't think there was one booth, maybe there was one I just didn't see, maybe the Panasonic camera booth, that didn't have low latency plastered all over it. Every booth: low latency, low latency.

    Adrian Roe: 34:16 There's some interesting stuff around low latency because there's a beautiful reinvention of the wheel happening because, [crosstalk 00:34:28].

    Mark Donnigan: 34:29 Well, let's talk about this, because maybe we can pull back a little bit of the, I don't know, the myths that are out there right now. And also, I'd like to have a brief, real honest conversation about what low latency actually means. I think that's one of the things where, again, everybody's head nods: low latency, oh yeah, yeah, we want that too. But then you're like, what does it mean?

    Dror Gill: 34:57 Yeah, everybody wants it. Why do they want it is an interesting question. And, I heard a very interesting theory today, because all the time you hear about this effect where, if you're watching a soccer game with a lot of latency because you're viewing it over the internet, and somebody else has cable or satellite and views it before you, then you hear all those roars at the goal from around the neighborhood, and this annoys the viewer.

    Dror Gill: 35:25 So, today I heard another theory, that this is not the real problem of latency, because to block those roars you can just insulate your house or put on headphones or whatever. The real problem I heard today is that if there's a large latency between when the game actually happens and when you see it, then you cannot affect the result of the game. Okay? So, the theory goes like this: you're sitting at home, you're wearing your team's shirt, you're a fan, and you're sitting in a lucky position that will help your team. If the latency is high, then anything you do cannot affect the game because it's too late; but if the latency is low, you'll have some effect over the result of the game.

    Adrian Roe: 36:13 When TiVo was brand new, and the first personal digital video recorders were a thing, they had this fantastic advert where somebody was watching an American football game. They're in sudden-death overtime and the kicker is just about to attempt a 45-yard kick. If it goes over they win the game, and if it doesn't they lose the game. The kicker's just running up towards it, and the viewer hits pause on the live stream, runs off to the church, prays for half an hour, comes back, and the kick is good.

    Dror Gill: 36:47 Oh, so that's the reason for having a high latency.

    Adrian Roe: 36:55 It's interesting. Our primary business is in broadcast distribution, as in over-the-air type distribution, but we do a bunch of the hybrid TV services, and as part of that we actually have to do the direct hand-off to a bunch of the TVs and set top boxes and so on and so forth, principally because the TVs and set top boxes are so appallingly behaved in terms of the extent to which they follow standards. So, in order to deliver the streams to a Freeview Plus HD TV in the UK, we just deliver them a broadcast quality transport stream as a progressive download. This has been live in the field for, I don't know, seven years or something, and entirely without trying to, we have an end to end latency of around two seconds from the original signal coming off the satellite to when the viewer in the home sees it on the TV. And nowadays, that would be called super low latency and actually clever and remarkable and so on and so forth. And actually, it's primarily created by the lack of segmentation.

    Mark Donnigan: 38:01 That's right.

    Adrian Roe: 38:03 What's happened is that you used to have RTMP streams. It depended a little bit on how much buffering you had in the player and so on, but an end to end video workflow based around RTMP would typically have a latency of five, six seconds; that was normal and nobody would really comment on it. And now that you have segment oriented distribution mechanisms like HLS and DASH and all these kinds of things, people talk about low latency and suddenly they mean five to 10 seconds and so on and so forth. And, that's actually all been driven by the fact that, I think, by and large CDNs hate media, and they want to pretend that all media assets are in fact JPEGs or JavaScript files and so on and so forth.

    Dror Gill: 38:48 Or webpages.

    Adrian Roe: 38:49 Exactly.

    Dror Gill: 38:50 Yeah, like small chunks of data that's what they know how to handle best.

    Adrian Roe: 38:52 Exactly. And so, the people distributing the content like to treat them as static assets, and they all have their infrastructures built around the very, very efficient delivery of static assets, and that creates high, high latency. So, you then get emerging technologies like WebRTC, which we use heavily in production. For example, one of our customers is a sports broadcaster whose customers can deliver their own live commentary over WebRTC, and it basically doesn't add any latency to the process, because we'll hand off a low latency encode of the feed over WebRTC to wherever the commentator is, and the commentator will view the stream and commentate.

    Adrian Roe: 39:34 In the meantime, we're doing a really high quality encode. In fact, this might be a mutual customer, but I probably won't say their name on air. We do a really high quality encode of that same content in the meantime, and by the time we get the audio back from the commentator, we just mix that in with the crowd noise, add it to the video that we've already encoded at that point, and away you go. You're pretty much getting live commentary for free in terms of end to end latency. And in sports, so we should be using WebRTC, we should be in this ...

    Adrian Roe: 40:05 The problem is, CDNs don't like WebRTC, not least because it's a connection oriented protocol. You can't just serve the same thing to everybody: you've got to have separate encryption keys, it's all peer to peer and so on and so forth, and so it doesn't scale using their standard models. And so, most of the discussion around low latency, as far as I can tell, is about the extent to which you can pretend that your segmented assets are in fact live streams. Akamai has this thing where they'll start playing a segment before it's finished and so on and so forth. Well actually, that starts to look an awful lot like a progressive download at that point.
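
The segmentation point can be made with simple arithmetic: in segment-oriented delivery the player typically won't start until it has several whole segments in hand, so segment duration dominates glass-to-glass latency. A rough Python sketch; the function name and the one-second encode and CDN figures are illustrative assumptions, not measurements:

```python
def glass_to_glass_latency(segment_s, player_buffer_segments,
                           encode_s=1.0, cdn_propagation_s=1.0):
    """Rough glass-to-glass latency estimate for segmented delivery.

    The dominant term is segment duration times the number of segments a
    player buffers before starting: the packager must finish writing a
    whole segment before the CDN can serve it, and players typically hold
    several segments in hand.
    """
    return encode_s + cdn_propagation_s + segment_s * player_buffer_segments

# Classic HLS: 10 s segments, 3 buffered -> roughly half a minute behind live.
hls = glass_to_glass_latency(segment_s=10, player_buffer_segments=3)

# RTMP era: no segmentation, just a couple of seconds of player buffer
# (modeled here as three 1 s chunks of buffered stream).
rtmp = glass_to_glass_latency(segment_s=1, player_buffer_segments=3)

print(hls, rtmp)  # segmentation, not the codec, drives the gap
```

With these assumed numbers the segmented path lands around 32 seconds and the unsegmented one around 5, matching the "five, six seconds was normal for RTMP" observation above.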

    Mark Donnigan: 40:41 That's a great point, absolutely. And, what I find, as I've walked around Streaming Media West and looked at websites and read the marketing material of everybody who has a low latency solution, is that with a few exceptions nobody's addressing the end to end factor of it. So, it cracks me up when I see an encoding vendor really touting low latency, low latency, and I'm sitting here thinking, I mean, Dror, what are we, like 20 milliseconds? How much more low latency can you get than that?

    Dror Gill: 41:19 Yeah, at the codec level it is very low.

    Mark Donnigan: 41:21 Yeah, at the codec level. And then, when you begin to abstract out, of course the process adds time, right? But still, the point is ... I guess part of what I'm reacting to, and what I'm looking for even in your response, is that end to end view. Yes, but addressing latency end to end is really complicated, because now, just as you said, Adrian, you have to look at the CDN, and you have to look at what you're doing on packaging, and you have to look at even your player architecture, like progressive download: some players can deal with that, great, other players can't. So, what do you do?

    Dom Robinson: 42:04 So, stepping back and taking a reasonably long game view of the evolution of the industry over here in the UK, and in Europe in general: low latency has been a thing for 15, 20 years. And, the big thing that's changed, and why low latency is all over the global, US driven press, is the deregulation of the gambling market; that's why everyone's interested in low latency. Over here in the UK, we've had online gambling on live sports for 15, 20 years. I used to run a CDN from 2001 to the end of the 2000s, and all the clients were interested in was fast start for advertising on VOD assets and low latency for betting delivery. And obviously, low latency is important because the lower the latency, the later you can shut your betting gates. If you've got ten second segments and three segments to wait, you've got to shut your betting maybe a minute or half a minute before the race finishes, or before the race starts, whichever way you're doing the betting.

    Dom Robinson: 43:14 And, that was very important over here. You didn't have an online gambling market in the States until last year, I believe, and so low latency just really wasn't very interesting. People were really only interested in "can I actually deliver reliably to a big audience" rather than "can I deliver this, even to small audiences, with low latency, because I've got a betting market going on." And, as that betting deregulation has come in, suddenly all the US centric companies have become really fascinated by whether they can shorten that latency and so on and so forth. That's why companies over here 15, 20 years ago, some of the big sports broadcasters and so on, were using RTMP extensively: so that they could run their betting gates until the last second, and it really ramps up the amount of betting in those few seconds before the race starts.

    Dom Robinson: 44:03 So, that's why it's important. It's not for any other reason. In fact, I sometimes rather sourly ask audiences if they have really ever heard their neighbors cheering at a football game before they've seen it themselves, because if it's an important game, the kind where people socially gather around the TV and your neighbors might have their TV on loud enough to hear, then frankly you've got a TV and it's on as well.

    Dom Robinson: 44:28 The real benchmark of the whole thing is: can you beat the tweet? That's the measurable thing. There's absurdly little data in a tweet, and a lot of tweets are machine generated: a goal is scored and it doesn't even take a fan in the stadium to type it and send it to his friends, it's just instantly updated. Trying to beat a few packets of data across the world, compared to trying to compress video, get it buffered, get it distributed across probably two or three stages of workflow, decoded in the player and rendered? You're never going to beat the tweet at that level. So, really the excitement is about betting, the deregulation of the betting and gambling market.

    Dror Gill: 45:06 So, that's interesting. Today you don't measure the latency between an over-the-air broadcast and an over-the-internet broadcast; you want to beat another over-the-internet broadcast, which is the very small packets of a tweet.

    Adrian Roe: 45:22 Exactly right.

    Dror Gill: 45:23 You're actually competing with the social networks and other broadcast networks.

    Dom Robinson: 45:26 Exactly.

    Adrian Roe: 45:28 I remember, slightly tongue in cheek, when WhatsApp was bought, they were boasting about the number of messages they dealt with a day, and it was a very large number, billions of messages a day. And, I did a little back-of-the-envelope calculation: based on the adage that a picture is worth a thousand words, across all the various different events and channels and live sports and stuff like that we cover, if you counted a thousand words for every frame of video that we delivered, we were two orders of magnitude higher than WhatsApp.

    Dror Gill: 46:07 So, yeah, in your small company you had more traffic than WhatsApp.

    Adrian Roe: 46:11 Yeah.

    Dror Gill: 46:13 A picture is worth a thousand words, and then you have 25 or 50 pictures every second, and this is across all of your channels. So, yeah [crosstalk 00:46:22].
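
Adrian's back-of-the-envelope claim is easy to sanity-check. A Python sketch, where every input is an illustrative assumption (not an id3as or WhatsApp figure): a thousand "words" per frame, 25 fps, an assumed thousand concurrent channels, and WhatsApp messages counted in the tens of billions per day:

```python
# Sanity-check the "two orders of magnitude higher than WhatsApp" claim.
# All inputs are illustrative assumptions, not real company figures.
words_per_frame = 1_000                      # "a picture is worth a thousand words"
fps = 25                                     # European frame rate
channels = 1_000                             # assumed concurrent channels/events
seconds_per_day = 24 * 60 * 60

video_words_per_day = words_per_frame * fps * channels * seconds_per_day
whatsapp_messages_per_day = 20_000_000_000   # "billions a day", order of magnitude

ratio = video_words_per_day / whatsapp_messages_per_day
print(f"{ratio:.0f}x")                       # roughly two orders of magnitude
```

With these assumed inputs the ratio comes out around 100x, which is exactly the "two orders of magnitude" being claimed; the point of the exercise is the scale of video, not the precision of any single number.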

    Mark Donnigan: 46:21 That's a lot of words. It made me chuckle. Well, this is-

    Dror Gill: 46:27 We always say video is complicated and now we know why.

    Mark Donnigan: 46:32 Exactly. Well, this has been an amazing discussion, and I think we should bring it to a close. I'd really like your perspective, Adrian and Dom: you're working with broadcasters and presumably sitting right in the middle of this OTT transition. Dom, I know you mentioned that for 20 years you'd been evangelizing IP, and now finally it's a thing, everybody gets it. But, I'm just curious, maybe you can share with the listeners some trends that you're seeing. How is a traditional broadcaster, someone who's operating a more traditional infrastructure, et cetera, adopting OTT into their workflows? Are they building parallel workflows? Are some forklifting and making the full IP transition? I think this is a great conversation to end with.

    Adrian Roe: 47:25 I think we're right at the cusp of exactly that. So, none of our customers are doing it side by side if they are full blown traditional broadcasters. I think increasingly a lot of our customers who deliver exclusively over the internet would also consider themselves broadcasters, so the parlance is perhaps slightly out of date, but one of the things that I think is really interesting is some of the cultural challenges that come out of this. So, one of our customers is a full blown traditional broadcaster. When you're dealing with fault tolerant, large scale systems of the sort that id3as builds, one of the things that's a given is that it's going to be a computer that decides which server is going to be responsible for which particular thing: this is BBC One's encoder, this is ITV's encoder or whatever. It's going to be a computer that makes those decisions, because a computer can react in milliseconds if one of those services is no longer available and reroute it somewhere else.

    Adrian Roe: 48:28 And, this wasn't a public cloud implementation, it was a private cloud implementation: they had a couple of racks of servers with data management infrastructure on top that was doing all of the dynamic allocation and fault tolerance and all this clever stuff. And they said, so when we're showing our customers around, if Channel 4 comes around, how can we tell them which is their encoder? And we said, you can't. There isn't a Channel 4 encoder; there's an encoder that might be doing the job.

    Adrian Roe: 48:55 And, one of the features we had to add to the product, just to get over the cultural hurdle with them, was the concept of a preferred encoder. So, if everything was in its normal happy state, then yes, this particular encoder, halfway down on the right hand side of rack three, was going to be the one doing Channel 4. It's just those simple things: people do still think in terms of appliances and raw iron and so on and so forth, and there are challenges in moving away from that into cloud thinking. Whether you're actually on the cloud or not, cloud thinking still applies. It's funny where people trip up.

    Dom Robinson: 49:36 One of my bugbears in the industry, and I'm a bit of a pedant with some of the terminology that gets used, is the term OTT. Having spent a good long while playing with video and audio distribution over IP networks, I struggle to think of any broadcast technology which doesn't use IP at some point in either its production or distribution workflow; there just isn't any now. If you're watching live news, the contribution feeds are coming over cell phones, which are contributing over some sort of streaming protocol. Or on a film or TV program production, people are emailing files, or they're dropboxing files, or they're sending them through digital asset management systems, or however it may be.

    Dom Robinson: 50:20 But, the programs are being created using IP and have been for quite a while, and increasingly nobody replaces technology with some sort of proprietary non IP based tool these days, at any level in the broadcast industry. So I do everything I can to try to avoid using the word OTT. And being a pedant about it, OTT simply means the paywall is outside of the last mile access network. That's all it means. It has nothing whatsoever to do with video distribution or streaming or anything like that. It's simply to do with where you take your payment from somebody.

    Dom Robinson: 50:57 So, Netflix has a hybridized side, but you generally access Netflix through an ISP, and when you make your payment, you pay Netflix directly. You don't pay through your ISP; that is an OTT service. Skype is an OTT service. Again, you connect through your phone service, your cable service, whatever it may be, but you actually subscribe directly with Skype; that is a true OTT service, and that's what OTT means. It's become, in the last eight years, synonymous with streaming, and I can't think of a broadcast network which doesn't at some point use IP, either streaming or file transfer based technologies, to compose the program.

    Dom Robinson: 51:37 So, broadcast is streaming, streaming is broadcast; they have been synonymous for over a decade. It is how you collect the payment which defines something as OTT, and it may well be that you can receive a video stream outside of one particular ISP's network, but that doesn't really mean anything. So, this battle between broadcast and OTT is, for me, a meaningless distinction about where you're collecting payments. It really doesn't have any bearing on the technologies that we all work with, which are video compression and distribution and so on.

    Mark Donnigan: 52:11 That's brilliant. That is really, really a smart observation and analysis there Dom. Well, I think we should wrap it up here. We definitely need to do a part two. I think we will have you guys back, there's so much more we could be talking about, but I want to thank our amazing audience, without you the Video Insiders Podcast would just be Dror and me talking to ourselves.

    Dror Gill: 52:38 Buzzing to ourselves some buzzy words.

    Mark Donnigan: 52:40 Buzzy words, buzzing, buzzing, taking up bits on a server somewhere. This has been a production of Beamr Imaging Limited. You can subscribe at thevideoinsiders.com, or you can listen to us on Spotify, on iTunes, on Google Play, and more platforms coming soon. And, if you'd like to try out Beamr codecs in your lab or production environment, we're actually giving away up to 100 hours of HEVC and H.264 encoding every month. Just go to beamr.com/free, that's F-R-E-E, to get started. And until next time, thank you and have an awesome day encoding video.
    Speaker 1: 53:30 Thank you for listening to the Video Insiders Podcast, a production of Beamr Limited. To begin using Beamr's codecs today, go to https://beamr.com/free to receive up to 100 hours of no cost HEVC and H.264 transcoding every month.

    2018, the Year HEVC Took Flight with Tim Siglin.


    E04: In this episode, The Video Insiders catch up with industry expert Tim Siglin to discuss HEVC implementation trends that counter previous assumptions, notable 2018 streaming events, and what's coming in 2019.

    The following blog post first appeared on the Beamr blog at: https://blog.beamr.com/2019/01/01/2018-the-year-hevc-took-flight/

    By now, most of us have seen the data and know that online video consumption is soaring at a rate that is historically unrivaled. It’s no surprise that in the crux of the streaming era, so many companies are looking to innovate and figure out how to make their workflows or customers workflows better, less expensive, and faster.

    In Episode 4 of The Video Insiders, we caught up with streaming veteran Tim Siglin to discuss HEVC implementation trends that counter previous assumptions, notable 2018 streaming events, and what’s coming in 2019.
    Tune in to hear The Video Insiders cover top-of-mind topics:

    HEVC for lower resolutions
    Streaming the World Cup
    Moving from digital broadcast to IP-based infrastructure
    What consumers aren’t thinking about when it comes to 4K and HDR
    Looking forward into 2019 & beyond
    Tune in to Episode 04: 2018, the Year HEVC Took Flight or watch the video below.

    Want to join the conversation? Reach out to TheVideoInsiders@beamr.com

    TRANSCRIPTION (lightly edited to improve readability only)

    Mark Donnigan: 00:00 On today’s episode, the Video Insiders sit down with an industry luminary who shares results of a codec implementation study, while discussing notable streaming events that took place in 2018 and what’s on the horizon for 2019. Stay tuned. You don’t want to miss receiving the inside scoop on all this and more.

    Announcer: 00:22 The Video Insiders is the show that makes sense of all that is happening in the world of online video, as seen through the eyes of a second generation codec nerd and a marketing guy who knows what I-frames and macroblocks are. Here are your hosts, Mark Donnigan and Dror Gill.

    Mark Donnigan: 00:40 Welcome, everyone. I am Mark Donnigan, and I want to say how honored Dror and I are to have you with us. Before I introduce this very special guest and episode, I want to give a shout of thanks for all of the support that we’re receiving. It’s really been amazing.

    Dror Gill: 00:58 Yeah. Yeah, it’s been awesome.

    Mark Donnigan: 00:59 In the first 48 hours, we received 180 downloads. It’s pretty amazing.

    Dror Gill: 01:06 Yeah. Yeah, it is. The industry is not that large, and I think it's really an amazing number, that people are already listening to the show from the start, before the word of mouth comes out and people spread the news and things like that. We really appreciate it. So, if it's you that is listening, thank you very much.

    Mark Donnigan: 01:29 We really do aim for this to be an agenda-free zone. I guess we can put it that way. Obviously, this show is sponsored by Beamr, and we have a certain point of view on things, but the point is, we observed there wasn’t a good place to find out what’s going on in the industry and sort of get unbiased, or maybe it’s better to say unfiltered, information. That’s what we aim to do in every episode.

    Mark Donnigan: 01:57 In this one, we’re going to do just that. We have someone who you can definitely trust to know what’s really happening in the streaming video space, and I know he has some juicy insights to share with us. So, without further ado, let’s bring on Tim Siglin.

    Tim Siglin: 02:15 Hey, guys. Thank you for having me today and I will definitely try to be either as unfiltered or unbiased as possible.

    Mark Donnigan: 02:21 Why don’t you give us a highlight reel, so to speak, of what you’ve done in the industry and, even more specifically, what are you working on today?

    Tim Siglin: 02:31 Sure. I have been in streaming now for a little over 20 years. In fact, when Eric Schumacher-Rasmussen came on as the editor at StreamingMedia.com, he said, “You seemed to be one of the few people who were there in the early days.” It’s true. I actually had the honor of writing the 10-year anniversary of Streaming Media articles for the magazine, and then did the 20-year last year.

    Tim Siglin: 02:57 My background was motion picture production, and then I got into video conferencing. As part of video conferencing, we were trying to figure out how to include hundreds of people in a video conference without necessarily having them have two-way feedback. That's where streaming caught my eye, because ultimately, for video conferencing we maybe needed 10 subject matter experts who would talk back and forth, and altogether a hundred people; then it went to thousands, and now hundreds of thousands, who can listen in and use something like chat or polling to provide feedback.

    Tim Siglin: 03:31 For me, the industry went from the early revolutionary days of “Hey, let’s change everything. Let’s get rid of TV. Let’s do broadcast across IP.” That was the mantra in the early days. Now, of course, where we are is sort of, I would say, two-thirds of the way there, and we can talk a little bit about that later. The reality is that the old mediums are actually morphing to allow themselves to do IP, which is good, to compete with over the top.

    Tim Siglin: 04:01 Ultimately, what I think we’ll find, especially when we get to pure IP broadcast with ATSC 3.0 and some other things for over-the-air, is that we will have more mediums to consume on rather than fewer. I remember the early format wars, and of course we’re going to talk some in this episode about some of the newer codecs like HEVC. Ultimately, it seems like the industry goes through cycles of player wars, format wars, browser wars, operating system wars, and we hit brief periods of stability, which we’ve had with AVC, or H.264, over the last probably eight years.

    Tim Siglin: 04:46 Then somebody wants to stir the pot and figure out how to do it better, less expensively, or faster, and we go back into a cycle of trying to decide what the next big thing will be. In terms of what I’m working on now: last year, after almost 21 years in the industry, I helped start a not-for-profit called Help Me Stream, which focuses on working with NGOs in emerging economies, trying to help them actually get into the streaming game to get their critical messages out.

    Tim Siglin: 05:18 That might be emerging economies like African economies, South America, and just the idea that we in the first world have streaming down cold, but there are a lot of messages that need to get out in emerging economies and emerging markets that they don’t necessarily have the expertise to do. My work is to tie experts here with need there and figure out which technologies and services would be the most appropriate and most cost effective.

    Mark Donnigan: 05:46 That’s fascinating, Tim.

    Tim Siglin: 05:48 The other thing I’m working on here, just briefly, is we’re getting ready for the Streaming Media Sourcebook, the 2019 sourcebook. I’m having to step back for the next 15 days, take a really wide look at the industry, and figure out what the state of affairs is.

    Dror Gill: 06:06 That’s wonderful. I think this is exactly the right point in time, as one year ends and the next one begins, to kind of summarize where we’ve been in 2018 and what the state of the industry is. The fact that you’re doing that for the sourcebook ties in very nicely with our desire to hear from you an overview of the major milestones or advancements that were made in the streaming industry in 2018, and then to look into next year.

    Dror Gill: 06:39 Obviously, the move to IP is getting stronger and stronger. Now it’s the third phase: after analog and digital, we have broadcast over IP. It’s interesting what you said about broadcasters not giving up the fight with the pure OTT content providers. They have a huge business. They need to keep their subscribers, lower their churn, and keep people from cutting the cord, so to speak.

    Dror Gill: 07:04 The telcos and the cable companies still need to provide the infrastructure for Internet, on top of which the over-the-top providers deliver their content, but they also need to offer more television and VOD content in order to keep their subscribers. It’s very interesting to hear how they’re doing it and how they are upgrading themselves to the era of IP.

    Tim Siglin: 07:30 I think, Dror, you hit a really major point. I just finished an article on ATSC 3.0 where I talk about using 2019 to prepare for 2020, when it will go live in the U.S. The heavy lift was the analog-to-digital conversion. The slightly easier lift is the conversion from digital to IP, but it still requires significant infrastructure upgrades, and even new transmission equipment, to do it correctly for the over-the-air broadcasters and cable.

    Dror Gill: 08:07 That’s right. On the other hand, I think there is one big advantage to broadcast, even broadcast over-the-air, and that is the ability to reach millions, tens of millions, hundreds of millions of people over a single channel that everybody is receiving. Whereas, for historic and legacy reasons in IP, we are still limited to doing everything over unicast when we deliver to the end user. When you do this, it creates a tremendous load on your network, and you need to manage your CDNs.
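    To make the unicast load Dror describes concrete, here is a back-of-the-envelope sketch. The audience size and bit rate are illustrative numbers, not figures from the episode:

```python
# Unicast egress grows linearly with the audience, while one
# over-the-air broadcast signal serves every receiver in range.
bitrate_mbps = 5                      # per-viewer stream bit rate (illustrative)
viewers = 10_000_000                  # a large live event (illustrative)

unicast_egress_gbps = viewers * bitrate_mbps / 1000
broadcast_egress_mbps = bitrate_mbps  # constant, regardless of audience size

print(unicast_egress_gbps)  # 50000.0 -> 50 Tbps of CDN capacity for 10M viewers
```

    The asymmetry is the whole point: the broadcast side of the equation never changes as the audience grows.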

    Dror Gill: 08:46 I think we’ve witnessed in 2018, on one hand, very large events being streamed to record audiences. But, on the other hand, some of them really failed in terms of user experience. It wasn’t what viewers expected, because of the high volume of users, and because more and more people have discovered the ability to stream things over IP to their televisions and mobile devices. Can you share with us some of the experiences that you’ve had, some of the things that you’re hearing about in terms of these big events where they had failures, and what were the reasons for those failures?

    Tim Siglin: 09:30 I want to reiterate the point you made on the OTA broadcast. It’s almost as if you have read the advanced copy of my article, which I know you haven’t because it’s only gone to the editor.

    Dror Gill: 09:42 I don’t have any inside information. I have to say, even though we are the Video Insiders.

    Mark Donnigan: 09:47 We are the Video Insiders. That’s right.

    Dror Gill: 09:49 We are the Video Insiders, but …

    Mark Donnigan: 09:49 But no inside information here.

    Dror Gill: 09:51 No inside information. I did not steal that copy.

    Tim Siglin: 09:55 What I point out in that article, Dror, which I think will come out in January shortly after CES, is basically this: we in the streaming industry, the OTT space, have done a good job of pushing the traditional mediums to upgrade themselves. As you say with OTA, that ability to do essentially a multicast from a tower wirelessly is a really, really good thing. To get to scale, and I think about things like the World Cup, the Olympics, and even the presidential funeral that happened here in December, there are large-scale events that we in the OTT space just can’t handle if you’re talking about having to build the capacity.

    Tim Siglin: 10:39 The irony is, one good ATSC transmission tower could hit as many people as we could handle essentially globally with the unicast (OTT) model. If you look at things like that and then you look at things like EMBMS in the mobile world, where there is that attempt to do essentially a multicast, and it goes to points like the World Cup. I think one of the horror stories in the World Cup was in Australia. There was a mobile provider named Optus who won the rights to actually do half of the World Cup preliminary games. In the first several days, they were so overwhelmed by the number of users who wanted to watch and were watching, as you say, in a unicast model that they ended up having to go back to the company they had bid against who had the other half of the preliminaries and ask them to carry those on traditional television.

    Tim Siglin: 11:41 The CEO admitted that it was such a spectacular failure that it damaged the brand of the mobile provider. Instead of the name Optus, everybody was referring to it as “Floptus.” You don’t want your brand being the butt of jokes for an event that only happens once every four years and that has a number of devotees in your market. And heaven forbid it had been a cricket World Cup; there would have been riots in the streets of Sydney and Melbourne. Thank goodness it was Australia with soccer as opposed to Australia with cricket.

    Tim Siglin: 12:18 It brings home the point that we talk about scale, but it’s really hard to get to scale in a unicast environment. The other event, this one happened, I believe, in late 2017, was the Mayweather fight that was a large pay-per-view event that was streamed. It turned out the problem there wasn’t as much the streams as it was the authentication servers were overwhelmed in the first five minutes of the fight. So, with authentication gone, it took down the ability to actually watch the stream.

    Tim Siglin: 12:53 For us, it’s not just about the video portion of it, it’s actually about the total ecosystem and who you’re delivering to, whether you’re going to force caps into place because you know you can’t go beyond a certain capacity, or whether you’re going to have to partner up with traditional media like cable service providers or over-the-air broadcasters.

    Mark Donnigan: 13:14 It’s a really good point, Tim. In the World Cup, the coverage that I saw, it was more of, I’d almost say or use the phrase, dashed expectations. Consumers, they were able to watch it. In most cases, I think it played smoothly. In other words, the video was there, but HDR signaling didn’t work or didn’t work right. Then it looked odd on some televisions or …

    Tim Siglin: 13:40 In high frame rate …

    Tim Siglin: 13:43 20 frames a second instead of 60 frames a second.

    Mark Donnigan: 13:48 Exactly. What’s interesting to me is that the consumer, of course, isn’t walking around thinking as we are about frame rate and color space and resolution. But they’re getting increasingly sensitive, to the point where they can look at video now and say, “That’s good video,” or “That doesn’t look right to me.” I know we were talking before we started recording about this latest Tom Cruise public service announcement, which is just super fascinating, because it …

    Tim Siglin: 14:24 To hear him say motion interpolation.

    Mark Donnigan: 14:26 Yeah. Maybe we should fill in the audience, since it literally just came out, I think today, even. Do you want to tell them what Tom Cruise is saying?

    Tim Siglin: 14:38 Essentially, Tom Cruise, on the set as they’re shooting the new Top Gun, along with another gentleman, did a brief PSA of about a minute asking people to turn off motion interpolation on their televisions. Motion interpolation essentially takes 24-frame-per-second content and converts it to 30 frames per second by adding phantom frames in the middle. Because Mission Impossible: Fallout was just being released for streaming, Cruise, and obviously others, was concerned that some of the scenes would not look nearly as good with motion interpolation turned on.
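    As a rough illustration of what the television is doing, here is a naive sketch of 24-to-30 fps up-conversion using simple frame blending. Real TVs use motion-compensated interpolation rather than plain blends, and the function name and dummy frames here are purely illustrative:

```python
import numpy as np

# Naive frame-rate up-conversion: every 4 source frames become 5 output
# frames, with the in-between "phantom" frames synthesized by blending
# the two nearest source frames.
def upconvert_24_to_30(frames):
    out = []
    n_out = len(frames) * 30 // 24
    for i in range(n_out):
        t = i * 24 / 30                # position on the source timeline
        a, frac = int(t), t - int(t)
        if frac == 0 or a + 1 >= len(frames):
            out.append(frames[a])      # lands on (or past) a real frame
        else:                          # phantom frame: weighted blend
            out.append((1 - frac) * frames[a] + frac * frames[a + 1])
    return out

src = [np.full((2, 2), v, dtype=float) for v in range(8)]  # 8 dummy frames
print(len(upconvert_24_to_30(src)))  # 8 source frames -> 10 output frames
```

    The blended phantom frames are exactly what gives film the “soap opera effect” the PSA complains about.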

    Tim Siglin: 15:17 I think, Mark, we ought to go to a PSA model, asking for very particular things like, “How do you turn HDR on? How do you …” Those types of things, because they get attention in a way that you and I, or a video engineer, can’t.

    Dror Gill: 15:33 How do you know if what you’re getting is actually 4K or interpolated HD, for example?

    Tim Siglin: 15:38 Especially in our part of the industry, because we will call something OTT 4K streaming. That may mean that it fits in a 4K frame, but it doesn’t necessarily mean it’s that number of pixels being delivered.

    Dror Gill: 15:52 It can also mean that the top layer in your adaptive bit rate stream is 4K, but then if you don’t have enough bandwidth, you’re actually getting the HD layer or even lower.

    Tim Siglin: 16:01 Exactly.

    Dror Gill: 16:02 Even though it is a 4K broadcast and it is 4K content. Sometimes, you can be disappointed by that fact as well.
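    The rung-selection behavior Dror describes can be sketched like this. The ladder, bit rates, and safety margin below are hypothetical and not any specific player’s algorithm:

```python
# Minimal ABR sketch: pick the highest ladder rung whose bit rate fits
# within a safety margin of the measured throughput.
LADDER = [  # (label, bitrate in kbps) -- illustrative numbers
    ("720p", 3000),
    ("1080p", 6000),
    ("2160p", 16000),
]

def pick_rung(throughput_kbps, safety=0.8):
    usable = throughput_kbps * safety   # leave headroom for variability
    best = LADDER[0]                    # worst case: lowest rung
    for label, rate in LADDER:
        if rate <= usable:
            best = (label, rate)
    return best

# A "20 megabit package" rarely sustains its nominal rate to the player,
# so even a 4K subscriber can end up on the HD rung:
print(pick_rung(12000))
```

    This is why a viewer can be “getting 4K” in the sense of the manifest while actually watching the HD layer.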

    Mark Donnigan: 16:11 I have to tell a very, very funny story directly related to this. It happened probably, I don’t know, at least 18 months ago, maybe two years ago. I’m sitting on an airplane next to this guy. It’s the usual five minutes of getting acquainted before we both turn on our computers. When someone asks, “What do you do?” I generally just say, “I work for a video software company,” because how do you explain video encoding? Most people just sort of stop at that and don’t really ask more.

    Mark Donnigan: 16:44 But this guy is like, “Oh, really?” He said, “So, I just bought a 4K TV and I love it.” He was raving about his new Samsung TV. Of course, he figured that as a video guy I would appreciate that. I said, “So, you must subscribe to Netflix.” “Yes. Yes, of course,” he says. I said, “What do you think of the Netflix quality? It looks great, doesn’t it?”

    Mark Donnigan: 17:10 He sort of hemmed and hawed. He’s like, “Well, it really … I mean, yeah. Yeah, it looks great, but it’s not quite … I’m just not sure.” Then I said, “I’m going to ask you two questions. First of all, are you subscribed to the 4K plan?” He was. Then I said, “How fast is your Internet at home?” He’s like, “I just have the minimum. I don’t know. I think it’s the 20 megabit package,” or whatever it was. I don’t remember the numbers.

    Mark Donnigan: 17:38 I said, “There’s this thing.” And I gave him like a 30-second primer on adaptive bit rate, and I said, “It is possible, I have no idea of your situation, that you might be watching the HD version.” Anyway, he’s like, “Hah, that’s interesting.” I connect with the guy on LinkedIn. Three days later, I get this message. He says, “I just upgraded my Internet. I now have 4K on my TV. It looks awesome.”

    Mark Donnigan: 18:04 On one hand, the whole situation was not surprising, and yet, how many thousands, tens of thousands, maybe millions of people are in the exact same boat? They’ve got this beautiful TV. It could be because they’re running some low-end router in the house. It could be they truly have a low-end bandwidth package. There could be a lot of reasons why they’re not getting the bandwidth. They’re so excited about their 4K TV, they’re paying Netflix to get the top layer, the best quality, and they’re not even seeing it. It’s such a pity.

    Tim Siglin: 18:37 I had a TSA agent ask me that same question, Mark, when I came through customs. I’m like, “Sure. I’ll stand here and answer that question for you.” The router was actually what I suggested he upgrade, because he said his router was an old unit.

    Mark Donnigan: 18:53 In a lot of homes, it’s a router that’s 15 years old, and it just isn’t up to the task.

    Tim Siglin: 18:58 But it brings out the point that even as we’re talking about newer codecs and better quality, even if we get 4K content to a lower sweet spot in streaming bandwidth, or, as we found in the survey we worked on together, use HEVC for 1080p or 720p, if the routers and the software in the chain are not updated, the delivery quality will suffer. And people who have a tuned television and have seen its consistent quality aren’t certain what to do to fix that when they use an over-the-top service.

    Tim Siglin: 19:34 I think this is a key for 2019. As we prepare for ATSC 3.0 on over-the-air broadcast where people will be able to see pristine 4K, it will actually force those of us in the OTT space to up our game to make sure that we’re figuring out how to deliver across these multiple steps in a process that we don’t break.

    Dror Gill: 19:54 You really see ATSC 3.0 as a game-changer in 2019?

    Tim Siglin: 19:59 What I see it as is the response from the broadcast industry to, A) say that they’re still relevant, which I think is a good political move. And, B) it provides the scale you were talking about, Dror. See, I think what it does is it at least puts us in the OTT space on notice that there will be in certain first world countries a really decent quality delivery free of charge with commercials over the air.

    Tim Siglin: 20:31 It takes me back to the early days of video compression when, if you had a good class-one engineer and an analog NTSC transmission system, they could give you really good quality if your TV was tuned correctly. It only meant having to tune your TV; it didn’t mean having to tune your router, your cable modem, and the settings on your TV. I think that’s where the game-changer may be: those tuner cards, which will send HDR signaling and things like that with the actual transmission, are going to make it much easier for the consumer to consume quality in a free scenario. That part of it is a potential game-changer.

    Mark Donnigan: 21:19 That’s interesting. Tim, we worked together earlier this year on an industry survey that I think would be really, really interesting for listeners to hear about. Shall we pivot into that? Maybe you can share some of the findings.

    Tim Siglin: 21:38 Why don’t you take the lead on why Beamr wanted to do that? Then I’ll follow up with some of the points that we got out of it.

    Mark Donnigan: 21:46 Obviously, we are a codec developer. It’s important for us to always be addressing the market the way the market wants to be addressed, meaning that we’re developing technologies, solutions, and supporting standards that are going to be adopted. Clearly, especially if we rewind a year or even 18 months, AV1 had just recently launched, and there were still questions about VP9.

    Mark Donnigan: 22:19 Obviously, H.264/AVC is the standard, used everywhere. We felt, “Let’s go out to the industry. Let’s really find out what the attitudes are, what the thinking is, what’s going on behind closed doors, and what people are actually doing.” Are they building workflows for these new advanced codecs? How are they going to build those workflows? That was the impetus, if you will.

    Mark Donnigan: 22:49 We were very happy, Tim, to work with you on that, and of course Streaming Media assisted us with promoting it. That was the reason we did it. I know some of the findings were pretty predictable, shall we say, no surprises, but there were some things that I think were maybe a little more surprising. So maybe you’d like to share some of those.

    Tim Siglin: 23:12 Yeah, I’ll hit the highlights on that. Let me say, too, one of the things I really like about this particular survey: there was another survey going on right around that time that essentially asked, “Are you going to adopt HEVC?” The approach we took with this survey was to say, “Okay. Those of you who’ve already adopted HEVC, what are the lessons we can learn from that?”

    Tim Siglin: 23:36 We didn’t exclude those who were looking at AV1 or some of the other codecs, even VP9, but we wanted to know about the people who used HEVC. Were they using it in pilot projects? Were they thinking about it? Were they using it in actual production? What we found in the survey is that AVC, or H.264, was still clearly dominant in the industry, but that the ramp-up to HEVC was moving along much faster than at least I believed. Mark, when we started the survey question creation, which was about a year ago, before launching it in early 2018, I told you I expected we wouldn’t see a whole lot of people using HEVC in production.

    Tim Siglin: 24:23 I was pleasantly surprised to say that I was wrong. In fact, I think you mentioned in our recent Streaming Media West interview that there was a statistic you gave about the number of households that could consume HEVC. Was it north of 50%?

    Mark Donnigan: 24:40 Yeah, it’s more than 50%. What’s interesting about that number is that that actually came from a very large MSO. Of course, they have a very good understanding of what devices are on their network. They found that there was at least one device in at least 50% of their homes that could receive and decode, playback, HEVC. That’s about as real world as you can get.

    Tim Siglin: 25:06 What was also fascinating to me in this study was that we asked open-ended questions, which is what I’ve done in research projects for the last 25 years, in both video conferencing and streaming. One of the questions we asked was, “Do you see HEVC as only a 4K solution, or do you see it as an option for lower resolutions?” It turned out, overwhelmingly, people said, “We not only see it for 4K. We see it for high-frame-rate (HFR) 1080p and standard-frame-rate 1080p, with some HDR.”

    Tim Siglin: 25:40 Not a majority, but a large number of respondents said they would even see it as a benefit at 720p. Because our respondents included a large number of video engineers as well as people in business development, what that tells me is that companies know, as they scale, because of the unicast problem Dror pointed out in the beginning, that scaling with a codec that consumes more bandwidth is a good way to lose money. Kind of like the joke that the way a rich man can lose money really fast is to invest in an airline.

    Tim Siglin: 26:19 If indeed you get to scale with AVC, you could find yourself with a really large bill. That look at HEVC, not just for 4K, HDR, or high frame rate in the future, but also for 1080p with some HDR and high frame rate, tells me that the codec itself, or the promise of the codec, was actually really good. What was even more fascinating to me was the number of companies with AVC pipelines that were looking to integrate HEVC into those same production pipelines.

    Tim Siglin: 26:55 It was much easier from a process standpoint to integrate HEVC into an AVC pipeline, in other words, H.265 into an H.264 pipeline, than it was to go out of house and look at something like AV1 or VP9, because the work done on HEVC builds on what was already in place for AVC. Of course, you’ve got Apple, who has HLS, HTTP Live Streaming, and a huge ecosystem of iPhones, iPads, laptops, and desktops supporting HEVC, not just as a standard for video delivery, but also with the HEIC/HEIF image format, with all of their devices now shooting images using HEVC compression instead of JPEG. That in and of itself drives forward adoption of HEVC. I think you told me that since that survey came out, probably seven months ago now, you all have continued to see the model of all-in HEVC adoption.

    Dror Gill: 28:03 This is what we promote all the time. It’s kind of a movement. Are you all in HEVC or are you doing it just for 4K, just where you have to do it? We really believe in all-in HEVC. Actually, this week, I had an interesting discussion with one of our customers who is using our optimization product for VOD content, to reduce bit-rate of H.264 (streams). He said, “I want to have a product. I want to have a solution for reducing bit-rates on our live channels.”

    Dror Gill: 28:32 So, I asked them, “Okay. Why don’t you just switch your codec to HEVC?” He said, “No, I can’t do that.” I said, “Why not?” He said, “You know compatibility and things like that.” I asked, “Okay. What are you using? What are you delivering to?” He said, “We have our own set-top boxes (STB), IP set-top boxes which we give out to our customers. Well, these are pretty new.” So, they support HEVC. I’m okay there. “Then we have an Apple TV app.” “Okay, Apple TV has a 4K version. So, it supports HEVC. All of the latest Apple TV devices have HEVC. That’s fine.” “Then we have smartphone apps, smart TV apps for Android TV and for the LG platform.”

    Dror Gill: 29:15 “Obviously, those TVs support 4K, so I’m okay there. And when delivering to mobile devices, all the high-end devices already support HEVC.” He was estimating that around 50 to 60% of his viewers are using devices that are HEVC capable. Suddenly, he’s thinking, “Yeah, I can do that. I can go all in on HEVC. I will continue, of course, to support H.264 for all of the devices that don’t support HEVC. But if I can save 50% of the bandwidth to 50 or 60% of my customers, that’s a very big savings.”
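    The savings arithmetic here can be written out explicitly. The percentages are the hypothetical ones from this conversation, not measured figures:

```python
# If ~55% of sessions can decode HEVC (midpoint of the 50-60% estimate),
# and HEVC saves ~50% of the bits for those sessions at equivalent
# quality, the overall egress saving is the product of the two.
hevc_capable_share = 0.55
hevc_bitrate_saving = 0.50

overall_saving = hevc_capable_share * hevc_bitrate_saving
# roughly 27-28% of total delivery bandwidth saved, with no change for
# the H.264-only devices, which keep receiving the AVC streams
```

    Note that the saving grows automatically as the HEVC-capable share of devices grows, with no further work on the operator’s side.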

    Mark Donnigan: 29:48 What’s interesting about this conversation, Dror, is that, first of all, I’m pretty certain the operator you’re talking about is different from the operator I mentioned, and they found the exact same thing. This is a consistent theme: in developed parts of the world, it really is true that 50% or more of users can receive HEVC today. And this number is only growing. It’s not static. Next year, I don’t know if that number will be 60% or 70%, but it’s going to be even bigger.

    Mark Donnigan: 30:27 What’s fascinating is that, as we said earlier, the consumer is getting more aware of quality, and more aware of when they’re being underserved. Some operators are serving the lowest common denominator, which is to say, “AVC works across all my devices.” And it’s true: AVC works on all the high-end devices equally well. But you’re underserving a large and growing number of your users.

    Mark Donnigan: 31:01 If your competitors are doing the same, then I guess you could say, “Well, who are they going to switch to?” But there are some fast-moving leaders in the space who are either planning to offer or will shortly be offering better quality. They’re going to be extending HEVC into lower resolutions, and therefore lower bit rates, and consumers are going to begin to notice: “Wait a second. This service my friend has, or this other subscription in our household, how come the video looks better there?” And they just begin to migrate. I think it’s really important in these conversations to make this point: don’t underserve your consumer in an effort to be something to everybody.

    Tim Siglin: 31:57 I would add two other quick things to that, Mark. One is, we’ve always had this conversation in the industry about the three-legged stool of speed, quality and bandwidth in terms of the encoding.

    Mark Donnigan: 32:09 That’s right.

    Tim Siglin: 32:09 Two of those are part of the consumer equation: quality and bandwidth. Oftentimes, we’ve had to make the decision between quality and bandwidth. If the argument is that HEVC as it stands right now, having had a couple of years of optimization, can get us to, let’s say, a 40% bandwidth reduction for equivalent quality, let’s not even say 50%, why wouldn’t you switch over to something like that?

    Tim Siglin: 32:39 Then the second part, and I have to put in a plug for what Eric Schumacher-Rasmussen and the Streaming Media team did at Streaming Media West by having Roger Pantos come and speak, Roger Pantos being, of course, the inventor of HLS. I’m not a huge fan of HLS, just because of the latency issues, but he pointed out in his tutorial around HLS that you can put two different codecs in a manifest file. There is absolutely no reason an OTT provider could not provide both HEVC and AVC within the same manifest file and then allow the consumer device to choose.

    Tim Siglin: 33:22 When Dror mentioned the company with the set-top boxes they give away, they could easily set a flag in those boxes to say, “If you’re presented with a manifest file that has AVC and HEVC, go with HEVC to lower the overall bandwidth.” The beauty is that at this point it’s a technical implementation issue, not a question of “can we make it work?”, because we know it works based on HLS.
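    What Pantos describes looks roughly like this in an HLS master playlist. The bit rates, codec strings, and file names below are hypothetical examples, not taken from any real service:

```
#EXTM3U
#EXT-X-VERSION:7

# H.264/AVC rendition: every client can play this
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080,CODECS="avc1.640028"
avc_1080p/index.m3u8

# HEVC rendition of the same content at a lower bit rate; HEVC-capable
# clients can select this variant and cut delivery bandwidth
#EXT-X-STREAM-INF:BANDWIDTH=3300000,RESOLUTION=1920x1080,CODECS="hvc1.2.4.L123.B0"
hevc_1080p/index.m3u8
```

    The device-side flag Tim mentions then reduces to a simple rule: among the variants whose CODECS string the decoder supports, prefer the HEVC one.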

    Mark Donnigan: 33:54 This is excellent. Tim, let’s wrap this up. As I knew it would be, this has just been an awesome conversation. Thank you for sharing all your years of collective experience to give some insight into what’s happening in the industry. Let’s look at 2019. You’ve made references to ATSC 3.0, and some of our listeners will be going to CES. Maybe there are some things they should be looking at or keeping their eyes open for. What can you tell us about 2019?

    Tim Siglin: 34:35 Here’s what I think 2019 is bringing. In the cloud computing space, and you all are part of this conversation at Beamr, we’ve moved from cloud-based solutions that were not at parity with on-premise solutions to actually reaching parity in 2018 between what you could do on-premises versus in the cloud. Now, in 2019, I think we’ll start seeing a number of features in cloud-based services, whether it’s machine learning (the popular nomenclature is AI, but I really like machine learning as a much better descriptor), real-time transcoding of live content, or the ability to simultaneously output AVC and HEVC like we’ve been talking about here, where the cloud-based solutions move beyond parity with the on-premise solutions.

    Tim Siglin: 35:35 There will always be a need for on-premise deployments, from a security standpoint, in certain industries, but I don’t think that will inhibit cloud adoption in 2019. If people are going to CES, one of the things to look for is a big leap in power-consumption savings for mobile devices. I’m not necessarily talking about smartphones, because the research I’ve done says the moment you turn GPS on, you lose 25% of your battery. Tablets have the potential to make a resurgence in a number of areas for consumers, and I think we’ll see some advances in battery capacity.

    Tim Siglin: 36:19 Part of that goes to HEVC, which, as we know, is a much harder codec to decode. I think the consumer companies are being forced to think about power consumption as HEVC becomes more mainstream. That’s something people should pay attention to as well. Then, finally, HDR and surround-sound solutions, especially object-based placement like Dolby Atmos and some others, will become much more mainstream as a way to sell flat panels and surround-sound systems.

    Tim Siglin: 36:56 We’ve sort of languished in that space. 4K prices have dropped dramatically in the last two years, but we’re not yet ready for 8K. I think we’ll see a trend toward fixing some of the audio problems. In the streaming space, to fix those audio problems, we need to be able to encode and encapsulate into the standard surround-sound models. Those are three areas I would suggest people pay attention to.

    Mark Donnigan: 37:25 Well, thank you for joining us, Tim. It’s really great to have you on. We’ll definitely do this again. We want to thank you, the listener, for supporting the Video Insiders. Until the next episode. Happy encoding!

    Announcer: 37:39 Thank you for listening to the Video Insiders Podcast, a production of Beamr Imaging Limited. To begin using Beamr’s codecs today, go to Beamr.com/free to receive up to 100 hours of no cost HEVC and H.264 transcoding every month.

    Codec Efficiency Is in the Eye of the Measurer with Mark Donnigan & Dror Gill.

    Is AV1 more efficient than HEVC? Dror & Mark get into the middle of a 3 against 1 standoff over whether AV1 is actually more efficient than HEVC.

    The following blog post first appeared on the Beamr blog at: https://blog.beamr.com/2018/11/23/codec-efficiency-is-in-the-eye-of-the-measurer-podcast/

    When it comes to comparing video codecs, it’s easy to get caught up in the “codec war” mentality. If analyzing and purchasing codecs was as easy as comparing fuel economy in cars, it would undoubtedly take a lot of friction out of codec comparison, but the reality is that it’s not that simple.

    In Episode 02, The Video Insiders go head-to-head comparing two of the leading codecs in a three against one standoff over whether AV1 is more efficient than HEVC.

    So, which is more efficient?

    Listen in to this week’s episode, “Codec Efficiency Is in the Eye of the Measurer,” to find out.

    Want to join the conversation? Reach out to TheVideoInsiders@beamr.com.

    TRANSCRIPTION (lightly edited to improve readability only)

    Mark Donnigan: 00:41 Hi everyone, I am Mark Donnigan and I want to welcome you to episode two of the Video Insiders.

    Dror Gill: 00:48 And I am Dror Gill. Hi there.

    Mark Donnigan: 00:50 In every episode of the Video Insiders we bring the latest inside information about what’s happening in the video technology industry from encoding, to packaging, to delivery, and playback, and even the business behind the video business. Every aspect of the video industry is covered in detail on the Video Insiders podcast.

    Dror Gill: 01:11 Oh yeah, we usually do cover everything from pixels, to blocks, to macroblocks, to frames, to sequences. We go all the way up and down the video delivery chain and highlight the most important things you should know before you send any video bits over the wire.

    Mark Donnigan: 01:28 In our first episode we talked about a very hot topic, one where you might ask, “Hasn’t this kind of been worn out?” The whole HEVC versus AV1 discussion. But I think it was very interesting. I sure enjoyed the talk. What about you Dror?

    Dror Gill: 01:47 Yeah, yeah, yeah. I sure did. It was great talking about the two leading codecs. I don’t want to say the word, codec war.

    Mark Donnigan: 01:58 No, no, we don’t believe in codec wars.

    Dror Gill: 01:59 We believe in codec peace.

    Mark Donnigan: 02:00 Yeah, that’s true. Why is it so complicated to compare video codecs? Why can’t it be as simple as fuel economy of cars, this one gets 20 miles per gallon and that one gets 30 and then I make a decision based on that.

    Dror Gill: 02:15 I wish it was that simple with video codecs. In video compression you have so many parameters to consider. You have the encoding tools, tools are grouped into what’s called profiles and levels, or as AV1 calls them “experiments.”

    Mark Donnigan: 02:31 Experiments, mm-hmm…

    Dror Gill: 02:35 When you compare the codecs, which profiles and levels do you use? What rate control method? Which specific parameters do you set for each codec? And each codec can have hundreds and hundreds of parameters. Then there is the question of implementation. Which software implementation of the codec do you use? Some implementations are reference implementations that are used for research, and others are highly performance-optimized commercial implementations. Which one do you select for the test? And then, which operating system, what hardware do you run on, and obviously what test content? Because encoding two people talking, or encoding an action scene for a movie, is completely different.

    Dror Gill: 03:13 Finally, when you come to evaluate your video, what quality measure do you use? There are various objective quality measures, and some people use actual human viewers who assess the subjective quality of the video. On that front also, there are many possibilities that you need to choose from.

    Mark Donnigan: 03:32 Yeah, so many questions and no wonder the answers are not so clear. I was quite surprised when I recently read three different technical articles, published at IBC actually, effectively comparing AV1 versus HEVC, and I can assume that each of the authors did their research independently. What was surprising was they came to the exact same conclusion: AV1 has the same compression efficiency as HEVC. This is surprising because some other studies, and one in particular (I think we’ll talk about it), say the contrary. So can you explain what this means exactly, Dror?

    Dror Gill: 04:16 By saying that they have the same compression efficiency, this means that they can reach the same quality at the same bitrate, or the other way around: you need the same bitrate to reach that same quality. If you need, for example, two and a half megabits per second to encode an HD video file using HEVC at a certain quality, then with AV1 you would need roughly the same bitrate to reach that same quality, and this means that AV1 and HEVC provide the same compression level. In other words, this means that AV1 does not have any technical advantage over HEVC because it has the same compression efficiency. Of course that’s if we put aside all the royalty issues, but we discussed that last time. Right?
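    The comparison Dror describes can be made concrete with a small sketch: interpolate each codec’s rate-quality curve at a common quality level and compare the bitrates needed. The snippet below is illustrative only; the codec labels and numbers are made up, not measurements from the episode or the papers discussed.

    ```python
    import numpy as np

    # Hypothetical (bitrate in kbps, quality score) points for two codecs.
    # These numbers are invented for illustration.
    codec_a = [(1000, 34.0), (2000, 37.0), (3000, 39.0)]
    codec_b = [(1000, 34.5), (2000, 37.5), (3000, 39.5)]

    def bitrate_at_quality(points, target_quality):
        """Linearly interpolate the bitrate needed to hit a target quality."""
        rates, quals = zip(*points)  # quality values must be increasing
        return float(np.interp(target_quality, quals, rates))

    target = 37.0
    rate_a = bitrate_at_quality(codec_a, target)
    rate_b = bitrate_at_quality(codec_b, target)
    savings = 100.0 * (rate_a - rate_b) / rate_a
    print(f"Codec B needs {savings:.1f}% less bitrate at quality {target}")
    ```

    “Same compression efficiency” simply means this savings figure comes out near zero at the quality points that matter. Industry comparisons use the more elaborate Bjøntegaard delta (BD-rate), which averages this difference over the whole curve, but the underlying idea is the same.
    
    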

    Mark Donnigan: 04:56 That’s right. The guys who wrote the three papers that I’m referencing are really top experts in the field. It’s not a seminar paper written by a student, not to downplay those papers, but the point is these are professionals. One was written by the BBC in cooperation with the Multimedia and Vision Group at Queen Mary University of London. I think nobody is going to say that the BBC doesn’t know a thing or two about video. The second was written by Ateme, and the third by Harmonic, both leading vendors.

    Mark Donnigan: 05:29 I actually pulled out a couple of phrases from each that I’d like to quote. First the BBC and Queen Mary University, here is a conclusion that they wrote, “The results obtained show in general a similar performance between AV1 and the reference HEVC both objectively and subjectively.” Which is interesting because they did take the time to both do the visual assessment as well as use a quality measure.

    Mark Donnigan: 06:01 Ateme said, “Results demonstrate AV1 to have equivalent performance to HEVC in terms of both objective and subjective video quality test results.”

    Dror Gill: 06:10 Yeah, very similar.

    Mark Donnigan: 06:16 And then here is what Harmonic said, “The findings are that AV1 is not more advantageous today than HEVC on the compression side and much more complex to encode than HEVC.” What do you make of this?

    Dror Gill: 06:32 I don’t know, it sounds pretty bad to me, and two of those papers also analyzed subjective quality, so they used actual human viewers to check out the quality. But Mark, what if I told you that researchers from the University of Klagenfurt in Austria, together with Bitmovin, published a paper which showed completely different results. What would you say about that?

    Mark Donnigan: 06:57 Tell me more.

    Dror Gill: 06:58 Last month in Athens I was at the ICIP conference, that’s the IEEE International Conference on Image Processing. There was this paper presented by this university in Austria with Bitmovin, and their conclusion was, let me quote, “When using weighted PSNR, AV1 performs consistently better for bit rate compared to AVC, HEVC, and VP9.” So they claim AV1 is better than three codecs, but specifically it’s better than HEVC. And then they have a table in their article that compares AV1 to HEVC for six different video clips. The table shows that with AV1 you get up to 25% lower bitrate at the same quality than with HEVC.

    Dror Gill: 07:43 I was sitting there in Athens last month when they presented this and I was shocked.

    Mark Donnigan: 07:50 What are the chances that three independent papers are wrong and only this paper got it right? And by the way, the point here is not three against one, because presumably there are some other papers, other research floating around, that might side with Bitmovin. The point is that three companies, none of whom anyone would call anything but experts highly qualified to do a video assessment, came up with such a different result. Tell us what you think is going on here?

    Dror Gill: 08:28 I was thinking the same thing. How can that be? During the presentation I asked one of the authors who presented the paper a few questions, and it turned out that they made some very questionable decisions in all of that sea of possibilities that I talked about before. Decisions related to coding tools, codec parameters, and quality measures.

    Dror Gill: 08:51 First of all, in this paper they didn’t show any results of subjective viewing, only the objective metrics. Now, we all know that you should always use your eyes, right?

    Mark Donnigan: 09:03 That’s right.

    Dror Gill: 09:04 Objective metrics are nice numbers, but obviously you need to view the video because that’s how the actual viewers are going to assess the (video) quality. The second thing is that they only used a single objective metric, and this was PSNR. PSNR stands for peak signal-to-noise ratio, and basically this measure compares the peak pixel value against the average squared difference between pixel values of the two images.

    Dror Gill: 09:30 Now, we’re Video Insiders, but even if you’re not an insider you know that PSNR is not a very good quality measure because it does not correlate very well with human vision. This is the measure that they chose to look at, but what was most surprising is that there is a flag in the open source HEVC encoder which they used that, if chosen, improves PSNR. What it does is turn off some psycho-visual optimizations which make the video look better but reduce the PSNR, and those optimizations are on by default. So you would expect that if they’re measuring PSNR they would turn that flag on so they would get higher PSNR. Well, they didn’t. They didn’t turn the flag on!
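    For reference, the metric under discussion is simple to compute: PSNR is the ratio, in decibels, between the squared peak pixel value and the mean squared error of two frames. A minimal sketch follows; the toy 2x2 “frames” are invented for illustration, and real tools compute this per plane (Y, U, V) over full-resolution frames.

    ```python
    import numpy as np

    def psnr(reference: np.ndarray, encoded: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio between two same-sized frames, in dB."""
        diff = reference.astype(np.float64) - encoded.astype(np.float64)
        mse = np.mean(diff ** 2)
        if mse == 0:
            return float("inf")  # identical frames
        return 10.0 * np.log10(peak ** 2 / mse)

    # Toy example: an 8-bit "frame" and a slightly distorted copy.
    ref = np.array([[50, 100], [150, 200]], dtype=np.uint8)
    enc = np.array([[52, 98], [149, 203]], dtype=np.uint8)
    print(round(psnr(ref, enc), 1))
    ```

    Because it reduces to a per-pixel squared error, PSNR rewards encoders that minimize raw pixel differences even when a psycho-visually tuned encode would look better to a human viewer, which is exactly the mismatch Dror is describing.
    
    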

    Mark Donnigan: 10:13 Amazing.

    Dror Gill: 10:17 Finally, AV1 is much slower than HEVC, and they also reported in this paper that it was much, much slower than HEVC, but still they did not use the slowest encoding setting of HEVC, which would provide the best quality. There’s always a trade-off between performance and quality. The more tools you employ, the better quality you can squeeze out of the video; of course that takes more CPU cycles. But for HEVC they used the third slowest setting, which means this is the third best quality you can get with that codec and not the very best quality. When you handicap an HEVC encoder in this way, it’s not surprising that you get such poor results.

    Dror Gill: 11:02 I think based on all these points everybody can understand why the results of this comparison were quite different than all of the other comparisons that were published a month earlier at IBC (by Ateme, BBC, Harmonic).

    Mark Donnigan: 11:13 It’s interesting.

    Mark Donnigan: 11:14 Another critical topic that we have to cover is performance. If you measure the CPU performance, or encoding time, of AV1, I believe it’s pretty universally understood that you are going to find it currently is a hundred times slower than HEVC. Is that correct?

    Dror Gill: 11:32 Yeah, that’s right. Typically, you measure the performance of an encoder in FPS, which is frames per second. For AV1 it’s common to measure in FPM, which is frames per minute.

    Mark Donnigan: 11:42 Frames per minute, (more like) frames per hour, FPH.

    Dror Gill: 11:45 A year or a year and a half ago, when there were very initial implementations, it was really FPH or FPD, frames per hour or per day, and you really needed to have a lot of patience. But now, after they’ve done some work, it’s only a hundred times slower than HEVC.
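    Throughput figures like these reduce to frames divided by wall-clock time. A minimal sketch of such a measurement is below; the encode call is a deliberate stand-in placeholder that just burns CPU, not a real codec invocation.

    ```python
    import time

    def measure_throughput(encode_frame, num_frames: int) -> dict:
        """Time an encode loop and report frames per second and per minute."""
        start = time.perf_counter()
        for i in range(num_frames):
            encode_frame(i)
        elapsed = time.perf_counter() - start
        fps = num_frames / elapsed
        return {"fps": fps, "fpm": fps * 60.0}

    # Stand-in "encoder": burn a little CPU per frame (not a real codec).
    def fake_encode(frame_index: int) -> int:
        return sum(range(10_000))

    stats = measure_throughput(fake_encode, 100)
    print(f"{stats['fps']:.0f} fps = {stats['fpm']:.0f} frames/minute")
    ```

    The FPS-versus-FPM framing in the episode is just this number at very different magnitudes: an encoder doing 0.5 fps is more readably reported as 30 frames per minute.
    
    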

    Mark Donnigan: 12:02 Yeah, that’s pretty good. They’re getting there. But some people say that the open source implementation of AV1 I believe it’s AOM ENC.

    Dror Gill: 12:11 Yeah, AOM ENC.

    Mark Donnigan: 12:16 ENC, exactly, has not been optimized for performance at all. One thing I like about speed is that either your encoder produces X number of frames per second or per minute, or it doesn’t. It’s really simple. Here is my next question for you. Proponents of AV1 are saying, “Well, it’s true it’s slow, but the open source implementation hasn’t been optimized,” which is to imply that there’s a lot of room (for improvement) and that we’re just getting started: “Don’t worry, we’ll close the gap.” But if you look at the code, and by the way, I may be a marketing guy but my formal education is computer science.

    Mark Donnigan: 13:03 You can see it already includes performance optimizations. I mean optimizations like MMX, SSE, there are AVX instructions, there’s CPU optimization, there’s multithreading. It seems like they’re already trying to make this thing go faster. So how are they going to close this hundred X (time) gap?

    Dror Gill: 13:22 I don’t think they can. I mean, a hundred X, that’s a lot, and even the AV1 guys admit that they won’t be able to close the gap. I talked to a few senior people who’re involved in the Alliance for Open Media, and even they told me that they expect AV1 to be five to 10 times more complex than HEVC at the end of the road. In two to three years, after all optimizations are done, it’s still going to be more complex than HEVC.

    Dror Gill: 13:55 Now, if you ask me why it’s so complex, I’ll tell you my opinion. Okay, this is my personal opinion. I think it’s because they invested a lot of effort in sidestepping the HEVC patents.

    Mark Donnigan: 14:07 Good point. I agree.

    Dror Gill: 14:07 They need to get that compression efficiency which is the same as HEVC, but they need to use algorithms that are not patented. They have methods that use much more CPU resources than the original patented algorithms to reach the same results. You can call it a kind of brute-force implementation of the same thing to avoid the patent issue. That’s my personal opinion, but the end result I think is clear: it’s going to be five to 10 times slower than HEVC with the same compression efficiency, so I think this whole notion of using AV1 to get better results is quite questionable.

    Mark Donnigan: 14:45 Absolutely. With HEVC you can encode, let’s say, a full ABR stack on a single computer; this is what people want to do. But here we’re talking speeds that are so slow, let’s just try to do (encode) one stream. Literally what you’re saying is you’ll need five to 10 computers to do the same encode with AV1. I mean, that’s just not viable. It doesn’t make sense to me.

    Dror Gill: 15:14 Yeah, why would you invest so much encoding power into getting the same results? If you look at another aspect of this, let’s talk about hardware encoding. Companies that have large data centers, companies that are encoding vast amounts of video content, are now looking into moving from traditional software encoding on CPUs and GPUs to dedicated hardware. We’re hearing talk about FPGAs, even ASICs … by the way, this is a very interesting trend in itself that we’ll probably cover in one of the next episodes. But in the context of AV1, imagine a chip that is five to 10 times larger than an HEVC chip and has the same compression efficiency. The question I ask again is why? Why would anybody design such a chip, and why would anybody use it when HEVC is available today? It’s much easier to encode, royalty issues have been practically solved, so you know?

    Mark Donnigan: 16:06 Yeah, it’s a big mystery for sure. One thing I can say is the Alliance for Open Media has done a great service to HEVC by pushing the patent holders to finalize their licensing terms … and ultimately make them much more rational shall we say?

    Dror Gill: 16:23 Yeah.

    Mark Donnigan: 16:25 Let me say that as we’re an HEVC vendor and speaking on behalf of others (in the industry), we’re forever thankful to the Alliance for Open Media.

    Dror Gill: 16:36 Definitely, without the push from AOM and the development of AV1 we would be stuck with HEVC royalty issue until this day.

    Mark Donnigan: 16:44 That was not a pretty situation a few years back, wow!

    Dror Gill: 16:48 No, no, but as we said in the last episode we have a “happy ending” now. (reference to episode 1)

    Mark Donnigan: 16:52 That’s right.

    Dror Gill: 16:52 Billions of devices support HEVC and royalty issues are pretty much solved, so that’s great. I think we’ve covered HEVC and AV1 pretty thoroughly in two episodes but what about the other codecs? There’s VP9, you could call that the predecessor of AV1, and then there’s VVC, which is the successor of HEVC. It’s the next codec developed by MPEG. Okay, VP9 and VVC I guess we have a topic for our next episode, right?

    Mark Donnigan: 17:21 It’s going to be awesome.

    Announcer: 17:23 Thank you for listening to the Video Insiders Podcast, a production of Beamr Imaging Limited. To begin using Beamr’s codecs today, go to Beamr.com/free to receive up to 100 hours of no cost HEVC and H.264 transcoding every month.

    Brand Fast-Trackers #183 – Content is King & Queen

    Categories: Advertising, Brand Marketing Strategy, Branded Content, Branded Entertainment, Content Marketing, Creativity, General Discussion, Marketing Start-ups, Marketing Strategy, Podcast Discussion, Starting a Business

    Today’s episode is with Rob Barnett, CEO/Founder of My Damn Channel. Rob had a long career in TV and radio when he founded My Damn Channel in 2007. Think back, in 2007, YouTube had been purchased by Google already, but it was nowhere near the behemoth it has become. Rob and his team took the bet [...]

    The post Brand Fast-Trackers #183 – Content is King & Queen appeared first on Brand Fast-Trackers.

    Nick Bolton - The evolution and commercialisation of online video

    Internet video has come a long way from the postage-stamp generic media player to the commercial success it is today. This session looks at this journey and examines the multitude of online video options available. We will look at content creation (from a simple single piece, to multi-platform, to user generated), distribution methods, and publishing strategies. Then, once the video is published, how do you justify it (the ROI), commercialise it (leverage the content), and monetise it through syndication, advertising, sponsorship, or pay-per-view/subscription? There will be real-time demos and case studies.

    Since Nick ran his first live webcast in 2000, he has managed several hundred webcast productions for most of the top corporates, publishers and broadcasters in Australia. Key highlights include Australia’s first live medical operation on the web, the Australia 2020 Summit and World Youth Day. Nick regularly speaks at conferences here and overseas on online video creation and distribution, and is also an avid short film maker, actor and theatre producer. Nick is on the NSW committee of AIMIA, the Australian Interactive Media Industry Association.

    Licensed as Creative Commons Attribution-Share Alike 3.0 (http://creativecommons.org/licenses/by-sa/3.0/).

    © 2024 Podcastworld. All rights reserved

    For any inquiries, please email us at hello@podcastworld.io