How to support video with mitigated latency
nScreenMedia claims that “data from Ericsson and FreeWheel paints a rosy picture for mobile video. Mobile data volume is set to increase sevenfold over the next six years, with video’s share increasing from 50% to 70%. The smartphone looks to be in the driver’s seat.”
To top this, Forbes reported in September 2015 that “Facebook users send on average 31.25 million messages and view 2.77 million videos every minute, and we are seeing a massive growth in video and photo data, where every minute up to 300 hours of video are uploaded to YouTube alone.”
Cisco also finds that “annual global IP traffic will pass the zettabyte (ZB; 1,000 exabytes [EB]) threshold by the end of 2016, and will reach 2.3 ZB per year by 2020.
By the end of 2016, global IP traffic will reach 1.1 ZB per year, or 88.7 EB per month, and by 2020 global IP traffic will reach 2.3 ZB per year, or 194 EB per month.” The firm also predicts that video traffic will grow fourfold from 2015 to 2020, a CAGR of 31 percent.
More recently, a blog post by FPV Blue claims progress on the latency problems that dog many marketers and consumers: “Glass to glass video latency is now under 50 milliseconds”.
Previously, the company announced that this video latency figure stood at 80 milliseconds. To reduce this latency, the firm needed to undertake a hardware revision.
Its blog post nevertheless questions the industry standard for measuring First-Person View latency (FPV latency).
FPV Blue defines latency as follows:
“Before measuring it, we better define it. Sure, latency is the time it takes for something to propagate in a system, and glass to glass latency is the time it takes for something to go from the glass of a camera to the glass of a display.
However, what is that something? If something is a random event, is it happening in all of the screen at the same time, or is restricted to a point in space?
If it is happening in all of the camera’s lenses at the same time, do we consider latency the time it takes for the event to propagate in all of the receiving screen, or just a portion of it? The difference between the two might seem small, but it is actually huge.”
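FPV Blue’s distinction can be made concrete with a toy model. The sketch below splits a video pipeline into stages and shows how “latency to the first visible row” and “latency until the whole frame has updated” diverge by roughly one sensor readout plus one display scan-out – about 33 milliseconds at 60 fps, which is enormous against a 50-millisecond budget. The stage timings are illustrative assumptions, not FPV Blue’s figures or measurement method.

```python
# Toy glass-to-glass latency model; all stage timings are illustrative.
# At 60 fps, reading out the sensor and scanning out the display each
# take about 16.7 ms, so the two definitions differ by roughly 33 ms.

SENSOR_READOUT_MS = 16.7   # rolling-shutter readout of one full frame
ENCODE_MS = 5.0            # compress the frame
TRANSMIT_MS = 10.0         # radio/network transfer
DECODE_MS = 5.0            # decompress the frame
DISPLAY_SCANOUT_MS = 16.7  # display refreshes top to bottom

def glass_to_glass_ms(full_frame: bool) -> float:
    """Latency from an event at the camera lens to the display glass.

    full_frame=False: time until the event's first row becomes visible.
    full_frame=True:  time until every row of the frame has updated,
    which adds the remaining sensor readout and display scan-out.
    """
    fixed = ENCODE_MS + TRANSMIT_MS + DECODE_MS
    if full_frame:
        return SENSOR_READOUT_MS + fixed + DISPLAY_SCANOUT_MS
    return fixed

first_row = glass_to_glass_ms(full_frame=False)   # 20.0 ms
whole_frame = glass_to_glass_ms(full_frame=True)  # about 53.4 ms
```

Under these assumptions, the same pipeline is either comfortably inside or well outside a 50 ms target depending purely on which definition is used – which is exactly why FPV Blue argues the definition matters.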
Therefore, whether video is being used for flying drones or for other purposes, people need to consider how to accurately measure and mitigate the effects of video latency, because video traffic in general is growing exponentially.
Cisco’s Visual Networking Index claims: “It would take more than 5 million years to watch the amount of video that will cross global IP networks each month in 2020. Every second, a million minutes of video content will cross the network by 2020.”
The findings of Cisco’s survey also reveal that video will account for 82% of all business and consumer Internet Protocol (IP) traffic.
Video: The TV star
Michael Litt, CEO and co-founder of Vidyard, also claims that the future of the internet is television because more and more people are using streaming services for entertainment, which means that the major broadcasters are also having to play catch-up.
At this juncture, it’s worth noting that BBC 3 has moved online to meet the demands of a younger, digital-device-savvy audience.
Talking about Facebook, Mashable reports that its livestream coverage of the third US presidential debate had one big advantage over everyone else’s.
“Facebook was delivering its stream at a 13 second delay, on average, compared to radio”, writes Kerry Flynn. The network with the highest latency was Bloomberg, at an arduous 56 seconds.
She rightly adds that the disparity between the different networks should worry the traditional broadcast networks: “Watching the debate on Facebook meant that a viewer not only did not have a TV or pay for cable, they also had the fastest stream accompanied by real-time commentary and reactions.”
The surprise was that Facebook, according to the findings of Wowza Media Systems, managed to – pardon the pun – trump the satellite and cable networks for some viewers.
“Facebook's livestream setup isn't that different from what other companies use. Cable systems, however, tend to outsource livestreaming to content delivery networks (CDNs) that are easy to integrate and reliable — but also relatively slow”, writes Flynn.
With large volumes of streaming data, you have to bring the CDN closer to the viewers to improve the user experience. That creates a new problem: getting the content to the CDN in the first place. When the CDN is a long way from the centralised source, the latency between them is considerably higher, which in turn reduces data throughput to the CDN. And because this rich data is already compressed, traditional WAN optimisation techniques are ineffective.
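The latency–throughput relationship can be sketched with the classic single-stream TCP bound: throughput cannot exceed the window size divided by the round-trip time, so a stream that fills the link to a nearby CDN node crawls when that node is an ocean away. This is a simplified model with illustrative numbers – real stacks use window scaling and parallel streams – but the latency penalty it captures is real.

```python
def max_tcp_throughput_mbps(window_kb: float, rtt_ms: float) -> float:
    """Upper bound on a single TCP stream: window / round-trip time.

    window_kb * 8 gives kilobits; kilobits per millisecond is
    numerically equal to megabits per second.
    """
    return window_kb * 8 / rtt_ms

# A classic 64 KB window to a CDN node 5 ms away vs. one 80 ms away:
nearby = max_tcp_throughput_mbps(64, 5)    # 102.4 Mbps
distant = max_tcp_throughput_mbps(64, 80)  # 6.4 Mbps
```

Sixteen times less throughput over the same link capacity, purely because of distance-induced latency – which is why simply buying more bandwidth does not fix a slow feed to a remote CDN.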
The problem: Latency
With the increasing proliferation of video content, why should anyone be concerned about the volume of video that is being produced and about latency?
High viewing figures can, after all, lead to higher advertising revenues for the many broadcasters.
From a competitive advantage perspective, increasing volumes of video data means that there is more noise to contend with in order to get marketing messages across to one’s target audiences.
Increasing demand and higher-quality playout put more pressure on the internet and on content delivery services, although on the whole these challenges have been addressed, even for features such as seamless ad stitching.
If latency impinges on livestream services, too, then the viewer is likely to choose the network with the fastest stream.
The key problem is that video and audio can be impeded by the effects of network latency. Slow networks can leave the reputations of customers - whose own ‘consumers’ use video for a variety of reasons - tarnished.
In a commercial situation, this could lead to lost business; a fast network from any datacentre will, in contrast, engender confidence. Yet you can’t simply accelerate the traffic: signals travel at a fixed speed, so the latency imposed by distance cannot be engineered away.
This applies to video in general. There are so many different applications for video, and all of them can be affected by bandwidth or latency - or both. How we produce, consume, and store information has changed dramatically over the past few years as the YouTube and Facebook generation has grown up.
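The fixed-speed point is easy to quantify: light in optical fibre covers roughly 200 km per millisecond, which puts a hard floor under round-trip time no matter how well the network is tuned. A back-of-the-envelope sketch, where both the fibre constant and the distance are approximations:

```python
C_FIBER_KM_PER_MS = 200.0  # light in fibre travels roughly 200 km per ms

def min_rtt_ms(distance_km: float) -> float:
    """Physical lower bound on round-trip time over fibre.

    Real networks add routing, queueing, and serialisation delay on
    top, so measured RTTs are always higher than this floor.
    """
    return 2 * distance_km / C_FIBER_KM_PER_MS

# London to New York is roughly 5,600 km of great-circle distance, so
# the RTT can never drop much below 56 ms, however the route is tuned.
london_ny_floor = min_rtt_ms(5600)  # 56.0 ms
```

This is why mitigation focuses on making better use of the available round trips – through protocol-level techniques rather than raw acceleration – since the round trips themselves cannot be made shorter than physics allows.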
To support video, companies using it for broadcasting, advertising, video-conferencing, marketing, or other purposes need to avoid settling for traditional WAN optimisation.
Instead, they should employ more innovative solutions driven by machine intelligence – such as PORTrockIT, which accelerates data, reduces packet loss, and mitigates the effects of latency.
Adexchanger offers some more food for thought about why this should concern marketers in particular: “Video on a landing page can increase conversion rates by 80%? Or, that 92% of mobile video consumers share videos with others.”
Marketers should therefore ask their IT departments to invest in solutions that enable them to deliver marketing messages without their conversations being interrupted by network latency.
Similarly, broadcasters should invest in systems that mitigate the impact that latency can have on their viewers to maintain their loyalty.
High viewing figures can, after all, lead to higher advertising revenues for the many broadcasters, social media networks and publishers who offer video content as part of their service.
They may also need to transfer and back up large and uncompressed video files around the world quickly – that’s a capability which WAN optimisation often fails to deliver, but it can be achieved with the right solution.
It’s therefore important to review the alternative options that exist on the market.