
Summary of our June 21st EU Parliamentary Interface and LESS Accord Brainstorm


On June 21st 2023, Greening of Streaming members and guests met in the Thon Europe hotel next to the European Parliament. The discussions were focussed on two areas.


The first area of focus was to learn about public affairs and policy in Europe, and specifically to engage with BEREC (the Body of European Regulators for Electronic Communications) about the evolving regulations, policy, and directives surrounding sustainability in ICT.


The second area of focus was to end 'stage one' of our LESS Accord initiative, and run some brainstorms on the key projects. You can jump to that section here.


To set the scene, we asked our keynote to dispel some misconceptions.


Energy and Networks


Rudolf van der Berg of Stratix provided the opening keynote. Rudolf is a longtime public affairs professional with a sharp focus on telecoms and internet issues in Europe.



Rudolf's frank and strongly held opinions are grounded in huge amounts of real-world data, and importantly they often challenge common assumptions about the relationship between energy and the deployment and use of internet/telecoms infrastructure. Always straight talking, he advocated a view which is now gathering consensus across Greening of Streaming members, and which seems finally to be gaining wider acceptance through the streaming and content delivery ecosystem:


There is almost no direct relationship between internet traffic measured in gigabytes (GB) and energy measured in kilowatt-hours (kWh).

Because many carbon calculators portray a near-direct relationship between GB and kWh (and, by extension, CO2 emissions), many in the industry believe that a sustainability strategy for digital media simply comprises reducing data generation or consumption.


At Greening of Streaming, where many of our members either generate or distribute data in huge quantities, we have been extremely cautious about this relationship.


Measurements undertaken by our Working Group 4 clearly show that while video/audio encoding and decoding workloads at each end of the network vary significantly with factors such as bit rates, quality targets, and image size, the workload of the transmission network itself does not.


In practice we have found that the capacity of a network (rather than its 'use') is the single most important guide to energy consumption, with a secondary and key influence being the age and generation of the technology.


So a new high-capacity link can in practice consume significantly less energy than an older, lower-capacity link. And critically, as Rudolf confirmed, the information modulated onto the link may vary from 'all zeros' (no data) to 'a combination of zeros and ones' (data), but there is no difference at all in energy consumption between those situations.


So sending less data does not in fact reduce the energy usage of the internet.
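To make the point concrete, here is a toy model of that claim - a minimal sketch with invented figures, not measured data: a link's power draw is set by its provisioned capacity and technology generation, and utilisation does not appear in the model at all.

```python
# A toy model of the claim above: a link's power draw is set by its
# provisioned capacity and technology generation; the traffic actually
# crossing it does not appear in the model at all.
# All figures here are invented for illustration only.

def link_power_watts(capacity_gbps: float, watts_per_gbps: float) -> float:
    """Power is provisioned for capacity; utilisation is absent by design."""
    return capacity_gbps * watts_per_gbps

old_10g = link_power_watts(10, watts_per_gbps=5.0)     # older-generation kit
new_100g = link_power_watts(100, watts_per_gbps=0.3)   # newer-generation kit

print(old_10g, new_100g)  # 50.0 30.0 - the newer, higher-capacity link draws less
```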


As Rudolf highlighted: "Measuring network energy by how many gigabytes are transferred is like working out how much energy is consumed by streetlights by counting how many cars pass underneath them".

Within the telecoms and streaming industry this view is finally becoming accepted. At Fraunhofer's FOKUS event last week, T-Mobile clearly stated this same view, and informal discussions between Greening of Streaming members and other operators and CDNs corroborate a picture that our own Working Group 4's activities (specifically, looking at correlations between energy and live event streaming) have been painting privately for a couple of years: 'reducing bandwidth' is not a panacea for digital service providers seeking energy reduction (and the related sustainability marketing).


Rudolf's slides are available here:

Greening of Streaming Rudolf.pdf (Download PDF • 3.47MB)


Navigating The Regulatory Frameworks


Part of Greening of Streaming's activities and 'mission' is to reach out to regulators and public policy-focussed groups to offer access to our members for technical and engineering discussions. We are expressly NOT a lobbying organisation: currently we have no objections to any proposed regulations that we are aware of, nor do we seek regulation. We firmly believe that the best and fastest way to transform the streaming industry's sustainability impacts is through industry-led reform.


However, we do feel that ensuring regulators have access to a broad spread of industry actors will help any such policy, where it is necessary, become better formed.


So our programme of outreach to policy makers is principally focussed on making sure any forthcoming industry or public consultations are on our radar, and contributing to those processes with some sector insights and expertise.


For that reason, this year we reached out to a variety of public affairs groups as we planned to 'come to town' for our event in Brussels, and we were honoured when Sandrine Elmi Hersi, who co-chairs BEREC's sustainability working group, welcomed us to the European regulatory space. She provided a deeply interesting overview of the working processes of the European Commission and its regulatory affairs responsibilities and activities.


She included some fascinating discussion points from technical projects that some of BEREC's member state regulators had produced (many concurred with Rudolf's well-made points from earlier in the day, particularly the view on the traffic/energy relationship).


Her very detailed slides can be found at the link below. They are essential reading for any streaming service, technology, or product supplier wanting to address the EU market and to consider compliance with EU regulations relating to ICT.


Sandrine's slides are available here:


Presentation Greening Digital Streaming 2023 vf SEH.pdf (Download PDF • 1.02MB)


...and all that was before the morning coffee break!



The LESS Accord Brainstorms


Having challenged everyone's thinking very hard on a wide range of issues, the discussions then turned to our brainstorms focussed on the Low Energy Sustainable Streaming (LESS) Accord project.


For the past six months, Greening of Streaming has been reaching out to the streaming industry's engineers to gather ideas, gut intuitions, and 'best guesses' about what we could do to optimise energy use throughout our workflows and systems. We are trying to form an 'accord' among the engineering community about where to collectively focus efforts, aligning the multitude of diverse (and sometimes divergent) initiatives that have emerged over the past few years.


With several dozen responses to the call for input, and within those some very clear 'groups' of testable ideas, members met online earlier this month to distill the list of proposals down to a shortlist of 4 'projects'.


This is the final list:

1) Intelligent Distribution Model Shifting
2) "Good Enough" Codec / Ladder Configuration
3) Energy 'Breadcrumb' Metadata Stamps
4) Hardware and Infrastructure Optimisation


These projects formed the basis for our brainstorms at our event in Brussels.


In the coffee break Tommy Flanagan from Faultline (industry press) commented to Dom Robinson, Founder of Greening of Streaming and the organiser of the event, that he had not been to a brainstorm session like this before, and wasn't really sure what to expect. Dom replied, "Don't worry, I'm organising the brainstorms, but I have never been to one before either: I have no idea what's going to happen!"


So, in the spirit of adventure, the group reconvened and, taking each project in turn, dove into a deep, roughly hour-long discussion of each. The aim was to refine what we were trying to test and to identify organisations who could form the project groups to undertake those tests, with members volunteering first and non-member guests invited to fill any 'gaps' among members in the technical workflows or human skills required.


The four projects each touch on some very different groups of issues, and the discussions were deep, technical and varied.


Stepping through one by one:


1) Intelligent Distribution Model Shifting

Can we better define when each of 3 distribution models (unicast/P2P/net layer multicast) is the most energy efficient and implement decisioning to help CDNs seamlessly move among models based on energy efficiency, much the way a car shifts gears to optimise performance?


While today's internet stream distribution models are dominated by unicasting - a model which requires a direct scaling of 'stream serving' infrastructure on a 1:1 basis as the audience grows - there are at least two other techniques available to engineers for delivery in some contexts, and these have different scaling, costs of operations, and (presumably) energy demands.


The first of these is IP multicast, a technology that has been around since 1988 but has faced adoption challenges for just as long. This is partly due to technical issues: multicast took a while to overcome design problems around client discovery that caused some early networks to become hugely congested, leaving a bad taste in the mouths of those early ISPs. Longer term, after those early issues were solved, a significant problem emerged in how to agree to route multicast 'inter-domain' (between ISPs) - a problem which in practical terms persists today. However, IP multicast has been widely used for scaling live streaming within single networks such as ISPs (creating 'IPTV' services) and within enterprises for internal corporate communications. There are a number of specialist vendors in the market - our member Broadpeak deserves a mention - who deploy application-layer solutions that 'translate' between unicast and multicast, helping operators benefit from multicast where they can while leveraging the widely supported unicast where it is simpler and to the operator's benefit.


There is also a model known today as peer-to-peer (P2P). In the early 90s P2P was known as 'application layer multicast', which hints at its similarities to IP multicast (which in this context would strictly be termed network layer multicast). P2P has since emerged in its own right, and today has nuances that make it hard to still describe as application layer multicast. However, for simplicity we agreed to refer to three distribution models - unicast, multicast, and P2P - in this discussion.


Unicast requires a connection to a server - which we talk about as being in the 'application layer' - for each end user. These servers may all be concentrated in one data centre; this makes operations simpler, but causes concentrated congestion at or 'around' the data centre as the streaming audience scales up. More typically, content delivery networks locate servers in the ISPs where audiences originally connect to the internet, significantly reducing that central congestion. These 'edge caches' (as the industry calls them) each only need to receive the stream from the central origin once, and they can then deliver the unicast streams locally, within the ISP. As well as decongesting the core, this tends to improve the quality of experience (QoE). But regardless of the distribution of the servers (centralised or 'edged'), there is still 1:1 scaling of serving resource somewhere in the system as each viewer/listener joins the stream.


In the case of P2P there are a number of subtly different approaches. The first we could describe as a 'consumer edge cache': when you subscribe to the stream yourself, since you already have it on your machine, the P2P model can include your machine as a potential edge-cache source for anyone else joining the stream. This reduces pressure on the core of the system in exactly the same way that unicast edge caches relieve pressure on theirs. However, there is no guarantee that a user will keep watching the stream for as long as, or longer than, anyone now using their machine as an edge cache, so there need to be fairly robust failover techniques in place to handle these ad-hoc joins, leaves, and changes in the service topology. This invariably impacts quality of service (QoS) and service-level guarantees. P2P can in effect 'emulate' a massive CDN for very low capital overhead, since end users typically bring their own equipment, energy, and ISP connection. However, that lack of an SLA can cause challenges for P2P providers in unmanaged network (public internet) environments.


P2P can also be run within a single QoS-managed network, in which case many of the SLA and QoS issues can be overcome through prudent setup on the managed network, making P2P very useful as a way to dynamically deploy edge caches. In this model it starts to look very like a unicast CDN, albeit constrained to the operator's network.


So each mode has some contextual benefits over the others.


Unicast is commoditised, and today the telecoms networks underpinning the internet assume they are scaling ISP services for unicast. It is cheap and 'easy' to deploy, but there is a direct scaling cost as audiences grow, which means cost of operations scales directly with revenue.


Network layer multicast is amazing at scaling live streams. Each router forwards a video packet only once out of each route that has a downstream user wanting that stream. Overall, done properly, a stream delivered from 10 locations to 10,000 people at each location would switch from 10 × 10,000 = 100,000 unicast connections to the equivalent of ~10 multicast streams (varying slightly with what's happening in the network, but the order-of-magnitude difference in scaling should be apparent - see the sketch below). However, multicast comes with the technical challenges outlined above, plus other video-specific challenges such as a lack of support in browsers (via the W3C) and set-top boxes, and the fact that routers in end users' homes need to support at least IGMP v2, if not v3.
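As a quick worked version of that arithmetic - a toy calculation only, using the figures from the example above:

```python
# Toy comparison of stream-serving connections for the example above:
# 10 serving locations, 10,000 viewers at each.
sites = 10
viewers_per_site = 10_000

# Unicast: every viewer holds their own connection to a server.
unicast_connections = sites * viewers_per_site   # 100,000

# Network layer multicast: roughly one stream per location/route,
# regardless of how many downstream viewers join it.
multicast_streams = sites                        # ~10

print(unicast_connections, multicast_streams)    # 100000 10
```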


Each model has typically been deployed for a specific purpose - multicast for IPTV, P2P for video-on-demand distribution, and unicast for 'simple OTT'.


The intuition that was introduced to the LESS Accord was that these could all become options, and if they can be provisioned on a near-ad-hoc basis, then there must be some underlying heuristics that can help inform that 'shifting' from one mode to another.


In summary, the project has a key aim: first to construct a workflow and add energy measurement to the systems involved, and then to understand at what stage the operator should transition from one mode to another, with the central transition driver being the energy of the complete system, all while assuming the end user's experience won't be affected.
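To illustrate the kind of 'gear shifting' heuristic the project is after, here is a minimal sketch. Everything in it is a hypothetical placeholder - the per-mode energy models are invented, and discovering the real ones is precisely what the project sets out to do:

```python
# A minimal sketch of energy-driven 'gear shifting' between distribution
# modes. The energy models are invented placeholders, not measured values.
from dataclasses import dataclass

@dataclass
class ModeEnergyModel:
    name: str
    fixed_kw: float       # baseline power of the infrastructure for this mode
    per_viewer_kw: float  # marginal power per concurrent viewer (assumed linear)

    def system_kw(self, viewers: int) -> float:
        return self.fixed_kw + self.per_viewer_kw * viewers

# Entirely illustrative figures.
MODES = [
    ModeEnergyModel("unicast",   fixed_kw=2.0,  per_viewer_kw=0.0050),
    ModeEnergyModel("p2p",       fixed_kw=4.0,  per_viewer_kw=0.0020),
    ModeEnergyModel("multicast", fixed_kw=20.0, per_viewer_kw=0.0001),
]

def pick_mode(viewers: int) -> str:
    """'Shift gear' to whichever mode the model says uses least energy."""
    return min(MODES, key=lambda m: m.system_kw(viewers)).name

for audience in (100, 5_000, 200_000):
    print(audience, "->", pick_mode(audience))  # unicast, p2p, multicast
```

Under these invented figures the heuristic shifts from unicast to P2P to multicast as the audience grows; the real crossover points (if they exist) are exactly what the tests need to find.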


2) "Good Enough" Codec / Ladder Configuration

Can we save energy through codec choices and optimisation and demonstrate real-world energy reduction while maintaining ‘good enough’ quality for audience consumption?


Last autumn, one of the initial conversations among members that led to the LESS Accord project centred on 'green buttons'. In this context a 'green button' is an interaction with a client streaming media player application or device through which the user can opt for an experience that is - at least ostensibly - optimised for energy efficiency. We know from various surveys that there is little public use of green buttons; indeed 'eco mode' (in which most TVs are shipped) is often turned off when the display is installed and never turned back on. The thought experiment in the conversation turned the idea on its head: default all stream delivery to a 'good enough' quality, one optimised for energy efficiency, and replace the 'green button' interaction with a 'gold button' interaction that allows the viewer to upgrade the quality for the duration of the following programme - thereby defaulting the streaming system to energy efficiency while not constraining consumer choice.


Given the assumption that most users won't typically opt into the gold button option, there is potential to significantly reduce the global average power.


Of course this would depend on adjustments to quality actually having a real-world effect on energy reduction: very much something that needs validating.


To extend the test, and because of the proliferation of renditions that some transcoding plants are set up to produce to meet the demands of modern adaptive bitrate ladders, those ladders are considered 'in scope' for project 2's investigations too.
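As a concrete illustration of the 'gold button' default, here is a minimal sketch. The ladder, the 720p 'good enough' ceiling, and the function names are all invented for illustration; real values would have to come out of the project's testing:

```python
# A sketch of the 'gold button' default described above. The ladder and
# the cut-off are illustrative assumptions, not recommendations.
FULL_LADDER = [  # (height, kbps) - a typical-looking ABR ladder, invented
    (2160, 16000), (1440, 9000), (1080, 6000),
    (720, 3000), (540, 1800), (360, 800),
]

GOOD_ENOUGH_MAX_HEIGHT = 720  # hypothetical 'good enough' ceiling

def renditions_for(gold_button_pressed: bool) -> list[tuple[int, int]]:
    """Default to the energy-optimised subset; unlock the full ladder on opt-in."""
    if gold_button_pressed:
        return FULL_LADDER
    return [(h, kbps) for (h, kbps) in FULL_LADDER if h <= GOOD_ENOUGH_MAX_HEIGHT]

print(renditions_for(False))  # default: capped at 720p
print(renditions_for(True))   # gold button: full ladder for the next programme
```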



3) Energy ‘Breadcrumb’ Metadata Stamps (to drive energy aware workflows)

Can we obtain useful energy information from streaming systems to intelligently determine workflow strategy based on ‘energy context’ and create a container/manifest layer control plane for such decisions?


There has been much discussion about energy measurement throughout all of Greening of Streaming's activities. Working Group 4 has been evolving ways to evaluate fluctuations in system wide energy across CDNs and transcoding systems during live events. We have been developing monitoring and probing systems that allow us to not only read energy from systems but also correlate that energy with specific changes in traffic. It has not been easy!


But this approach measures the system as traffic passes through it. That is key for a service operator seeking to understand where to focus in making the delivery system efficient; however, it cannot provide any information about the energy consumption of a single file as that file passes through the system, and for an end user that is the important question.


There is huge complexity here. The infrastructure of subsystems that the file passes through between source and destination is shared, but the only energy information that can be gathered is for each 'whole' subsystem. In simpler terms: we can measure the energy consumption of a computer running encoding software, and in a lab we might run a single encoding job on that computer and see a difference between 'no encoding' and 'a single active encode' - a 'baseline' energy used during periods of no encoding and a higher amount used during encoding. In a production environment, however, there are economies of scale, so each computer would likely run multiple encoding jobs at the same time. Not only does this concurrency affect the model, but in some situations concurrent jobs will adversely affect each other too, so the relationship between scaling up the tasks and scaling up the energy can be far from linear. To add to this complexity, if we try to divide that energy up per user, the relative energy consumption per user decreases as the audience size scales up.


Other stages in the workflow will have different scaling/energy relationships too. For example, only a subset of an audience may have access to certain transcode bandwidths, so the model outlined above should not attribute the energy used in core transcoding of every bitrate to a user who can only receive two or three of them on their client device.


So answering the end user's question of 'How much energy did it take to get this stream to me?', while highly desirable for economic, marketing, and feedback reasons (which can lead to behaviour change), is extremely complex. Moreover, if we can't find a clear way to answer this question, we will struggle to model it in computational processes, and the potential to develop systems that can be 'intelligent' about using energy-efficient workflows will be out of reach.


Project three explores an idea with a known flaw, but with some further framing of that flaw it may provide a useful indicator. The idea is that a video file is typically created on a computer. At the start of the encoding process we could take a reading of the kWh so far consumed by the computer since it was powered on. At the end of the encoding process we could take a second reading of the kWh. It will have increased. The difference between the two could then be stored in the file's own metadata.


Regardless of the fact that the machine may be sharing resources with other tasks, the final kWh figure is a real representation of the energy actually consumed in making the resource available to the encoding task. Obviously, a misunderstanding of this data model could lead to double-counting of that energy: if a second encode task runs at the same time, the total energy will be counted for both encoding jobs, and that total would be stored in the metadata of both files. While indisputably correct for the energy used in the creation of each file, 'adding them together' would count the same energy twice, causing an accounting problem at the system level. From a systems-operation point of view this reading would be highly misleading and should not be used that way.


For this reason we may need to frame it as a new metric - here we will use the example 'ckWh', for 'cumulative kWh'.


The ckWh 'meter' in each file's metadata could accumulate kWh as the file passes through the various stages of the workflow. For example, if the file is written to storage and a copy is subsequently taken by a video-on-demand user, then at the time the copy is taken, the kWh consumed between the time of writing to storage and the time of copying would be added to the copy's ckWh counter.


From the file user's perspective, then, the ckWh in the file received at the end of the workflow would clearly reflect how many kWh had actually been used in the lifecycle of that file between its creation and the user's consumption. The difficulty is that if the user consumed a second file from exactly the same workflow, with a likely very similar ckWh, this WOULD NOT mean that the two files' ckWh could be combined to make any assertions about the efficiency of the underlying systems, since each file may well already include the energy used in the production of the other.
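As a minimal sketch of how such a breadcrumb might be carried: the meter-reading function is left as a placeholder, and all the names here are our own hypothetical illustrations rather than anything the project has defined.

```python
# A sketch of the ckWh 'breadcrumb' idea. Per the caveat above, ckWh
# values from different files must never be summed: concurrent jobs
# each record the whole machine's energy.
from dataclasses import dataclass, field

def read_host_kwh() -> float:
    """Placeholder: read the host's cumulative kWh meter (e.g. a PDU or
    smart plug). Such a counter is assumed to exist for this sketch."""
    raise NotImplementedError

@dataclass
class MediaFile:
    name: str
    ckwh: float = 0.0                 # cumulative kWh breadcrumb in metadata
    history: list = field(default_factory=list)

    def stamp_stage(self, stage: str, kwh_before: float, kwh_after: float) -> None:
        """Add one workflow stage's whole-host energy delta to the breadcrumb."""
        delta = kwh_after - kwh_before
        self.ckwh += delta
        self.history.append((stage, delta))

# Usage sketch: wrap each stage (encode, store, copy, deliver) in two readings.
# f = MediaFile("programme.mp4")
# before = read_host_kwh(); encode(f); after = read_host_kwh()  # encode() is hypothetical
# f.stamp_stage("encode", before, after)
```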


As a concept this is quite challenging, but it clearly highlights the complexity of modeling energy consumption.



4) Hardware and Infrastructure Optimisation

Can we combine technologies such as optimised silicon, immersion cooling, relocation, etc., to move existing workloads (encoding/caching) to different hardware environments to realise significant energy efficiencies?


In many ways, compared to the other projects, this is the simplest ... in principle! We want to explore some of the latest technology advances and how they might reshape our service deployment models at a hardware level.


Areas this might include are:

  • Exploring optimisation across silicon environments: for example, moving transcoding between CPUs, GPUs, FPGAs, DSPs, and so on. Such tests are reasonably common in the industry, and with active members already in this space we hope to coordinate some straightforward benchmarking.

  • Immersion cooling: comparing current air-cooling systems with mineral oil (or similar)-based cooling systems in HPC streaming environments such as encoding, packaging, caching, or potentially even consumer premises equipment such as routers and set-top boxes.

  • Deploying tasks such as low-priority encoding to infrastructures that have unused surplus renewable energy.


As an example, immersion cooling can (so tech vendors claim) potentially remove 40-60% of energy consumption from high-performance, high-density computing environments such as the data centres that transcode or deliver video. Moving to immersion cooling is not simple, though: operationally handling failed systems that are immersed in mineral oil is much more complex - particularly at scale - than in the current air-cooled models. A change of technology like this would change the operational practices of supporting very scaled-up systems, so while it may bring significant energy reduction, doing so would be non-trivial and subject to change-resistance.


Knowing that very large streaming clients are driving demand might encourage such a change to be made more quickly. However, capital investment will almost always be required for immersion cooling changes, and the same is true even for testing (since any computer lowered into mineral oil can never be cleaned and repurposed back into a traditional compute environment), so the opportunity to test some of these models will be limited by what resources the Greening of Streaming community can source, making the go/no-go on this particular project scope somewhat binary!


The potential to explore relocating workloads to renewably powered resources is somewhat more attainable. Scottish Enterprise have shown great interest in the LESS Accord, and we hope to explore a test of relocating a non-time-critical transcoding workload to infrastructure in Scotland powered by surplus wind power (which is abundant there) to see if the model is viable. Zattoo demonstrated a similar model at the Fraunhofer FOKUS event in Berlin last week, and we also hope to engage with their team for guidance on how they have already paved the way in shifting workloads geographically toward renewable power.
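The scheduling logic for such a shift could be very simple. Below is a minimal sketch, assuming some feed exists that signals when surplus renewable power is available; the feed, the function names, and the timings are all our own hypothetical placeholders.

```python
# A sketch of deferring a non-time-critical job (e.g. a transcode) to a
# window of surplus renewable power, as in the Scottish wind example above.
import time

def surplus_renewable_available() -> bool:
    """Placeholder: poll a grid-carbon or site-generation feed.
    No such feed is specified by the project; this is an assumption."""
    return False  # stub

def run_when_green(job, poll_seconds: int = 300, deadline_seconds: int = 86_400):
    """Wait for surplus renewable power, or run at the deadline regardless."""
    start = time.time()
    while time.time() - start < deadline_seconds:
        if surplus_renewable_available():
            return job()              # green window: run now
        time.sleep(poll_seconds)      # low priority, so waiting is acceptable
    return job()                      # deadline reached: run anyway
```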


Summary

As we reach the end of stage one of our LESS Accord project, there is a rewarding sense of focus. We have tried very hard to reach out widely across the industry to get a sense of where engineers feel we can best focus our collective engineering efforts. We now have four clear areas of focus, and we have had an exciting day brainstorming the challenges specific to exploring them.


Over the next weeks we hope to move from brainstorm to plan, and to present those testing plans back to the industry at IBC in September.


Provided the industry doesn't raise any significant showstoppers or obstacles to going ahead in practice, and with a little luck, we should be in a position to start some, if not all, of these projects in earnest over the autumn and winter; by next spring the academic community will hopefully be working with us to make the outputs of the tests quantifiable and potentially actionable.


We will be presenting that output at the end of Q2 2024.


There is, of course, a chance that we discover that some of these models just cannot be made viable, or early tests may indicate that they do not produce meaningful energy savings. But that is why we do science, folks, and learning that an option is unviable is just as important as discovering a transformative way to work that can reduce the energy demand of the streaming sector.


Do reach out and get involved if you can - we are always open to new members, and we are also exploring 'guest' participation for those who can help with the above projects where our membership lacks expertise in specific workflow stages or elements of these tests.


We look forward to hearing from you: info@greeningofstreaming.org








