Does a Low Latency Broadband Subscription make sense?

March 22, 2022
As BITAG points out in a recent report, excessively high working latency (think travel time in rush-hour traffic) is becoming a key reason for poor Internet experiences. The internet is plagued by high peak latencies, which break gaming, video conferencing, and interactive AR and VR (dare I say the Metaverse) experiences. The report also points out that increasing bandwidth in no way guarantees decreasing latency.
There are known techniques for reducing latency peaks, but deployment is lagging. An important reason is that ISPs have no incentive to implement them. People buy megabits per second, and even regulators have ignored working latency. The risk of upgrading dusty network equipment with new software is deemed greater than the benefit.
My anecdotal opinion is that there are people who are willing to pay for low latency broadband. If that is the case, we have potential supply and demand, which sounds like a business opportunity (that ECON101 class I took is really paying off).
Enter the complexities…
Low vs. Lower
What does low latency even mean?
How low is low? 1ms? 10ms? 100ms?
Where is it measured from and to? The end-user device to a central server? Which central server? From the modem to the internet gateway?
To have a low latency broadband subscription, these things would need to be defined. There is, however, one more complexity that really needs to be addressed: a low latency broadband subscription really needs some sort of guarantee of low latency.
There are many technologies that can help an ISP reduce latency. Again, BITAG points to several solutions. If the rest of this paragraph makes zero sense to you, don't worry, it does not matter. I will simply name some techniques that will vastly lower latency, but will not guarantee it.
These include Active Queue Management (AQM) leveraging Flow Queueing, separating Queue-Building from Non-Queue-Building flows, and Explicit Congestion Notification (ECN) as used by the proposed L4S technique.
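To give a feel for the first of those, here is a minimal sketch of flow queueing as a deficit-round-robin scheduler. It is a toy model, not how fq_codel or CAKE are actually implemented (those also drop or ECN-mark packets via AQM); class and method names are my own for illustration:

```python
from collections import deque

class FlowQueue:
    """Toy flow-queueing scheduler (deficit round robin).
    Each flow gets its own queue, so a bulk flow's backlog
    cannot add queueing delay to a sparse flow's packets."""

    def __init__(self, quantum=1514):
        self.quantum = quantum   # bytes a flow may send per round
        self.queues = {}         # flow_id -> deque of packet sizes
        self.active = deque()    # round-robin order of flows
        self.deficit = {}        # bytes a flow may still send this round

    def enqueue(self, flow_id, size):
        if flow_id not in self.queues:
            self.queues[flow_id] = deque()
            self.deficit[flow_id] = 0
            self.active.append(flow_id)
        self.queues[flow_id].append(size)

    def dequeue(self):
        """Serve flows round-robin; returns (flow_id, size) or None."""
        while self.active:
            fid = self.active[0]
            q = self.queues[fid]
            if q and q[0] <= self.deficit[fid]:
                self.deficit[fid] -= q[0]
                return fid, q.popleft()
            # Flow used up its deficit (or is empty): end its turn.
            self.active.popleft()
            if q:
                self.deficit[fid] += self.quantum
                self.active.append(fid)
            else:
                del self.queues[fid], self.deficit[fid]
        return None
```

Even with ten bulk packets already queued, a later-arriving game packet is served on the very next round, which is exactly the latency isolation property these schedulers provide.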
Now, onto the primary problem of guaranteeing low latency:
The majority of internet traffic relies on inducing latency for itself.
70-90% of internet traffic uses end-to-end congestion control (E2E-CC), i.e. TCP and QUIC, including congestion-control algorithms such as BBR. My colleague Bjørn points out that there are fundamental limits to how reliable and low the latency can be with these protocols. How can you guarantee low latency to something that creates latency for itself? Even if the ISP were able to guarantee X ms latency for the underlying network, your network traffic may still induce latency.
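The self-induced latency can be sketched with a toy per-round-trip model of a loss-based (AIMD, Reno-style) sender at a single bottleneck. All numbers are illustrative, and this is nowhere near a real TCP model, but it shows the mechanism: the sender grows its window until the buffer overflows, so the standing queue (i.e. the extra latency) is created by the flow itself:

```python
def simulate_aimd(bdp=50, buffer_pkts=100, rtts=300):
    """Toy AIMD sender at one bottleneck. Once cwnd exceeds the
    bandwidth-delay product (bdp), the excess packets stand in the
    bottleneck buffer; on overflow the window halves, then grows again."""
    cwnd = 1.0
    queue_samples = []
    for _ in range(rtts):
        queue = max(0.0, cwnd - bdp)     # packets sitting in the buffer
        if queue > buffer_pkts:          # buffer overflow -> packet loss
            cwnd /= 2                    # multiplicative decrease
        else:
            cwnd += 1                    # additive increase per RTT
        queue_samples.append(min(queue, buffer_pkts))
    return queue_samples
```

After the first overflow the queue never drains to zero for long: the flow oscillates with a persistent standing queue, which is why no network-side guarantee can cover the delay the flow inflicts on itself.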
So, our problem is as follows:
Lower latency is (relatively) easy to implement but difficult to sell
Low latency is difficult to implement but (likely) easier to sell
How could it be done?
A list of requirements:
One would have to define between which points the latency is guaranteed (for example, between the modem and the edge of the ISP's core network) and the number of milliseconds the ISP guarantees.
Between those points, there must be either continuous measurement or testing capabilities for the end-user. The proof is in the pudding, as they say.
It needs to be separated from the rest of the internet traffic
Much of the network latency we encounter is self-induced by TCP/QUIC/BBR. There needs to be a discussion about whether these protocols should be strictly banned from the low latency service, as self-inflicted latency is indistinguishable from latency induced by competing traffic. Alternatively, they could be policed before they enter the low latency broadband; the downside is that applications using these protocols will experience higher latency, which to some extent defeats the purpose.
Regulatory compliance (Net Neutrality):
One must not reduce the quality of non-specialized services: I would argue you could only set aside a small portion of the network for the low latency subscription. This is not as bad as it sounds, as an application that requires low latency will, almost by definition, have limited bandwidth requirements (at least for now)
It needs to be clear what traffic can and can't be classified as low latency
The end-user must be empowered to choose what apps/services (or protocols) can use the low latency broadband
When the service is initiated, or proactively promoted, the ISP needs to figure out what latency it can guarantee to that subscriber. Physics prevents a one-latency-value-fits-all
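On that last point, the physical floor is easy to compute: light in fiber travels at roughly two thirds of c, about 200 km per millisecond, so round-trip time can never drop below about 1 ms per 100 km of fiber distance. A quick sketch of the bound:

```python
def fiber_rtt_floor_ms(distance_km):
    """Physical lower bound on round-trip time over fiber.
    Light in glass propagates at roughly 2/3 of c, i.e. about
    200 km per millisecond, so RTT >= 2 * distance / 200 ms.
    Real paths are longer than geographic distance, so actual
    floors are higher still."""
    c_fiber_km_per_ms = 300_000 / 1000 * (2 / 3)  # ~200 km/ms
    return 2 * distance_km / c_fiber_km_per_ms
```

A subscriber 100 km from the measurement server can never be promised less than about 1 ms, and one 1000 km away no less than about 10 ms, which is why the guaranteed value has to be computed per subscriber and per measurement endpoint.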
What would it look like for an end-user?
When buying it, it could look something like this:
200/50 Mbps Regular - $ x
500/100 Mbps Regular - $ y
10/10 Mbps - Low Latency Guaranteed - $ z
From your home, we can guarantee less than q ms latency to these servers
When using it, the end-user should be able to select what traffic can use the Low Latency subscription. Something like this:
All internet traffic < 10 Mbps using UDP
Video Conferencing only
Set specific ports or IPs
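Under the hood, such user choices boil down to a small rule-matching step in the router or CPE. A toy sketch of how the selections above might be evaluated per packet (field names, the port, and the IP are made up for illustration; a real implementation would mark matching packets, e.g. with DSCP, or install firewall rules):

```python
def matches_low_latency(pkt, rules):
    """Return True if the packet matches any user-selected rule.
    Each rule is a dict of field constraints; a packet matches a
    rule when all of that rule's fields are equal."""
    for rule in rules:
        if all(pkt.get(field) == value for field, value in rule.items()):
            return True
    return False

# Hypothetical user selections, mirroring the options above:
rules = [
    {"proto": "udp", "dport": 3478},   # e.g. video-conferencing media
    {"dst_ip": "203.0.113.7"},         # a specific server the user chose
]
```

A "video conferencing only" toggle would simply install a curated set of such rules, while "specific ports or IPs" maps one-to-one onto single-field rules.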
Now, of course, if the traffic volume for these exceeds 10 Mbps, there will be additional latency. So an option such as:
Send a notification if the traffic volume goes beyond 10 Mbps
And of course the ability to view the delivered latency and/or test it, either in a mobile or web app.
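Whatever the app shows, it should report the numbers that actually matter for real-time use: the high percentiles of latency under load, not the average. A minimal sketch of summarizing collected RTT samples that way (nearest-rank percentiles; the field names are illustrative):

```python
def working_latency_report(samples_ms):
    """Summarize RTT samples (in ms) the way a working-latency test
    might present them: min, p99, and max, since occasional spikes,
    not the mean, are what break games and video calls."""
    s = sorted(samples_ms)

    def pct(p):
        # Nearest-rank percentile, clamped to the last sample.
        return s[min(len(s) - 1, int(len(s) * p / 100))]

    return {"min": s[0], "median": pct(50), "p99": pct(99), "max": s[-1]}
```

An ISP-facing SLA check would then compare the p99 against the guaranteed q ms, rather than averaging the spikes away.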
As I started thinking about this blog post, I figured the primary argument against the title is that ISPs should improve latency regardless of monetization. I certainly agree, and by applying known techniques we can reduce latency vastly. Honestly, the title should be “Does an Ultra-Low Latency Broadband Subscription make sense?”
On the other hand, those of us in the industry know how difficult it is to get technical innovations deployed when there is no improvement to the bottom line. The bufferbloat project has provided eight years of work developing free and open solutions that have yet to be widely deployed.
If a small part of the population wants to pay a bit extra to be able to more smoothly chat awkwardly in the metaverse, or give their bots a small advantage to win those ultracool NFT monkey jpeg auctions, isn't that OK?
By selling a low latency broadband subscription, ISPs would have to implement some of the newer techniques, and I would argue that this would likely reduce latency for everyone.