From my observations, it's not the backbones that lag behind, but the interconnections. ATT and Verizon wouldn't need to cap bandwidth if customer traffic were entirely contained within their networks. For instance, if you were a U-Verse customer using only ATT services (phone calls to other ATT subscribers, video only from ATT, data exchanged only with other ATT customers), you would hardly impact the network at all. Except for long distance calling, most of your consumption would be served from nearby. There would be the occasional outlier, much like long distance calling, perhaps a data transfer with someone cross-country, but the bulk of your usage would be contained, and thus more easily provided for.

However, when customers use services that another network has to deliver, you start to encounter tension. Bear in mind, not all networks. Google, for instance, is fairly ubiquitous (and YouTube, by the same token). Google has relationships with all of the major carriers. They've emplaced their data-centers-in-a-box all over the place, so most Google data is fairly close no matter where you are. Others, not so much. Akamai, for example, has no direct relationship of any sort with ATT (and, by extension, the old SBC and BellSouth networks). Caveat: anecdotal evidence only, from speaking with Akamai people. This means any data an ATT user retrieves from an Akamai cache causes pain for ATT, because it has to traverse peering. Now, Akamai is pretty savvy: they try their best to deliver content from the source closest to the destination, and they run analytics constantly to make sure they're doing that. Others ... again, not so much.

So there's a whole hierarchy, from well-funded, well-peered entities all the way down to the individual user. And as you step down the pyramid, you become less and less able to solve for x, where x is efficiency (and thus, profit). Consequently, as you step down the pyramid, bit costs should increase in order to preserve profit. ATT pays the least per bit to deliver to their users, anyone with whom ATT has diverse settlement-free peering pays the second least, and so on down the line.

Bear in mind, at this point we're still only talking about content to consumer. Ideally, there would be an obvious demarcation between producer and consumer. Sadly, this is not the case. Comcast, as an example, is both. They have their own content and their own eyeballs, but they want to make their content available to everyone, and their eyeballs want content from everywhere. Add to this that it's almost impossible to gauge what effect a direct relationship with Comcast will have on a network, so other providers shy away from one unless large amounts of money change hands, which is a problem Comcast faces. They're not as well-connected as they need to be to provide the service they purport to sell, and thus the bandwidth caps.
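To make the economics of containment concrete, here's a minimal sketch in Python with purely invented per-GB costs (nothing ATT, Akamai, or anyone else publishes) of how the blended delivery cost climbs as less of a user's traffic stays on-net and more of it has to cross peering or paid transit:

# Illustrative model of per-bit delivery cost by traffic path.
# All per-GB figures are invented for the example; real costs vary
# wildly by network, region, and contract.

COST_PER_GB = {
    "on_net": 0.001,                   # stays entirely inside the carrier's network
    "settlement_free_peering": 0.005,  # crosses a well-provisioned peer (e.g. a big CDN)
    "paid_transit": 0.020,             # hauled across someone else's backbone for a fee
}

def blended_cost_per_gb(traffic_mix):
    """traffic_mix maps path name -> fraction of the user's traffic (sums to 1)."""
    return sum(COST_PER_GB[path] * share for path, share in traffic_mix.items())

# A mostly-contained customer versus one pulling content from
# poorly connected sources.
contained = {"on_net": 0.80, "settlement_free_peering": 0.15, "paid_transit": 0.05}
scattered = {"on_net": 0.20, "settlement_free_peering": 0.30, "paid_transit": 0.50}

print(f"contained user: ${blended_cost_per_gb(contained):.4f}/GB")
print(f"scattered user: ${blended_cost_per_gb(scattered):.4f}/GB")

The figures are made up; the point is only that each step away from contained traffic toward paid transit raises the blended cost per bit.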
But back to bit costs. There is a cost to move one bit, and it is almost identical to the cost to move a million bits. So if we stipulate that producers pay more per bit as they move fewer bits, it's easy to see that a consumer pays less per bit as they consume more bits. Not only that, but there comes a point where a consumer begins paying a negative amount per bit: their flat fee works out to less per bit than it costs to deliver those bits. An unwanted side effect is that those consumers degrade the experience of the people paying positive costs per bit. So for the users who pay negative amounts per bit, what do we do with them? We can either tell them that they're not allowed to consume enough bits to be paying negative amounts per bit, or we can tell them that they can pay a positive amount per bit. There's also the option of just dropping them as subscribers, but that's fairly rare. Of the available options, apparently the least offensive is to tell a consumer that they're not allowed to consume enough bits to pay negative costs per bit. I say apparently because I haven't gone around doing the research myself. I hope that somebody has somewhere. I'm not too convinced of that, but I hope so. What is not an option (working only from empirical evidence) is telling consumers that they can't have the capacity to consume enough bits to pay negative costs per bit. Probably because that would take consensus, which certain entities like to call collusion and put people in jail for.
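A back-of-the-envelope sketch of that break-even point, assuming a flat-rate plan and invented numbers (neither the fee nor the per-GB cost comes from any actual carrier): divide the flat fee by the bits consumed to get revenue per bit, and once that drops below the delivery cost, the subscriber is consuming at a negative margin.

# Toy model: margin per GB on a flat-rate subscriber as their
# consumption grows. The numbers are purely illustrative.

MONTHLY_FEE = 50.00   # flat rate the subscriber pays, in dollars
COST_PER_GB = 0.05    # assumed cost to the provider to deliver one GB

def margin_per_gb(gb_consumed):
    """Revenue per GB minus delivery cost per GB for a flat-rate subscriber."""
    return MONTHLY_FEE / gb_consumed - COST_PER_GB

break_even = MONTHLY_FEE / COST_PER_GB   # 1000 GB with these numbers
print(f"break-even consumption: {break_even:.0f} GB/month")

for gb in (100, 500, 1000, 2000, 5000):
    print(f"{gb:5d} GB/month -> margin {margin_per_gb(gb):+.4f} $/GB")
# Past the break-even point the per-GB margin goes negative, which is
# the usage level a cap is meant to head off.

A cap set at or below that break-even consumption is the "not allowed to consume enough bits" option described above.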
Offerings for the last mile, to the consumer, are also problematic in that they're not, by and large, bandwidth-conscious. The prevailing theory is that bandwidth is cheap. And in some parts of the network it is (relatively) cheap, but that's not the part of the network where it needs to be. When you find a bottleneck, it's very seldom on the backbone. Occasionally you'll see one; they're most prevalent during backhoe season, and they tend to be fairly transient. Most bottlenecks occur either at the peering edge or at the customer edge.

Bottlenecks at the peering edge can be dealt with more simply. Traffic can be rerouted; the peering can be augmented (this can take a while, but is generally fairly quick, since peering is watched closely and most augments are already in process by the time things get saturated, though humans are involved and sometimes things slip a bit too far); or maybe it's due to bad traffic, and that can get resolved.

Bottlenecks close to the customer are problems. As you get nearer the customer, and especially the residential customer, hardware age goes up, and your options beyond a complete upgrade are extremely limited. Wholesale platform migrations take a long time, so often a platform just gets capped, a new platform is put in place, and customer churn is relied on to solve the problem. Of course that doesn't always work, and sometimes a new platform comes with its own intrinsic problems. And if you're putting in a new platform in one place, you probably need to do it all over, and that takes some grip, which subtracts from the money you have available for needed expenditures elsewhere. Cash rules everything around me. Cream. Get the money. Dollar dollar bill, y'all.