Why The Internet in Sub-Saharan Africa Sucks
(Mildly technical post about Internet performance in emerging markets.)
Kenya is one of the most wired & sophisticated economies of sub-Saharan Africa, with a plentiful selection of undersea fiber-optic cables landing at Mombasa; indeed, the country serves as the gateway for many others in the region, like landlocked Rwanda and Uganda.
And yet, despite all this, even in Nairobi a substantial amount of traffic ends up routing through London or Amsterdam, producing 150-250ms round-trip times, even to the most sophisticated companies with aggressively global infrastructure.
I’m on superb WiFi here at the airport lounge as I wait to catch my flight (-54dBm RSSI on channel 36, to be precise). The airport appears to use AccessKenya as its upstream ISP. And the Internet is still pretty bad.
Twitter is being served to Nairobi from Atlanta. Which, it might be worth noting, is really, really far from Kenya. Like 300ms distant. An HTTPS handshake means three full round trips (one for the TCP handshake, two for the TLS negotiation) must complete before you can even begin your request to a server, so nearly a full second passes before your client even starts sending its request. DOMContentLoaded is 2.95s, and page load only finishes at 12.85s. Facebook, served from London, has a 4s DOMContentLoaded.
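You can watch those setup round trips yourself. Here’s a minimal sketch using only Python’s standard library (twitter.com is simply the example above; any HTTPS host works), timing the TCP connect and the TLS negotiation separately; on a classic TLS 1.2 connection those are roughly one and two round trips respectively:

```python
import socket
import ssl
import time

HOST = "twitter.com"  # the example host from above; any HTTPS site works
PORT = 443

# ~1 RTT: the TCP three-way handshake completes when connect() returns.
t0 = time.monotonic()
sock = socket.create_connection((HOST, PORT), timeout=10)
tcp_done = time.monotonic()

# ~2 more RTTs on classic TLS 1.2: wrap_socket() blocks until the
# TLS negotiation finishes, so this interval is the handshake cost.
ctx = ssl.create_default_context()
tls_sock = ctx.wrap_socket(sock, server_hostname=HOST)
tls_done = time.monotonic()
tls_sock.close()

print(f"TCP connect:   {(tcp_done - t0) * 1000:.0f}ms")
print(f"TLS handshake: {(tls_done - tcp_done) * 1000:.0f}ms")
print(f"Time before the HTTP request can even start: {(tls_done - t0) * 1000:.0f}ms")
```

At a 300ms RTT that total lands right around the 900ms mark, before a single byte of HTTP has left the machine.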
Google.com has a more reasonable 636ms to DOMContentLoaded due to “cheating” with a special protocol called QUIC, which replaces the usual TCP+TLS combination and doesn’t require any round trips to re-initiate communication once a session has ever been established, which makes the latency hurt less. It’s clear these kinds of low-overhead protocols could have high impact in these markets.
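To put numbers on that, here’s a toy calculation (a model, not a measurement) of how much time is burned before the first request byte, using the RTT figures from this post and assuming three setup round trips for classic TCP+TLS 1.2 versus zero for a resumed QUIC session:

```python
# Back-of-the-envelope: time spent before the first request byte, given
# an RTT and the number of setup round trips the protocol needs. RTTs
# are the figures observed in this post; the round-trip counts assume
# classic TCP + TLS 1.2 versus a resumed (0-RTT) QUIC session.
SETUP_ROUND_TRIPS = {
    "TCP + TLS 1.2": 3,   # 1 for TCP, 2 for the TLS negotiation
    "QUIC (resumed)": 0,  # the request rides along with the first packet
}

for rtt_ms, where in [(8, "Mombasa"), (150, "London"), (300, "Atlanta")]:
    for proto, rounds in SETUP_ROUND_TRIPS.items():
        wasted = rounds * rtt_ms
        print(f"{where:8s} RTT {rtt_ms:3d}ms  {proto:14s} "
              f"{wasted:4d}ms before the request starts")
```

The farther away the server, the bigger QUIC’s head start; at Atlanta distances it’s worth nearly a second on every fresh connection.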
Even David Ulevitch’s superbly peered and colocated OpenDNS answers its anycast IP 208.67.222.222 from 200+ms away, in South Africa.
Traffic to Google Public DNS delightfully ingresses to Google’s network in Mombasa (!) a mere 8ms away…but queries to the anycast IP (8.8.8.8) take 150ms; I’m guessing they’re answered from London. Network ingress != RTT, sadly; at least until content providers push more intelligence to the edge of their networks.
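Reproducing these resolver numbers is straightforward: time a single DNS query to each anycast IP. Here’s a sketch assuming the third-party dnspython package; note that the measured time includes whatever backhaul happens behind the anycast front door, which is exactly the point:

```python
import time
import dns.message  # third-party: pip install dnspython
import dns.query

RESOLVERS = {
    "OpenDNS": "208.67.222.222",
    "Google Public DNS": "8.8.8.8",
}

# One UDP query per resolver; the elapsed time is the round trip to
# whichever anycast node actually answers, backhaul included.
query = dns.message.make_query("example.com", "A")
for name, ip in RESOLVERS.items():
    t0 = time.monotonic()
    dns.query.udp(query, ip, timeout=2)
    rtt_ms = (time.monotonic() - t0) * 1000
    print(f"{name:18s} {ip:15s} {rtt_ms:6.1f}ms")
```

(A single sample is noisy; run it a few times and take the minimum if you want a cleaner floor.)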
Startups like Couple, hosted on AWS without CloudFront, require traversing the net all the way back to whichever US region they happened to plop their instances in.
Box.net, no tiny startup anymore, is being served out of SAN JOSE. Half a world and 368ms away.
So is Uber, which just launched in Nairobi.
Apple, ever an Akamai flagship customer, finds itself served out of Paris (186ms). Have I mentioned that, as a publicly traded company, pretty much Akamai’s primary job is to get as close to an end user as possible? And they are still further from me, at this global crossroads, than a round trip through medium Earth orbit via O3b!
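That satellite comparison is easy to sanity-check. At O3b’s roughly 8,062km constellation altitude, the physics-limited best case (satellite directly overhead, ignoring slant range and ground-segment delays) is a round trip of about 108ms, comfortably under that 186ms to Paris:

```python
# Best-case RTT through a medium-Earth-orbit satellite at O3b's
# ~8,062km altitude, assuming it sits directly overhead. Real paths
# are longer (slant range, processing), but this is the floor.
C_KM_PER_S = 299_792     # speed of light in vacuum
O3B_ALTITUDE_KM = 8_062

# A round trip traverses the altitude four times: up, down, up, down.
path_km = 4 * O3B_ALTITUDE_KM
rtt_ms = path_km / C_KM_PER_S * 1000
print(f"Best-case O3b RTT: {rtt_ms:.0f}ms")  # ~108ms
```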
Which leads me to a perhaps mildly surprising conclusion: from an end user’s perception of speed, it only matters somewhat how sophisticated the Kenyan domestic backbone gets, or even how many new fiber landings the country gets. As long as peering is poor, content caches are distant, and sites require lots of round trips to build a meaningful experience for users, the Internet will still be slow here. And clearly, if some of the most sophisticated startups launching products targeted at the local market aren’t doing well at this, it’s structurally hard to do.
That sounds to me like an interesting problem.
Originally posted on LinkedIn