This weekend, I pulled out Wireshark on the new Sims 4 Create-A-Sim demo. I was super impressed at how damn fast the community Sims were coming through and had to go behind the scenes.
You can scroll this infinite list as fast as you like, and descriptions are there immediately, while the JPEGs fill in asynchronously. It’s insanely fast. Wireshark reveals that a raw TCP data connection (not HTTP) is kept open to the EA servers, while the JPEGs are filled in from Akamai over HTTP. The JPEGs take about 5ms each on my home Comcast connection. That’s pretty fast. So let’s break down request/response times for the data connection: 161ms, 201ms, 169ms, 170ms… generally in that range.
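The actual wire protocol isn’t documented, but the measurement technique is simple: time each request/response round trip on the already-open connection. Here’s a minimal sketch using a hypothetical length-prefixed protocol, with a local socketpair and an echo thread standing in for the EA service:

```python
import socket
import threading
import time

def echo_server(conn):
    # Stand-in for the remote data service: echo each length-prefixed message back.
    while True:
        header = conn.recv(4)
        if not header:
            break
        n = int.from_bytes(header, "big")
        payload = conn.recv(n)
        conn.sendall(header + payload)

client, server = socket.socketpair()
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

def timed_request(sock, payload):
    """Send one request on the persistent connection and time the round trip."""
    start = time.perf_counter()
    sock.sendall(len(payload).to_bytes(4, "big") + payload)
    n = int.from_bytes(sock.recv(4), "big")
    body = b""
    while len(body) < n:
        body += sock.recv(n - len(body))
    return body, (time.perf_counter() - start) * 1000  # elapsed ms

body, ms = timed_request(client, b"list sims page=1")
print(f"{len(body)} bytes back in {ms:.2f}ms")
```

Against the real server, that elapsed time is exactly what Wireshark shows between the outgoing frame and its response.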
This doesn’t seem very impressive, until you traceroute and find out the server’s in freakin’ Dublin, Ireland. That’s a 52ms round trip at straight-line speed of light. Ping gives me 159ms to this server. So this server is giving back sorted data in anywhere from 2-40ms, mostly under 10ms. Generalized string search takes under 10ms as you type, and 19ms once you hit return. And that’s including any network overhead inside their data center in front of the service, so the community service itself is probably responding in 2-3ms consistently. Holy hell. Then I tried the search suggest of some notable top-10 websites (typing a few characters, then getting suggestions in the search box):
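The 52ms figure is just physics. Assuming roughly 7,800 km between me and Dublin (a plausible US-to-Ireland great-circle distance; the exact endpoints are my guess), the floor works out like this; note light in fiber travels at roughly two-thirds of c, so the practical floor is even higher:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def min_rtt_ms(distance_km):
    # Theoretical floor: light in a vacuum, straight line, there and back.
    return 2 * distance_km * 1000 / C * 1000

# ~7,800 km is in the ballpark of a US-to-Dublin great-circle distance
print(f"{min_rtt_ms(7800):.0f}ms")  # → 52ms
```

Everything above that floor is routing, queuing, and server time, which is why subtracting the 159ms ping from the measured latencies isolates roughly how long the service itself takes.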
80ms (server ping 11ms), 158ms (server ping 13ms), 74ms (server ping 12ms), 221ms! (server ping 20ms)
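Running the same latency-minus-ping arithmetic on both sets of measurements above (this is a rough upper bound on server time, since it still includes everything past the first hop):

```python
# (measured latency, ping to that server) pairs in ms, from the samples above
data_conn = [(161, 159), (201, 159), (169, 159), (170, 159)]
suggest = [(80, 11), (158, 13), (74, 12), (221, 20)]

def server_side(samples):
    """Rough server-side time: measured latency minus the network RTT."""
    return [total - ping for total, ping in samples]

print(server_side(data_conn))  # → [2, 42, 10, 11]
print(server_side(suggest))    # → [69, 145, 62, 201]
```

The Dublin server is doing its work in single-digit to low-double-digit milliseconds; the search-suggest endpoints, sitting far closer to me, are spending far longer actually serving.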
<100ms is pretty good for most users; it’s what I aim for on HTTP endpoints. That 221ms outlier is fairly surprising, though, given the size and scale of the engineering org. It’s a company that rhymes with Cramazon, and it widely uses service-based architecture for its products.
Which led me to my next thought about all of this: why are folks adding such incredible overhead with HTTP on internal services, mostly for CRUD operations, at the expense of their users’ experience? The reason, almost universally, is to improve the developers’ lives in some way. What about the users? One internal service with a 150ms response time can obliterate the user experience upstream.

An experience like the Sims 4’s benefits mobile as well. You can be on a terrible mobile connection and still get a great user experience from a service this fast, succinct, and responsive. But most mobile developers would rather accept the 500+ms latencies of a Ruby on Rails REST API. That’s for themselves, not for their users.

The point of all this is to reaffirm something I’ve been proselytizing for some time: performance is a base requirement. You can’t wave it away by quoting Knuth on “premature optimization.” People who do that “suddenly” find themselves needing 10x-100x speedups later. Clearly the Sims 4 team took performance seriously up front, and the result is an incredibly smooth user experience.
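To make “succinct” concrete, here’s a sketch of the same catalog lookup expressed as an HTTP request versus a compact binary frame on a persistent connection. Every name, path, and opcode here is invented for illustration; the real Sims 4 protocol is undocumented:

```python
import struct

# A plausible HTTP request for a paginated catalog fetch (names invented).
http_request = (
    b"GET /sims/v1/catalog?page=7&sort=downloads HTTP/1.1\r\n"
    b"Host: community.example.com\r\n"
    b"Accept: application/json\r\n"
    b"Connection: keep-alive\r\n"
    b"\r\n"
)

# The same request as a fixed binary frame (opcodes invented):
OP_LIST_CATALOG = 0x01
SORT_DOWNLOADS = 0x02
# One byte of opcode, two bytes of page number, one byte of sort flags.
binary_request = struct.pack(">BHB", OP_LIST_CATALOG, 7, SORT_DOWNLOADS)

print(f"HTTP: {len(http_request)} bytes per request, parsed as text")
print(f"Binary frame: {len(binary_request)} bytes on an already-open socket")
```

The byte count is the least of it: the binary frame also skips per-request connection handling, header parsing, and text serialization, which is where much of that 150ms goes.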