
Scaling to 100k Users


SOURCE: https://alexpareto.com/scalability/systems/2020/02/03/scaling-100k.html
Summary

The client renders that data to the user. I've found in modern application development that thinking of the client as a completely separate entity from the API makes it much easier to reason about scaling the application. When we first start building the application, it's alright for all three of these things to run on one server. In a way, this resembles our development environment: one engineer runs the database, API, and client all on the same computer. In theory, we could deploy this to the cloud on a single DigitalOcean Droplet or AWS EC2 instance.

With that said, if we expect Graminsta to be used by more than one person, it almost always makes sense to split out the database layer. Splitting out the database into a managed service like Amazon's RDS or Digital Ocean's Managed Database will serve us well for a long time. As one part of the system gets more traffic, we can split it out so that we can scale the service based on its own specific traffic patterns. This is why I like to think of the client as separate from the API.

This allows for horizontal scaling (increasing the number of requests we can handle by adding more servers running the same code). We're going to place a separate load balancer in front of our web client and our API. We can set up our load balancer to increase the number of instances during the Super Bowl when everyone is online and decrease the number of instances when all of our users are asleep. With a load balancer, our API layer can scale to practically infinity; we just keep adding instances as we get more requests. Side note: at this point, what we have so far is very similar to what PaaS companies like Heroku or AWS's Elastic Beanstalk provide out of the box (and why they're so popular).

Redis has a built-in Redis Cluster mode that, in a similar way to a load balancer, lets us distribute our Redis cache across multiple machines (thousands if one so pleases). Nearly all highly scaled applications take ample advantage of caching; it's an absolutely integral part of making a fast API.

We can put the websocket code on new instances behind their own load balancer that can scale up and down based on how many websocket connections have been opened or closed, independently of how many HTTP requests we have coming in. We're also going to continue to bump up against limitations on the data layer.
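As a rough illustration of the database split described above, the API can read its connection string from the environment, so the data layer can move from a local Postgres to a managed service like RDS or a Managed Database without code changes. This is a minimal sketch, not from the article: the psycopg2 dependency, the DATABASE_URL variable, and the credentials are all assumptions.

```python
import os

import psycopg2  # assumes the API talks to PostgreSQL through psycopg2

# In development this points at the Postgres running on the same machine as
# the API and client; after the split it points at the managed instance
# (e.g. an RDS or DigitalOcean Managed Database endpoint).
DATABASE_URL = os.environ.get(
    "DATABASE_URL",
    "postgresql://graminsta:secret@localhost:5432/graminsta",  # local default
)


def get_connection():
    """Open a connection to whatever database DATABASE_URL points at."""
    return psycopg2.connect(DATABASE_URL)
```

The "scale up for the Super Bowl, scale down at night" behaviour is typically an autoscaling policy attached to the instance group behind the load balancer. Here is one hedged sketch using boto3; the article doesn't prescribe a specific tool, so the Auto Scaling group name, region, and CPU target below are illustrative assumptions.

```python
import boto3  # assumes the API instances run in an AWS Auto Scaling group

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target-tracking policy: AWS adds API instances behind the load balancer when
# average CPU rises above the target and removes them when traffic dies down.
response = autoscaling.put_scaling_policy(
    AutoScalingGroupName="graminsta-api",  # hypothetical group name
    PolicyName="track-50-percent-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
print(response["PolicyARN"])
```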
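The Redis Cluster point can be made concrete with a cache-aside read: the client is pointed at any reachable node and routes each key to the shard that owns it. The endpoint, key scheme, TTL, and the stub database helper here are assumptions for illustration (the RedisCluster client ships with redis-py 4.1+).

```python
from redis.cluster import RedisCluster  # cluster client in redis-py >= 4.1

# Any reachable node works as a seed; the client discovers the rest of the
# cluster and routes each key to the shard that owns its hash slot.
cache = RedisCluster(host="cache.graminsta.internal", port=6379)


def load_profile_from_db(user_id: int) -> bytes:
    """Stand-in for the real database read."""
    return f"profile-for-{user_id}".encode()


def get_profile(user_id: int) -> bytes:
    """Cache-aside read: try Redis first, fall back to the database."""
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached
    profile = load_profile_from_db(user_id)
    cache.set(key, profile, ex=300)  # keep the entry warm for five minutes
    return profile
```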
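Scaling the websocket fleet on connection count rather than HTTP traffic usually means feeding a custom metric to whatever drives the autoscaling. A sketch, again assuming AWS and CloudWatch; the article doesn't name a monitoring stack, and the metric namespace is hypothetical.

```python
import boto3  # assumes CloudWatch is the metrics backend for autoscaling

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")


def report_open_connections(open_connections: int) -> None:
    """Publish the current websocket connection count.

    An autoscaling policy tracking this metric can then grow or shrink the
    websocket instances independently of HTTP request volume.
    """
    cloudwatch.put_metric_data(
        Namespace="Graminsta/Websockets",  # hypothetical namespace
        MetricData=[{
            "MetricName": "OpenConnections",
            "Value": float(open_connections),
            "Unit": "Count",
        }],
    )
```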
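Each instance would call report_open_connections periodically (for example, every time a connection opens or closes), which keeps the scaling signal for the websocket tier decoupled from the request-rate signal used by the API tier.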
