A server is a computer that provides services to other computers, called clients. This client-server model is commonly seen in 🗼 Networking.

One crucial goal of a server is to maintain Concurrency, avoiding blocking system calls that would waste clients' time. To do so, servers employ a variety of architectures:

  1. Polling-based: system calls are made nonblocking (they return an error instead of blocking), and the server rotates among its open connections (polling), repeatedly retrying each one until a call succeeds.
  2. Thread-based: the server spins up one thread per connection, so if one thread blocks, the others are free to continue. With stream sockets, everything after accept() can be done by a worker thread. However, multithreaded servers suffer from high resource usage and high context-switching overhead, causing throughput to decrease as server load increases (more time is spent on overhead). We can somewhat mitigate this by bounding the number of threads via a Thread Pool.
  3. Event-driven: the server has one thread that runs an event loop waiting for new events. Operations that would block are instead surfaced as events, and the loop dispatches each one to the corresponding event handler, also called a callback, which must not block. Since there is only one thread, per-connection state must be managed by a continuation, which determines what the event handlers do. With this setup, there's no throughput degradation under load, and we avoid the overhead costs.
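The polling approach (item 1) can be sketched in Python. This is a minimal illustration, not a full server: `poll_connections` is a hypothetical helper name, and a connected `socketpair()` stands in for real client connections. The key idea is that `recv()` on a nonblocking socket raises `BlockingIOError` instead of waiting, so the loop just moves on to the next connection.

```python
import socket

def poll_connections(conns):
    """Visit every nonblocking socket once; return {sock: data} for ready ones."""
    ready = {}
    for conn in conns:
        try:
            data = conn.recv(4096)   # nonblocking: raises instead of waiting
        except BlockingIOError:
            continue                 # nothing to read yet; rotate to the next
        if data:
            ready[conn] = data
    return ready

# Demo with a connected socket pair standing in for a client connection.
server_side, client_side = socket.socketpair()
server_side.setblocking(False)       # make recv() return an error, not block

print(poll_connections([server_side]))  # {} -- the client hasn't sent anything
client_side.sendall(b"hello")
print(poll_connections([server_side]))
```

Note the cost this avoids and the cost it keeps: no call ever blocks, but the server burns CPU re-polling idle connections, which is why the event-driven design below usually wins at scale.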
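The thread-based approach (item 2) can be sketched as a thread-per-connection echo server in Python. The handler and function names here are illustrative, not a standard API: the main thread only calls `accept()`, and everything after it runs in a worker thread, so one blocked connection never stalls the others.

```python
import socket
import threading

def handle_connection(conn):
    """Worker thread body: echo bytes back until the client closes."""
    with conn:
        while True:
            data = conn.recv(4096)   # may block, but only this thread waits
            if not data:
                break
            conn.sendall(data)

def serve_forever(listener):
    while True:
        conn, _addr = listener.accept()  # blocks only the accepting thread
        worker = threading.Thread(target=handle_connection, args=(conn,),
                                  daemon=True)
        worker.start()                   # the worker now owns this connection

# Demo: one echo round trip over loopback (port 0 = pick any free port).
listener = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=serve_forever, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
client.sendall(b"ping")
print(client.recv(4096))
```

A Thread Pool variant would replace the unbounded `Thread(...).start()` with a fixed set of workers pulling accepted connections from a queue, bounding the resource usage mentioned above.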
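The event-driven approach (item 3) can be sketched with Python's standard `selectors` module. This is one possible shape, not the only one: each socket is registered with its callback, and the callback stored in the selector's data slot plays the role of the continuation, telling the loop what to do when that socket's event fires. No handler blocks, because it only runs once its socket is known to be ready.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def on_readable(conn):
    """Callback: must not block -- the socket is known to be readable."""
    data = conn.recv(4096)
    if data:
        conn.sendall(data)           # echo back; a tiny payload won't block
    else:
        sel.unregister(conn)
        conn.close()

def on_accept(listener):
    conn, _addr = listener.accept()  # ready, so this won't block
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, on_readable)

def run_once():
    """One turn of the event loop: wait for events, dispatch their callbacks."""
    for key, _mask in sel.select(timeout=1.0):
        callback = key.data          # the continuation for this socket
        callback(key.fileobj)

listener = socket.create_server(("127.0.0.1", 0))
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, on_accept)

client = socket.create_connection(listener.getsockname())
client.sendall(b"hi")
run_once()   # dispatches on_accept for the new connection
run_once()   # dispatches on_readable, which echoes "hi"
print(client.recv(4096))
```

A real server would wrap `run_once` in a `while True` loop; it is split out here only to make each turn of the loop visible.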