A server is a computer that provides services to other computers, called clients. This client-server model is commonly seen in Networking.
One crucial goal of the server is to maintain Concurrency, avoiding blocking system calls that might waste client time. To do so, servers employ a variety of architectures:
- Polling-based: system calls are made nonblocking (they return an error instead of blocking), and the server rotates among its open connections (polling), repeatedly calling until something succeeds (first sketch after this list).
- Thread-based: the server spins up one thread per connection, so if one thread blocks, the others are free to continue. With stream sockets, everything after `accept()` can be done by a worker thread. However, multithreaded servers suffer from high resource usage and high context-switching overhead, causing throughput to decrease as the server load increases (more time spent on overhead). We can mitigate this by bounding the number of threads via a Thread Pool (second sketch below).
- Event-driven: the server has one thread that runs an event loop waiting for new events. Calls that would block are instead received as events, and the loop calls the corresponding event handler, also called a callback, which isn't blocking. Since we only have one thread, the state must be managed by a continuation, which determines what the event handlers do. With this setup, there's no throughput degradation under load, and we avoid the overhead costs (third sketch below).
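
A minimal sketch of the polling-based approach in Python, assuming a toy echo server on port 8080 (the port and the echo behavior are illustrative, not from the notes): every socket is made nonblocking, and the main loop rotates among the open connections, retrying calls until one succeeds.

```python
import socket

# Polling-based echo server sketch (illustrative port 8080).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8080))
server.listen()
server.setblocking(False)          # accept() errors out instead of blocking

connections = []
while True:
    try:
        conn, _ = server.accept()  # raises BlockingIOError if nothing is pending
        conn.setblocking(False)
        connections.append(conn)
    except BlockingIOError:
        pass
    for conn in list(connections):  # rotate among open connections (polling)
        try:
            data = conn.recv(4096)  # raises BlockingIOError if no data yet
            if data:
                conn.sendall(data)  # simplified: assumes the reply fits in the send buffer
            else:                   # client closed the connection
                conn.close()
                connections.remove(conn)
        except BlockingIOError:
            continue
```

Note the trade-off: nothing ever blocks, but the loop burns CPU retrying calls even when no client has anything to say.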
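A sketch of the thread-based approach under the same illustrative echo-server assumptions: the main thread only calls `accept()`, a worker thread handles everything after it, and a thread pool bounds the number of threads.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Thread-per-connection echo server sketch (illustrative port 8080).
def handle(conn):
    with conn:
        while True:
            data = conn.recv(4096)   # blocking, but only this worker waits
            if not data:
                break
            conn.sendall(data)

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 8080))
    server.listen()
    with ThreadPoolExecutor(max_workers=32) as pool:   # thread pool bounds thread count
        while True:
            conn, _ = server.accept()                  # blocks only the accepting thread
            pool.submit(handle, conn)                  # worker does everything after accept()
```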
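And a sketch of the event-driven approach using Python's standard `selectors` module, again as an illustrative echo server: a single thread runs the event loop, each socket is registered with a callback, and the handler stored at registration plays the role of the continuation that decides what happens next for that connection.

```python
import selectors
import socket

# Event-driven echo server sketch (illustrative port 8080).
sel = selectors.DefaultSelector()

def on_accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, on_readable)  # callback acts as the continuation

def on_readable(conn):
    data = conn.recv(4096)       # the socket is ready, so this does not block
    if data:
        conn.sendall(data)       # simplified: assumes the reply fits in the send buffer
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8080))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, on_accept)

while True:                                  # the event loop
    for key, _ in sel.select():              # wait for new events
        callback = key.data                  # handler registered for this socket
        callback(key.fileobj)
```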