1. What is Nginx?
Nginx is a high-performance, open-source web server and reverse proxy. Built on an event-driven, asynchronous, non-blocking architecture, it can handle tens of thousands of concurrent connections with very low memory consumption.
Its most common use cases include acting as a static resource server, reverse proxy, and load balancer—distributing user requests across multiple backend application servers while exposing only Nginx itself to the outside world, thereby significantly enhancing performance, security, and availability.
Nginx is typically utilized to implement four core functionalities: Forward Proxy, Reverse Proxy, Load Balancing, and Dynamic/Static Separation.
2. Forward Proxy
Definition: A proxy server sitting between the client and the target server. It intercepts requests from the client and forwards them to the target server on the client's behalf. The target server only sees the proxy's IP and remains unaware of the actual client.
Core Logic: The client knows the destination but asks the proxy to fetch the data. It conceals the client's identity.
Common Use Cases:
* Bypassing Restrictions (VPN/Proxies): Accessing blocked target websites.
* Hiding Real IPs: Protecting client privacy.
* Access Control: Used by enterprises to restrict employee access to specific websites.
* Caching & Acceleration: Caching frequently requested resources to reduce redundant fetching.
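Plain Nginx can act as a simple HTTP-only forward proxy, roughly as sketched below. Note that this is a minimal illustration with hypothetical values (the listen port and resolver address), and that proxying HTTPS traffic via the CONNECT method is not supported by stock Nginx; it requires a third-party module such as ngx_http_proxy_connect_module.

```nginx
# Minimal HTTP-only forward proxy sketch (hypothetical ports/addresses).
server {
    listen 8080;
    resolver 8.8.8.8;            # DNS resolver needed to look up target hosts

    location / {
        # Forward the request to whatever host the client asked for.
        proxy_pass http://$host$request_uri;
        proxy_set_header Host $host;
    }
}
```

Clients would then configure `http://<proxy-ip>:8080` as their HTTP proxy; the target site sees only the proxy's IP.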
3. Reverse Proxy
Definition: A proxy server positioned in front of backend servers. It intercepts incoming client requests, forwards them to the actual backend servers, and then returns the response to the client. The client only ever communicates with the proxy.
Core Logic: A reverse proxy "greets guests on behalf of the servers." The proxy and the actual servers act as a single unit to the outside world. It conceals the server's identity and real IP. (This is the most typical Nginx use case).
Common Use Cases:
* Load Balancing: Distributing traffic across multiple backend servers to prevent single-point overload.
* Security Protection: Hiding the real server IPs, shrinking the attack surface, and mitigating DDoS attacks.
* SSL Termination: Consolidating HTTPS encryption/decryption at the proxy level to offload CPU burden from backend servers.
* Caching & Acceleration: Caching static content for faster response times.
* Unified Entry Point: Serving as a single domain gateway routing to multiple distinct microservices.
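A basic reverse-proxy setup looks roughly like the following sketch. The domain name and backend address are placeholders; the `X-Real-IP` / `X-Forwarded-For` headers are commonly added so the hidden backend can still see the original client IP.

```nginx
# Reverse proxy sketch: clients talk only to Nginx on port 80,
# which forwards to a backend app server (address is hypothetical).
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # the real backend server
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```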
4. Load Balancing
Definition: Built upon the concept of reverse proxying, it distributes concurrent user requests across a cluster of servers to prevent any single machine from being overwhelmed.
In Simple Terms: Instead of one server taking all the hits, a group of servers shares the workload. Requests are distributed according to rules so no single machine collapses.
Core Values:
| Benefit | Description |
|---|---|
| Increased Performance | Dispersing requests reduces the load on single machines and shortens response times. |
| High Availability | If one server crashes, traffic is automatically routed to healthy nodes. |
| Horizontal Scaling | Add more machines to handle increased traffic without the need to upgrade single-machine hardware. |
| Enhanced Security | Only the load balancer's IP is exposed to the public; backend servers remain completely hidden. |
Common Routing Algorithms:
1. Round Robin: Allocates requests sequentially and evenly (the simplest strategy and Nginx's default).
2. Weighted Round Robin: Assigns more requests to servers with higher performance/weight.
3. Least Connections: Routes the request to the server with the fewest active connections.
4. IP Hash: Ensures requests from the same client IP are consistently routed to the same backend server (Useful for Session persistence).
5. Random: Selects a server entirely at random.
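In Nginx these algorithms are configured in an `upstream` block. The sketch below shows weighted round robin with hypothetical backend addresses; the commented-out directives show how the other built-in strategies would be selected instead.

```nginx
upstream backend {
    # Weighted round robin: 10.0.0.1 receives roughly twice
    # as many requests as 10.0.0.2 (addresses are hypothetical).
    server 10.0.0.1:8080 weight=2;
    server 10.0.0.2:8080 weight=1;

    # Alternatives -- uncomment at most one per upstream block:
    # least_conn;   # route to the server with the fewest active connections
    # ip_hash;      # pin each client IP to one backend (session persistence)
    # random;       # pick a backend at random
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;   # distribute across the upstream group
    }
}
```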
5. Dynamic & Static Content Separation
Definition: The practice of processing dynamic requests and static resources separately.
* Static Resources: Files whose content does not change, such as HTML, CSS, JS, images, videos, and fonts.
* Dynamic Requests: Requests that require backend computation, such as API calls, database queries, and user logins.
Core Approach: Nginx intercepts and serves static files directly, while only forwarding dynamic computational requests to backend application servers (e.g., Tomcat, Node.js, PHP-FPM).
Why Do It? (Core Values):
| Advantage | Description |
|---|---|
| Massive Performance Boost | Nginx serves files directly from disk with minimal overhead; its static-file throughput under high concurrency far exceeds that of app servers like Tomcat. |
| Reduced Backend Load | Backend servers only dedicate CPU to business logic, freed from the drag of static file I/O operations. |
| Optimized Caching | Static assets can easily implement independent browser caching policies and be seamlessly integrated with CDNs. |
| Scalability & Decoupling | Dynamic code nodes and static resource nodes can be deployed independently and scaled separately as needed. |
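This separation is typically expressed with two `location` blocks: a regex match that serves static file types straight from disk, and a catch-all that proxies everything else to the app server. The paths, extensions, and backend address below are illustrative assumptions.

```nginx
server {
    listen 80;

    # Static resources: served directly by Nginx from disk.
    location ~* \.(html|css|js|png|jpg|gif|mp4|woff2?)$ {
        root /var/www/static;    # hypothetical asset directory
        expires 30d;             # let browsers cache static assets
    }

    # Dynamic requests (API calls, logins, DB-backed pages):
    # forwarded to the backend application server.
    location / {
        proxy_pass http://127.0.0.1:8080;   # e.g. Tomcat, Node.js, PHP-FPM
        proxy_set_header Host $host;
    }
}
```

Because the regex `location` takes precedence for matching extensions, the backend never spends CPU on static file I/O.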