For NGINX, the main configuration file is /etc/nginx/nginx.conf; it holds the core and mandatory configuration.
Let us add an include statement to pull in our customized configs.
The nginx.conf file already has an include directive in the http section, which tells the main config where to pull additional configs from.
By default it pulls them from the /etc/nginx/conf.d directory.
We'll add an additional include directive to pull our configs from the /etc/nginx/vhost.d directory, where we are going to keep our custom configs. We'll remove the server block from the main file and keep it in our custom config file instead (a sample is shown further below).
# cp -prv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.`date +"%Y%m%d"`
# vi /etc/nginx/nginx.conf
include /etc/nginx/vhost.d/*.conf;
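After this change, the relevant part of the http section would look roughly like the sketch below (only the include lines matter here; everything else stays as shipped).

http {
    # ... existing directives ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/vhost.d/*.conf;
}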
If we are hosting multiple websites, we can use a SITE_NAME.d directory per site instead of a single vhost.d; this keeps each site's configs separate and avoids confusion.
Now let us create a directory named vhost.d and prepare our custom config.
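For example, assuming a site served under a placeholder domain and document root (adjust both for your environment), the directory and a minimal vhost config could look like this:

# mkdir -p /etc/nginx/vhost.d
# vi /etc/nginx/vhost.d/mysite.conf

server {
    listen       80;
    server_name  example.com;            # placeholder domain
    root         /var/www/example.com;   # placeholder document root

    location / {
        index  index.html index.htm;
    }
}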
The NGINX main config file is located at /etc/nginx/nginx.conf.
Let us first go through this configuration file and its main options.
worker_processes
This tells NGINX how many worker processes to spawn once it has bound to the configured IP/port; each worker handles client connections independently.
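A common choice is one worker per CPU core; recent NGINX versions also accept the value auto to size this automatically, for example:

worker_processes auto;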
worker_connections
This sets how many simultaneous connections each worker process can handle; we can tune it based on how many connections the host is expected to accept.
Running ulimit -n shows the per-process open file limit, which effectively caps how many connections a worker can hold.
worker_connections 1024;
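Note that worker_connections belongs in the events block of nginx.conf, so in context it looks like:

events {
    worker_connections 1024;
}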
Buffers These directives should be placed in the http section of the nginx config file, before the include statements (a combined example is shown after the timeout settings below).
client_body_buffer_size
This sets the size of the buffer used for reading the client request body.
client_body_buffer_size 10k;
client_header_buffer_size
This sets the size of the buffer used for reading the client request header.
client_header_buffer_size 1k;
client_max_body_size
This sets the maximum allowed size of a client request body; larger requests are rejected.
client_max_body_size 8m;
large_client_header_buffers
This sets the maximum number and size of buffers used for reading large client request headers.
large_client_header_buffers 2 1k;
Timeouts These directives tell the server how long to wait on a client before giving up on a request. The most commonly tuned ones are listed below.
client_body_timeout This defines how long the server waits between successive reads of the client request body; here we set it to 12 seconds.
client_body_timeout 12;
client_header_timeout This defines how long the server waits for the client to send the complete request header; we keep it the same as the body timeout.
client_header_timeout 12;
keepalive_timeout, send_timeout keepalive_timeout sets how long an idle keep-alive connection stays open; send_timeout sets how long NGINX waits between successive writes to the client before closing the connection. If these directives are already present in your config, adjust the values to suit your needs.
keepalive_timeout 15;
send_timeout 10;
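Putting the buffer and timeout settings together, the relevant part of the http section would look roughly like this sketch (using the values above, placed before the include statements shown earlier):

http {
    client_body_buffer_size     10k;
    client_header_buffer_size   1k;
    client_max_body_size        8m;
    large_client_header_buffers 2 1k;

    client_body_timeout   12;
    client_header_timeout 12;
    keepalive_timeout     15;
    send_timeout          10;

    # ... the include statements shown earlier follow here ...
}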
Test your configuration It is always good practice to verify the config before reloading or restarting the service, as the test points out exactly where a misconfiguration is.
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
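Once the test passes, the new configuration can be applied with a reload, for example:

# nginx -s reload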
NGINX is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. NGINX is known for its high performance, stability, rich feature set, simple configuration, and low resource consumption.
NGINX is one of a handful of servers written to address the C10K problem. Unlike traditional servers, NGINX doesn’t rely on threads to handle requests. Instead it uses a much more scalable event-driven (asynchronous) architecture. This architecture uses small, but more importantly, predictable amounts of memory under load. Even if you don’t expect to handle thousands of simultaneous requests, you can still benefit from NGINX’s high-performance and small memory footprint. NGINX scales in all directions: from the smallest VPS all the way up to large clusters of servers.
Overview of nginx Architecture
Traditional process- or thread-based models of handling concurrent connections involve handling each connection with a separate process or thread, and blocking on network or input/output operations. Depending on the application, it can be very inefficient in terms of memory and CPU consumption. Spawning a separate process or thread requires preparation of a new runtime environment, including allocation of heap and stack memory, and the creation of a new execution context. Additional CPU time is also spent creating these items, which can eventually lead to poor performance due to thread thrashing on excessive context switching. All of these complications manifest themselves in older web server architectures like Apache's. This is a tradeoff between offering a rich set of generally applicable features and optimized usage of server resources.
From the very beginning, nginx was meant to be a specialized tool to achieve more performance, density and economical use of server resources while enabling dynamic growth of a website, so it has followed a different model. It was actually inspired by the ongoing development of advanced event-based mechanisms in a variety of operating systems. What resulted is a modular, event-driven, asynchronous, single-threaded, non-blocking architecture which became the foundation of nginx code.
Nginx uses multiplexing and event notifications heavily, and dedicates specific tasks to separate processes. Connections are processed in a highly efficient run-loop in a limited number of single-threaded processes called workers. Within each worker nginx can handle many thousands of concurrent connections and requests per second.
Now we need to adjust the configs to change the ports, since each Tomcat instance must listen on its own distinct set of ports.
[TOMCAT1]
Shutdown Port -> 7005
Web Port -> 7080
Redirect Port -> 7443
AJP Conn. Port -> 7009
[TOMCAT2]
Shutdown Port -> 8005
Web Port -> 8080
Redirect Port -> 8443
AJP Conn. Port -> 8009
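As a quick illustration, here is roughly where these ports live in each instance's conf/server.xml (shown for TOMCAT1, assuming the stock server.xml layout; only the port attributes differ between the two instances):

<Server port="7005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <!-- Web (HTTP) port and redirect port -->
    <Connector port="7080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="7443" />
    <!-- AJP connector port -->
    <Connector port="7009" protocol="AJP/1.3" redirectPort="7443" />
    <!-- Engine, Host, etc. remain unchanged -->
  </Service>
</Server>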
Please refer to Basic Clustering for the details of changing these ports and for how to run these Tomcat instances in a clustered environment to balance the load.