Saturday 30 December 2017

Nginx04-Standard-Configuration

Standard NGINX Configuration

  • The main configuration file for NGINX is /etc/nginx/nginx.conf; it holds the core, mandatory configuration.
  • Let us add an include statement to pull in our customized configs.
  • The nginx.conf file already has an include directive in the http section, which tells NGINX where to pull additional configs from.
  • By default it pulls configs from the /etc/nginx/conf.d directory.
  • We'll add an additional include directive to pull our configs from /etc/nginx/vhost.d, where we will keep our custom configs. We'll remove the server block from the main file and keep it in our custom config file instead.
# cp -prv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.`date +"%Y%m%d"`
# vi /etc/nginx/nginx.conf

include /etc/nginx/vhost.d/*.conf;
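For context, the relevant part of nginx.conf would look roughly like this after the edit (a sketch; your http block will contain other directives as well):

```nginx
http {
    # ... other http-level directives ...

    # default per-package config directory
    include /etc/nginx/conf.d/*.conf;
    # our custom vhost configs
    include /etc/nginx/vhost.d/*.conf;
}
```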
  • If we are hosting multiple websites, we can use a SITE_NAME.d directory per site in place of vhost.d. Keeping each site's configs in its own directory avoids confusion when hosting multiple websites.
  • Now let us create a directory named vhost.d and prepare our custom config.
# cd /etc/nginx
# mkdir vhost.d
# cp -prv conf.d/default.conf vhost.d/
  • If you are unable to find the default.conf file in the conf.d directory, you can copy a default.conf from another source.
  • Comment out the include directive that loads the default configs from /etc/nginx/default.d.
  • In the location section, change the document root from /usr/share/nginx/html to /var/www/html.
  • Create the document root and an index file.
# mkdir /var/www/html
# echo -e "WELCOME TO LINUX-LIBRARY NGINX WEBSERVER\nThis site is under development" > /var/www/html/index.html
  • Once you are done with your customizations, restart the nginx service.
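Putting the steps together, the custom config in /etc/nginx/vhost.d/default.conf might look roughly like this (a sketch based on the stock default.conf; the listen and server_name values are assumptions to adapt for your site):

```nginx
server {
    listen       80;
    server_name  localhost;

    # document root moved from /usr/share/nginx/html
    location / {
        root   /var/www/html;
        index  index.html index.htm;
    }

    # include /etc/nginx/default.d/*.conf;   # commented out per the steps above

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
```

After saving, `nginx -t` followed by `systemctl restart nginx` applies the change.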

Thursday 21 December 2017

Nginx03-Configuration-Optimization

Optimizing the NGINX Configurations

  • Location of the nginx main config file is /etc/nginx/nginx.conf
  • Let us first go through this configuration file and its key options.
  • worker_processes
    • This sets the number of worker processes NGINX spawns to handle requests after binding to the configured IP/port. It is commonly set to the number of CPU cores, or to auto.
  • worker_connections
    • This sets the maximum number of simultaneous connections each worker process can handle (the NGINX default is 512; distro packages often ship 1024). Raise or lower it depending on the connections the host can accept.
    • ulimit -n shows the per-process open-file limit, which caps how many connections a worker can actually hold open.
     worker_connections 1024;
    
  • Buffers. These directives go in the http section, before the include statements in the nginx config file.
    • client_body_buffer_size: buffer size for reading the client request body (e.g. POST data); larger bodies spill to a temporary file.
     client_body_buffer_size 10k;
    
    • client_header_buffer_size: buffer size for reading the client request header.
     client_header_buffer_size 1k;
    
    • client_max_body_size: maximum allowed size of a client request body; requests over this limit get a 413 (Request Entity Too Large) error.
     client_max_body_size 8m;
    
    • large_client_header_buffers: maximum number and size of buffers used to read large client request headers.
     large_client_header_buffers 2 1k;
    
  • Timeouts. These tell the server how long to wait on a client once a request has started. There are two client-side timeouts.
    • client_body_timeout: how long the server waits between successive reads of the request body; 12 seconds here is an example value (the NGINX default is 60s).
     client_body_timeout 12;
    
    • client_header_timeout: the same idea for the request header; we keep it the same as the body timeout.
     client_header_timeout 12;
    
    • keepalive_timeout / send_timeout: these control how long an idle keep-alive connection stays open and how long NGINX waits between successive writes when sending a response to the client. If they are already present, adjust the values to suit.
     keepalive_timeout 15;
     send_timeout 10;
    
  • Test your configurations. It is always good practice to verify the config before restarting the application, so you can see exactly where a misconfiguration is.
# nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
  • Restart nginx
# systemctl restart nginx
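Taken together, the tuning directives above would sit in nginx.conf roughly like this (a sketch; the values are the examples from this post, not universal recommendations):

```nginx
worker_processes  auto;          # commonly one worker per CPU core

events {
    worker_connections  1024;    # per-worker connection cap (check ulimit -n)
}

http {
    # buffers
    client_body_buffer_size     10k;
    client_header_buffer_size   1k;
    client_max_body_size        8m;
    large_client_header_buffers 2 1k;

    # timeouts
    client_body_timeout    12;
    client_header_timeout  12;
    keepalive_timeout      15;
    send_timeout           10;

    include /etc/nginx/conf.d/*.conf;
}
```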

Saturday 16 December 2017

Nginx02-Installation

Installation and Setup

  • Configure the EPEL repository and install NGINX
# yum install epel-release -y
# yum install nginx -y
  • Enable Nginx and start it
# systemctl enable nginx
# systemctl start nginx

Nginx01-Intro

NGINX

NGINX WebServer Administration

NGINX is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. NGINX is known for its high performance, stability, rich feature set, simple configuration, and low resource consumption.
NGINX is one of a handful of servers written to address the C10K problem. Unlike traditional servers, NGINX doesn’t rely on threads to handle requests. Instead it uses a much more scalable event-driven (asynchronous) architecture. This architecture uses small, but more importantly, predictable amounts of memory under load. Even if you don’t expect to handle thousands of simultaneous requests, you can still benefit from NGINX’s high-performance and small memory footprint. NGINX scales in all directions: from the smallest VPS all the way up to large clusters of servers.

Overview of nginx Architecture

[Figure: nginx architecture diagram]
Traditional process- or thread-based models of handling concurrent connections involve handling each connection with a separate process or thread, and blocking on network or input/output operations. Depending on the application, it can be very inefficient in terms of memory and CPU consumption. Spawning a separate process or thread requires preparation of a new runtime environment, including allocation of heap and stack memory, and the creation of a new execution context. Additional CPU time is also spent creating these items, which can eventually lead to poor performance due to thread thrashing on excessive context switching. All of these complications manifest themselves in older web server architectures like Apache's. This is a tradeoff between offering a rich set of generally applicable features and optimized usage of server resources.
From the very beginning, nginx was meant to be a specialized tool to achieve more performance, density and economical use of server resources while enabling dynamic growth of a website, so it has followed a different model. It was actually inspired by the ongoing development of advanced event-based mechanisms in a variety of operating systems. What resulted is a modular, event-driven, asynchronous, single-threaded, non-blocking architecture which became the foundation of nginx code.
Nginx uses multiplexing and event notifications heavily, and dedicates specific tasks to separate processes. Connections are processed in a highly efficient run-loop in a limited number of single-threaded processes called workers. Within each worker nginx can handle many thousands of concurrent connections and requests per second.

Tomcat16-Multiple-Instances

Running Multiple Tomcat Instances on a Single Server Setup

  • Be sure you have installed JDK-8 and check the following variables
# env | grep JAVA
JAVA_HOME=/opt/jdk8

# env | grep JRE
JRE_HOME=/opt/jdk8/jre
  • Add tomcat user and group
# groupadd tomcat
# useradd -M -s /sbin/nologin -g tomcat -d /opt/tomcat tomcat
  • Download the Tomcat base package to your machine and extract it to /opt/tomcat-src.
  • In my case I already have Tomcat downloaded, so I am going to extract it
# mkdir /opt/tomcat-src
# tar -xzvf ~/apache-tomcat-8.5.11.tar.gz -C /opt/tomcat-src/ --strip-components=1
  • Let us create 2 directories for our tomcat instances
# mkdir /opt/tomcat{1,2}
  • Copy the contents of tomcat-src to tomcat1 and tomcat2
# cp -prf /opt/tomcat-src/* /opt/tomcat1
# cp -prf /opt/tomcat-src/* /opt/tomcat2
  • Let us change the permissions of some directories in tomcat1 and tomcat2
# cd /opt/tomcat1/
# chgrp -R tomcat conf
# chmod g+rwx conf/
# chmod g+r conf/*
# chown -R tomcat work/ temp/ logs/
# cd /opt/tomcat2/
# chgrp -R tomcat conf
# chmod g+rwx conf/
# chmod g+r conf/*
# chown -R tomcat work/ temp/ logs/
  • Now we need to change the configs so that each instance listens on its own distinct set of ports, since two instances cannot share the same ports.
[TOMCAT1]

Shutdown Port   ->      7005
Web Port        ->      7080
Redirect Port   ->      7443
AJP Conn. Port  ->      7009
[TOMCAT2]

Shutdown Port   ->      8005
Web Port        ->      8080
Redirect Port   ->      8443
AJP Conn. Port  ->      8009
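For illustration, the corresponding lines in tomcat1's conf/server.xml would be edited roughly as follows (a sketch of the stock server.xml with the TOMCAT1 ports substituted; attribute lists are abbreviated):

```xml
<!-- /opt/tomcat1/conf/server.xml : shutdown port -->
<Server port="7005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <!-- HTTP (web) connector and its redirect port -->
    <Connector port="7080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="7443" />
    <!-- AJP connector port -->
    <Connector port="7009" protocol="AJP/1.3" redirectPort="7443" />
  </Service>
</Server>
```

The tomcat2 instance gets the same edits with the 8005/8080/8443/8009 values.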
  • Please refer to Basic Clustering to learn how to change these ports and how these Tomcat instances can be run in a clustered environment to balance the load