Saturday 22 February 2014

A-Z bash commands for hungry learners


A
alias: If creating an alias is what you want, then this is it.
apropos: We’re not the only ones providing help. This command lets you search through the manual pages.
apt-get: This one works on Debian and Ubuntu distros. It is used to install and search for software packages.
aptitude: See the similarity with the above command? This one does the same thing.
aspell: Got bad spellings? Use the spell checker.
awk: No, this command is not for awkward situations. It lets you find and replace text; you can also sort, validate and index data with it (see the example after this list).
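A couple of illustrative awk one-liners (the file numbers.txt is just a made-up example):

$ awk -F: '{ print $1 }' /etc/passwd
It prints the first colon-separated field (the user name) of every line.
$ awk '{ sum += $2 } END { print sum }' numbers.txt
It sums the second column of the file.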
 


B
basename: Sometimes pathnames carry directories and suffixes. This one strips the directory part off a pathname, and optionally a suffix too (see the example after this list).
bash: GNU Bourne-Again Shell
bc: This command is an arbitrary precision calculator language
bg: bg stands for background, and that’s exactly what it does: it sends a job to the background.
break: Exit from a loop
builtin: Run a shell builtin
bzip2: When there’s ‘zip’ in the name, that’s what it does. It compresses or decompresses files that are named.
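A quick sketch of basename stripping the directory part and, optionally, a suffix (the path is just an example):

$ basename /usr/local/share/doc/README.txt
README.txt
$ basename /usr/local/share/doc/README.txt .txt
README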
 


C
cal: Need a calendar? This command displays one.
case: In ‘case’ you want to perform a command conditionally. This is how to do it.
cat: In programming, ‘cat’ usually stands for concatenate. Here too, but this command displays the content of the files after concatenation.
cd: Change Directory
cfdisk: In Linux, this command is the partition table manipulator
chgrp: This is how you change the group ownership of a file.
chmod: ‘Ch’ is for change. This one changes the access permissions of a file (see the example after this list).
chown: This one sounds too much like clown! It’s not funny though: it changes the owner and group of a file.
chroot: Using this you can run a command, but with a different root directory
chkconfig: System services (runlevel)
cksum: It displays the CRC checksum and byte counts.
clear: If you need to clear the terminal screen, use this command.
cmp: Compare two files
comm: Compare two sorted files line by line
command: Run a command - ignoring shell functions
continue: This is for resuming the next iteration of a particular loop.
cp: Make a copy of files to a different location.
cron: Daemon to execute scheduled commands
crontab: Scheduling is sometimes very important. This command does it, it schedules a command that will run at a specified time.
csplit: Split a file into context-determined pieces
cut: When you need to cut down a file into parts, this is the command to use.
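As mentioned for chmod and chown above, a small illustrative sketch (file, user and group names are made up):

$ chmod 644 notes.txt
Owner gets read/write, group and others read-only.
$ chmod u+x script.sh
Adds execute permission for the owner.
$ chown kate:users notes.txt
Makes user 'kate' and group 'users' the owners of the file.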
 


D

date: Use this command to display or set the date and time.
dc: The command stands for Desk Calculator.
ddrescue: As the name suggests, this is a data recovery tool that copies data off damaged disks.
declare: This command is used to declare variables and give them attributes.
df: When you want to know the free space on your disk, use this.
diff: This command prints the differences between two files.
diff3: This is the same command as the previous one, but for three files.
dig: Need to lookup the DNS? Use this.
dir: Use this command for listing directory contents briefly.
dircolors: This command is used for colour setup for the ‘ls’ command.
dirname: Use this command to change a full pathname into just a path.
dirs: This command shows you the list of directories that are remembered.
dmesg: Use this command when you want to print kernel and driver messages.
du: Use this command to get an estimation of the file space usage.
 


E

echo: This command is used for displaying a message on the screen.
egrep: This searches for files that have lines matching an extended expression.
eject: Use this when you need to eject a removable media.
enable: Use this to disable or enable builtin shell commands.
env: Environment variables
ethtool: Ethernet card settings
eval: This command is used when you need to evaluate several commands at once.
exec: For executing a command.
exit: Exiting the shell.
expand: This command converts all the tabs to spaces.
export: This command sets an environment variable.
expr: While some commands evaluate other commands, this one evaluates expressions.
 


F

false: Do nothing, unsuccessfully
fdformat: This command is used for low level format of a floppy disk.
fdisk: This is a partition table manipulator for Linux systems.
fg: This command is used for sending a task to the foreground.
fgrep: Use this command to search through files for lines that match a fixed string.
file: This is used to determine the file type.
find: This is used to find files that match desired criteria (see the example after this list).
fmt: This is used for reformatting paragraph text.
fold: The name is quite suggestive, it wraps text in order to fit a certain width.
format: This simply formats tapes or disks.
free: Use this to see the memory usage.
fsck: This is used for checking the consistency of the file system and repair it.
fuser: This command identifies and kills the process accessing a file.
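As referenced in the find entry, two illustrative searches (paths and sizes are just examples):

$ find /var/log -name "*.log" -mtime +7
Finds log files older than seven days.
$ find . -type f -size +10M
Finds files bigger than 10 MB under the current directory.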
 


G

gawk: This command is used to find text within files and replace it.
getopts: Parse positional parameters
grep: Through this you can search files for lines matching a certain pattern (see the example after this list).
groupadd: Use this command to add security user groups.
groupdel: This one is used for deleting a certain group.
groupmod: While the last one deletes, this one modifies a group.
groups: Print the names of the groups a user belongs to.
gzip: This command is used for compressing and decompressing files.
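As referenced in the grep entry, two illustrative searches (the directory is a made-up example):

$ grep -i "error" /var/log/messages
Case-insensitive search for 'error'.
$ grep -rn "TODO" ~/projects
Recursive search that also prints line numbers.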
 


H

hash: This command remembers the complete pathname of each command so the shell does not have to search for it again.
head: Use this to output the first part of files.
help: Display the built in help for a command.
history: Command History
hostname: Print or set system name
 


I

iconv: Use this to convert the character set in files.
id: Display the group ids or user ids.
if: Conditional command.
ifconfig: Used to configure network interfaces.
ifdown: Use this command for stopping a network interface.
ifup: Start a network interface with this command.
import: Used for the X server. Capture a screen and save image.
install: Set attributes and copy files
 


J

jobs: Use this for listing jobs that are active.
join: This one joins the lines of two files on a common field.
 


K

kill: Stops a process from running.
killall: Kills processes by name.
 


L

less: This command displays the output on a single screen at a time.
let: This is for doing arithmetic on shell variables.
link: This command is used for creating a link to another file.
ln: This one creates a link to another file (a symbolic one with the -s option).
local: Use this for creating variables that are local to a function.
locate: This one is used for finding files.
logname: This is used to print the login name being used currently.
logout: Use this command to exit a login shell.
look: When you just want to see lines that start with a particular string.
lpc: It stands for Line Printer Control.
lpr: This is for offline print.
lprint: Use this command to print a file.
lprintd: Use this to abort an ongoing print job.
lprintq: This command lists the print queue.
lprm: This removes the jobs from the print queue.
 


M

make: This command is used for recompiling a group of programs.
man: This is short for manual and provides help on a command.
mkdir: Creating directories.
mkfifo: Use this to make FIFOs.
mknod: This is to create character special files or block files.
more: This displays the output, but in a single screen at a time.
mount: Used for mounting a particular filesystem.
mtools: Manipulating files from MS-DOS.
mtr: Network diagnostics command for things like ping and traceroute.
mv: Used for moving and renaming files and directories.
mmv: Mass Move and Rename
 


N

netstat: Get information on networking.
nice: Use this to set the priority of a job or a command.
nl: Write files and number lines.
nohup: This one runs a command, which is not affected by hangups.
notify-send: This command sends desktop notifications.
nslookup: This command is used to query internet name servers interactively.
 


O

open: This command opens a file in its default application.
op: Use this command for gaining operator access.
 


P

passwd: Use this command to modify user passwords.
paste: This command is used for merging lines in files.
pathchk: It is used to check the portability of a file name.
ping: This command is used for testing network connections.
pkill: This command stops processes from running.
popd: This command takes you back to the directory previously saved with pushd.
pr: Prepare your files for printing using this.
printcap: Printer capability database
printenv: Print environment variables
printf: This command is used for formatting and printing data.
ps: This stands for Process Status.
pushd: Change the directory and save it first.
pwd: It stands for Print Working Directory.
 


Q

quota: This command displays the disk usage and its limits.
quotacheck: This command lets you scan a file system to find its disk usage.
quotactl: This is used to set disk quotas.
 


R

ram: Ram disk device
rcp: When using two machines, this command copies files between them.
read: This command is used for reading a line from standard input.
readarray: This command reads from stdin into an array variable.
readonly: This command marks the variables and functions as readonly.
reboot: Self explanatory, use this command to reboot your system.
rename: Rename files
renice: This command alters the priority of the processes running.
remsync: This command synchronises remote files through email.
return: This is used to exit from a shell function.
rev: This command reverses the lines in a file.
rm: Use this to remove particular files.
rmdir: Same as above, but for directories.
rsync: This is for synchronising file trees.
 


S

screen: A terminal multiplexer that keeps sessions alive, which is handy for remote work over ssh.
scp: This is used to create a secure copy.
sdiff: This command compares two files side by side and can merge them interactively.
sed: This is for the stream editor.
select: This is used when you need to accept keyboard inputs.
seq: This command is used for printing numeric sequences.
set: This command lets you manipulate shell functions and variables.
sftp: Run the secure file transfer program using this.
shift: This command is used for shifting positional parameters.
shopt: Shopt stands for Shell Options.
shutdown: Use this command when you want to shutdown Linux or restart it.
sleep: Add a delay using this command.
slocate: This is used to find particular files.
sort: Text files are sorted using this.
source: This command is used for running commands from a file.
split: This command is used to break a file into fixed sizes.
ssh: This is used to run the remote login program, that is, the secure shell client.
strace: This is used to trace signals and system calls.
su: Substitute the user identity using this command.
sudo: This is used for executing commands as a different user.
sum: File checksums are printed using this command.
suspend: This command is used to suspend the execution of the current shell.
sync: This command is used in order to synchronise data from a disk with the memory.
 


T
tail: Use this command when you want to output only the last part of a file.
tar: This command is used to create, list or extract files in an archive (see the example after this list).
tee: This command is used for redirecting output into multiple files.
test: This command is used for evaluating conditional expressions.
time: The running time of a program can be measured using this command.
timeout: This command is used to put a time limit on a command.
times: Use this to find the user and system times.
touch: Timestamps on a file can be changed using this.
top: This is used to get a list of the processes that are running on the system.
traceroute: Use this command to Trace Route to a host.
tr: Delete characters, translate or squeeze them.
tsort: This is used for topological sorting.
tty: This is used for printing the filename of the terminal connected to standard input.
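As referenced in the tar entry, a minimal sketch of creating, listing and extracting an archive (file names are examples; the target directory must exist):

$ tar cvzf backup.tar.gz /etc
$ tar tvzf backup.tar.gz
$ tar xvzf backup.tar.gz -C /tmp/restore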
 


U

ulimit: This command limits the resources available to the user.
umask: This is used to determine the file permission for a new file.
umount: This command will unmount a device from the system.
unalias: This command will remove an alias.
uname: This command will print the system information.
unexpand: This command will convert the spaces in a file to tabs.
uniq: This command reports or removes repeated lines in a file.
units: This will convert the units from one scale to another.
unset: This command removes the variable names or the function names.
unshar: This command unpacks the shell archive scripts.
until: This command executes commands until a condition becomes true.
uptime: This command will show the uptime.
useradd: Use this command when you need a new user account to be created.
userdel: This command will delete a user account from your system.
usermod: Self-explanatory, modify a user account.
users: This command gives you a list of users who are currently logged in.
uuencode: This command will encode binary files.
 


V

v: This command lists the contents of a directory.
vdir: Same as above.
vi: This is a text editor.
vmstat: This command will report on the virtual memory statistics.
 


W

wait: This command directs the system to wait for a process to finish.
watch: This command will display or execute a program periodically.
wc: This command prints the word, byte and line counts.
whereis: This command will search the user’s $PATH, source files and man pages for a program.
which: This command searches only the user’s $PATH for a program.
while: Use this to execute commands while a condition holds true.
who: This command will print the usernames that are currently logged into the system.
whoami: This command prints the current user name and user id.
wget: This will retrieve the web pages or files through HTTP, HTTPS or FTP.
write: Use this to send messages to other users.
 


X

xargs: This command executes a utility, passing it an argument list built from standard input (see the example after this list).
xdg-open: This lets you open a URL or a file in the user's preferred application.
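As referenced in the xargs entry, an illustrative pipeline (the pattern is an example):

$ find /tmp -name "*.tmp" -print0 | xargs -0 rm -f
find prints the matching files and xargs builds the rm command line from them; the -print0/-0 pair keeps names with spaces safe.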
 


Y
yes: This command will print a string until it is interrupted.

Friday 21 February 2014

Linux Printing



The Common Unix Printing System (CUPS) is the Linux/Unix implementation of the Internet Printing Protocol (IPP). It is responsible for printing services on a Red Hat Linux system.

system-config-printer

The Red Hat GUI tool to configure printing services is 'system-config-printer'. It is a friendly way to configure CUPS printing services. It configures the cupsd daemon and activates it on boot :

$ /etc/init.d/cupsd status

Another way to configure CUPS is using the Web interface on http://localhost:631
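The state of the CUPS queues can also be checked from the command line; for example (output will vary per system):

$ lpstat -t
It lists the scheduler status, the default destination, the configured printers and their queued jobs.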

Line Print Daemon Commands

Although the system uses CUPS to manage the printing services, users can still use Line Print Daemon (LPD) commands :

$ lpc status
Shows the status of all known printing queues.

$ lpr -Pprinter filename
It sends the file filename to the printing queue 'printer'.

$ lpq

LaserJet12 is ready and printing
Rank Owner Job Files Total Size
active root 373 /etc/fstab 10240 bytes
1st root 374 /etc/inittab 10240 bytes


It shows all printing jobs submitted on the printing queues. In this case the file /etc/fstab is being printed on LaserJet12, the file /etc/inittab will be printed after /etc/fstab.

$ lprm 374
It removes the printing job with id=374, so the file /etc/inittab will not be printed.

Linux Console Access



When a user (root or not) logs in at the system console, some additional features such as key combinations (Ctrl+Alt+Delete) are supported. This chapter focuses on how to restrict/control access to the system console and which operations are permitted on it.

Shutdown via Ctrl+Alt+Del

By default the file /etc/init/control-alt-delete.conf is configured to reboot the system in response to the Ctrl+Alt+Del key combination used at the console by ANY user :

cat /etc/init/control-alt-delete.conf

# control-alt-delete - emergency keypress handling
#
# This task is run whenever the Control-Alt-Delete key combination is
# pressed. Usually used to shut down the machine.

start on control-alt-delete

exec /sbin/shutdown -r now "Control-Alt-Delete pressed"


* To completely disable this functionality, comment out the line 'exec /sbin/shutdown -r now "Control-Alt-Delete pressed"' by putting a hash mark (#) in front of it.

* To allow only certain non-root users the right to shut down via Ctrl+Alt+Del at the console, substitute that exec line accordingly.

Console Access

/etc/security/access.conf

This file controls access to the console based on users/groups and on where the connection is made from, using the pam_access module. The format used in this file is three fields separated by a ":" character :

permission ("+" access granted,"-" access denied) : user/group : origins

* For example, to deny console access to user kate :

1.- Activate the pam_access module in /etc/pam.d/login by adding as the first 'account' line --> "account required pam_access.so"

2.- Configure the access on /etc/security/access.conf :

$ echo "-:kate:ALL" >> /etc/security/access.conf

Now access on console to user kate is denied.
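A couple of additional, purely illustrative access.conf lines (entries are evaluated in order):

+ : root : LOCAL
- : ALL : ALL

The first line grants root access from local origins, the second denies everybody else.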

/etc/security/time.conf

This file uses the pam_time.so module to restrict access to the console based on users/groups and access times. The syntax of this file is

services;ttys;users;times

* For example, to allow access to the console to user kate only on Mondays from 12:00-14:00

1.- Activate the pam_time module in /etc/pam.d/login by adding as the first 'account' line --> "account required pam_time.so"

2.- Configure the access on /etc/security/time.conf :

$ echo "login;*;kate;Mo1200-1400" >> /etc/security/time.conf

Now access on console to kate is allowed only on Mondays from 12:00 to 14:00
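Another illustrative time.conf entry, this time using the Wk (weekdays) keyword (user john is used here just as an example):

$ echo "login;*;john;Wk0800-1800" >> /etc/security/time.conf
Now console access for user john is allowed only on weekdays from 08:00 to 18:00.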

Console Program Access

Disabling console program access

In secured environments, where you may not want to allow any console user to run the 'reboot', 'halt' or 'poweroff' commands, the corresponding files in /etc/security/console.apps must be removed :

rm -rf /etc/security/console.apps/reboot
rm -rf /etc/security/console.apps/halt
rm -rf /etc/security/console.apps/poweroff

By default any user on console can execute 'reboot', 'halt' or 'poweroff' !!!

To disable access by users to any console program :

rm -rf /etc/security/console.apps/*

Enabling console access for any application via PAM

In order to control console users' access to system programs in /sbin or /usr/sbin, the consolehelper command, which authenticates console users via PAM, must be used :

1.- Create in the /usr/bin directory a link from the name of the application to be controlled to the /usr/bin/consolehelper program. For example, if we need to control access to the /usr/sbin/pwck command for certain users :

$ cd /usr/bin
$ ln -s consolehelper pwck



2.- Create the file /etc/security/console.apps/application_name in order to allow the application_name execution on the console. In our particular case :

$ touch /etc/security/console.apps/pwck


3.- Create the PAM configuration file for the application. One easy way to do it is to copy /etc/pam.d/halt to /etc/pam.d/application_name :

$ cp /etc/pam.d/halt /etc/pam.d/pwck
Add as the second line --> 'auth required pam_listfile.so onerr=fail item=user sense=allow file=/etc/pwck.allow'

Users on /etc/pwck.allow (john) will be allowed to execute '/usr/bin/pwck', the rest (kate et al) will not be allowed

4.- Verify the result

Login at console as kate ( 'su - kate' is not a console login !!!)
kate-$ pwck

Nothing is done

Login at console as john ( 'su - john' is not a console login !!!)
john-$ pwck
user 'adm': directory '/var/adm' does not exist

Limiting System Resources in Linux


Because hardware resources are finite, it is necessary to limit the system resources in order to provide an equal quality of service to all system users. Limits can be implemented on CPU/memory usage via pam_limits or on disk usage via quota.

PAM Limits

The PAM module pam_limits, activated by default for all users, sets limits on the system resources of a user/group session. These limits are configured in the /etc/security/limits.conf file :

$ cat /etc/security/limits.conf

# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#[domain] [type] [item] [value]
#
#Where:
#[domain] can be:
# - an user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
#
#[type] can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#[item] can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open files
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
#
#[value] All items support the values '-1', 'unlimited' or 'infinity' indicating no limit, except for priority and nice
#[domain] [type] [item] [value]
#

#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4

# End of file


The file content is self-explanatory: limits are applied to user/group sessions (or to everybody with '*') in 'soft'/'hard' mode, for different items like CPU time, maxlogins, resident memory, etc., with different values.

* As first example configure the maximum number of running processes for user 'john' to 5 :

$ echo "john hard nproc 5" >> /etc/securety/limits.conf

* As user 'john' test the limit :

$ su - john
john-$ for i in `seq 1 15`; do sleep 30 & done

[1] 5352
[2] 5353
[3] 5354
[4] 5355
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable

After the limit of 5 running processes has been reached, no more processes are allowed to be executed by john : 'fork: retry: Resource temporarily unavailable'
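The limit applied by pam_limits to the session can also be inspected with the bash builtin ulimit; for example:

john-$ ulimit -u
5
It reports the maximum number of user processes for the current session (here the value 5 configured above).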

* As a second example, configure a limit on the memory address space 'as' of 100000 KB :

$ echo "john hard as 100000" >> /etc/securety/limits.conf

* As user 'john' test the limit. Execute a perl script that allocates memory forever :

$ su - john
john-$ cat membomb.pl

#!/usr/bin/perl -w

my %hash=();
my $i=0;
my $string="::";

while (1 == 1) {
    $i++;
    $string=$string."::".$i;
    $hash{$string}=$string;
}

john-$ ./membomb.pl
Out of memory!


The memory limit does not allow membomb.pl to allocate all memory.

Disk Quotas

Another important resource to be limited is disk usage, because full disk partitions can bring down the system. Quotas on disk space can be applied on different filesystems for users/groups, based on used inodes (number of files) and/or used disk blocks (total size).

Quota configuration

* Just before starting to use quota make sure that quota rpm is installed on the system :

$ rpm -qa | grep quota
quota-3.17-10.el6.i686


* Also make sure that the running Kernel has been compiled with quota support :

$ grep CONFIG_QUOTA /boot/config-`uname -r`
CONFIG_QUOTA=y
...


1.- Configure quota parameters on the filesystem where the quota is going to be applied. For example, if quotas are going to be set up on the /home partition (/dev/VolGroup01/VolGroup01Home), add the parameters 'usrquota,grpquota' to the mount options for /home in /etc/fstab. Then remount the partition to activate quota :

/dev/VolGroup01/VolGroup01Home /home ext4 defaults,usrquota,grpquota 1 2

$ mount -o remount /home
$ mount
...
/dev/mapper/VolGroup01-VolGroup01Home on /home type ext4 (rw,usrquota,grpquota)


2.- Generate the partition quota database :

$ quotacheck -cugm /home
It generates the files /home/aquota.user and /home/aquota.group used to manage the quota status on /home.

3.- Edit the quota for user/group :

$ edquota -u john

Disk quotas for user john (uid 500):
Filesystem blocks soft hard inodes soft hard
/dev/mapper/VolGroup01-VolGroup01Home 12 80000 100000 10 15 20
:wq!


Quotas can be set for the number of files (inodes) and storage capacity used (blocks) for user 'john' on /home partition :

* A soft limit of 80000 blocks of 1 KB (=80 MB) and a hard limit of 100000 blocks (=100 MB) have been set up. User 'john' will not be allowed to use more than 100 MB on /home and he will be warned when more than 80 MB are used.

* A soft limit of 15 files (inodes) and a hard limit of 20 files (inodes) have been set up. User 'john' will not be allowed to create more than 20 files on /home and he will be warned when more than 15 files are used.

4.- Activate quotas :

$ quotaon -aug
This command will be executed automatically by init so at boot time quotas will be applied.

5.- Verify quotas :

$ su - john
john-$ dd if=/dev/zero of=/home/john/file bs=1024 count=1000000

dd: writing `/home/john/file': Disk quota exceeded
99989+0 records in
99988+0 records out
102387712 bytes (102 MB) copied, 3.85976 s, 26.5 MB/s

Only 100 MB have been written to the file /home/john/file : 'Disk quota exceeded'

john-$ for i in `seq 1 30`; do touch $i.txt; done

touch: cannot touch `14.txt': Disk quota exceeded
...
touch: cannot touch `30.txt': Disk quota exceeded

Only 13 new files could be created by user 'john' because 'john' already owned 7 files : 'Disk quota exceeded'

Quotas can be reported using the repquota command :

$ repquota -a
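The per-user view is available with the quota command, for example:

$ quota -u john
It reports john's current block and inode usage on /home against the soft and hard limits set with edquota.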

Automated Installation with Kickstart


Kickstart

Red Hat Linux has developed the kickstart tool for automated installations. A kickstart configuration file is used as a response file during the installation process in order to get a fully automatic installation. There are two methods for creating the kickstart configuration file :


* anaconda-ks.cfg, found in /root, contains the kickstart configuration file with the installation parameters used in the local installation. This file is created by Anaconda, the Red Hat installation program, and can be used as a template to generate a custom kickstart configuration file.

* system-config-kickstart, creates a custom kickstart configuration file for new installations.

Kickstart configuration file

The following is a commented kickstart configuration file :

$ cat /root/anaconda-ks.cfg

# Kickstart file automatically generated by anaconda.
# Start the installation process
install
# The installation source that can be local 'cdrom', remote http 'url --url http://', remote ftp 'url --url ftp://' or remote nfs 'nfs --server=serverip --dir=/instdir'
url --url http://server/Centos54

# Skip registration process
key --skip

# Language used during the installation process
lang en_US.UTF-8

# Keyboard configuration
keyboard us

# System graphical configuration. It starts X Server boot
xconfig --startxonboot

# System network configuration. It can also be configured via dhcp with 'network --device eth0 --bootproto dhcp'
network --device eth0 --bootproto static --ip 192.168.1.10 --netmask 255.255.255.0 --gateway 192.168.1.1 --nameserver 192.168.1.1 --hostname node01

# Sets the root password. This password can be copied from /etc/shadow
rootpw --iscrypted $1$BjKJOwe$1pHW4VDq4a5HmdCFRd.YT/

# Firewall configuration. The firewall is active and allowing connections only to 22/tcp (ssh) port
firewall --enabled --port=22:tcp

# By default, the authconfig command sets up the Shadow Password Suite (--enableshadow) and MD5 encryption (--enablemd5)
authconfig --enableshadow --enablemd5

# SELinux configuration. If you do not know what it is, use 'selinux --disabled'
selinux --enforcing

# System clock configuration
timezone --utc Europe/Madrid

# GRUB configuration
bootloader --location=mbr --driveorder=sda --append="rhgb quiet"


# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
# Remove all Linux partitions on the unique disk on the system : sda
clearpart --linux

# /boot primary partition of 100M size ext3 on sda
part /boot --fstype ext3 --size=100 --asprimary

# PV Physical Volume 6G primary partition on sda
part pv.8 --size=6000 --asprimary

# Volgroup Volgroup00 generation on PV
volgroup VolGroup00 --pesize=32768 pv.8

# Logical Volume LVM 4G partition for /
logvol / --fstype ext3 --name=LogVol00Root --vgname=VolGroup00 --size=4000

# Logical Volume LVM 1024M partition for swap
logvol swap --fstype swap --name=LogVol00Swap --vgname=VolGroup00 --size=1024


# RPMs to be installed: @ means rpms group and - means uninstallation
%packages
@office
@editors
@system-tools
@text-internet
@dialup
@core
@base
@games
@java
@legacy-software-support
@base-x
@graphics
@printing
@kde-desktop
@server-cfg
@sound-and-video
@admin-tools
@graphical-internet
emacs
audit
kexec-tools
device-mapper-multipath
xorg-x11-utils
xorg-x11-server-Xnest
libsane-hpaio
-sysreport
# After the installation we can execute a script/command
%post

Kickstart execution

Once the kickstart configuration file ks.cfg has been created from anaconda-ks.cfg or with the system-config-kickstart tool, the next step is to make it available to the installation process via cdrom, nfs, http, etc. Then the installation process must be started using the first CentOS installation CD, the CentOS installation DVD or a diskless installation image from a TFTPBOOT server, and at the 'boot :' prompt one of the following must be typed, depending on how the kickstart file is made available :

boot: linux ks=cdrom:/ks.cfg
boot: linux ks=nfs:server:/ks.cfg
boot: linux ks=http://server/ks.cfg



Then the automated installation starts without any interactive questions/answers...
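If the pykickstart package is installed, the kickstart file can be syntax-checked before it is used; a quick sketch (assuming ks.cfg is available locally):

$ ksvalidator ks.cfg
It reports syntax errors in the kickstart file without performing any installation.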

Linux Backup and Recovery



As a Linux system administrator, one of the most important tasks is the system backup. Hardware/software failures will bring down your system, and you must be prepared to recover it as quickly and efficiently as possible. To perform system backups Linux offers several commands/services, depending on the type of backup required.

tar

This command is used for creating full backups of parts of the system that do not change a lot. For example, to back up the /usr/local system directory :

$ tar cvfz usr_local.tar.gz /usr/local

It creates ('c') a compressed ('z') tar file 'usr_local.tar.gz' that contains the whole /usr/local structure and data, preserving file permissions and ownership. As the data is compressed and archived in the tar file, the backup uses less space than the original data and can be transferred to other machines using scp or ftp.

Note: By default SELinux file attributes are not preserved in the tar file. In order to preserve SELinux attributes the '--selinux' flag must be used, as sketched below.
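A sketch of the same backup preserving the SELinux attributes (assuming a tar build that carries the Red Hat '--selinux' option):

$ tar --selinux -cvzf usr_local.tar.gz /usr/local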

* Before recovering the tar file on the system, its contents can be listed with the following command :

$ tar tvzf usr_local.tar.gz
...
drwxr-xr-x root/root 0 2009-12-04 14:33 usr/local/share/man/man4x/
drwxr-xr-x root/root 0 2009-12-04 14:33 usr/local/libexec/
drwxr-xr-x root/root 0 2009-12-04 14:33 usr/local/include/


It shows the files that would be restored on the system without performing the restore at all. It is just a listing ('t') to check where the files will be restored. NOTE: the file path reported is relative; in this case it is usr/local/, which means that if the tar file is restored in the /tmp directory the files will be written to /tmp/usr/local.

Once you are sure where the files are going to be written, the restore can be done :

$ cd /
$ tar xvfz /root/usr_local.tar.gz
...
usr/local/share/man/man4x/
usr/local/libexec/
usr/local/include/

rsync

Many of us believe rsync is the best tool for performing backups. It can be used to copy files on the local system or remotely through the network. The main difference between rsync and tar is that rsync only copies the differences between the source and the destination, while tar always copies the whole data structure from the source when the archive is created and restores all the data on the destination.

$ rsync -av /usr/local/ /tmp/destination/

sending incremental file list
...
share/man/mann/
share/perl5/
src/
sent 514 bytes received 45 bytes 1118.00 bytes/sec
total size is 0 speedup is 0.00
It copies /usr/local/* recursively into the /tmp/destination/ directory; for example /usr/local/bin is copied to /tmp/destination/bin.

$ rsync -avz /usr/local/ root@remotehost:/tmp/destination/

sending incremental file list
...
share/man/mann/
share/perl5/
src/
sent 514 bytes received 45 bytes 1118.00 bytes/sec
total size is 0 speedup is 0.00
It copies /usr/local/* recursively into the /tmp/destination/ directory on remotehost using ssh or rsync credentials. In this case compression ('z') is enabled because the copy is done over the network.

$ rsync -avz --delete /usr/local/ root@remotehost:/tmp/destination/

sending incremental file list
...
share/man/mann/
share/perl5/
src/
sent 514 bytes received 45 bytes 1118.00 bytes/sec
total size is 0 speedup is 0.00
It copies /usr/local/* recursively into the /tmp/destination/ directory on remotehost. In this case it also deletes ('--delete') the files that exist on the destination but not on the source: it keeps /usr/local/ and /tmp/destination/ on the remote host completely in sync.

Note: rsync keeps file permissions and ownership during the transfer, but it does not keep SELinux attributes. There is no '--selinux' option as in tar; the SELinux relabeling must be done by hand with the 'chcon' command.

* In order to restore the information, just switch the source <-> destination :

$ rsync -n -av /tmp/destination/ /usr/local/
sending incremental file list
./
file1
file2

sent 567 bytes received 54 bytes 1242.00 bytes/sec
total size is 0 speedup is 0.00 (DRY RUN)


With the dry-run option '-n' the rsync is simulated but not performed. This is a very useful option to test what is going to be copied or deleted just before running the real command.

$ rsync -av /tmp/destination/ /usr/local/

sending incremental file list
./
file1
file2

sent 639 bytes received 86 bytes 1450.00 bytes/sec
total size is 0 speedup is 0.00


The restore has been done.

Hard link and rsync

A hard link to a file provides the ability to reference the same inode (the actual data on disk) from multiple places within the same filesystem. If a hard link is removed while other hard links to the original file still exist, only that link is removed and the original file is not modified. As an example, let's create a 100 MB file called fileorig :

$ mkdir -p /tmp/hard
$ cd /tmp/hard
$ dd if=/dev/zero of=fileorig bs=1024 count=100000
$ du -sh ../hard
98M ../hard


Lets create a file called filehlink hard linked to fileorig :

$ ln fileorig filehlink

Verify the link

$ ls -lrti
total 200000
134435 -rw-r--r--. 2 root root 102400000 Dec 4 13:52 fileorig
134435 -rw-r--r--. 2 root root 102400000 Dec 4 13:52 filehlink


It can be seen that both files have the same inode number 134435 and the same size (100 MB). These two names point to exactly the same file...

$ du -h ../hard/
98M ../hard/


BUT THE SIZE OF THE DIRECTORY IS STILL 100M !!!. This is because filehlink is just a link to the original file fileorig.

* Using hard-link copies in combination with rsync makes it possible to keep several full backups while only consuming the disk space taken by the differences applied by rsync. Let's have a look at the following script :

$ cat /backup/rsync_snapshot.sh

#!/bin/bash
# Snapshot rotation, run from /backup where the tmp.N directories live
cd /backup || exit 1
rm -rf tmp.3                        # drop the oldest snapshot
mv tmp.2 tmp.3
mv tmp.1 tmp.2
cp -al tmp.0 tmp.1                  # hard-link copy, costs almost no extra disk space
rsync -av --delete /tmp/ ./tmp.0/   # refresh the newest snapshot


If this script is executed daily, tmp.0, tmp.1, tmp.2 and tmp.3 will appear as daily full backups of /tmp, thanks to the hard-link copy done by 'cp -al' and updated by 'rsync', using only '2X(size tmp)+(size changes rsync)' instead of 4X(size of tmp) of disk space.

The combination of rsync with hard link copies must be seriously considered as the core of a custom made backup system.
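To run the snapshot script automatically, a crontab entry like the following could be used (the 03:00 schedule is only an example):

$ crontab -e
0 3 * * * /backup/rsync_snapshot.sh
Every night at 03:00 the oldest snapshot is dropped and tmp.0 is refreshed from /tmp.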

tapes

The Advanced Maryland Automatic Network Disk Archiver (AMANDA), installed by the amanda rpm, is a tool to manage a network backup system using a client-server architecture. This system can be used to automatically rotate full and incremental backups of all amanda clients on the amanda server.

dd

The command 'dd' can be used to clone an entire system, copying one disk onto another bit by bit. Suppose that your system has one disk (/dev/sda) and we want to clone the entire system to another disk :

1.- Shutdown the system and connect a second disk sdb equal to or bigger in size than the system disk sda.

2.- Start the system in single-user mode, adding an 's' to the kernel line in the GRUB boot menu.

3.- Clone the entire disk sda on sdb using 'dd' command :

$ dd if=/dev/sda of=/dev/sdb

It takes a while depending on the size of the system. It copies sda onto sdb bit by bit, so the MBR, partitions, LVM, RAID, filesystems and data are all copied onto sdb.

4.- Now sdb is ready to be used in another system with the same hardware as the original. Connect disk sdb to the first SATA channel of the new system (--> it will be recognized as sda) and boot it as usual.

Note: As the MBR is copied onto sdb it is not necessary to install GRUB on it.
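dd can also dump the disk into an image file instead of a second disk, and a larger block size usually speeds up the copy (the path and block size below are illustrative):

$ dd if=/dev/sda of=/backup/sda.img bs=64K
$ dd if=/backup/sda.img of=/dev/sdb bs=64K
The first command stores the whole disk as a file, the second one writes that image back onto another disk.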

Thursday 20 February 2014

Linux Kernel


The Kernel is the core of the operating system: it controls the communication between hardware and processes using device driver modules. It provides an isolated environment for each running process and lets processes communicate with each other. The Kernel is stored on the /boot partition, is loaded into memory by the boot loader and from that moment takes control of the system.

/proc

The /proc directory uses a virtual filesystem; its files and directories are not stored on the hard disk. This directory is the interface between the kernel and the hardware: the information it contains represents the current state of the system processes and the hardware controlled by them. Commands like ps read the process information from /proc :

$ sleep 1000 &
[1] 4329


It takes a PID=4329. All the information about how the Kernel manages this process can be seen on /proc/4329 :

$ ls -lrt /proc/4329

...
-rw------- 1 root root 0 Nov 14 17:44 mem
lrwxrwxrwx 1 root root 0 Nov 14 17:44 exe -> /bin/sleep
lrwxrwxrwx 1 root root 0 Nov 14 17:44 cwd -> /root
...
It can be seen which command originated the process ('/bin/sleep'), from which directory the command was launched ('/root'), the process memory usage, etc. Once the process has finished executing, the directory that contains the process information is deleted.
* In addition, some files in /proc can be modified in order to force a change in how the running Kernel manages processes. For example :

$ echo 1 >> /proc/sys/net/ipv4/ip_forward
It enables IP forwarding on IPv4 (routing)

$ echo rhel6 > /proc/sys/kernel/hostname
It changes the system hostname to rhel6

These changes are applied directly to the running Kernel and they are lost when the system is rebooted. In order to make these changes permanent the file /etc/sysctl.conf can be used, as sketched below. For more info : man sysctl.conf and man sysctl .
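For example, a sketch of making the IP forwarding change permanent:

$ echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
$ sysctl -p
The first command stores the setting, the second reloads /etc/sysctl.conf into the running Kernel.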
* Some files in /proc can be accessed in order to obtain system information :

$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Core(TM)2 Duo CPU T7100 @ 1.80GHz

It shows information about system CPUs

$ cat /proc/cmdline
ro root=/dev/mapper/vg_rhel6-lv_root rd_LVM_LV=vg_rhel6/lv_root rd_LVM_LV=vg_rhel6/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=es rhgb quiet

It shows the parameters used by the boot loader in the Kernel initialization

Kernel Modules

The Linux Kernel is modular: its functionality can be extended just by adding kernel modules as extensions of the core Kernel. A Kernel module can provide a device driver to control new hardware and can be plugged in and removed as needed.

* The Kernel's modularity means that a failure in a Kernel module does not cause the whole system to fail.

* Kernel modules keep the initial Kernel core small in size: this decreases the boot time and increases system performance.
Kernel modules can be managed using the tools installed by the 'module-init-tools' package:

To list all loaded Kernel modules use 'lsmod' command :

$ lsmod
Module                  Size  Used by
xt_CHECKSUM 921 1
...
ip_tables 9541 3 iptable_mangle,iptable_nat,iptable_filter
...
To display more information about a Kernel module use the 'modinfo' command :

$ modinfo ip_tables
filename: /lib/modules/2.6.32-71.el6.i686/kernel/net/ipv4/netfilter/ip_tables.ko
description: IPv4 packet filter
author: Netfilter Core Team
license: GPL
srcversion: DC70E5A33C988577C75C5E0
depends:
vermagic: 2.6.32-71.el6.i686 SMP mod_unload modversions 686



To load/unload a Kernel Module and all its dependencies use 'modprobe' command :

$ modprobe nfs
Now with lsmod (or dmesg) it can be verified that the module has been loaded correctly

$ modprobe -lt net
It lists all the kernel modules under /lib/modules/`uname -r`/kernel/drivers/net

$ modprobe -r rt2x00usb
It unloads the rt2x00usb Kernel module

To load Kernel modules during the init process the configuration files in /etc/modprobe.d/*.conf are used; for example, to load the ALSA Sound Kernel Modules during the init process the system uses the file /etc/modprobe.d/dist-alsa.conf :

$ cat /etc/modprobe.d/dist-alsa.conf

# ALSA Sound Support
#
# We want to ensure that snd-seq is always loaded for those who want to use
# the sequencer interface, but we can't do this automatically through udev
# at the moment...so we have this rule (just for the moment).
#
# Remove the following line if you don't want the sequencer.

install snd-pcm /sbin/modprobe --ignore-install snd-pcm && /sbin/modprobe snd-seq



* The Kernel modules are located in /lib/modules/`uname -r`/kernel/drivers and are module_name.ko binary files, where module_name is the name of the module to be used with the modprobe command.
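To prevent a module from being loaded automatically at boot, a blacklist entry can be added under /etc/modprobe.d (the pcspkr module is only an example):

$ echo "blacklist pcspkr" >> /etc/modprobe.d/blacklist.conf
From the next boot the pcspkr module will not be loaded automatically, although it can still be loaded by hand with modprobe.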

Kernel RPMS

The following are the rpms contained in a standard distribution :

kernel
Contains the kernel for single/multicore/multiprocessor system.

kernel-devel
Development kernel version, contains the kernel header files needed to build Kernel modules for the matching Kernel.

kernel-debug
Contains the Kernel with debug options enabled for debugging process.

kernel-debug-devel
Development version of kernel-debug.

kernel-doc
Kernel documentation files. They are installed on /usr/share/doc/kernel-doc-* directory

kernel-headers
Includes the C header files that define the interface between the Kernel and user-space programs

kernel-firmware
Contains the devices firmware files.

Kernel Upgrade

If your system is connected to an rpm repository the Kernel upgrade is done automatically or by running 'yum install kernel'. Sometimes the newest Kernel rpm version required is not available in the rpm repository; in this case a manual upgrade process must be followed :

1.- Download the latest rpm Kernel version from a trusted site.

2.- Install the latest rpm version.

rpm -ihv kernel*.rpm

* Note : Always use the install options (-ihv) instead of the upgrade options (-Uhv). If the upgrade options are used the old kernel will be REMOVED and, in case the new Kernel fails, the old Kernel will not be available !!!
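The installed kernels can be listed at any time to verify that the old one is still present after the installation:

$ rpm -q kernel
It prints one line per installed kernel package.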

Kernel Source Code and Compilation (NOT supported on RHEL6)

The Linux Kernel source code is available via the Kernel Source Code RPM. One of the reasons to use the Kernel Source RPM is to recompile the Linux Kernel with specific options that were not set in the standard Kernel compilation (used to build the Kernel binary RPM). These are the steps that must be followed to recompile the Kernel source code and get a customized kernel :

Kernel source code installation

Download and install the Kernel Source Code.

$ rpm -ihv kernel-2.6.32-19.el6.src.rpm

It installs all the files needed to build the Kernel Source Code on /root/rpmbuild/SOURCES and the spec file to build the Kernel Source code on /root/rpmbuild/SPECS/kernel.spec

Build and install the Kernel Source Code with the rpmbuild command (installed by rpm-build rpm)

$ cd /root/rpmbuild/SPECS/
$ rpmbuild -bp kernel.spec
...
it takes a while...
It installs the Kernel source code on /root/rpmbuild/BUILD directory.

* Note: In order to build the Kernel source code 4G of free space on "/" are needed.

Kernel Configuration

Once the Kernel source code has been installed, the next step is to customize the Kernel configuration in order to fit our special requirements :

$ cd /root/rpmbuild/BUILD/kernel-2.6.32/linux-2.6.32.i686/
$ vi Makefile
Modify the line --> EXTRAVERSION=lso-custom
where 'lso-custom' is the string that will identify our customized Kernel.
:wq!

$ make mrproper
...
It cleans up previous kernel configurations if needed and verifies that all files are ready for the Kernel configuration.

$ cp /boot/config-2.6.18-53.el5 /tmp
It makes a backup of the running Kernel configuration just before starting the configuration process.

$ make menuconfig
It shows a menu where kernel configuration options like filesystem support, device drivers, etc. can be added/removed for the kernel compilation. In this case we have added 'KVM Virtualization Support'.

* Note: 'make menuconfig' uses the ncurses*.rpm, so these packages must be installed.

Kernel Compilation

Once the Kernel source code has been configured with our special requirements it is time to compile the new Kernel.

$ make rpm
...
it takes a while...
Wrote: /root/rpmbuild/RPMS/i386/kernel-2.6.32lso_custom-1.i386.rpm
...


In /root/rpmbuild/RPMS/i386 our customized kernel rpm has been created : kernel-2.6.32lso_custom-1.i386.rpm

Kernel Installation

The last step is to install the new customized Kernel on the system with the 'rpm' command, manually configure the boot loader to load the new Kernel and manually create the initial RAM disk for the new Kernel. These actions are not needed when the Kernel is installed from the standard binary rpm because they are performed automatically by the rpm installation process.

$ cd /root/rpmbuild/RPMS/i386/
$ rpm -ihv kernel-2.6.32lso_custom-1.i386.rpm


* As mentioned, this custom rpm does not update /etc/grub.conf so that the new kernel can be booted. This needs to be done manually :

1.- Creation of the initial RAM disk to boot the new kernel on /boot.

$ cd /boot
$ dracut initramfs-2.6.32lso_custom.el6.i686.img 2.6.32lso_custom


Now in /boot we have the new kernel 'vmlinuz-2.6.32lso_custom' and its initial RAM disk, 'initramfs-2.6.32lso_custom.el6.i686.img', used to boot it.

2.- The last step is to modify /etc/grub.conf in order to boot the new kernel.

$ vi /etc/grub.conf
Add the following lines :
title Red Hat Enterprise Linux LSO (2.6.32lso_custom.el6.i686)
root (hd0,0)
kernel /vmlinuz-2.6.32lso_custom ro root=/dev/mapper/vg_rhel6c-lv_root rhgb quiet
initrd /initramfs-2.6.32lso_custom.el6.i686.img

Change the line 'default=0' to 'default=1' to load the new kernel by default
:wq!

If all goes well the new kernel will be loaded on the next reboot; if not, the old kernel is still available.

Top 10 Super Computers using Linux


Here's the latest list of the top 10 supercomputers in the world, based on the ranking by TOP500. These mighty machines have been ranked on the basis of their performance on the organisation's demanding Linpack benchmark. And guess what is powering all these super machines? Yes, it's the Linux kernel running in their veins!

10. Fermi

Based at Italy's CINECA joint venture, Fermi is the first Blue Gene/Q-based system on our list, clocking in at 1.72 petaflops driven by 163,840 PowerPC cores.

Runs on Linux.

9. Tianhe-1A

The only Chinese entry into this top 10, Tianhe-1A turned in a 2.56 petaflop performance mark, on the strength of its 186,368 Xeon processor cores. It's also the first machine on this list to use co-processors for additional performance -- 100,352 Nvidia 2050 cores, to be precise.

Runs on Linux. 

8. SuperMUC

A hardy perennial of the Top500 list, SuperMUC is based at the Leibniz Supercomputing Center near Munich. Clocking in at 2.89 petaflops, it's powered by 147,456 Intel Sandy Bridge processors.

Runs on Linux. 

7. JUQUEEN

Juelich-based JUQUEEN eclipsed its German rival SuperMUC to capture the fifth spot on the November list, posting a 4.14-petaflop mark on the Linpack test. Unlike SuperMUC, it's powered by a 393,216-core Blue Gene/Q system.

Runs on Linux. 

6. Stampede

Dell's Stampede rode its new Intel Xeon Phi processors, a total of 204,900 cores' worth, to a 2.66 petaflop benchmark. Installed at the University of Texas in Austin, Stampede also packs 112,500 accelerator cores as part of the Xeon Phi platform.

Runs on Linux. 

5. Mira

Also using the Blue Gene/Q architecture is Mira, of the Department of Energy's Argonne National Laboratories. However, it packs substantially more cores than JUQUEEN -- 786,432, to be exact -- in return for a nearly doubled performance return of 8.16 petaflops.

Runs on Linux. 

4. K Computer

Dropping down the list is the Fujitsu K Computer, at Japan's RIKEN Advanced Institute for Computational Sciences. Using 705,024 SPARC64 cores, it produced a Linpack score of 10.51 petaflops.

Runs on Linux. 

3. Sequoia

The first million-core system, Lawrence Livermore National Laboratories' Sequoia was once the top dog in the super computer list. It cranks out a whopping 16.32 petaflops with its 1,572,864 processor cores. Sequoia is the fourth and last Blue Gene/Q system on the latest list.

Runs on Linux. 

2. Titan

The appropriately named Titan is a Cray XK7 powerhouse, producing 17.59 petaflops of performance using 560,640 AMD Opteron processor cores and 261,632 Nvidia K20x accelerators. It operates at the Oak Ridge National Laboratory.

Runs on Linux. 

1. Tianhe-2

Tianhe-2, a supercomputer developed by China’s National University of Defense Technology, is the world’s new No. 1 system with a performance of 33.86 petaflop/s on the Linpack benchmark, according to the 41st edition of the twice-yearly TOP500 list of the world’s most powerful supercomputers. The list was announced on June 17 during the opening session of the 2013 International Supercomputing Conference in Leipzig, Germany.

Runs on Linux.

Monday 17 February 2014


Extended Internet Services Daemon(xinetd)

The Extended Internet Services Daemon (xinetd) is a TCP-wrapped super service which controls access to a subset of popular network services, including FTP, IMAP and Telnet. The xinetd service listens for connection requests for all of the active servers that have a specific configuration file in the /etc/xinetd.d directory. There is also a generic configuration file for xinetd services, /etc/xinetd.conf. It controls services such as rsync, gssftp and telnet-server, when installed.

/etc/xinetd.conf

It contains general configuration settings which affect every service under xinetd's control. It is read when the xinetd service is first started, so for configuration changes to take effect, you need to restart the xinetd service:


# cat /etc/xinetd.conf

defaults
{
instances = 70
per_source = 15
log_type = SYSLOG authpriv
log_on_success = HOST PID
log_on_failure = HOST
cps = 30 40
}

includedir /etc/xinetd.d
These are some of the parameters that can be configured in this file to control the following aspects of xinetd. For more info see 'man xinetd.conf'.

instances:
Specifies the maximum number of simultaneous requests that xinetd can process.

log_type:
Configures xinetd to use the authpriv log facility, which writes log entries to the /var/log/secure file. Adding a directive such as FILE /var/log/xinetdlog would create a custom log file called xinetdlog in the /var/log/ directory.

log_on_failure:
Configures xinetd to log failed connection attempts or if the connection was denied.

cps:
Configures xinetd to allow no more than 30 connections per second to any given service. If this limit is exceeded, the service is retired for 40 seconds.

per_source:
This limits the number of connections from each IP address.

includedir /etc/xinetd.d/:
Includes options declared in the service-specific configuration files located in the /etc/xinetd.d/ directory.

/etc/xinetd.d

It contains the configuration files for each service managed by xinetd; the names of the files correlate to the services. This directory is read only when the xinetd service is started, so for any changes to take effect the xinetd service must be restarted. As an example, let's see the /etc/xinetd.d/telnet file installed by the 'telnet-server' rpm.

# cat /etc/xinetd.d/telnet

service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
disable = no
}


These lines control various aspects of the telnet service. For more info 'man xinetd.conf'.

service:
Specifies the service name, usually one of those listed in the /etc/services file.

flags:
Sets any of a number of attributes for the connection. REUSE instructs xinetd to reuse the socket for a Telnet connection.

socket_type:
Sets the network socket type to stream.

wait:
Specifies whether the service is single-threaded (yes) or multi-threaded (no).

user:
Specifies which user ID the process runs under.

group:
Group under which the server should run.

server:
Specifies which binary executable to launch.

only_from:
Host name or IP address allowed to use the server. CIDR notation (such as 192.168.0.0/24) is okay.

no_access:
Host name or IP address not allowed to use the server. CIDR notation is okay.

access_times:
Specifies the time range when a particular service may be used. The time range must be stated in 24-hour format notation, HH:MM-HH:MM.

log_on_failure:
Specifies logging parameters for log_on_failure in addition to those already defined in xinetd.conf.

disable:
Specifies whether the service is disabled (yes) or enabled (no).

Logging

The following logging options are available for both /etc/xinetd.conf and the service-specific configuration files within the /etc/xinetd.d/ directory.


ATTEMPT:
Logs the fact that a failed attempt was made (log_on_failure).

DURATION:
Logs the length of time the service is used by a remote system (log_on_success).

EXIT:
Logs the exit status or termination signal of the service (log_on_success).

HOST:
Logs the remote host's IP address (log_on_failure and log_on_success).

PID:
Logs the process ID of the server receiving the request (log_on_success).

USERID:
Logs the remote user using the method defined in RFC 1413 for all multi-threaded stream services (log_on_failure and log_on_success).

For more info 'man xinetd.conf'.

Access control

Users of xinetd services can use the TCP Wrappers hosts access rules, provide access control via the xinetd configuration files, or use a mixture of both.

only_from:
Allows only the specified hosts to use the service.

no_access:
Blocks listed hosts from using the service.

access_times:
Specifies the time range when a particular service may be used. The time range must be stated in 24-hour format notation, HH:MM-HH:MM.

# cat /etc/xinetd.d/telnet

service telnet
{
disable = no
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
no_access = 192.168.10.0/24
log_on_success += PID HOST EXIT
access_times = 09:00-17:00
}


In this example, when a client system from the 192.168.10.0/24 network, such as 192.168.10.100, tries to access the Telnet service, it receives the following message:

Connection closed by foreign host.

In addition, their login attempts are logged in /var/log/messages as follows:

Jun 17 14:58:33 localhost xinetd[5285]: FAIL: telnet address from=192.168.10.100
Jun 17 14:58:33 localhost xinetd[5283]: START: telnet pid=5285 from=192.168.10.100
Jun 17 14:58:33 localhost xinetd[5283]: EXIT: telnet status=0 pid=5285

xinetd and TCP Wrappers

The following is the sequence of events followed by xinetd when a client requests a connection (an illustrative TCP Wrappers rule set is sketched after these steps):

First: The xinetd daemon accesses the TCP Wrappers hosts access rules using a libwrap.a library call (the /etc/hosts.allow and /etc/hosts.deny files). If a deny rule matches the client, the connection is dropped. If an allow rule matches the client, the connection is passed to xinetd.

Then: The xinetd daemon checks its own access control rules both for the xinetd service and the requested service. If a deny rule matches the client, the connection is dropped. Otherwise, xinetd starts an instance of the requested service and passes control of the connection to that service.
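
As an illustrative sketch of the first step (the daemon name in.telnetd and the network are taken from the Telnet examples in this document and are only assumptions), the TCP Wrappers files could contain entries such as:

# cat /etc/hosts.allow
in.telnetd : 192.168.1.0/255.255.255.0

# cat /etc/hosts.deny
in.telnetd : ALL

With these entries, the libwrap check allows Telnet connections only from the 192.168.1.0/24 network before xinetd applies its own access control rules.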

Binding and Redirection

Xinetd supports binding the service to an IP address and redirecting incoming requests for that service to another IP address, hostname, or port. The xinetd daemon is able to accomplish this redirection by spawning a process that stays alive for the duration of the connection between the requesting client machine and the host actually providing the service, transferring data between the two systems.

service telnet
{
socket_type = stream
wait = no
server = /usr/sbin/in.telnetd
log_on_success += DURATION USERID
log_on_failure += USERID
bind = 111.111.111.111
redirect = 10.0.0.1 23
}


The bind and redirect options in this file ensure that the Telnet service on the machine is bound to the external IP address (111.111.111.111), the one facing the Internet. In addition, any requests for Telnet service sent to 111.111.111.111 are redirected via a second network adapter to an internal IP address (10.0.0.1) that only the firewall and internal systems can access. The firewall then sends the communication between the two systems, and the connecting system thinks it is connected to 111.111.111.111 when it is actually connected to a different machine.

Resource Management and DoS attacks

The xinetd daemon can add a basic level of protection from Denial of Service (DoS) attacks. The following is a list of directives which can aid in limiting the effectiveness of such attacks; a combined example is sketched after the list:

per_source:
Defines the maximum number of instances for a service per source IP address. It accepts only integers as an argument and can be used in both xinetd.conf and in the service specific configuration files in the xinetd.d directory.

cps:
Defines the maximum number of connections per second. This directive takes two integer arguments separated by white space. The first argument is the maximum number of connections allowed to the service per second. The second argument is the number of seconds that xinetd must wait before re-enabling the service. It accepts only integers as arguments and can be used in either the xinetd.conf file or the service-specific configuration files in the xinetd.d/ directory.

max_load:
Defines the CPU usage or load average threshold for a service. It accepts a floating point number argument. The load average is a rough measure of how many processes are active at a given time. See the uptime, who, and procinfo commands for more information about load average. There are more resource management options available for xinetd. Refer to the xinetd.conf man page for more information.
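
As a combined sketch, assuming illustrative limit values for the Telnet service (any of these directives could also go in the defaults section of xinetd.conf):

service telnet
{
...
per_source = 5
cps = 25 30
max_load = 2.5
}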

Managing xinetd services

Some standard Linux services are designed to be executed through xinetd. For example, the 'rsync' server, installed by the rsync RPM, is configured and executed through the xinetd service using /etc/xinetd.d/rsync.

The first step to run the rsync server through xinetd is to install the RPM.

# yum install rsync

The next step is to activate the rsync server within the xinetd service. This is done by setting the directive 'disable = no' in the /etc/xinetd.d/rsync file and restarting the xinetd service. The chkconfig command can do both steps automatically:

# chkconfig rsync on
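
As a quick check that chkconfig updated the service file (a sketch showing only the relevant line; whitespace may differ):

# grep disable /etc/xinetd.d/rsync
disable = no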

Finally, verify that xinetd is listening on TCP port 873, the rsync server port:

# netstat -putan | grep 873
tcp ... :::873 ... xinetd/2234

ELA_40_Linux MAIL

Linux Mail

Email message transfer is done using a client/server topology. An email message is created with a mail client program, which sends the message to an email server using the SMTP protocol. That server then forwards the message to the recipient's SMTP email server, from where the message is delivered to the recipient's email client using the POP or IMAP protocols.

SMTP (Simple Mail Transfer Protocol) is a set of rules for transferring email data, used by the various mail transfer agents that transport email messages from the source where the email is created to the destination recipient. Like many services on the Internet, SMTP depends on DNS resolution and routing in order to deliver the email to the recipient's SMTP email server. Once the email has reached that SMTP server, it is handed to the final user's email client using the POP or IMAP protocols.

SMTP Server

As said before, the purpose of an SMTP server is to transfer email between mail servers. To send email, the client sends the message to an outgoing mail server, which in turn contacts the destination mail server for delivery.

The SMTP protocol does not require authentication: it allows anyone on the Internet to send email to anyone else, or even to large groups of people. Imposing relay restrictions prevents arbitrary users on the Internet from sending email through your SMTP server to other servers on the Internet. Servers that do not impose such restrictions are called open relay servers and end up labelled as SPAM SMTP servers.

On RHEL6 the default SMTP email server is 'postfix', installed by the postfix rpm. The 'postfix' service listens on TCP port 25, is configured through the files in the /etc/postfix directory and logs to /var/log/maillog.

# yum install postfix

/etc/postfix/main.cf

The main postfix SMTP server configuration file is /etc/postfix/main.cf. The following are the main directives that can be configured.

# cat /etc/postfix/main.cf

...
# This directive sets the domain for which this postfix server acts as the SMTP server.
mydomain = info.net

...
# Completes the email address with the 'mydomain' domain. For example, mail for user 'john' -> 'john@info.net'
myorigin = $mydomain
# The server interfaces on which the SMTP server must listen on TCP port 25. In this case it listens on all system interfaces.
inet_interfaces = all

...
# The mydestination parameter specifies the list of domains that this machine considers itself the final destination for.
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain, server.$mydomain, mail.$mydomain

...
# The mynetworks parameter specifies the list of "trusted" SMTP clients that have more privileges than "strangers".
mynetworks = 192.168.1.0/24, 127.0.0.0/8

...
# The home_mailbox parameter specifies the mailbox pathname, relative to a user's home directory, where mail will be stored.
home_mailbox = Maildir/
...


Once the postfix service is configured, just start it and make sure that it will be started at boot.

# /etc/init.d/postfix restart
# chkconfig postfix on
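
To verify that postfix is listening on TCP port 25 (output abridged as in the rsync example above; the PID is illustrative):

# netstat -putan | grep :25
tcp ... 0.0.0.0:25 ... LISTEN 2201/master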

SMTP Server Security

Firewall

As said before, the SMTP server listens on TCP port 25. Port 25 UDP is also reserved for SMTP, so the rules below open both ports to allow the SMTP service through a firewall, although in practice SMTP traffic is carried over TCP.

-A INPUT -m state --state NEW -m tcp -p tcp --dport 25 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 25 -j ACCEPT

SElinux

The only SELinux parameter that can be configured is the ability of the postfix service to write to /var/spool/mail. This directory is where the email received by the SMTP server is stored, and the boolean is enabled by default.

# setsebool -P allow_postfix_local_write_mail_spool 1
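
The current value of the boolean can be checked with getsebool:

# getsebool allow_postfix_local_write_mail_spool
allow_postfix_local_write_mail_spool --> on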

Host Based Restrictions

Using the following configuration parameters in the /etc/postfix/main.cf and /etc/postfix/access files, it is possible to restrict access to the SMTP server based on host or IP address.

smtpd_client_restrictions=hash:/etc/postfix/access --> /etc/postfix/main.cf

# echo "192.168.2.0/24 OK" >> /etc/postfix/access
# echo "192.168.2.10 REJECT" >> /etc/postfix/access
# postmap /etc/postfix/access
# /etc/init.d/postfix reload


With this configuration the SMTP server allows connections from clients on the 192.168.2.0/24 LAN except from the 192.168.2.10 IP address.
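
A quick way to confirm what the access map returns for a given key, without sending any mail, is postmap -q (a sketch using the entries created above):

# postmap -q 192.168.2.10 /etc/postfix/access
REJECT
# postmap -q 192.168.2.0/24 /etc/postfix/access
OK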

Email Domain Forwarding

Sometimes it is necessary to forward all incoming email for a secondary domain, which our SMTP server recognises as a virtual domain, to a secondary SMTP server.

virtual_alias_domains = example.net --> /etc/postfix/main.cf
virtual_alias_maps = hash:/etc/postfix/virtual --> /etc/postfix/main.cf
# echo "@example.net infonetaccount@example.com" >> /etc/postfix/virtual

# postmap /etc/postfix/virtual
# /etc/init.d/postfix reload


With this configuration any incoming email addressed to '@example.net' will be forwarded to the 'infonetaccount@example.com' account on the example.com SMTP server.

Email Forwarding : /etc/aliases

For one-to-one email forwarding it is much easier to use the /etc/aliases file.

# echo "root: john" >> /etc/aliases
# echo "sales: charles,john,mike" >> /etc/aliases
# echo "charles: charles@gmail.com" >> /etc/aliases
# newaliases


With this configuration any email coming to root@info.net will be forwarded to john@info.net without leaving our info.net SMTP server. We have also created an email group called 'sales@info.net'; any email directed to this address will be forwarded to the john, charles and mike email addresses. Finally, any email coming to charles@info.net will be forwarded to charles@gmail.com on the gmail.com SMTP server.

SMTP and DNS

When an SMTP server has to send an email to an external SMTP server, it relies on DNS name resolution to send the email to the correct SMTP server IP. For example, if our info.net SMTP server has to send a message to the charles@gmail.com account, it will look up the MX record of the gmail.com domain using the local DNS configured in /etc/resolv.conf, as described in the DNS lesson.

# dig gmail.com mx
...
gmail.com. 345 IN MX 10 alt1.gmail-smtp-in.l.google.com.
...


So the email to charles@gmail.com will be forwarded by our SMTP server to alt1.gmail-smtp-in.l.google.com on TCP port 25, where the gmail.com SMTP server is running.

Given this strong relation between SMTP and DNS, in order to make your SMTP server public on the Internet and receive emails from other servers, your domain's DNS server must have an MX record pointing to the SMTP server IP. Of course, your DNS must also be public on the Internet.
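
As a sketch of what that looks like in the info.net zone file (the host name mail.info.net and the IP address are assumptions for illustration):

info.net. IN MX 10 mail.info.net.
mail.info.net. IN A 192.168.1.10

External SMTP servers that resolve the MX record for info.net will then deliver mail for the info.net domain to mail.info.net on TCP port 25.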

When name resolution is not working, postfix doesn't know where to send your outbound e-mail. These messages are placed in a queue, and postfix tries to resend them at regular intervals. Messages like the following are written to /var/log/maillog in this situation.

550 5.1.2 mike@gmail.com ... Host unknown

All mail queued on the SMTP server can be displayed with the mailq command, together with some information about why it is queued.

# mailq
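
Once the resolution problem is fixed, delivery of the queued messages can be retried immediately instead of waiting for the next interval by flushing the queue (a standard postfix command, mentioned here as an aside):

# postqueue -f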

SMTP Open Relay

An Open Relay SMTP server is configured in such a way that it processes mail messages from any client on the Internet (Open) even when neither the sender nor the recipient is a local user (Relay). An Open Relay SMTP server can be used by spammers to send SPAM emails anywhere on the Internet, getting the server labelled as a SPAM source, after which all emails coming from it will be treated as SPAM.

By default the postfix SMTP server on RHEL6 systems is NOT configured as an Open Relay SMTP server. It only allows RELAY from clients on the internal network specified in the 'mynetworks' configuration parameter in /etc/postfix/main.cf.

mynetworks = 192.168.1.0/24, 127.0.0.0/8

*** Lab1 shows the procedure to test whether an SMTP server is configured as an Open Relay. ***
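
As a hedged sketch of the idea behind such a test (host names and addresses are illustrative): connect to the SMTP port from a host outside 'mynetworks' and try to relay mail between two non-local domains. A server that is not an open relay rejects the RCPT TO command with a 'Relay access denied' style error, as in the last line below.

# telnet mail.info.net 25
HELO client.example.org
MAIL FROM:<user@example.org>
RCPT TO:<someone@example.com>
554 5.7.1 <someone@example.com>: Relay access denied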

POP/IMAP Server

POP (Post Office Protocol) and IMAP (Internet Message Access Protocol) are two protocols used by email client applications to retrieve email from mail servers. While POP downloads all e-mail to the client, an IMAP server keeps all mail messages on the server. IMAP is commonly used by businesses that serve users who log in from different locations. It is also the most common mail delivery protocol for Web-based mail services.

On RHEL6 systems both protocols are handled by the 'dovecot' service, installed by the dovecot rpm.

# yum install dovecot

The /etc/dovecot/dovecot.conf file is used to configure the POP3 (port 110 TCP/UDP) and IMAP (port 143 TCP/UDP) services and their secure versions, POP3S (port 995 TCP/UDP) and IMAPS (port 993 TCP/UDP).

# cat /etc/dovecot/dovecot.conf

...
protocols = imap pop3
...
mail_location = maildir:~/Maildir
...


# /etc/init.d/dovecot restart
# chkconfig dovecot on

Firewall

Open the corresponding TCP/UDP ports to allow the POP/IMAP dovecot services through the system firewall.

-A INPUT -m state --state NEW -m tcp -p tcp --dport 110 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 110 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 995 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 995 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 993 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 993 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 143 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 143 -j ACCEPT

Email clients

In order to use a graphical email client with the SMTP server to send and receive emails, the 'evolution' GUI email client can be used; it is installed by the evolution rpm.

# yum install evolution

The email client must be configured to use the SMTP server for outgoing email and dovecot (POP/IMAP) to retrieve email from the server. It is also possible to use command line email clients such as the 'mail' command, installed by default on a RHEL6 server installation by the 'mailx' rpm.

# echo "Test message" | mail -s "Test subject" root@info.net

It uses the system SMTP server to send and receive emails. It can also be used to read email: just type the 'mail' command and it will open the local mailbox of the user that executed it.