Installing and Configuring a Linux VPN Server (Part 2)

In the first article in this two-part series we took a look at the process of installing the IPSec VPN software FreeS/WAN on a Red Hat Linux server. In this article we continue the process, taking a look at how the service is configured and, ultimately, how a secure tunnel is established.

The configuration of FreeS/WAN is not terribly difficult but can be a little bit tricky. You will need to configure some general start-up parameters and create connection objects that define the tunnels you wish to implement. To start, draw a basic diagram of the implementation scenario, which can be added to later. There are many variables that need to be documented at this point including the interfaces and their IP addresses, private subnet addresses, and the address of the next-hop Internet gateways.

For this example, assume that what we’re trying to connect are two single subnet networks that are connected to the Internet using either straight routing or masquerade. On the test network, we’re using masquerade to NAT private internal IP addresses (192.168.x.y) to public external interfaces, with this configured correctly on both gateway systems running FreeS/WAN (note that for illustration purposes, the entire test network is running private addresses). Internal clients should ultimately point to the IPSec server’s internal interface as their default gateway. Given that IPSec isn’t configured with any tunnels yet, we’ll assume that your private internal clients can ping Internet systems from both networks.
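
To make this concrete, the connection objects mentioned above end up as conn sections in /etc/ipsec.conf on each gateway. The following is only a rough sketch of what one such definition might look like for a two-gateway scenario; the connection name, all addresses, and the next-hop values are invented purely for illustration, and the exact directives you need will depend on your own network and authentication choices:

conn site-to-site
    left=203.0.113.1
    leftsubnet=192.168.1.0/24
    leftnexthop=203.0.113.254
    right=198.51.100.1
    rightsubnet=192.168.2.0/24
    rightnexthop=198.51.100.254
    authby=secret
    auto=start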

Installing and Configuring a Linux VPN Server

The best thing about Linux is that you can make it do just about anything. For many of you, Linux may already be your router, firewall, NAT server, and more. With a little work, you can easily extend your Linux setup to build a cost-effective WAN replacement in the form of IPSec VPN tunnels. In this article, the first of a two-part series, I’ll walk you through the installation process that will prepare you to deploy your own Linux-based IPSec VPN servers. In the second part of the series we’ll take a look at configuring those servers so that you can allow encrypted communications to take place between two or more locations, using the Internet as your virtual WAN.

When considering a VPN solution, it’s usually important to insist on two things. The first is that it supports at least end-to-end tunnels and roaming users; the second is that it should be standards-based (that means IPSec support) to make interoperability with other systems possible when required. This article covers how to install and configure FreeS/WAN to create secure, IPSec-based VPN tunnels between locations. If your needs are relatively simple, you’ll have a secure Linux-based VPN solution securing your traffic between locations in no time. If you want to get fancy, extending the design outlined here isn’t much more difficult at all.

The main purpose of deploying an IPSec-based VPN tunnel solution is as a replacement or backup for your current (and probably expensive) WAN links. Companies spend thousands of dollars on WAN infrastructure that can potentially be avoided with the proper implementation of a VPN over a standard (but hopefully high-speed) Internet link. The idea is to use Linux and an unsecured network (such as the Internet) as a vehicle for secure communications between two or more locations. This provides easy and seamless connections to remote network servers and resources for our local network users.

Basic Linux Shell Scripting Part 3

In this third and final article in my Shell Scripting series we will study, through examples, methods of creating simple, useful shell scripts. Some of the topics I cover will review the previous two articles; others will cover new material. The idea is to take what you have already learned and begin applying it to real-world situations.

What do I create my scripts in?

Writing your own shell scripts requires you to know the very basic everyday Linux commands. For example, you should know how to copy, move, create new files, etc. The one thing you must know how to do is to use a text editor. There are three major text editors in Linux, vi, emacs and pico. If you are not familiar with vi or emacs, go for pico or some other easy to use text editor.

CAUTION: It is critical that you take care to not perform any scripting functions as the root user. To avoid this simply type the following commands:

adduser scriptuser
passwd scriptuser
su scriptuser

This will create a user that you can dedicate to running scripts.

Your First BASH Program

Our first program will be the classical “Hello World” program. Yes, if you have programmed before, you must be sick of this by now. However, this is traditional, and who am I to change tradition? The “Hello World” program simply prints the words “Hello World” to the screen. So fire up your text editor, and type the following inside it:

#!/bin/bash
echo "Hello World"

The first line tells Linux to use the bash interpreter to run this script. In this case, bash is in the /bin directory. If bash is in a different directory on your system, make the appropriate changes to the line. Explicitly specifying the interpreter is important, as it tells Linux exactly which program should run the instructions in the script. The next thing to do is to save the script. We will call it hello.sh. With that done, you need to make the script executable:

shell$ chmod 700 ./hello.sh

shell$ ./hello.sh
Hello World

There it is! Your first program! Boring and useless as it is, this is how everyone starts out. Just remember the process here. Write the code, save the file, and make it executable with chmod.

A More Useful Program

We will write a program that will move all files into a directory, and then delete the directory along with its contents, and then recreate the directory. This can be done with the following commands:

shell$ mkdir trash
shell$ mv * trash
shell$ rm -rf trash
shell$ mkdir trash

Instead of having to type all that interactively on the shell, write a shell program instead:

#!/bin/bash
mkdir trash
mv * trash
rm -rf trash
mkdir trash
echo "Deleted all files!"

Save it as clean.sh and now all you have to do is run clean.sh; it moves all files to a directory, deletes them, recreates the directory, and even prints a message telling you that it successfully deleted all files. So remember, if you find that you are doing something that takes a while to type over and over again, consider automating it with a shell program. You may notice that this is very similar to batch file programming under DOS. In fact, UNIX shell scripts are the grandfather of batch files.

Comments

Comments help to make your code more readable. They do not affect the output of your program. They are made specially for you to read. All comments in bash begin with the hash symbol, “#”, except for the first line (#!/bin/bash). The first line is not a comment. Any line after the first line that begins with a “#” is a comment. Take the following piece of code:

#!/bin/bash
# this program counts from 1 to 10:
for i in 1 2 3 4 5 6 7 8 9 10; do
echo $i
done

Even if you do not know bash scripting, you immediately know what the above program does, because of the comment. It is good practice to make use of comments. You will find that if you need to maintain your programs in the future, having comments will make things easier.

Variables

Variables are basically “boxes” that hold values. You will want to create variables for many reasons. You will need them to hold user input, arguments, or numerical values. Take for instance the following piece of code:

#!/bin/bash
x=12
echo "The value of variable x is $x"

What you have done here is give x the value of 12. The line echo “The value of variable x is $x” prints the current value of x. When you define a variable, there must not be any whitespace around the assignment operator “=”. Here is the syntax:

variable_name=this_value

The values of variables can be accessed by prefixing the variable name with a dollar symbol: “$”. As in the above, we access the value of x by using echo $x.

There are two types of variables: local variables and environmental variables. Environmental variables are set by the system and can usually be listed using the env command. Environmental variables hold special values. For instance, if you type:

shell$ echo $SHELL
/bin/bash

You get the name of the shell you are currently running. Environmental variables are defined in /etc/profile and ~/.bash_profile. The echo command is good for checking the current value of a variable, environmental, or local. If you are still having problems understanding why we need variables, here is a good example:

#!/bin/bash
echo "The value of x is 12."
echo "I have 12 pencils."
echo "He told me that the value of x is 12."
echo "I am 12 years old."
echo "How come the value of x is 12?"

Okay, now suppose you decide that you want the value of x to be 8 instead of 12. What do you do? You have to change all the lines of code where it says that x is 12. But wait… there are other lines of code with the number 12. Should you change those too? No, because they are not associated with x. Confusing right? Now, here is the same example, only it is using variables:

#!/bin/bash
x=12 # assign the value 12 to variable x
echo "The value of x is $x."
echo "I have 12 pencils."
echo "He told me that the value of x is $x."
echo "I am 12 years old."
echo "How come the value of x is $x?"

Here, we see that $x will print the current value of variable x, which is 12. So now, if you wanted to change the value of x to 8, all you have to do, is to change the line x=12 to x=8, and the program will automatically change all the lines with $x to show 8, instead of 12. The other lines will be unaffected. Variables have other important uses as well, as you will see later on.

Control Structures

Control structures allow your program to make decisions, and they make your programs more compact. More importantly, they allow us to check for errors. So far, all we have done is write programs that start from the top and run straight through to the bottom until there are no more commands left to run. For instance:

#!/bin/bash
cp /etc/foo .
echo "Done."

This little shell program, call it bar.sh, copies a file called /etc/foo into the current directory and prints “Done” to the screen. This program will work, under one condition. You must have a file called /etc/foo. Otherwise here is what happens:

shell$ ./bar.sh
cp: /etc/foo: No such file or directory
Done.

So you can see, there is a problem. Not everyone who runs your program will have /etc/foo in their system. It would perhaps be better if your program checked if /etc/foo existed, and then if it did, it would proceed with the copying, otherwise, it would quit. In pseudo code, this is what it would look like:

if /etc/foo exists, then
copy /etc/foo to the current directory
print "Done." to the screen.
otherwise,
print "This file does not exist." to the screen
exit

Can this be done in bash? Of course! The collection of bash control structures are, if, while, until, for and case. Each structure is paired, meaning it starts with a starting “tag” and ends with an ending “tag”. For instance, the if structure starts with if, and ends with fi. Control structures are not programs found in your system. They are a built in feature of bash. Meaning that from here on, you will be writing your own code, and not just embedding programs into your shell program.

if … else … elif … fi

One of the most common structures is the if structure. This allows your program to make decisions, like, “do this if this condition exists, else, do something else”. To use the if structure effectively, we must make use of the test command. test checks for conditions, that is, existing files, permissions, or similarities and differences. Here is a rewrite of bar.sh:

#!/bin/bash
if test -f /etc/foo
then
    # file exists, so copy and print a message.
    cp /etc/foo .
    echo "Done."
else
    # file does NOT exist, so we print a message and exit.
    echo "This file does not exist."
    exit
fi

Notice how we indent lines after then and else. Indenting is optional, but it makes reading the code much easier in a sense that we know which lines are executed under which condition. Now run the program. If you have /etc/foo, then it will copy the file, otherwise, it will print an error message. test checks to see if the file /etc/foo exists. The -f checks to see if the argument is a regular file. Here is a list of test’s options:

-d check if the file is a directory
-e check if the file exists
-f check if the file is a regular file
-g check if the file has SGID permissions
-r check if the file is readable
-s check if the file’s size is not 0
-u check if the file has SUID permissions
-w check if the file is writeable
-x check if the file is executable

else is used when you want your program to do something else if the first condition is not met. There is also the elif which can be used in place of another if within the if. Basically elif stands for “else if”. You use it when the first condition is not met, and you want to test another condition.
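
As a quick sketch of elif in use, the following extends the /etc/foo example to test a second condition when the first one fails:

#!/bin/bash
# check /etc/foo: is it a directory, a regular file, or missing?
if test -d /etc/foo
then
    echo "/etc/foo is a directory."
elif test -f /etc/foo
then
    echo "/etc/foo is a regular file."
else
    echo "/etc/foo does not exist."
fi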

If you find that you are uncomfortable with the format of the if and test structure, that is:

if test -f /etc/foo
then

then, you can do it like this:

if [ -f /etc/foo ]; then

The square brackets form test. If you have experience in C programming, this syntax might be more comfortable. Notice that there has to be white space surrounding both square brackets. The semicolon: “;” tells the shell that this is the end of the command. Anything after the semicolon will be run as though it is on a separate line. This makes it easier to read basically, and is of course, optional. If you prefer, just put then on the next line.

When using variables with test, it is a good idea to have them surrounded with quotation marks. Example:

if [ "$name" -eq 5 ]; then

while … do … done

The while structure is a looping structure. Basically what it does is, “while this condition is true, do this until the condition is no longer true”. Let us look at an example:

#!/bin/bash
while true; do
echo "Press CTRL-C to quit."
done

true is actually a program. What this program does is continuously loop over and over without stopping. Using true is considered to be slow because your shell program has to call it up first and then run it. You can use an alternative, the “:” command:

#!/bin/bash
while :; do
echo "Press CTRL-C to quit."
done

This achieves the exact same thing, but is faster because it is a built in feature in bash. The only difference is you sacrifice readability for speed. Use whichever one you feel more comfortable with. Here is perhaps, a much more useful example, using variables:

#!/bin/bash
x=0; # initialize x to 0
while [ "$x" -le 10 ]; do
echo "Current value of x: $x"
# increment the value of x:
x=$(expr $x + 1)
sleep 1
done

As you can see, we are making use of the test (in its square bracket form) here to check the condition of the variable x. The option -le checks to see if x is less than, or equal to the value 10. In English, the code above says, “While x is less than 10 or equal to 10, print the current value of x, and then add 1 to the current value of x.”. sleep 1 is just to get the program to pause for one second. You can remove it if you want. As you can see, what we were doing here was testing for equality. Check if a variable equals a certain value, and if it does, act accordingly. Here is a list of equality tests:

Checks equality between numbers:
x -eq y Check if x is equal to y
x -ne y Check if x is not equal to y
x -gt y Check if x is greater than y
x -lt y Check if x is less than y

Checks equality between strings:
x = y Check if x is the same as y
x != y Check if x is not the same as y
-n x Evaluates to true if x is not null
-z x Evaluates to true if x is null.
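
As a small sketch of the string tests in action (the name compared against is just an example):

#!/bin/bash
name="bob"
if [ "$name" = "bob" ]; then
    echo "Hello bob."
else
    echo "You are not bob."
fi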

The above looping script we wrote should not be hard to understand, except maybe for this line:

x=$(expr $x + 1)

The comment above it tells us that it increments x by 1. But what does $(…) mean? Is it a variable? No. In fact, it is a way of telling the shell that you want to run the command expr $x + 1, and assign its result to x. Any command enclosed in $(…) will be run:

#!/bin/bash
me=$(whoami)
echo "I am $me."

Try it and you will understand what I mean. The above code could have been written as follows with equivalent results:

#!/bin/bash
echo "I am $(whoami)."

You decide which one is easier for you to read. There is another way to run commands or to give variables the result of a command. This will be explained later on. For now, use $(…).

until … do … done

The until structure is very similar to the while structure. The only difference is that the condition is reversed. The while structure loops while the condition is true. The until structure loops until the condition is true. So basically it is “until this condition is true, do this”. Here is an example:

#!/bin/bash
x=0
until [ "$x" -ge 10 ]; do
echo "Current value of x: $x"
x=$(expr $x + 1)
sleep 1
done

This piece of code may look familiar. Try it out and see what it does. Basically, until will continue looping until x is either greater than, or equal to 10. When it reaches the value 10, the loop will stop. Therefore, the last value printed for x will be 9.

for … in … do … done

The for structure is used when you are looping through a range of values. For instance, you can write a small program that prints ten dots:

#!/bin/bash
echo -n "Checking system for errors"
for dots in 1 2 3 4 5 6 7 8 9 10; do
echo -n "."
done
echo "System clean."

In case you do not know, the -n option to echo prevents a new line from automatically being added. Try it once with the -n option, and then once without to see what I mean. The variable dots loops through values 1 to 10, and prints a dot at each value. Try this example to see what I mean by the variable looping through the values:

#!/bin/bash
for x in paper pencil pen; do
echo "The value of variable x is: $x"
sleep 1
done

When you run the program, you see that x will first hold the value paper, and then it will go to the next value, pencil, and then to the next value, pen. When it finds no more values, the loop ends.

Here is a much more useful example. The following program adds a .html extension to all files in the current directory:

#!/bin/bash
for file in *; do
echo "Adding .html extension to $file..."
mv "$file" "$file.html"
sleep 1
done

If you do not know, * is a wild card character. It means, “everything in the current directory”, which is in this case, all the files in the current directory. All files in the current directory are then given a .html extension. Recall that variable file will loop through all the values, in this case, the files in the current directory. mv is then used to rename the value of variable file with a .html extension.

case … in … esac

The case structure is very similar to the if structure. Basically it is great for times where there are a lot of conditions to be checked, and you do not want to have to use if over and over again. Take the following piece of code:

#!/bin/bash
x=5 # initialize x to 5
# now check the value of x:
case $x in
0) echo "Value of x is 0."
;;
5) echo "Value of x is 5."
;;
9) echo "Value of x is 9."
;;
*) echo "Unrecognized value."
esac

The case structure will check the value of x against 3 possibilities. In this case, it will first check if x has the value of 0, and then check if the value is 5, and then check if the value is 9. Finally, if all the checks fail, it will produce a message, “Unrecognized value.”. Remember that “*” means “everything”, and in this case, “any other value other than what was specified”. If x holds any other value other than 0, 5, or 9, then this value falls into the *’s category. When using case, each condition must be ended with two semicolons. Why bother using case when you can use if? Here is the equivalent program, written with if. See which one is faster to write, and easier to read:

#!/bin/bash
x=5 # initialize x to 5
if [ "$x" -eq 0 ]; then
echo "Value of x is 0."
elif [ "$x" -eq 5 ]; then
echo "Value of x is 5."
elif [ "$x" -eq 9 ]; then
echo "Value of x is 9."
else
echo "Unrecognized value."
fi

Quotations

Quotation marks play a big part in shell scripting. There are three types of quotation marks. They are the double quote: “, the forward quote: ‘, and the back quote: `. Does each of them mean something? Yes.

The double quote is used mainly to hold a string of words and preserve whitespace. For instance, “This string contains whitespace.”. A string enclosed in double quotes is treated as one argument. For instance, take the following examples:

shell$ mkdir hello world
shell$ ls -F
hello/ world/

Here we created two directories. mkdir took the strings hello and world as two arguments, and thus created two directories. Now, what happens when you do this:

shell$ mkdir "hello world"
shell$ ls -F
hello/ hello world/ world/

It created a directory with two words. The quotation marks made two words, into one argument. Without the quotation marks, mkdir would think that hello was the first argument, and world, the second.

Forward quotes are used primarily to deal with variables. If a variable is enclosed in double quotes, its value will be evaluated. If it is enclosed in forward quotes, its value will not be evaluated. To make this clearer, try the following example:

#!/bin/bash
x=5 # initialize x to 5
# use double quotes
echo "Using double quotes, the value of x is: $x"
# use forward quotes
echo 'Using forward quotes, the value of x is: $x'

See the difference? You can use double quotes if you do not plan on using variables for the string you wish to enclose. In case you are wondering, yes, forward quotes can be used to preserve whitespace just like double quotes:

shell$ mkdir 'hello world'
shell$ ls -F
hello world/

Back quotes are completely different from double and forward quotes. They are not used to preserve whitespace. If you recall, earlier on, we used this line:

x=$(expr $x + 1)

As you already know, the result of the command expr $x + 1 is assigned to variable x. The exact same result can be achieved with back quotes:

x=`expr $x + 1`

Which one should you use? Whichever one you prefer. You will find the back quote used more often than the $(…). However, I find $(…) easier to read, especially if you have something like this:

#!/bin/bash
echo "I am `whoami`"

Arithmetic with BASH

BASH allows you to evaluate arithmetic expressions. As you have already seen, arithmetic is performed using the expr command. However, this, like the true command, is considered to be slow. The reason is that in order to run true and expr, the shell has to start them up. A better way is to use a built in shell feature which is quicker. An alternative to true, as we have also seen, is the “:” command. An alternative to using expr is to enclose the arithmetic operation inside $((…)). This is different from $(…); the number of brackets will tell you that. Let us try it:

#!/bin/bash
x=8 # initialize x to 8
y=4 # initialize y to 4
# now we assign the sum of x and y to z:
z=$(($x + $y))
echo "The sum of $x + $y is $z"

As always, whichever one you choose is purely up to you. If you feel more comfortable using expr rather than $((…)), by all means, use it.

bash is able to perform addition, subtraction, multiplication, division, and modulus. Each action has an operator that corresponds to it:

ACTION OPERATOR
Addition +
Subtraction -
Multiplication *
Division /
Modulus %

Everyone should be familiar with the first four operations. If you do not know what modulus is, it is the value of the remainder when two values are divided. Here is an example of arithmetic in bash:

#!/bin/bash
x=5 # initialize x to 5
y=3 # initialize y to 3

add=$(($x + $y)) # add the values of x and y and assign it to variable add
sub=$(($x - $y)) # subtract the values of x and y and assign it to variable sub
mul=$(($x * $y)) # multiply the values of x and y and assign it to variable mul
div=$(($x / $y)) # divide the values of x and y and assign it to variable div
mod=$(($x % $y)) # get the remainder of x / y and assign it to variable mod

# print out the answers:
echo "Sum: $add"
echo "Difference: $sub"
echo "Product: $mul"
echo "Quotient: $div"
echo "Remainder: $mod"

Again, the above code could have been done with expr. For instance, instead of add=$(($x + $y)), you could have used add=$(expr $x + $y), or, add=`expr $x + $y`.

Reading User Input

Now we come to the fun part. You can make your program so that it will interact with the user, and the user can interact with it. The command to get input from the user, is read. read is a built in bash command that needs to make use of variables, as you will see:

#!/bin/bash
# gets the name of the user and prints a greeting
echo -n "Enter your name: "
read user_name
echo "Hello $user_name!"

The variable here is user_name. Of course you could have called it anything you like. read will wait for the user to enter something and then press ENTER. If the user does not enter anything, read will continue to wait until the ENTER key is pressed. If ENTER is pressed without entering anything, read will execute the next line of code. Try it. Here is the same example, only this time we check to see if the user enters something:

#!/bin/bash
# gets the name of the user and prints a greeting
echo -n "Enter your name: "
read user_name

# the user did not enter anything:
if [ -z "$user_name" ]; then
echo "You did not tell me your name!"
exit
fi

echo "Hello $user_name!"

Here, if the user presses the ENTER key without typing anything, our program will complain and exit. Otherwise, it will print the greeting. Getting user input is useful for interactive programs that require the user to enter certain things. For instance, you could create a simple database and have the user enter things in it to be added to your database.
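
As a rough sketch of that idea, the following script appends whatever the user types to a file; the file name phonelist.txt is made up purely for the example:

#!/bin/bash
# add one entry to a very simple "database"
echo -n "Enter a name and phone number: "
read entry

# complain and quit if nothing was entered:
if [ -z "$entry" ]; then
    echo "You did not enter anything!"
    exit
fi

echo "$entry" >> phonelist.txt
echo "Entry saved."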

Functions

Functions make scripts easier to maintain. Basically it breaks up the program into smaller pieces. A function performs an action defined by you, and it can return a value if you wish. Before I continue, here is an example of a shell program using a function:

#!/bin/bash
# function hello() just prints a message
hello()
{
echo "You are in function hello()"
}

echo "Calling function hello()..."
# call the hello() function:
hello
echo "You are now out of function hello()"

Try running the above. The function hello() has only one purpose here, and that is, to print a message. Functions can of course be made to do more complicated tasks. In the above, we called the hello() function by name by using the line:

hello

When this line is executed, bash searches the script for the line hello(). It finds it right at the top, and executes its contents.

Functions are always called by their function name, as we have seen above. When writing a function, you can either start with function_name(), as we did above, or if you want to make it more explicit, you can write function function_name(). Here is an alternative way to write function hello():

function hello()
{
echo "You are in function hello()"
}

Functions always have empty opening and closing parentheses, “()”, followed by a starting brace and an ending brace, “{…}”. These braces mark the start and end of the function. Any code enclosed within the braces will be executed and belongs only to the function. Functions should always be defined before they are called. Let us look at the above program again, only this time calling the function before it is defined:

#!/bin/bash
echo "Calling function hello()..."
# call the hello() function:
hello
echo "You are now out of function hello()"

# function hello() just prints a message
hello()
{
echo "You are in function hello()"
}

Here is what we get when we try to run it:

shell$ ./hello.sh
Calling function hello()...
./hello.sh: hello: command not found
You are now out of function hello()

As you can see, we get an error. Therefore, always have your functions at the start of your code, or at least, before you call the function. Here is another example of using functions:

#!/bin/bash
# admin.sh - administrative tool

# function new_user() creates a new user account
new_user()
{
echo "Preparing to add a new user..."
sleep 2
adduser # run the adduser program
}

echo "1. Add user"
echo "2. Exit"

echo "Enter your choice: "
read choice

case $choice in
1) new_user # call the new_user() function
;;
*) exit
;;
esac

In order for this to work properly, you will need to be the root user, since adduser is a program only root can run. Hopefully this example (short as it is) shows the usefulness of functions.

Trapping

You can use the built in command trap to trap signals in your programs. It is a good way to gracefully exit a program. For instance, if you have a program running, hitting CTRL-C will send the program an interrupt signal, which will kill the program. trap will allow you to capture this signal, and will give you a chance to either continue with the program, or to tell the user that the program is quitting. trap uses the following syntax:

trap action signal

action is what you want to do when the signal is activated, and signal is the signal to look for. A list of signals can be found by using the command trap -l. When using signals in your shell programs, omit the first three letters of the signal, usually SIG. For instance, the interrupt signal is SIGINT. In your shell programs, just use INT. You can also use the signal number that appears beside the signal name. For instance, the numerical signal value of SIGINT is 2. Try out the following program:

#!/bin/bash
# using the trap command

# trap CTRL-C and execute the sorry() function:
trap sorry INT

# function sorry() prints a message
sorry()
{
echo "I'm sorry Dave. I can't do that."
sleep 3
}

# count down from 10 to 1:
for i in 10 9 8 7 6 5 4 3 2 1; do
echo "$i seconds until system failure."
sleep 1
done
echo "System failure."

Now, while the program is running and counting down, hit CTRL-C. This will send an interrupt signal to the program. However, the signal will be caught by the trap command, which will in turn execute the sorry() function. You can have trap ignore the signal by giving an empty string, "", in place of the action. You can reset the trap by using a dash, "-". For instance:

# execute the sorry() function if SIGINT is caught:
trap sorry INT

# reset the trap:
trap - INT

# do nothing when SIGINT is caught:
trap "" INT

When you reset a trap, it defaults to its original action, which is, to interrupt the program and kill it. When you set it to do nothing, it does just that. Nothing. The program will continue to run, ignoring the signal.

AND & OR

We have seen the use of control structures, and how useful they are. There are two extra things that can be added. The AND: “&&” and the OR “||” statements. The AND statement looks like this:

condition_1 && condition_2

The AND statement first checks the leftmost condition. If it is true, then it checks the second condition. If it is true, then the rest of the code is executed. If condition_1 returns false, then condition_2 will not be executed. In other words:

if condition_1 is true, AND if condition_2 is true, then…

Here is an example making use of the AND statement:

#!/bin/bash
x=5
y=10
if [ "$x" -eq 5 ] && [ "$y" -eq 10 ]; then
echo "Both conditions are true."
else
echo "The conditions are not true."
fi

Here, we find that x and y both hold the values we are checking for, and so the conditions are true. If you were to change the value of x=5 to x=12, and then re-run the program, you would find that the condition is now false.

The OR statement is used in a similar way. The only difference is that it checks if the leftmost statement is false. If it is, then it goes on to the next statement, and the next:

condition_1 || condition_2

In pseudo code, this would translate to the following:

if condition_1 is true, OR if condition_2 is true, then…

Therefore, any subsequent code will be executed, provided at least one of the tested conditions is true:

#!/bin/bash
x=3
y=2
if [ "$x" -eq 5 ] || [ "$y" -eq 2 ]; then
echo "One of the conditions is true."
else
echo "None of the conditions are true."
fi

Here, you will see that one of the conditions is true. However, change the value of y and re-run the program. You will see that none of the conditions are true.

If you think about it, the if structure can be used in place of AND and OR, however, it would require nesting the if statements. Nesting means having an if structure within another if structure. Nesting is also possible with other control structures of course. Here is an example of a nested if structure, which is an equivalent of our previous AND code:

#!/bin/bash
x=5
y=10
if [ "$x" -eq 5 ]; then
if [ "$y" -eq 10 ]; then
echo "Both conditions are true."
else
echo "The conditions are not true."
fi
fi

This achieves the same purpose as using the AND statement. It is much harder to read, and takes much longer to write. Save yourself the trouble and use the AND and OR statements.

Using Arguments

You may have noticed that most programs in Linux are not interactive. You are required to type arguments, otherwise, you get a “usage” message. Take the more command for instance. If you do not type a filename after it, it will respond with a “usage” message. It is possible to have your shell program work on arguments. For this, you need to know the “$#” variable. This variable stands for the total number of arguments passed to the program. For instance, if you run a program as follows:

shell$ foo argument

$# would have a value of one, because there is only one argument passed to the program. If you have two arguments, then $# would have a value of two. In addition to this, each word on the command line, that is, the program’s name (in this case foo), and the argument, can be referred to as variables within the shell program. foo would be $0. argument would be $1. You can have up to 9 variables, from $0 (which is the program name), followed by $1 to $9 for each argument. Let us see this in action:

#!/bin/bash
# prints out the first argument
# first check that there is exactly one argument:
if [ "$#" -ne 1 ]; then
echo "usage: $0 <argument>"
exit
fi

echo "The argument is $1"

This program expects one, and only one, argument in order to run the program. If you type less than one argument, or more than one, the program will print the usage message. Otherwise, if there is an argument passed to the program, the shell program will print out the argument you passed. Recall that $0 is the program’s name. This is why it is used in the “usage” message. The last line makes use of $1. Recall that $1 holds the value of the argument that is passed to the program.
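
Here is a slightly larger sketch that expects two arguments, a source file and a destination, to show $1 and $2 working together:

#!/bin/bash
# copy one file to another name or location
# first check that there are exactly two arguments:
if [ "$#" -ne 2 ]; then
    echo "usage: $0 <source> <destination>"
    exit
fi

cp "$1" "$2"
echo "Copied $1 to $2."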

Redirection and Piping

As we covered before, redirection and piping are very common. Here are some examples and a little review in the context of usable scripts.

shell$ echo "Hello World"
Hello World

Redirection allows you to redirect the output somewhere else, most likely, a file. The “>” operator is used to redirect output.

shell$ echo "Hello World" > foo.file
shell$ cat foo.file
Hello World

Here, the output of echo “Hello World” is redirected to a file called foo.file. When we read the contents of the file, we see the output there. There is one problem with the “>” operator: it will overwrite the contents of any existing file. What if you want to append to it? Then you must use the append operator, “>>”. It is used in exactly the same way as the redirection operator, except that it will not overwrite the contents of the file, but add to it.
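
For example, the second echo below adds a line to foo.file instead of overwriting it:

shell$ echo "Hello World" > foo.file
shell$ echo "Hello again" >> foo.file
shell$ cat foo.file
Hello World
Hello again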

Piping allows you to take the output from a program and then run that output through another program. Piping is done using the pipe operator “|”:

shell$ cat /etc/passwd | grep shell
shell:x:1002:100:X_console,,,:/home/shell:/bin/bash

Here we read the entire /etc/passwd and then pipe the output to grep, which in turn, searches for the string shell and then prints the entire line containing the string, to the screen. You could have mixed this with redirection to save the final output to a file:

shell$ cat /etc/passwd | grep shell > foo.file
shell$ cat foo.file
shell:x:1002:100:X_console,,,:/home/shell:/bin/bash

It worked. /etc/passwd is read, and then the entire output is piped into grep to search for the string shell. The final output is then redirected and saved into foo.file. You will find redirection and piping to be useful tools when you write your shell programs.

Temporary Files

Temporary files are useful when you want to store information, or persist information to disk, for a short period of time. A temporary file can be created with the $$ symbol. $$ expands to the process ID of the running shell, so using it as a prefix or suffix gives the file a name that is unique among running processes.

shell$ touch hello
shell$ ls
hello
shell$ touch hello.$$
shell$ ls
hello hello.689

There it is, your temporary file.
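
Inside a script, a temporary file is typically created, used, and then removed. In this sketch the directory listing and the string searched for are arbitrary examples:

#!/bin/bash
# save a directory listing in a uniquely named temporary file
tmpfile=/tmp/listing.$$
ls -l > "$tmpfile"

# work with the temporary file, then clean it up
grep "foo" "$tmpfile"
rm -f "$tmpfile"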

Return Values

Most programs return a value depending upon how they exit. For instance, if you look at the manual page for grep, it tells us that grep will return a 0 if a match was found, and a 1 if no match was found. Why do we care about the return value of a program? For various reasons. Let us say that you want to check if a particular user exists on the system. One way to do this would be to grep the user’s name in the /etc/passwd file. Let us say the user’s name is foobar:

shell$ grep "foobar" /etc/passwd
shell$

No output. That means that grep could not find a match. But it would be so much more helpful if a message saying that it could not find a match was printed. This is when you will need to capture the return value of the program. A special variable holds the return value of a program. This variable is $?. Take a look at the following piece of code:

#!/bin/bash
# grep for user foobar and pipe all output to /dev/null:
grep "foobar" /etc/passwd > /dev/null 2>&1
# capture the return value and act accordingly:
if [ "$?" -eq 0 ]; then
echo "Match found."
exit
else
echo "No match found."
fi

Now when you run the program, it will capture the return value of grep. If it equals to 0, then a match was found and the appropriate message is printed. Otherwise, it will print that there was no match found. This is a very basic use of getting a return value from a program. As you continue practicing, you will find that there will be times when you need the return value of a program to do what you want.

If you happen to be wondering what 2>&1 means, it is quite simple. Under Linux, these numbers are file descriptors. 0 is standard input (e.g. the keyboard), 1 is standard output (e.g. the monitor) and 2 is standard error (e.g. the monitor). All normal information is sent to file descriptor 1, and any errors are sent to 2. If you do not want error messages popping up, you simply redirect descriptor 2 to /dev/null. Note that this will not stop information from being sent to standard output. For example, if you do not have permission to read another user’s directory, you will not be able to list its contents:

shell$ ls /root
ls: /root: Permission denied
shell$ ls /root 2> /dev/null
shell$

As you can see, the error was not printed out this time. The same applies for other programs and for file descriptor 1. If you do not want to see the normal output of a program, that is, you want it to run silently, you can redirect it to /dev/null. Now if you do not want to see either standard output or errors, then you do it this way:

shell$ ls /root > /dev/null 2>&1

You should recall that this is IO Redirection, and the null device represents nothing or nowhere.

Now what if you want your shell script to return a value upon exiting? The exit command takes one argument: a number to return. Normally the number 0 is used to denote a successful exit with no errors. Any other value normally means an error has occurred; this is for you, the programmer, to decide. Let us look at this program:

#!/bin/bash
if [ -f "/etc/passwd" ]; then
echo "Password file exists."
exit 0
else
echo "No such file."
exit 1
fi

By specifying return values upon exit, other shell scripts you write that make use of this script will be able to capture its return value. This is similar to programming with functions in other languages.
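
For instance, if you saved the script above as check.sh (the name is arbitrary), you, or another script, could inspect its exit status through the $? variable:

shell$ ./check.sh
Password file exists.
shell$ echo $?
0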

Porting BASH Scripts

The best-written scripts are portable. That means that they work on many different versions of Linux and Unix with little modification. The biggest mistake you can make when considering portability is to rely on applications that only exist on a single version of Linux. The foo program is an example: it serves the same purpose as echo, but does not exist on all flavors of Linux.

Basic Linux Shell Scripting Part 2

This article will focus on an important part of shell scripting – redirection and piping. Simply put, these let you control where a command’s input comes from and where its output goes, including feeding the output of one command to another. This allows you to perform tasks such as keyword-searching a directory listing, or making decisions based on the content of a file. There are a few basic things you must understand to perform redirection.

Standard Input

Standard input, or [stdin], refers to the place where input for a command usually comes from: your keyboard. Unless you specify otherwise, input always comes from standard input. Standard input is redirected using the < symbol.
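
For example, the following feeds the contents of /etc/passwd to [wc] on its standard input, which counts the lines; the number shown will of course differ on your system:

shell$ wc -l < /etc/passwd
27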

Standard Output

Like standard input, standard output is the place where, unless modified, the output of commands is sent. This is by default your console, tty, or vty session. To send the output of a command elsewhere, you can use the > or >> symbols. Note that the >> symbol will append data, while the > symbol will overwrite what is already there.

Standard Error

Standard error is the place where errors are displayed for the user to review. This also defaults to your current display session. To record the errors of a command, you can use the 2> symbol. This can be useful if you want to run a command and keep a log of all errors the command generates.

Consider this example of redirection. The first command generates a list of all files on the computer, redirecting the listing to the file dirlist.out and the errors to the file dirlist.err. It is also set to run as a background process, allowing us to continue to work. During its execution I ran [ps] twice; the first time you can see the job running, and the second indicates its completion. Then I made a typo. Finally, there is a listing of the two files used in the first command.
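
A session along those lines might look roughly like this; using [find] to generate the listing, and the particular mistyped command, are assumptions:

shell$ find / > dirlist.out 2> dirlist.err &
shell$ ps
shell$ ps
shell$ lss
bash: lss: command not found
shell$ ls -l dirlist.out dirlist.err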

Piping

Piping allows you to take the output of one command and use it as the input to another. This can be used to “glue” commands together to perform combined operations. You have already done this to some extent when you have run the [ls | more] command to page through longer directory listings. The pipe operator is the [|] symbol, usually located on the same key as the backslash “\”. This symbol performs a dual redirection, redirecting the output of the command on the left side of the symbol to the input of the command on the right side. Consider an example along these lines.
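
Here, as one illustration (the directory and the search string are arbitrary), a listing of /etc is searched for names containing conf, and the result is paged through more:

shell$ ls -l /etc | grep conf | more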

Regular Expressions

Regular expressions were once the primary way that searches were performed in programming languages. They are common in languages such as C and PERL. Regular expressions are becoming increasingly common in modern programming languages such as Java and Visual Basic.NET. Modern Regular Expressions are even standardized in ISO/IEC 9945-2:1993.

Regular Expressions: Regular expressions are a context-independent syntax that can represent a wide variety of character sets and character set orderings, where these character sets are interpreted according to the current locale.

Mastering regular expressions is a worthwhile goal if you plan on working extensively with UNIX, or programming in any modern language. There have been many books published on this topic, with focus books on the various regular expression packages such as those that come with Java, Perl (perlre), Visual Basic.NET and C#.

The GREP utility is produced by GNU, and its primary function is to search through input for defined expressions. In documentation, and in editors such as ed and sed, an expression is traditionally written between two forward slashes, //; the value between these symbols represents the search argument. In its simplest form, a regular expression could be /findme/. This would locate the word “findme”. It’s not until we begin adding wildcard symbols and logic expressions that regular expressions begin to gain power.

Consider the following characters.
1. . (period) – means any single character in this position
2. * – 0 or more occurrences of the preceding character
3. [ ] – a range of acceptable characters
4. [^ ] – a range of unacceptable characters
5. ^ – characters must be at the beginning of the line
6. $ – characters must be at the end of the line
7. \< \> – indicates that the enclosed characters are treated as a word

Consider the following examples.
1. d.g will find dog but not doug.
2. B.* will find all words containing the letter B.
3. [df].g will find dog and fog, but not bog.
4. [^df].g will find log and bog, but not dog and fog.
5. ^ef will find effort but not referee.
6. in$ will find bin but not inside.
7. \<cat\> will find cat but not concatenate.

Using GREP

One of the most powerful commands you can have in your Linux command arsenal is GREP. GREP supports a number of input parameters, each of which can be used to alter the method of search. The man page and the --help argument can be used to obtain a detailed description of each option. GREP is most often used as the target of a pipe from another command. Here is an example of GREP being used to parse through your command history to locate particular lines; it locates all of the history commands that contain [ls].
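
A minimal version of that search (your own history, and therefore the matching lines, will differ) is simply:

shell$ history | grep 'ls'

The same idea works against the history file itself, for example grep 'ls' ~/.bash_history.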

Here are some common questions and answers about grep usage.**

1. How can I list just the names of matching files?

grep -l 'main' *.c

lists the names of all C files in the current directory whose contents mention `main’.

2. How do I search directories recursively?

grep -r 'hello' /home/gigi

searches for `hello’ in all files under the directory `/home/gigi’. For more control of which files are searched, use find, grep and xargs. For example, the following command searches only C files:

find /home/gigi -name '*.c' -print | xargs grep 'hello' /dev/null

This differs from the command:

grep -r 'hello' *.c

which merely looks for `hello' in all files in the current directory whose names end in `.c'. Here the `-r' is probably unnecessary, as recursion occurs only in the unlikely event that one of the `.c' files is a directory.

3. What if a pattern has a leading `-'?

grep -e '--cut here--' *

searches for all lines matching `--cut here--'. Without `-e', grep would attempt to parse `--cut here--' as a list of options.

4. Suppose I want to search for a whole word, not a part of a word?

grep -w 'hello' *

searches only for instances of `hello' that are entire words; it does not match `Othello'. For more control, use `\<' and `\>' to match the start and end of words. For example:

grep 'hello\>' *

searches only for words ending in `hello’, so it matches the word `Othello’.

5. How do I output context around the matching lines?

grep -C 2 'hello' *

prints two lines of context around each matching line.

6. How do I force grep to print the name of the file?

Append `/dev/null':

grep 'eli' /etc/passwd /dev/null

gets you:

/etc/passwd:eli:DNGUTF58.IMe.:98:11:Eli Smith:/home/do/eli:/bin/bash

7. Why do people use strange regular expressions on ps output?

ps -ef | grep '[c]ron'

If the pattern had been written without the square brackets, it would have matched not only the ps output line for cron, but also the ps output line for grep. Note that on some platforms ps limits its output to the width of the screen; grep has no limit on the length of a line except the available memory.

8. Why does grep report “Binary file matches”?

If grep listed all matching “lines” from a binary file, it would probably generate output that is not useful, and it might even muck up your display. So GNU grep suppresses output from files that appear to be binary files. To force GNU grep to output lines even from files that appear to be binary, use the `-a' or `--binary-files=text' option. To eliminate the “Binary file matches” messages, use the `-I' or `--binary-files=without-match' option.

9. Why doesn’t `grep -lv’ print nonmatching file names?

`grep -lv' lists the names of all files containing one or more lines that do not match. To list the names of all files that contain no matching lines, use the `-L' or `--files-without-match' option.

10. I can do OR with `|’, but what about AND?

grep 'paul' /etc/motd | grep 'franc,ois'

finds all lines that contain both `paul’ and `franc,ois’.

11. How can I search in both standard input and in files?

Use the special file name `-':

cat /etc/passwd | grep 'alain' - /etc/motd

12. How to express palindromes in a regular expression?

It can be done by using back references. For example, a palindrome of 5 characters can be written in BRE:

grep -w -e '\(.\)\(.\).\2\1' file

It matches words such as “radar” or “civic”.

Guglielmo Bondioni proposed a single RE that finds all palindromes up to 19 characters long.

egrep -e '^(.?)(.?)(.?)(.?)(.?)(.?)(.?)(.?)(.?).?\9\8\7\6\5\4\3\2\1$' file

Note that this uses GNU ERE extensions; it might not be portable to other greps.

13. Why do my expressions with the vertical bar fail?

/bin/echo "ba" | egrep '(a)\1|(b)\1'

Here the first alternate branch fails, and because the first group never took part in the match, the back reference \1 in the second branch has nothing to refer to, so that branch fails as well. An input such as “aaba” will match: the first group does participate in the match, so its value can be reused by the back reference.

14. What do grep, fgrep, egrep stand for ?

grep comes from the way line editing was done on Unix. For example, ed uses this syntax to print a list of matching lines on the screen.

global/regular expression/print

g/re/p

fgrep stands for Fixed grep, egrep Extended grep.

** Source: http://www.gnu.org/software/grep/doc/grep_13.html#SEC1

Conclusion

So as you can probably imagine, what you can do with tools such as GREP, Input and Output redirection and Piping is only limited by your creativity and knowledge of the system you are working on. In my next article we will take our basic shell scripts, as well as these commands and begin creating some functional administrative scripts.

Basic Linux Shell Scripting Concepts

If you have worked with any flavor of Unix you have no doubt encountered the shell script. Shell scripts are one of the most important aspects of a Unix system. In fact, virtually all operations that take place on a Unix computer are accomplished via shell scripts. Shell scripts can range in complexity from simple to massive. In my installation article you encountered the ./configure and make scripts. If you use vi to review the source of these scripts, they can be overwhelming. In this article I will unravel some of the basics of shell scripting. This is part 1 of a three-part series on shell scripting. The next article will address manipulation of command input and output.

What is a Script?

A script is a series of statements that are used to construct an operation. Scripts are often used to automate repetitive tasks and to simplify complex ones. One of the biggest advantages that Unix has traditionally held over systems such as Windows and Netware is its ability to automate tasks without the need for complex programming languages. That is changing however. On Windows, the Windows Script Host (WSH) is used to provide command line access to languages such as VBScript, and Jscript, as well as a host of others. This replaces the primitive batch file mechanism that many people are familiar with. Netware provides a powerful logon script language. Regardless of the source of implementation, scripts generally provide similar functionality. Below is a list of common features found in script engines:

Temporary storage of information: Maintain state of counters, names, etc. Usually accomplished through variables.

Conditional Logic: Allows you to change the path of execution based on changing conditions and values. This is accomplished via IF statements and LOOP constructs.

Arrays: Tables of data based in memory.

Access to key operating system functions: Allows the manipulation of the computer and its contents outside of the scope of the file system. On Windows this is done via ADSI (Active Directory Scripting Interface) and Network and File System objects. On Unix everything is in the file system, so this is done by reading from and writing to files.

Command Line Execution: Administrators can run scripts by executing a command at a prompt. In Windows this is done via WSH and on Unix by the Execute (x) bit.

Configuring a Linux Newsgroup (NNTP) Server

This is actually a case-study on a real Linux implementation that I did for a customer of mine. It goes to show how a little knowledge can go a long way in solving a problem and saving someone money. Funny thing about customers is that the more money you save them on licensing the more they tend to pay you for your time.
I was approached by a small software development company to create a newsgroup server for their customers. This organization has several small applications that they sell to call centers and helpdesk organizations. They wanted a place for administrators and users to post questions and read about the latest updates and patches. I stumbled across this job quite by accident, and after some discussions with them we agreed to set up a Red Hat Linux 7.3 server running the INND (Internet News Daemon) package provided by ISC (www.isc.org). What I am going to do is walk you through the entire project, including server configuration, installation and configuration of INND, and a few other things as well. Let’s get started.

Server Configuration

Whenever you attempt to setup any type of server you have to keep a few things in mind. How much traffic will this server see? What kind of hardware will you need? What kind of storage will you need? What about backups and fault tolerance? It is estimated that this server would receive about 10-20 posts per day, and that the storage would probably not exceed 100 MB of news at any one time. Although this does not seem like much, it’s typical for any small news server. Messages tend to be very small, and with message aging, old messages would be deleted, helping to minimize storage needed. Below are the details of the server used.

Table 1

The general design was to provide a base installation with the innd package loaded. News articles would be loaded on a separate partition mirrored to the extra drive. A second partition on the extra drive would be used for weekly full system backups using tar. The partitioning scheme was as follows.

Table 2

The RAID mirror set was created using Disk Druid during the installation. Both drives were on the same SCSI-2 Wide chain, which made for fast read/write access.

Introduction to Linux Process Management

In this article we will cover the basics of process management in Linux. This topic is of particular importance if you are responsible for administering a system which has not yet been proven stable, that is, one not fully tested in its configuration. You may find that as you run software, problems arise requiring administrator intervention. This is the world of process management.

Process Management

Any application that runs on a Linux system is assigned a process ID or PID. This is a numerical representation of the instance of the application on the system. In most situations this information is only relevant to the system administrator who may have to debug or terminate processes by referencing the PID. Process Management is the series of tasks a System Administrator completes to monitor, manage, and maintain instances of running applications.

Multitasking

Process management begins with an understanding of the concept of multitasking. Linux is what is referred to as a preemptive multitasking operating system. Preemptive multitasking systems rely on a scheduler. The function of the scheduler is to control the process that is currently using the CPU. In contrast, cooperative multitasking systems such as Windows 3.1 relied on each running process to voluntarily relinquish control of the processor. If an application in such a system hung or stalled, the entire computer system stalled. By making use of an additional component to preempt each process when its “turn” is up, stalled programs do not affect the overall flow of the operating system.

Each “turn” is called a time slice, and each time slice is only a fraction of a second long. It is this rapid switching from process to process that allows a computer to “appear” to be doing two things at once, in much the same way a movie “appears” to be a continuous picture.

Types of Processes

There are generally two types of processes that run on Linux. Interactive processes are those that are invoked by a user and can interact with the user. vi is an example of an interactive process. Interactive processes can be further classified into foreground and background processes. The foreground process is the process you are currently interacting with; it uses the terminal as its stdin (standard input) and stdout (standard output). A background process is not interacting with the user and can be in one of two states: paused or running.

The following exercise will illustrate foreground and background processes.
1. Logon as root.
2. Run [cd /]
3. Run [vi]
4. Press [ctrl + z]. This will pause vi.
5. Type [jobs]
6. Notice that vi is listed as a stopped job in the background.
7. Type [fg %1]. This will bring the first background process to the foreground.
8. Close vi.
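On a typical bash shell, the output of step 5 looks something like the following; the exact spacing and wording may vary slightly between shells and versions.

# jobs
[1]+  Stopped                 vi
# fg %1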

The second general type of process that runs on Linux is a system process, or daemon (pronounced “day-mon”). Daemon is the term used to refer to processes that run on the computer and provide services but do not interact with the console. Most server software is implemented as a daemon. Apache, Samba, and INN are all examples of daemons.

Any process can become a daemon as long as it is run in the background and does not interact with the user. A simple example of this can be achieved using the [ls -R] command. This recursively lists all subdirectories below the current directory, and is similar to the [dir /s] command on Windows. The command can be set to run in the background by typing [ls -R &], and although technically you have control of the shell prompt, you will be able to do little work as the screen fills with the output of the process running in the background. You will also notice that the standard pause (ctrl+z) and kill (ctrl+c) keystrokes do little to help you.
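One way around this, sketched below, is to redirect the output to a file so the background job stays quiet, and then use [jobs] and [kill] with the job number to manage it. The target file name /tmp/listing.txt is simply an example.

ls -R / > /tmp/listing.txt 2>&1 &   # run in the background, sending all output to a file
jobs                                # confirm the job is running
kill %1                             # terminate background job number 1 when finished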

Managing Shared Libraries and Installing Software from Tarballs

Many Linux applications do not install properly, or do not function, without the presence of required shared libraries.  These libraries are nothing more than files that must be in the correct place.  Since distributions of Linux vary the exact location of many files, soft links play a big role in helping applications run: an application's installation script can locate a library wherever it lives on the file system and then create links to it that the application can use.

You can list the libraries that an application requires by using the [ldd] command.  Be sure to supply the name of the binary you want to create a listing for.  The following command will list all the libraries required for the [ls] command.

ldd /bin/ls
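The output varies by distribution and version, but it will look something along these lines. The list has been trimmed for readability; ldd also prints the address at which each library is loaded, and your system may show additional libraries.

        libc.so.6 => /lib/libc.so.6
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2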

If you are attempting to install an application, you may receive an error stating that dependencies are not present.  This means that the application requires certain shared libraries to function, and those libraries are not installed.  In this case you have two choices.  You can track down and install the packages that provide the missing libraries first, or you can install the application (RPM) using the --nodeps switch to ignore the missing dependencies and install anyway.  In the latter case, the application will probably not work until you get the missing pieces installed.
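As a quick sketch, forcing an install past the dependency check looks like this; the package file name is just a placeholder.

rpm -ivh some-package-1.0-1.i386.rpm             # normal install; fails if dependencies are missing
rpm -ivh --nodeps some-package-1.0-1.i386.rpm    # install anyway, ignoring the dependency check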

Installing an Application Using a Tarball

Tarballs have been a traditional method of making applications available and are still widely in use today.  By definition, a tarball is an application that has had all of its files packaged into a single archive file, using the [tar] command, for distribution.  Before use, the application must be extracted from the tarball.  Once extracted, the user runs a packaged script to configure the application and complete the installation.  Most tarballs are also compressed using a program such as [gzip].  Compression reduces the overall size of the package, and thus the overall download time.  Note that an application is first tarred, then zipped, as the [gzip] command only works on a single file.  Below are the contents of a CD containing the Samba 2.2 RPM and a tarball for Apache 2.0.

In order to unpack Apache, we must perform a few steps.

1. Copy the files from the CD to a temporary directory on the hard drive.

2. Unzip the files.

3. Untar the files.
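A sketch of those three steps for the Apache tarball might look like the following; the CD mount point and the temporary directory are assumptions for illustration.

cp /mnt/cdrom/httpd-2.0.35.tar.gz /tmp      # 1. copy the tarball to a temporary directory
cd /tmp
gunzip httpd-2.0.35.tar.gz                  # 2. unzip; replaces the .tar.gz with httpd-2.0.35.tar
tar -xvf httpd-2.0.35.tar                   # 3. untar; extracts into the httpd-2.0.35 directory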


A few conclusions can be drawn from this process.

  • When a tarball is unzipped, gunzip does not create an additional file; it replaces the existing .tar.gz file with the uncompressed .tar file.
  • Extracted files are placed relative to the directory you run [tar] from, typically in a subdirectory named after the package.  This can be important, as the installation instructions for some packages require that the files be untarred in a particular location.
  • The -xf switches extract the files.  A common combination is -xzvf, which extracts the files, gives a verbose listing of each file as it is extracted, and unzips the archive first, so a separate gunzip step is not needed.

Now you have to follow the installation instructions for your particular software package.  In the case of Apache 2.0, these can be found in the httpd-2.0.35/INSTALL file, which directs you to read httpd-2.0.35/docs/manual/install.html.

Note:  To complete the installation of Apache, you will need a C compiler on your system.  The install script is pretty straightforward, but unless you installed the cc or gcc compiler during installation, you will need to get it.  The gcc compiler ships on Disc 2 of Red Hat, but is broken into a dozen or so smaller RPMs, which can be a challenge to install.
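For reference, the typical build sequence for a source tarball such as Apache 2.0 is sketched below. The directory follows the extraction example above, and the --prefix value is an example; the package's INSTALL file may call for different [./configure] arguments.

cd /tmp/httpd-2.0.35
./configure --prefix=/usr/local/apache2    # check the system and generate the Makefiles
make                                       # compile the source code
make install                               # copy the compiled files into place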

Conclusion

That’s pretty much the skinny on installing software.  If you are on Debian, or some platform that supports Debian packages, you will have a tool and command set very similar to RPM.  Unfortunately there is not yet a standardized installation interface, as there is on Windows, so you may find installing some packages challenging, depending on the form they come in and how readily available tools such as compilers are.  Tarballs are more challenging to install than RPMs, but usually have some common features.  There is usually a README file, an INSTALL file, and a [./configure] script.  The INSTALL file will tell you what arguments to provide to the [./configure] command.  That command is nothing more than a shell script that examines your system and generates the Makefiles; running [make] and [make install] afterwards compiles the program and places the appropriate files in the appropriate locations.

Using RPMs to Install Linux Software

Perhaps one of the most challenging and intimidating tasks a new Linux administrator faces is managing and installing software. Although application developers are moving more and more towards newer, easier-to-use technologies, many applications are still installed using very manual techniques. Below is a list of some of the more common application installation methods.

RPM (Red Hat Package Manager): Similar to Add/Remove Programs in the Windows Control Panel. This is becoming the de facto standard for application installation. RPM maintains a database of installed applications, allowing for easy installation and removal.

DPKG (Debian Package): Application standard for Debian Linux. Very similar to RPM in functionality, but not nearly as common.

Tarball: Refers to a package that has been shipped as a single file created with [tar]. Tarballs may also be zipped. A zipped tarball is roughly comparable to a WinZip archive: files are unpacked to a location, and the administrator must manually create links or desktop items to launch applications.

Library: Similar to a Windows DLL, a library is a collection of code that performs common tasks, such as transmitting data or writing to a device. Many applications rely on the presence of certain libraries to run. This is referred to as a dependency.

Binary Package: Refers to an application that has been compiled from source code to executable code. Since Linux can run on multiple processor architectures, you must find a binary package for your processor.

Source Code Package: A package that is shipped in its native source code form. Before the application can be run it must be compiled on your computer, which involves some extra work on your part. SRPM files are RPM packages containing source code.

Understanding RPM

An RPM package is very similar to an MSI file in Windows. It is a single file that contains the application in binary format, along with information used to query the application and, in some cases, signatures that can be used to validate the integrity of the application. RPMs have a very specific naming format, which is interpreted as follows:

[Package Name]-[Version]-[Release].[Architecture].rpm (for example, name-1.2.3-4.i386.rpm)

By looking at the value of architecture (i386/ppc/alpha) you can determine if a package is correct for your processor.

When an RPM package is installed, database information is maintained in the file system in a binary format. There is no reason to modify this data directly. In order to install software using RPM you must be logged on with a root-level account. This is necessary as RPM sometimes updates shared system libraries.
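A few of the most common rpm invocations are sketched below; the package file name and version are placeholders, not a specific build.

rpm -ivh samba-2.2.3a-6.i386.rpm    # install a package, showing hash-mark progress
rpm -q samba                        # query the database to see if (and which version of) a package is installed
rpm -qa | grep samba                # list all installed packages, filtered by name
rpm -Uvh samba-2.2.3a-6.i386.rpm    # upgrade the package, or install it if not present
rpm -e samba                        # erase (uninstall) the package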

SAMBA: Configuring NetBIOS Support in Linux

Many people are under the misconception that simply having a network protocol such as TCP/IP in common is sufficient for two operating systems to communicate. Nothing could be further from the truth. Although a common protocol is required, it is only part of the picture. Let’s look at a simple example: the web.

In order to browse a web site you must have a network/transport layer protocol in common. This protocol provides connectivity and routing functionality, but it does not allow applications to communicate. In order for a web client, such as Netscape, to browse a web server, such as Apache, there must be a common application layer protocol. In this case, the application layer protocol is HTTP. HTTP provides the basic set of commands for retrieving and posting information on the web, but it does not actually transport the data; that is handled by the lower layer protocols. File sharing is no exception. In order to browse the file system of a remote system, there must be a common application layer protocol. On Windows systems, this protocol is the Server Message Block (SMB) protocol.

Windows provides an SMB client built into all Windows operating systems, hidden in the functionality of Explorer. This client is managed via the Workstation service and the Client for Microsoft Networks. Windows also provides an SMB server in the form of File and Print Sharing for Microsoft Networks and the Server service. If any of these components is uninstalled or disabled, then SMB, and thus file sharing, functionality is not available.

The common open source alternative/supplement to the Client for Microsoft Networks and File and Print Sharing for Microsoft Networks is Samba. Samba provides server and client components that, when installed, allow a Linux computer to appear in “My Network Places” and expose shares, as well as connect to and work with shares hosted on Windows.
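As a taste of what that looks like in practice, here is a minimal sketch of an smb.conf share definition, along with the client command used to list the shares on a Windows host. The workgroup name, NetBIOS name, share name, path, host name, and user name are all assumptions for illustration, and the location of smb.conf may vary by distribution.

# Fragment of /etc/samba/smb.conf (path assumed; may vary by distribution)
[global]
   workgroup = WORKGROUP          # must match the Windows workgroup or domain name
   netbios name = LINUXBOX        # the name the Linux box will advertise on the network

[public]
   path = /srv/public             # directory exposed as the share
   read only = no
   guest ok = yes

# From the Linux side, list the shares offered by a Windows machine:
#   smbclient -L //WINHOST -U username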