Tuesday, December 28, 2010

Scope of the Perl special variables $1 to $9

 Perl provides the special variables $1 to $9, which are dynamically scoped (like variables declared with local). They contain the sub-patterns (capture groups) matched by the most recent successful pattern match. For example:
#!/usr/bin/perl

my $var="Perl is beautiful";
my $var1="Perl is beautiful";
$var =~ m/(\w+) (.*)/;
print "The first word is: ", $1, "\n";
print "The remaining part is: ", $2, "\n";

Thursday, December 23, 2010

How to prepare Oracle insert queries for data present in flat file

    In this article, we are going to see how to load text data present in a flat file into an Oracle table. Data can be loaded into an Oracle table from a flat file in two ways:

1. Using SQL Loader.
2. By Preparing Insert queries.

 Let us discuss the second method, preparing insert queries:

 Assume we have an EMP table in the Oracle database with the following fields:
EMP_ID          NUMBER(4)
EMP_NAME        VARCHAR2(20)
EMP_SAL         NUMBER(5)
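As a minimal sketch of the idea (the file names emp.dat and emp_insert.sql and the sample rows are purely illustrative, and the flat file is assumed to be comma-separated with its fields in the same order as the table columns), awk can wrap each record into an INSERT statement which can later be run through sqlplus:
#cat emp.dat
101,Blogger,4000
102,Guru,5000
#awk -F, '{ printf "INSERT INTO EMP (EMP_ID, EMP_NAME, EMP_SAL) VALUES (%s, \047%s\047, %s);\n", $1, $2, $3 }' emp.dat > emp_insert.sql
#cat emp_insert.sql
INSERT INTO EMP (EMP_ID, EMP_NAME, EMP_SAL) VALUES (101, 'Blogger', 4000);
INSERT INTO EMP (EMP_ID, EMP_NAME, EMP_SAL) VALUES (102, 'Guru', 5000);
The \047 in the printf format is the octal escape for a single quote, used to quote the VARCHAR2 value.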

Sunday, November 7, 2010

5 important things to follow to be a fast learner

  Learning is a process that is mandatory in every professional's life. When you get into an organization, you are always expected to learn fast and become productive soon. Is a person's learning always proportional to his work experience, or can one learn faster? Let us see the 5 most important things that will help a professional learn faster in the workplace:

1. Don't give up easily:  This is the most important of all the things one needs to follow. When you encounter a scenario wherein you are not able to solve something or are not getting a solution, don't give up. Try as much as you can. Every time you feel like giving up, tell yourself that every problem has a solution. Only when you finally feel you have tried your level best and genuinely need help should you go to your colleagues. Let me tell you the disadvantage of giving up early.

Thursday, October 28, 2010

5 most important reasons why a Unix developer should use VIM


  VIM is a text editor, an extension of vi; the name stands for Vi IMproved, reflecting the many features it adds on top of vi. We saw how to install VIM in one of our earlier articles: VIM Installation. Let us see the 5 most compelling reasons why a UNIX developer should use VIM:

1.  File already open alert: Every Unix programmer has at least one bad experience of overwriting a file that was open in more than one terminal. This cannot happen in vim: if a file is opened in one terminal while it is already open in another, a warning message appears, roughly like the one shown below. This is a very helpful feature for programmers.
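For illustration, opening a file that is already open elsewhere typically produces vim's swap-file warning; trimmed, and with wording that varies between versions, it looks roughly like this:
E325: ATTENTION
Found a swap file by the name ".alpha.c.swp"
The message goes on to describe the other editing session and then asks whether to open the file read-only, edit it anyway, recover, quit or abort.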

Monday, October 18, 2010

User Account & Shell

    When a new user account is created in UNIX, a number of attributes are defined for the new user. The following are the main ones:
  • User Name
  • User Id
  • Group Id
  • Home directory
  • Default Shell
   User name is the name given to the new user in the system. From any account, to find out the user who is currently working or logged in, a couple of standard commands can be used, as shown below:
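A minimal sketch (assuming the logged-in user is 'blogger'); either of the following prints the current user:
#whoami
blogger
#echo $USER
blogger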

Wednesday, September 15, 2010

5 different ways to do file listing without ls command

  ls has to be the command which every UNIX user has used the most. Well, what if the ls command did not exist? Could we still list the files and directories without it? In other words, can we simulate the ls command? There are many different ways to achieve that. Let us see some of them.

1. The simplest of all is using the echo command:
echo *
  In case you also want to list the hidden files as well, the dot files have to be matched explicitly, as in the sketch below.
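A minimal sketch of that idea: the .* pattern picks up the hidden (dot) files in addition to the normal ones (note that it also matches the . and .. entries):
echo .* *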

Monday, September 13, 2010

EXINIT vs .exrc

In one of our earlier articles, we discussed the use of the .exrc file. In this article, we will discuss the EXINIT variable and how it is related to the .exrc file.

  EXINIT is an environment variable read by the vi command. When vi opens a file, it first checks the EXINIT variable and, if it is set, applies the settings found there.
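As a minimal sketch in ksh/sh (the particular settings and the file name are just illustrative), exporting EXINIT makes vi apply those settings to every file it opens:
export EXINIT='set number autoindent'
vi alpha.c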

Tuesday, August 31, 2010

Unix File Descriptors

 Unix has 3 standard file descriptors:
  • 0 - Standard input
  • 1 - Standard output
  • 2 - Standard error
Every file descriptor is identified by a number; the above list shows the 3 standard file descriptors and their numbers. Before we go too deep into this, let us go straight to some examples.

Output File Descriptor:
#cat f1
I am an Indian,
And I love Unix.
#
 In the above example, the cat command displayed the contents of the file f1. The output of the cat command was written to file descriptor 1, the standard output, which by default is connected to the terminal. To prove it, let's try this:
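A minimal sketch (the file name out.txt is just illustrative): redirecting file descriptor 1 explicitly sends the same output to a file instead of the terminal, showing that cat really does write to descriptor 1:
#cat f1 1> out.txt
#cat out.txt
I am an Indian,
And I love Unix.
#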

Monday, August 23, 2010

find files modified in the last X days and X hours and X mins

  find in Unix, as we know, is a command which nobody can live without. In this article, we are going to discuss only finding files with respect to their modification time, say files modified in the last X minutes or X hours. This can be done with the options available in the find command. However, when the period is not a whole number of days, say finding files modified in the last 30 hours, some Unix flavors do not have a direct option. Let us see in this article how to get these things done.

 The basic syntax of the find command is:

find path options
where path is the path to search
        and options are the options given to find command.

 In all the examples below, the path is the current directory and hence we use . (dot).

1. To find files modified in the last 5 days:
find . -mtime -5
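As a sketch of where this is heading (assuming a find that supports the -mmin option, as GNU find and most modern flavors do), minutes give the finer control needed when the period is not a whole number of days:
find . -mmin -30
find . -mmin -1800
The first command finds files modified in the last 30 minutes; the second covers the last 30 hours, expressed as 1800 minutes.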

Monday, July 26, 2010

5 different ways of doing Arithmetic operations in UNIX

 In this article, we will see the different options available for performing arithmetic operations in UNIX. These come in handy while working at the command prompt or while writing scripts. For the sake of simplicity, we will take the example of adding two numbers.

1. The first example makes use of the expr command. All the other arithmetic operators (-, /, %) can be used in the same way, except for * (multiplication).
$ x=3
$ y=4
$ expr $x + $y
7
  To multiply the numbers, precede the * with a \, as shown below. This is done in order to prevent the shell from interpreting the * as a wildcard.
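A minimal sketch, continuing with the same x and y as above:
$ expr $x \* $y
12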

Monday, July 12, 2010

How to access a child shell env variable in parent shell?

    An environment variable is one which can be accessed in all the child or sub shells of the environment, whereas a local variable can be accessed only in the shell in which it is defined. We saw this in detail in one of our earlier articles on the difference between an environment variable and a local variable.


   Let's try something to understand this:
$ NUM=20
$ echo $NUM
20
$ ksh
$ echo $NUM

$
  In the above example, NUM is a local variable set to 20. On entering the sub-shell or child shell (ksh), NUM is not recognized because it is local to the parent shell.
$ export NUM=20
$ echo $NUM
20
$ ksh
$ echo $NUM
20
$ export VAR=30
$ echo $VAR
30
$ exit   
$ echo $VAR

$
  In the above example, NUM is set as an environment variable using export, and on entering the sub-shell (ksh) we are able to access it. However, the environment variable VAR set in the child shell is not accessible once we return (exit) to the parent shell.

 In UNIX, environment variables defined in a shell can only be accessed in its child or sub shells, never in its parent shell. However, there are instances when we would like the parent shell to access an environment variable that was set in a child shell.


Let us write a couple of sample scripts to simulate the problem. We are going to write 2 scripts, 'first' and 'second'. The contents of the files 'first' and 'second' are shown below:
$ cat first
#!/usr/bin/ksh
echo "In parent"
second    
echo "The value of MSG is : $MSG"
$ cat second
#!/usr/bin/ksh
echo "In child"
export MSG="ENVIRON"
On running the script 'first', the following happens:
$ first
In parent
In child
The value of MSG is :
  The script 'second' is called from 'first', so 'second' runs as a sub-shell of 'first'. The variable MSG set in 'second' is therefore, as explained before, not accessible in the parent 'first'.


Solution:
Modify the scripts 'first' and 'second' as shown below:
$ cat first
#!/usr/bin/ksh
echo "In parent"
second   
. env_file
echo "The value of MSG is : $MSG"
$ cat second
#!/usr/bin/ksh
echo "In child"
export MSG="ENVIRON"
printenv | sed 's/^/export /;s/=/=\"/;s/$/\"/' > env_file
Now, on running the script 'first', the following happens:
$ first
In parent
In child
The value of MSG is : ENVIRON
  So, we have made the parent shell script 'first' access the environment variables of the child shell script 'second'.

What did we do:

1. In the script 'second', after setting the environment variable, the list of all environment variables (printenv) is written to a temporary file, env_file.

2. The sed command adds the word 'export' at the beginning of every line and wraps each variable value in double quotes. This lets us simply source the file in the parent shell to export the variables.

3. In the parent 'first', the script 'second' is called. After the call to 'second', env_file is sourced using the dot (.) command, and hence all the variables become available in the parent as well.

4. On printing the variable MSG, we get the value set in the child, and so we have accessed a child shell environment variable in the parent.


Enjoy with Shells!!!

PS: This article uses ksh as the example. To try the above in csh/tcsh, appropriate modifications need to be made, since the syntax for setting environment variables and for sourcing a file is different there.

Monday, July 5, 2010

How to read database table definition from Unix Shell?

  A UNIX developer or an admin often needs to refer to a database table, either to check a particular field type or to check the columns of a table. Instead of connecting to sqlplus manually every time, or opening Toad, just to get the column names, we can write a simple shell script which reads the table definition. In this article, we will see how to create a simple script for this requirement and use it like a command.

Let us write a script in a file named 'getTable':
#!/usr/bin/ksh

if [ "$1" = "-c" ]; then
   cols=1
   table=$2
else
   cols=0
   table=$1
fi

USER="scott"
PASSWD="tiger"
INST="school"

x=`sqlplus -s $USER/$PASSWD@$INST  <<EOF
set heading off
spool result.txt
desc $table
spool off
EOF`

head -1 result.txt | grep -q ERROR

if [ $? -eq 0 ]; then
   echo Invalid table $table
   exit
fi

if [ $cols -eq 1 ]; then
  awk 'NR>2{print $1}' result.txt
else
  awk 'NR>2{printf "%-30s  %-15s\n", $1, !($4)?$2:$4;}' result.txt
fi

\rm result.txt
How to run the script:

1. To get the columns and field types, say, for a table 'employee', the script 'getTable' can be run as shown below. (To learn how to run a script like a command, refer to this.)
#getTable employee
EMP_ID         NUMBER(3)
EMP_NAME       VARCHAR2(20)
EMP_DOB        DATE
2. In order to get only the columns of the employee table:
#getTable -c employee
EMP_ID
EMP_NAME
EMP_DOB  
3.  If one is looking for the definition of a particular field, a 'grep' on the output will do as well.
#getTable employee | grep NAME
EMP_NAME         VARCHAR2(20)

4.  If the table name is invalid, an appropriate error message will be provided:
#getTable employees
Invalid table employees

The script is self-explanatory. However, for beginners, this is what it does:

1.  Establishes a connection to sqlplus.
2.  Gets the definition of the table.
3.  If the table is not defined, prints an error and quits.
4.  If the -c option is specified, displays only the column names; otherwise displays the columns along with their field types.
5.  The first 2 lines of the spooled output contain header information and are hence discarded.
6.  Keep in mind that the desc command output contains 3 columns: the column name, the NOT NULL indicator and the field type.


Happy Shell Scripting!!!


Note: The example taken here is of Oracle database.

Monday, June 28, 2010

what is SUID?

  SUID, Set User-ID, is one of the most beautiful concepts in UNIX. The common definition given for SUID is that it is an advanced file permission which allows a user to execute a file as if the file's owner were executing it, and the famous example used for SUID is the passwd command.

Let us do a case study right away to understand what exactly is SUID bit and how to use it.
 
A user 'blogger1' writes a simple script:
#cat test.sh
#!/usr/bin/ksh
dt=`date`
echo $USER  $dt  >> ~blogger1/log
echo "Updated the log file successfully."
  The above example shows a simple shell script which writes the username and date-time to a log file and echoes a confirmation message. On running the script, everything happens as expected for the user 'blogger1', as shown below.
#chmod 755 test.sh
#ls -l test.sh log
-rwxr-xr-x  1 blogger1 blogger1 60 Jun 24 21:44 test.sh
-rw-r--r--  1 blogger1 blogger1  0 Jun 24 21:44 log
#./test.sh
Updated the log file successfully.
#cat log
blogger1 Fri Jun 25 08:01:16 IST 2010
   Now, the user 'blogger2' has been asked to run this script every day. Since the script 'test.sh' has execute permission for others, 'blogger2' can execute it.

The user 'blogger2' logs into his account and does:
#PATH=$PATH:~blogger1
#which test.sh
/home/blogger1/test.sh
#test.sh
test.sh: line 3: /home/blogger1/log: Permission denied
#
   'blogger2' updated his PATH to include the directory where test.sh is present. On trying to run the script, 'blogger2' got a "Permission denied" error on the log file. The error occurred because the 'log' file has write permission only for its owner, 'blogger1'. When 'blogger2' runs the script, it is effectively 'blogger2' trying to write to the file 'log', on which he has no permission, and hence the error.

  At the outset, an obvious solution is to give the user 'blogger2' write permission on the file 'log'. Let's try it and see.

The user  'blogger1' gives the write permission on the log file:
#chmod o+w log
#ls -l log
-rw-r--rw-  1 blogger1 blogger1  20 Jun 24 21:44 log
Now, the user 'blogger2' tries to run the script:
#test.sh
Updated the log file successfully.
#cat ~blogger1/log
blogger1 Fri Jun 25 08:01:16 IST 2010
blogger2 Fri Jun 25 08:41:16 IST 2010
   The script ran successfully. However, the problem is not solved; instead it got bigger. Though the write permission was given on the 'log' file only to let 'blogger2' run the script, it effectively allows 'blogger2' to open the 'log' file and edit it however he wishes, because he now has full write access to the file itself.

  So, we want a solution wherein 'blogger2' cannot edit the 'log' file directly, yet can still run the script 'test.sh' which updates the 'log' file. In other words, we would like some kind of permission by which the user running the script ('blogger2', the real user) temporarily gets the permissions of the file's owner ('blogger1') while the script runs, i.e. the effective user becomes the owner.
 
 This is where SUID comes in. When the SUID ('s') bit is set on an executable, whoever runs it gets the same permissions as the owner of the file for the duration of the run. (Note that many UNIX flavors honour the SUID bit only on binary executables and ignore it on interpreted shell scripts; the session below is from a system that allows setuid scripts.) The SUID bit can be set on a file by adding the 's' bit as shown below:
#chmod o-w log
#chmod u+s test.sh
#ls -l test.sh log
-rwsr-xr-x  1 blogger1 blogger1 60 Jun 24 21:44 test.sh
-rw-r--r--  1 blogger1 blogger1 20 Jun 24 21:44 log
#
     Once the SUID bit is applied, any user who runs the executable gets the permissions of its owner while running it. So, when 'blogger2' runs test.sh, UNIX treats him as having the same permissions as the owner 'blogger1' has on the 'log' file, and hence 'blogger2' can update the 'log' file successfully through the script.

'blogger2' now tries to run the script:
#test.sh
Updated the log file successfully.
#cat ~blogger1/log
blogger1 Fri Jun 25 08:01:16 IST 2010
blogger2 Fri Jun 25 08:41:16 IST 2010
blogger2 Fri Jun 25 08:58:14 IST 2010
#
    This is how the SUID bit works. The same concept is used in the passwd command as well. The passwd command can be used by any user to set or change his password. When the passwd command runs, it internally updates the system file /etc/passwd, on which only the root user has write permission. Because the passwd executable is SUID-enabled (and owned by root), any user can change his password, effectively updating the /etc/passwd file.
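For illustration, a typical listing of the passwd binary (the exact path, size and date vary from system to system) shows the 's' in the owner-execute position, marking it as a root-owned SUID executable:
#ls -l /usr/bin/passwd
-rwsr-xr-x  1 root  root  27936 Feb 11  2010 /usr/bin/passwd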


Enjoy with SUID!!!

P.S. SUID is one of the most DANGEROUS features in UNIX. I will explain why so in one of my future articles.

Monday, June 21, 2010

5 different ways to count the total lines in a file

 In one of our earlier articles, we saw the different ways to find the length of a variable. Along the same lines, let us see the different ways to get the count of the total lines in a file.


1. The most common method is piping the output of cat into wc -l.
# cat file | wc -l
2. grep has the '-c' option to count the lines matching a pattern. The pattern '.*' matches anything, which essentially means every line is matched.


# grep -c '.*' file
3.  sed uses the '=' command for line numbering. The '$' address restricts it to the last line, so only the total count is printed.
# sed -n '$=' file
4. awk can be used to get the line count using the NR variable. NR contains the current line number, so by printing NR in the END block we get the line number of the last line, which is the total number of lines in the file.
# awk 'END {print NR}' file
5. perl has a similar END block to awk's. However, $. denotes the line number in perl.
# perl -lne 'END {print $.}' file

Monday, June 14, 2010

What is .exrc file for?

   vi is one of the most commonly used editors in UNIX. When we open a file using vi, we apply some settings depending on our needs, say setting the line numbers, the indentation, the tab width and so on. However, these settings or customizations last only as long as the file is open. There are settings a user would like to retain every time any file is opened. Such settings can be grouped in a file called the .exrc file, which makes them permanent.

  Whenever a file is opened using vi, vi looks for the .exrc file in the home directory of the user. If present, it reads the file and applies those settings to the file being opened. Hence any customization we would always like to have should be put in this file. In fact, any command which we can execute in the last-line (ex) mode can be put in the .exrc file, without the leading colon.

Settings commonly maintained in the .exrc file are:

1. Set commands: options set in vi, like set list, set number, etc.
2. Abbreviations: frequently used words can be abbreviated. For example, say a user types the word 'include' a lot; the user can create the abbreviation 'inc' for it.
3. Mapping keys: keys can be mapped to commands or made into hot-key combinations. We saw how to map function keys in one of our earlier articles.

A typical $HOME/.exrc file will look like as shown below:
set number
set autoindent
set nowrapscan
ab inc include
map Q :q!
1. The first three entries are set commands: they turn on line numbers and auto-indentation, and nowrapscan stops searches from wrapping around past the end of the file. With these settings, whenever a file is opened, line numbers and auto-indentation are enabled automatically, and searches do not wrap.


2. The abbreviation 'ab inc include': whenever you type 'inc' followed by a space in a file, it automatically gets expanded to 'include'. This is a very helpful feature at times.

3. map Q :q! - This maps the Q key to the quit command. So, whenever the user wants to quit the file, instead of typing ':q!', the user can simply press 'Q' in escape (command) mode and hit Enter.

   The .exrc file must be present in the home directory, and there is no limit on the number of customizations that can be put in it.

Thursday, June 10, 2010

Different ways to split the PATH variable

  Hardly a day passes for a UNIX programmer or an administrator without executing the command 'echo $PATH'. A typical output of this command is shown below:
#echo $PATH
/usr/xpg4/bin:/usr/bin:/bin:/usr/local/include
  In the above example, for simplicity, we have shown only 4 components in the PATH variable. However, in a real environment, the number of components in a PATH variable can be quite large, and it becomes cumbersome to read it or to tell whether a particular component is present. In fact, even doing a grep on 'echo $PATH' won't help, since it will either return the whole string or nothing.

  It would have been more readable had the PATH variable been split or broken and displayed as shown below:
/usr/xpg4/bin
/usr/bin
/bin
/usr/local/include
Let us see the different ways in which we can split the PATH variable:

1. The simplest, and my favorite, is echo piped to tr (tr is one of the most powerful commands). This simply substitutes every ':' with a newline.
#echo $PATH | tr ':' '\n'
2. There are many ways to achieve the same in awk. This one uses the record separator, RS.
#echo $PATH | awk  '1' RS=":"
     The default RS is a newline. Here we tell awk to treat ':' as the record separator, so each component becomes a separate record and gets printed on its own line.

3. awk provides the gsub function, which we can use to split the variable. Here gsub substitutes every ':' with a newline:
#echo $PATH | awk 'gsub ( ":","\n" )'
4. Using ':' as the field delimiter, we can read the components as separate fields and print them one by one:
#echo $PATH | awk -F: '{ for(j=1;j<=NF;j++) print $j;}'
5. The last one uses sed; of all these, it is the option I would prefer the least. sed replaces ('s') all ('g') occurrences of ':' — here with a form-feed character, because many sed implementations do not accept a literal newline in the replacement the way tr does, hence this workaround:
#echo $PATH | sed "s/:/`echo  '\f'`/g"
   Friends, in every article, we try to solve a problem with as many different options as possible. The reason is simple: Next time we encounter an issue, we start thinking in many different ways.

Happy Unix Programming!!!

Tuesday, June 1, 2010

Different ways to print non-empty lines in a file

 There are times when we want to display only the actual lines of a file, leaving out the blank lines. There are various methods by which we can print only the non-empty lines of a file.

1. The simplest of all is using the famous 'grep' command:
grep -v '^$' file
     This is one of the most common methods used to get the non-empty lines of a file. The '^' anchors the match to the beginning of a line and '$' to the end of a line. So '^$' matches a line that begins and immediately ends, which is nothing but a blank line. The -v option inverts the match, giving us every line other than the blank lines.

2. Different ways of achieving the same thing in sed:
sed '/^$/D' file
      The above command deletes ('D') every line matching the blank-line pattern and displays the rest.
sed -n '/^$/!p' file
     This form suppresses the default output (-n) and prints (p) only the lines that do not match (!) the blank-line pattern.

3. Three different ways of getting non-empty lines using awk:
awk NF file
     This is my favourite of all. NF holds the number of fields on the current line; on a blank line NF is 0, which evaluates to false, so only non-blank lines are printed.

awk 'length' file
      This one is almost the same as the above, but uses the length function and prints only those lines whose length is greater than 0.
awk '$0 !~ /^$/' file
      Another awk option, equivalent to the previous methods. The !~ operator means "does not match", so lines matching the blank-line pattern are skipped.

4. Getting the non-empty lines using perl:
perl -lne 'print if length($_) ' file
      This is the perl version of the awk length approach above.
perl -lne 'print if $_ ne ""' file
     $_ holds the current line. This prints only those lines which are not (ne) empty.
perl -ne 'print unless /^$/' file
     'print unless' prints only the lines that do not match the blank-line pattern.

Sunday, May 23, 2010

How to use an extern variable in C?

   An extern variable in C is an extension of the global variable concept. First, let us see what a global variable is and the difference between a global variable and an extern variable.

What is a global variable?

       Any variable declared outside a function block is a global variable. A global variable can be accessed by any function in the file in which it is defined; its scope is the entire file in which it is present.
int globalVar;
   The variable globalVar is defined as a global variable: memory space is allocated for it, and that memory location is accessed by the name globalVar. Since no initial value is specified, the variable gets initialized to zero. This variable can now be accessed from any function in the file in which this definition is present.

What is an extern  variable?

   Assume a scenario where you would like to access the global variable globalVar in another file. If you simply define a global variable with the same name in the second file, the two definitions will clash, since a variable must be defined only once. So, how would you access the global variable globalVar defined in the first file? Simple, the answer is extern.
extern int globalVar;
  When you use the extern keyword before the global variable declaration, the compiler understands that you want to access a variable defined in another file, and hence does not allocate any memory for it; the declaration simply refers to the global variable defined in the other file. In this fashion, you can use the extern keyword in any number of files to access the global variable globalVar. However, the definition should exist in only one place.

Let us see an example:

# cat f1.c
#include <stdio.h>

int globalVar=3;

void fun();

int main()
{
 fun();
 printf("Global var in f1 is %d\n", globalVar);
 return 1;
}
     The above file f1.c contains the main program, from which the function fun is called. The main point here is the definition of the variable globalVar. In this file, globalVar is simply a global variable.

#cat f2.c
#include <stdio.h>

extern int globalVar;
void fun()
{
  printf("Global var in f2 is %d\n", globalVar);
  globalVar++;
}
     In the above file f2.c, the function fun wants to access the variable globalVar defined in the file f1.c. In order to access it, globalVar is declared with the extern keyword, so no memory is allocated here; the name refers to the globalVar defined in f1.c.

# cc -o ser f1.c f2.c
# ./ser
Global var in f2 is 3
Global var in f1 is 4
#
  The files are compiled and an executable ser is generated. On running ser, as shown above, the function in f2.c accesses (and increments) the variable defined in f1.c, which is why f1.c then prints 4.

Sunday, May 16, 2010

Secure sqlplus connection?

  The sqlplus command is used to connect to Oracle from Unix, and we saw the different ways to connect to sqlplus from UNIX and retrieve data in one of our earlier articles. Whenever we connect to sqlplus this way, the username, password and instance are provided on the command line itself. This way of establishing a connection is not considered secure, since the database user details are visible to any UNIX user through the process table.

  To illustrate this, let us open two Unix terminals.

  1. In Terminal 1, connect to an sqlplus session as shown below:
#echo $USER
guru
# sqlplus blogger/Secret!@myinst
>
      The sqlplus command used above will establish a sqlplus connection.

  2. In Terminal 2, let us list the sqlplus processes running for the user 'guru'.
#ps -ef | grep sqlplus
guru 2716 29208  0 20:43:05 pts/10  0:01 sqlplus blogger/Secret!@myinst
#
  As shown above, the sqlplus connection appears as one of the processes, with all the credentials easily readable. These credentials are visible not only to the user 'guru' and to root, but to any user on the UNIX box. Hence this is not a secure way of connecting to sqlplus in sensitive environments.

 Solution:

   The sqlplus connection can be made from the shell in a different way, in which no user information is given as a command line argument. All the credentials are supplied only after entering the sqlplus session:

1. In the 1st terminal, we will establish a sqlplus connection in the way shown below:
#sqlplus /nolog
>connect blogger/Secret!@myinst
Connected.
>show user
USER is "blogger"
>
    As shown above, the username and password details are provided using the sqlplus connect command, and hence the Unix shell is never aware of the user details.

  2. In the 2nd terminal, let's list again all the processes run by the user 'guru':
#ps -ef | grep sqlplus
guru 3261 29208 23 20:44:07 pts/10 0:01 sqlplus /nolog
#
   The user/password details are no longer visible and hence the connection is secure.

Monday, May 10, 2010

Login shell or a non-login shell?

 Shells in UNIX are classified into two categories:
  • Login Shell
  • Sub shell (Non-Login shell)
    The login shell is the shell a user lands in on logging in to his account. This login shell, ksh or bash or tcsh or sh, is defined for the user at the time of user account creation. However, the login shell of a user can always be changed by the root user.

   A sub shell, or non-login shell, is a shell invoked from the login shell or from another sub shell, simply by typing the name of the shell. In fact, whenever a shell script is run, a sub-shell is opened internally and the script runs in that sub-shell.

1. How to go to a sub-shell?
  
  Simple: from the current shell, if you want to go to a Korn shell, type 'ksh' at the prompt.
#ksh
#
   In the same way, one can switch to any shell by typing the name of the shell at the prompt. In other words, any shell opened from the login shell in the above manner is a sub-shell.

2. How to find out whether the shell is a login-shell or a non-login shell?

   Two variables are available to identify whether a shell is a login shell or a sub-shell.
  • $SHELL - an environment variable which always gives the login shell.
  • $0        - a special parameter which always gives the name of the current shell.
   i) Login Shell: assuming the user is currently in his login shell, which is tcsh:
#echo $SHELL
tcsh
#echo $0
tcsh
#
    In this case, both variables show the same value, because the login shell and the current shell are the same.

  ii) Non-Login shell:
#echo $SHELL
tcsh
#ksh
#echo $SHELL
tcsh
#echo $0
ksh
#
   As shown above, the user is initially in the login shell tcsh and then switches to ksh. After the switch, $0 shows ksh while the SHELL variable still shows tcsh.



Monday, May 3, 2010

File permissions vs Directory permissions

   Everything is a file in UNIX, they say. A file needs the right permissions to be accessed, and so does a directory. Setting file permissions is one of the most common and important activities every UNIX user comes across. Dealing with directory permissions is less common, which leads to the assumption that directory permissions behave exactly like file permissions, just applied at the directory level; that assumption is incorrect. Let's see the difference between them.

The basic file/directory permission attributes are : r w x
r   -  read permission
w   -  write permission
x   -  Execute permission
File Permissions:

   Very quickly, let's see the file permissions with a file as an example:
# ls -l file
-rw-r--r--   1 blogger        adm             27 May  2 08:04 file
#
 For simplicity, let's focus only on the owner permissions, the first three permission characters (rw-). On the file named 'file':

r   - Indicates the user can read the file.
w  - Indicates the user can edit (modify) the file; note that deleting a file is actually governed by the write permission on its directory, as we will see below.
'-'  - Indicates the user cannot execute the file, since 'x' is not set.


Directory Permissions:

     Directory permissions work quite differently from file permissions. Let's create a directory named 'abc' and check the differences. To understand them better, all the permissions are first removed and then applied one by one. For simplicity, we deal only with the owner permissions:
#mkdir abc
#ls -l
total 0
drwxr-xr-x   2 blogger        adm             96 May  2 08:20 abc
#chmod -rwx abc
#ls -l
total 0
d---------   2 blogger        adm             96 May  2 08:26 abc
#
  1. Listing the files inside the directory:
#ls abc
abc unreadable
total 0
#
    It is the read permission on a directory which enables the user to list the files inside it.
#chmod u+r abc
#ls abc
#
   No files are listed since the directory does not contain any. The point to note, however, is that no error is thrown this time.

   2. Let's get into the directory 'abc':
#cd abc
abc: Permission denied.
#chmod u+x abc
#cd abc
#
   It is the execute permission on a directory which enables the user to enter (cd into) the directory. Note that execute permission on a directory means something completely different from execute permission on a file.

 3.  Let's now try to create a file under the 'abc' directory:
#touch file
touch: file cannot create
   The user is not able to create a file in the abc directory because the directory does not have write permission.
#cd ..
#chmod u+w abc
#cd abc
#touch file
#
  The file got created successfully. And if creation works, so does deletion: it is the directory's write permission that governs creating and deleting files inside it. These are the key differences between file permissions and directory permissions.

Tuesday, April 27, 2010

How to find the length of a variable?

    In UNIX, one deals with variables all the time. Many a time, you might want to find the length of a variable. The length of a variable can be found or calculated in many different ways in UNIX. Let's explore the different ways to do it:

1. In the Korn shell or bash, the simple echo command can be used with the ${#VAR} expansion to find the length of a variable.
#VAR="welcome"
#echo ${#VAR}
7
#

2. The echo command with wc can also be used to find the length of a variable.
#VAR="welcome"
#echo -n $VAR | wc -c
7
#

3. The printf command with wc can also be used to calculate the length of a variable.
#VAR="welcome"
#printf $VAR | wc -c
7
#

4. The expr command can also be used to find the length of a variable.
#VAR="welcome"
#expr $VAR : '.*'
7
#

5. The awk command can also be used to calculate the length of the variable.

#VAR="welcome"
#echo $VAR | awk '{print length ;}'
7
#

6. The perl command can also be used for the same:

#VAR="welcome"
#echo $VAR | perl -ne 'chop; print length($_) . "\n";'
7
#

    All the above examples are common across UNIX flavors such as Solaris, HP-UX, Linux, AIX, etc.

[Note: In the above examples, the setting of the variable VAR is shown for ksh/sh. Depending on your shell, the way VAR is set will differ.]

Friday, April 23, 2010

rlogin: How to login to a UNIX account without password?

       Every UNIX user logs in to a user account by giving a username and password. There are situations when a user has to log in to a particular account many times during the day. It would be easier if the user were not asked for the password every time he tried to log in.

     There are many ways in which a user can log in to an account without a password. In this article, we are going to see how to use the rlogin command to achieve this.

    The basic use of the rlogin command is to do a remote login. Though the name says 'remote login', the rlogin command can be used to log in to a user account on the same machine or on a different machine.


Using rlogin to log in to a user account on a different box:

    Let us consider a scenario: a user 'gpr' on the UNIX box 'SandalWood' wants to log in to the user 'blogger' on the UNIX box 'TeakWood'.

  1. In the home directory of the user 'blogger' in 'TeakWood', create a file .rhosts if not present and add the following contents to it.

#cat $HOME/.rhosts
SandalWood gpr
#

  2.  The user 'gpr' should now execute the rlogin command from the 'SandalWood' box as shown below:

#rlogin TeakWood -l blogger
#

   On issuing the above rlogin command, the user 'gpr' is taken directly into the user account 'blogger' on 'TeakWood' without being prompted for a password.

     Whenever the rlogin command is used, the destination first checks for a .rhosts file in the destination user account's home directory. On finding the .rhosts file, it looks for an entry matching the remote host and user attempting to log in. If a valid entry exists, the user is treated as a trusted user and is not prompted for a password.

Using rlogin to login to a user account in the same box:
        
  If the rlogin command is used to log in to a user account on the same machine, the hostname need not be spelled out in the .rhosts file; the symbol '+' can be used in its place, as shown in the example below:
       
#hostname 
TeakWood
#echo $USER
gpr
#cat $HOME/.rhosts
+  blogger
#

     The above entry in the .rhosts file (placed in the home directory of 'gpr') allows the user 'blogger' to log in to the account 'gpr' without a password.
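To complete the picture (a minimal sketch; the session shown is illustrative), 'blogger' would then log in to the 'gpr' account on the same box simply with:

#rlogin TeakWood -l gpr
#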

Tuesday, April 13, 2010

What is the difference between export, set and setenv UNIX commands?

  The export, set and setenv commands are used in UNIX to set the value of a variable or an environment variable. In order to understand the difference between set, setenv and export, one should first know the difference between a normal (local) variable and an environment variable.

   Let us consider an example. In the Korn shell or Bourne shell, a variable is defined as shown below:

# FILE="output.txt"

     This means the variable FILE is assigned the value 'output.txt'. The value can be checked by doing "echo $FILE". This FILE variable is a normal, or local, variable, and the assignment limits its scope to the shell in which it is defined. Any shell or process invoked from the original shell will not have the variable FILE defined, as shown below.

      
#FILE="output.txt"
#echo $FILE
output.txt
#ksh
#echo $FILE

#

     There are situations where we would like to define a variable that can be accessed in all the shells or processes invoked by the original shell. This can be achieved with the export command in ksh/sh, as shown below.
      
#export FILE="output.txt"
#echo $FILE
output.txt
#ksh
#echo $FILE
output.txt
#

       This FILE variable is now an environment variable. An environment variable is one which can be accessed in all the shells or processes initiated from the original shell. So, in ksh/sh, a variable is made an environment variable using the export command.

       set and setenv are the csh/tcsh equivalents for setting a local variable and an environment variable respectively: the set command is used for setting a local variable, and setenv is used for setting an environment variable.

   The example below shows the set command usage:
       
#set FILE="output.txt"
#echo $FILE
output.txt
#tcsh
#echo $FILE

#

   The example below shows the setenv command usage:
          
#setenv FILE "output.txt"
#echo $FILE
output.txt
#tcsh
#echo $FILE
output.txt
#




Wednesday, April 7, 2010

How to do non-interactive FTP in Unix?

   FTP is one of the means used to transfer files from one system to another, in either direction. FTPing files is one of the routine tasks of a developer or a system administrator, and there are times when someone's work requires FTP to be done quite frequently.


  Normally, every time FTP is done, one needs to log in to the destination system with the destination user's credentials and then do the file transfer. This is interactive FTP, and it is fairly time consuming. Non-interactive FTP, as the name suggests, is a process in which the user interacts with the system as little as possible: the user credentials and the file details are supplied in a non-interactive manner.

   Let us consider a scenario: a file named 'alpha.c' needs to be put on the UNIX box 'TeakWood' under the user 'blogger'.

   The following steps describe the non-interactive FTP steps to do the file transfer:

   1. Create a file named .netrc in the home directory.

#touch .netrc

   2. Add the following contents in the .netrc file:

       Syntax:

machine <hostname> login <username> password <password>
macdef init
prompt
put <filename>
bye


          where 'hostname' is the hostname of the destination machine
                    'username' is the user name of the destination machine
                    'password' is the password of the destination user.

       Actual:

machine TeakWood login blogger password abcd123
macdef init
prompt
put  alpha.c
bye


          The file .netrc is saved in the home directory with the above contents.

   3. Change the permission of the file to read-only for the user and no permissions for the group and others.


#chmod 400 .netrc

    4. Execute the following command from the command prompt to do the FTP:

#echo "quit" | ftp -v TeakWood

      The file gets FTP'ed to the destination box TeakWood without any prompting, using non-interactive FTP. In a future article, we will see how to write a script to automate non-interactive FTP.

Sunday, April 4, 2010

How to connect to sqlplus from Shell?

   Sqlplus is an Oracle command line utility used to execute SQL and PL/SQL commands. Connecting to sqlplus from a UNIX box to retrieve data is a very common task, which makes sqlplus an important tool in shell scripting. The data to be retrieved could simply be a single column value from a table, or a set of data spanning more than one table. This article explains the different ways to connect to sqlplus from the shell and retrieve data.

   Let us consider the example of retrieving data from the EMPLOYEE table to get the employee-id for a given employee name:

 Example 1:
    This example shows a sample program which connects to sqlplus and retrieves the employee-id for the given employee name.


#!/usr/bin/ksh

emp_id=`sqlplus -s $USER/$PASSW@$INST << EOF
                set pagesize 0
                set feedback off
                set verify off
                set heading off
                select EMP_ID from EMPLOYEE where EMP_NAME='Blogger';
                exit;
EOF`
echo $emp_id



Example 2:
   This example does the same as above, except that it passes a shell variable into the query.

#!/usr/bin/ksh

EMP="Blogger"
emp_id=`sqlplus -s $USER/$PASSW@$INST << EOF
                set pagesize 0
                set feedback off
                set verify off
                set heading off
                select EMP_ID from EMPLOYEE where EMP_NAME='$EMP';
                exit;
EOF`
echo $emp_id



Example 3:
      This example shows another way to connect to sqlplus from shell.

#!/usr/bin/ksh

EMP="Blogger"
emp_id=`echo "
                set pagesize 0
                set feedback off
                set verify off
                set heading off
                select EMP_ID from EMPLOYEE where EMP_NAME='$EMP';
                exit;" | sqlplus -s $USER/$PASSW@$INST`
echo $emp_id



Example 4:
  This example shows an SQL-file-based approach to connecting to sqlplus. The SQL code is written in a separate script file, emp.sql, and the shell script emp_id.sh invokes it:

#cat emp.sql
set pagesize 0
set feedback off
set verify off
set heading off
select EMP_ID from EMPLOYEE where EMP_NAME='&1';
exit;

#cat emp_id.sh
#!/usr/bin/ksh
EMP="Blogger"
emp_id=`sqlplus -s $USER/$PASSW@$INST @emp.sql $EMP`
echo $emp_id




Wednesday, March 31, 2010

What is crontab?

   UNIX operating systems contain a daemon called 'cron'. The job of the cron daemon is to wake up every minute and execute all the tasks scheduled for that particular minute. The tasks to be executed by cron are listed in a file called the 'crontab' file.

   A crontab file can exist for every user on the UNIX box. Generally, shell scripts that have to be executed on a schedule are placed in the crontab; however, standalone UNIX commands can be scheduled as well.

  UNIX also has a command called 'crontab'. This command can be used to create new crontab entries and to check the tasks already scheduled in cron.

crontab usage:

  1. To check the crontab activities already scheduled:

    #crontab -l
    4 5 * * sat     echo "Execute at 4 after 5 every saturday"
    #

    The above task will execute at 4 minutes past 5 (05:04) every Saturday; the first field is the minute and the second is the hour.

  2. To create a new task in crontab:

   #crontab -e
    20 00 * * *  tcsh ~gpr/bin/create_ctags v94_0
   #

    On typing the command 'crontab -e', a vi-like editor opens up. We edit the file and add the entry shown above, just as we would edit any file in vi, and finally save and quit. Since the first field is the minute and the second the hour, the above entry creates a cron job that executes at 00:20 every day.

  Syntax of the crontab file:

   mm hh dom mon dow command

    mm   - minute (0-59)
    hh   - hour (0-23)
    dom  - day of the month (1-31)
    mon  - month (1-12)
    dow  - day of the week (0-6, where 0 is Sunday and 6 is Saturday; names like sun and mon can also be used)

  The following are the important things to keep in mind:

  1. While testing cron activities, don't schedule an activity for the very next minute; schedule it at least 2 minutes later, as at times cron does not pick up the change immediately.

  2. When cron executes a crontab activity, it spawns a new shell to run it. This new shell does not inherit all the environment variables of the user; only a few are set, among them SHELL (/bin/sh), USER, LOGNAME and HOME.

3. For the same reason, a shell script that runs successfully at the command prompt may fail when scheduled in crontab. To overcome this, source your .login or .profile file inside the shell script, as in the sketch below.
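A minimal sketch of such a script (assuming ksh and a $HOME/.profile that sets up the environment; the log file name is just illustrative):
#!/usr/bin/ksh
# Source the profile so the cron shell gets the same environment as a login shell
. $HOME/.profile

# The rest of the script now sees PATH and the other variables from the profile
echo "PATH is : $PATH" >> $HOME/cron_env.log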