8.12.2011

Setting up automatic startup script for Solaris 10 Update 9

To have your applications start automatically at boot time on Solaris, there are two ways: SMF and legacy init scripts.

Legacy init scripts are the old-fashioned approach, used forever on System V-based UNIX. They typically look something like this:

Code:

$ cat /etc/init.d/acct
#!/sbin/sh
state="$1"

case "$state" in
'start')
        echo 'Starting process accounting'
        /usr/lib/acct/startup
        ;;

'stop')
        echo 'Stopping process accounting'
        /usr/lib/acct/shutacct
        ;;

*)
        echo "Usage: $0 { start | stop }"
        exit 1
        ;;
esac

exit 0

The legacy init script above is simply a case statement that handles at least two arguments: start and stop. Very often these scripts also accept other options such as "status", "restart", and "refresh".

These scripts are stored in /etc/init.d. They are then symlinked into various RC directories, one directory per run level, the most commonly used being /etc/rc2.d and /etc/rc3.d. Init scripts symlinked into these directories are prefixed with either a capital S for "Start" or K for "Kill", followed by two digits. When a system is booted, the scripts in these run-level directories that begin with an S are run sequentially based on the two digits, so S00whatever runs first, then S01something, and so on. This is why user-added scripts tend to be named S99something, to ensure that they run last. If you have two scripts with the same digits (S99apache and S99bind), that's fine.

So let's say you want /etc/init.d/cswapache2 to start at boot; you'd do this:

Code:

# ln -s /etc/init.d/cswapache2 /etc/rc3.d/S99cswapache2


In a like manner, if something is already set to run at boot but you don't want it to, you can either rename the script so it doesn't begin with a capital S or just delete the symlink.


With SMF we start to look at our applications and daemons as services. Using the svcs command we can view running services; adding the "-a" option shows all services, whether running or not.

Code:

$ svcs

STATE STIME FMRI

legacy_run Nov_19 lrc:/etc/rc2_d/S20sysetup

legacy_run Nov_19 lrc:/etc/rc2_d/S72autoinstall

legacy_run Nov_19 lrc:/etc/rc2_d/S73cachefs_daemon

legacy_run Nov_19 lrc:/etc/rc2_d/S85cswsaslauthd

legacy_run Nov_19 lrc:/etc/rc2_d/S89PRESERVE

legacy_run Nov_19 lrc:/etc/rc2_d/S98deallocate

legacy_run Nov_19 lrc:/etc/rc3_d/S50cswapache2

online Nov_19 svc:/system/svc/restarter:default

online Nov_19 svc:/system/filesystem/root:default

online Nov_19 svc:/network/loopback:default

...

online 10:20:09 svc:/network/nfs/cbd:default

online 10:20:09 svc:/network/nfs/nlockmgr:default


Here we can see the legacy init scripts that are running and a few of the SMF services that are online, as well as when they last changed state (i.e., started). You'll notice that each service has an identifying "FMRI" (Fault Management Resource Identifier), which is also used by other Solaris frameworks such as the Fault Management Architecture (FMA).

Dealing with services is easy. We can use the svcadm command to "enable", "disable", "refresh", "restart", or otherwise change the state of a given service.

Code:

$ svcs -a | grep -i mysql

disabled Nov_17 svc:/network/cswmysql5:default

$ svcadm enable svc:/network/cswmysql5:default

$ svcs -a | grep -i mysql

online 16:54:36 svc:/network/cswmysql5:default

$ svcadm restart svc:/network/cswmysql5:default

$ date

Tue Nov 21 16:55:26 PST 2006

$ svcs -a | grep -i mysql

online 16:55:27 svc:/network/cswmysql5:default


In the example above I looked for any MySQL services and found network/cswmysql5, so I enabled it, verified that it was online, then restarted it and checked again. Notice that the time at which it was last started is displayed.

Now let's see one way in which SMF is superior to legacy init scripts. When SMF starts something it holds a "contract" for that service. The contract keeps track of what's running for any given service. Using the "-p" option we can see which processes are part of a service's contract and take advantage of that intelligence.

Code:

$ svcs -p network/cswmysql5

STATE STIME FMRI

online 16:55:27 svc:/network/cswmysql5:default


16:55:27 28938 mysqld_safe
16:55:27 29004 mysqld

$ kill -9 29004

$ svcs -p network/cswmysql5

STATE STIME FMRI

online* 17:00:01 svc:/network/cswmysql5:default
16:55:27 28938 mysqld_safe
17:00:01 29228 mysqld

$ mysql -u mysql

...

mysql> \q

Bye


Notice here that I used svcs -p to list the processes associated with my MySQL5 service. Then I brutally killed mysqld, and faster than I could blink the process was restarted! You can see that reflected in the "STIME" for mysqld. The asterisk ("online*") indicates that the service is currently in a transition state, in this case transitioning to online, but as you can see MySQL is already back in action.

But SMF isn't restarting things blindly the way inittab would; we can define thresholds regarding restarts. For instance, if SMF restarts a service more than 3 times in 60 seconds, something is probably very wrong and it should stop trying. At that point SMF puts the service into "maintenance" mode, and it stays that way until you clear the state with svcadm clear some/service .

Let's look at an example of something broken trying to start. I'm going to break MySQL and then try to start it...

Code:

$ mv /opt/csw/mysql5/var/ /opt/csw/mysql5/xxx-var/

$ svcadm enable network/cswmysql5

$ svcs network/cswmysql5

STATE STIME FMRI

maintenance 17:29:01 svc:/network/cswmysql5:default

$ svcs -vx

svc:/network/cswmysql5:default (?)


State: maintenance since Tue Nov 21 17:29:01 2006

Reason: Restarting too quickly.
See: http://sun.com/msg/SMF-8000-L5
See: /var/svc/log/network-cswmysql5:default.log

Impact: This service is not running.


So I moved MySQL's data directory; obviously it can't start without it. When I enable the service it ends up in "maintenance". Using SMF's most magical command, svcs -vx, we can see a listing of all services that failed to start, why they failed, some information about them, the log location, all dependencies of that service that can't start as a result, and even a URL to a page that'll tell us more!

Now let's resolve the issue and bring the service back online:

Code:

$ mv /opt/csw/mysql5/xxx-var/ /opt/csw/mysql5/var/

$ svcs network/cswmysql5

STATE STIME FMRI

maintenance 17:29:01 svc:/network/cswmysql5:default

$ svcadm clear network/cswmysql5

$ svcs network/cswmysql5

STATE STIME FMRI

online 17:32:57 svc:/network/cswmysql5:default


The usefulness of the svcs -vx command cannot be overstated. It is the first thing I run when logging into any Solaris 10 or OpenSolaris machine.

So how do you actually use SMF with your own service?

SMF services are defined in XML manifests. These manifests describe how to start, stop, restart, and refresh (reload the configuration of) your application, what dependencies it has, various thresholds, as well as useful metadata such as which man pages apply to that service. In addition to the manifest, scripts just like your legacy init scripts can be used; these are called methods, or "method scripts".

Service configuration changes are made using the svccfg ("service config") tool. The most common uses of this command are to import or export a manifest. For instance, I'm curious what the manifest for that MySQL5 service looks like:


Code:

$ svccfg export network/cswmysql5

To view the exported manifest, refer to http://pastebin.com/yRqeMKzi

That probably looks really intimidating at first glance, but it's really not so bad if you break it down. First we define our dependencies; for instance, MySQL depends on the network loopback service and the local filesystems service. There are three "exec_methods" which define the methods for start, stop, and restart. If you can start your app or daemon with a single command line then you don't need an external method script, but this service opts for a script. Notice that the "stop" method uses an SMF shortcut (the :kill token) which simply kills the processes in the service's contract rather than running a script or command.

This is only a very simple example; you can put lots more information in there, but it's pretty simple XML once you break it down.

When you create a new SMF manifest, you simply put the XML in a file and use svccfg import my_service.xml to import it.
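Here is a minimal sketch of that workflow, assuming your manifest lives in my_service.xml and defines a service named site/my_service (both names are just examples, not from the MySQL manifest above):

Code:

# check the manifest for XML problems before importing it
$ svccfg validate my_service.xml

# import it into the SMF repository
$ svccfg import my_service.xml

# enable the new service and confirm it came up cleanly
$ svcadm enable site/my_service
$ svcs -x site/my_service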

7.21.2011

How to setup OpenManage on ESX 4.1.0


Dell OpenManage is a suite of system management applications for managing Dell PowerEdge servers. Today I set out to see how OpenManage could be set up on top of ESX 4.1. I have an ESX 4.1 build running on Dell PowerEdge 2970 hardware.
I followed these steps:



1. Install the ESX 4.1.0 build on your Dell hardware. Enable ssh root login and open the firewall as shown below:

#vi /etc/ssh/sshd_config

PermitRootLogin yes

#esxcfg-firewall --allowOutgoing --allowIncoming

2. Download the OpenManage package from http://ftp.dell.com/sysman/OM-SrvAdmin-Dell-Web-LX-6.3.0-2075.ESX41.i386_A00.12.tar.gz and scp it to the ESX server (alternatively, you can use wget on the ESX host to fetch it directly).

3. Run the following commands:

#mkdir /tmp/openmanage
#cd /tmp/openmanage
#tar xvzf /path/to/OM-SrvAdmin-Dell-Web-LX-6.3.0-2075.ESX41.i386_A00.12.tar.gz
#cd linux/supportscripts/

[root@vmqa021-km-s2970 supportscripts]# ./srvadmin-install.sh -w -r -s
Installing the selected packages.

warning: libsmbios-2.2.19-5.1.vmw41.i386.rpm: Header V3 DSA signature: NOKEY, key ID 23b66a9d
warning: smbios-utils-bin-2.2.19-5.1.vmw41.i386.rpm: Header V3 DSA signature: NOKEY, key ID 23b66a9d
warning: libsmbios-2.2.19-5.1.vmw41.i386.rpm: Header V3 DSA signature: NOKEY, key ID 23b66a9d
Preparing...                 ########################################### [100%]
   1:libsmbios                ########################################### [ 50%]
/sbin/ldconfig: /usr/lib/libkrb4.so.2 is not a symbolic link
/sbin/ldconfig: /usr/lib64/libkrb4.so.2 is not a symbolic link
   2:smbios-utils-bin         ########################################### [100%]
warning: /tmp/openmanage/linux/RPMS/supportRPMS/srvadmin/ESX41/i386/../../../opensource-components/ESX41/i386/libwsman1-2.2.1-1.2.vmw41.i386.rpm: Header V3 DSA signature: NOKEY, key ID 23b66a9d
warning: /tmp/openmanage/linux/RPMS/supportRPMS/srvadmin/ESX41/i386/../../../opensource-components/ESX41/i386/openwsman-client-2.2.1-1.2.vmw41.i386.rpm: Header V3 DSA signature: NOKEY, key ID 23b66a9d
warning: /tmp/openmanage/linux/RPMS/supportRPMS/srvadmin/ESX41/i386/../../../opensource-components/ESX41/i386/libwsman1-2.2.1-1.2.vmw41.i386.rpm: Header V3 DSA signature: NOKEY, key ID 23b66a9d
Preparing...                 ########################################### [100%]
   1:libwsman1                ########################################### [ 50%]
/sbin/ldconfig: /usr/lib/libkrb4.so.2 is not a symbolic link
/sbin/ldconfig: /usr/lib64/libkrb4.so.2 is not a symbolic link
   2:openwsman-client         ########################################### [100%]
/sbin/ldconfig: /usr/lib/libkrb4.so.2 is not a symbolic link
/sbin/ldconfig: /usr/lib64/libkrb4.so.2 is not a symbolic link
warning: srvadmin-argtable2-6.3.0-9.1.vmw41.i386.rpm: Header V3 DSA signature: NOKEY, key ID 23b66a9d
Preparing...                 ########################################### [100%]
   1:srvadmin-storelib        ########################################### [  4%]
/sbin/ldconfig: /usr/lib/libkrb4.so.2 is not a symbolic link
/sbin/ldconfig: /usr/lib64/libkrb4.so.2 is not a symbolic link
   2:srvadmin-hapi            ########################################### [  9%]
   3:srvadmin-sysfsutils      ########################################### [ 13%]
   4:srvadmin-megalib         ########################################### [ 17%]
/sbin/ldconfig: /usr/lib/libkrb4.so.2 is not a symbolic link
/sbin/ldconfig: /usr/lib64/libkrb4.so.2 is not a symbolic link
   5:srvadmin-libxslt         ########################################### [ 22%]
/sbin/ldconfig: /usr/lib/libkrb4.so.2 is not a symbolic link
/sbin/ldconfig: /usr/lib64/libkrb4.so.2 is not a symbolic link
   6:srvadmin-xmlsup          ########################################### [ 26%]
ldconfig: /usr/lib/libkrb4.so.2 is not a symbolic link
ldconfig: /usr/lib64/libkrb4.so.2 is not a symbolic link
   7:srvadmin-argtable2       ########################################### [ 30%]
/sbin/ldconfig: /usr/lib/libkrb4.so.2 is not a symbolic link
/sbin/ldconfig: /usr/lib64/libkrb4.so.2 is not a symbolic link
   8:srvadmin-omilcore        ########################################### [ 35%]
ldconfig: /usr/lib/libkrb4.so.2 is not a symbolic link
ldconfig: /usr/lib64/libkrb4.so.2 is not a symbolic link
**********************************************************
After the install process completes, you may need
to log out and then log in again to reset the PATH
variable to access the Dell OpenManage CLI utilities
**********************************************************
   9:srvadmin-deng            ########################################### [ 39%]
  10:srvadmin-omcommon        ########################################### [ 43%]
  11:srvadmin-isvc            ########################################### [ 48%]
  12:srvadmin-omacore         ########################################### [ 52%]
  13:srvadmin-rac-components  ########################################### [ 57%]
ldconfig: /usr/lib/libkrb4.so.2 is not a symbolic link
ldconfig: /usr/lib64/libkrb4.so.2 is not a symbolic link
  14:srvadmin-racdrsc         ########################################### [ 61%]
ldconfig: /usr/lib/libkrb4.so.2 is not a symbolic link
ldconfig: /usr/lib64/libkrb4.so.2 is not a symbolic link
  15:srvadmin-racadm5         ########################################### [ 65%]
  16:srvadmin-racadm4         ########################################### [ 70%]
ldconfig: /usr/lib/libkrb4.so.2 is not a symbolic link
ldconfig: /usr/lib64/libkrb4.so.2 is not a symbolic link
  17:srvadmin-smcommon        ########################################### [ 74%]
  18:srvadmin-jre             ########################################### [ 78%]
  19:srvadmin-smweb           ########################################### [ 83%]
  20:srvadmin-cm              ########################################### [ 87%]
  21:srvadmin-iws             ########################################### [ 91%]
  22:srvadmin-storage         ########################################### [ 96%]
  23:srvadmin-storage-popula  ########################################### [100%]
ldconfig: /usr/lib/libkrb4.so.2 is not a symbolic link
ldconfig: /usr/lib64/libkrb4.so.2 is not a symbolic link

#su -

[root@vmqa021-km-s2970 ~]# srvadmin-services.sh start
Starting Systems Management Device Drivers:
Starting dell_rbu: [ OK ]
Starting ipmi driver: Already started [ OK ]
Starting snmpd: [ OK ]
Starting Systems Management Data Engine:
Starting dsm_sa_datamgrd: [ OK ]
Starting dsm_sa_eventmgrd: [ OK ]
Starting dsm_sa_snmpd: [ OK ]
Starting DSM SA Shared Services: [ OK ]
Starting DSM SA Connection Service: [ OK ]
[root@vmqa021-km-s2970 ~]#


#esxcfg-firewall -o 1311,tcp,in,OpenManage
#cd /tmp
#rm -rf openmanage/
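Before opening the browser you can confirm that the DSM SA Connection Service (the OpenManage web server) is actually listening on its port; this is just a sanity check I add here, not part of the official Dell procedure:

#netstat -an | grep 1311

You should see a LISTEN entry for TCP port 1311.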


There you go. Point your browser at https://10.112.172.55:1311 (substitute your ESX host's IP) and you'll see the OpenManage page.

Till then, Enjoy Maadi !!!

6.09.2011

How to inject driver into Linux Image ISO?

EXTRACT CD:

1) Boot up a Linux box (I used WhiteBox Enterprise with the 2.4.21-27 kernel)
2) Insert CD into CDROM
3) mkdir /cdiso
4) cp -av /mnt/cdrom/* /cdiso

EXTRACT and EDIT INITRD.IMG (The Linux Filesystem)

5) mkdir /cdinitimg
6) find the initrd.gz or initrd.img in /cdiso
7) gunzip -c /cdiso/isolinux/initrd.img > /cdinitimg/initrd.img
- believe it or not the image is compressed
8) cd /cdinitimg
9) mkdir point
10) mount initrd.img point -o loop
11) mkdir /cdimgextract
12) cp -av /cdinitimg/point/* /cdimgextract
13) umount /cdinitimg/point
14) rm -rf /cdinitimg
15) make any changes you need to the initrd.img in /cdimgextract

REMAKE MODIFIED INITRD.IMG

16) mkdir /cdinitrd
17) dd if=/dev/zero of=initrd.img bs=1k count=60960
18) mke2fs -i 1024 -b 1024 -m 5 -F -v initrd.img
19) mount initrd.img /cdinitrd -t ext2 -o loop
20) cp -av /cdimgextract/* /cdinitrd
21) umount /cdinitrd
22) gzip --best initrd.img
23) cp initrd.img.gz /cdiso/isolinux/initrd.img
24) rm -rf /cdinitrd
25) rm -rf /cdimgextract

REMAKE MODIFIED ISO

26) cd /cdiso
27) use this shell script:
-note: you may have to edit it a little to fit your ISO.

#!/bin/bash
# make the new iso and put in root.
mkisofs -o /new.iso -b isolinux/isolinux.bin \
-c isolinux/boot.cat -no-emul-boot -boot-load-size 4 \
-boot-info-table -J -R -V disks .
#

28) Now you have a new ISO named new.iso in your / directory; you can sanity-check it as sketched below.
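A quick optional way to verify the rebuilt image before burning it (not part of the original steps; /mnt/newiso is just an example mount point):

# confirm it is a bootable ISO 9660 image
file /new.iso

# loop-mount it and make sure the modified initrd is in place
mkdir /mnt/newiso
mount -o loop /new.iso /mnt/newiso
ls -l /mnt/newiso/isolinux/
umount /mnt/newiso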

BURN THE NEW ISO IMAGE

29) eject the old cd, and pop in a blank
- make sure you have cdrecord
30) cdrecord -v -pad speed=1 dev=0,0,0 /new.iso

NOTE: For the record, I had to edit a RHEL 6.0 Beta CD to add the pvscsi driver.

5.20.2011

Booting Solaris Systems to Either the 64-Bit Kernel or the 32-Bit Kernel

A common question which I came across a huge number of times while exploring the OpenSolaris forum: how do I configure the system to boot the 32-bit kernel or the 64-bit kernel?

This led me to include it on my blog, to help anyone juggling the 32-bit and 64-bit modes of Solaris 10.
Here is the way to do that:

Booting into 32-bit mode:

You searched for Solaris 10 and only found a single ISO; no issues, the same installation can boot either kernel.


Log in as root and use the eeprom command to set the boot-file parameter to the 32-bit kernel:

# /usr/sbin/eeprom boot-file="kernel/unix"


The next time the system is rebooted, the 32-bit kernel will load.

To confirm what kernel your system is running now:

Run the following command:

$ /usr/bin/isainfo -kv
64-bit sparcv9 kernel modules

The "64-bit sparcv9" output indicates the system is running the 64-bit Solaris kernel; otherwise it is running the 32-bit kernel. To switch back to 64-bit, see the sketch below.
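To go back to the 64-bit kernel on a SPARC system, here is a sketch of the reverse operation (the boot-file value shown is the standard path of the 64-bit kernel):

# /usr/sbin/eeprom boot-file="kernel/sparcv9/unix"

Alternatively, clearing the boot-file parameter lets a 64-bit capable system boot its 64-bit kernel by default:

# /usr/sbin/eeprom boot-file=""

Either way, the change takes effect at the next reboot.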


Hope this quick guide makes someone's day.

4.30.2011

How do I bind NIC interrupts to selected CPU?

I read this interesting mailing thread and want to share with all the followers and commuters searching for the solution.

I have a server with four quad-core CPUs and am trying to bind the NIC eth0 interrupt(s) to CPU4 and
CPU5. As of now, eth0 shows up with eight separate interrupt vectors:

grep eth0 /proc/interrupts | awk '{print $NF}' | sort

eth0-0
eth0-1
eth0-2
eth0-3
eth0-4
eth0-5
eth0-6
eth0-7

How to move ahead?

Solution: Follow these steps to get it done.

As I am using a Broadcom card (bnx2), I am going to run this command and reboot the machine.

Open the terminal:

echo "options bnx2 disable_msi=1" > /etc/modprobe.d/bnx2.conf

Then reboot; afterwards you'll only see one IRQ for eth0.

Next, run this command:

echo cpumask > /proc/irq/IRQ-OF-ETH0-0/smp_affinity

The mask for cpu4 is 10 and for cpu5 is 20 (the values are hexadecimal bitmasks: bit 4 = 0x10, bit 5 = 0x20).
(Don't forget to disable irqbalance, or it will overwrite the affinity you set; a minimal sketch follows.)
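Here is a minimal sketch of the whole affinity step, assuming the bnx2 option above has already been applied and eth0 is now down to a single IRQ (the interface name and the mask value are examples):

# find the IRQ line for eth0 in /proc/interrupts and keep only the IRQ number
IRQ=$(grep -m1 eth0 /proc/interrupts | awk -F: '{print $1}' | tr -d ' ')

# stop irqbalance so it does not rewrite the mask behind our back
service irqbalance stop

# hexadecimal CPU bitmask: CPU4 = 10, CPU5 = 20, both = 30
echo 30 > /proc/irq/$IRQ/smp_affinity

# confirm the new mask took effect
cat /proc/irq/$IRQ/smp_affinity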

You can only bind the IRQs for one NIC to one core (or set of cores) at a time.

Alternatively, you could do something fancy/silly with isolcpus: isolate all CPUs except 4 and 5 so that all IRQs end up scheduled on 4/5. This means the kernel will only schedule tasks on cpu4/5, so you would then have to use cpusets/taskset/tuna to move every other process off cpu4/5, and you'd have to do that for every task to ensure it isn't using cpu4/5.

Hope it helps !!!

How to transfer files through Bluetooth under Ubuntu? - Part-I

Bluetooth is a specification for the use of low-power radio communications to wirelessly link phones, computers and other network devices over short distances. The name Bluetooth is borrowed from Harald Bluetooth, a king in Denmark more than 1,000 years ago.

Bluetooth is the short-range networking facility that allows various items of hardware to work with each other wirelessly. For Bluetooth to work, both devices need to have Bluetooth support. Many mobile phones come with Bluetooth nowadays, and an increasing number of notebook computers do too. It’s also possible to buy very inexpensive Bluetooth USB adapters.

Your PC’s Bluetooth hardware is automatically recognized under Ubuntu, and the low-level driver software is installed by default. Therefore, all you normally need to do is install the software that provides the Bluetooth functionality you require.

Configuring Bluetooth

When two pieces of Bluetooth-compatible hardware need to communicate on a regular basis, they can pair together. This means that they trust each other, so you don’t need to authorize every attempt at communication between the devices. Indeed, some devices won’t communicate unless they’re paired in this way.

Pairing is very simple in practice and works on the principle of a shared personal ID number (PIN). The first Bluetooth device generates the PIN, and then asks the second Bluetooth device to confirm it. Once the user has typed in the PIN, the devices are paired.

Pairing is easily accomplished under Ubuntu and doesn’t require any additional software. However, you will need to edit a configuration file. This only needs to be done once.

Start by opening the central Bluetooth configuration file, hcid.conf, in Gedit, using superuser powers:

Code:

gksu gedit /etc/bluetooth/hcid.conf

Look for the line that reads "security user", and change it so that it reads "security auto".

The default PIN needed to pair with Ubuntu is 1234. For security reasons, it’s wise to change this, and the setting is contained further down in the hcid.conf file. Look for the line that reads

Code:

passkey "1234";

and replace 1234 with the number you desire. For example, if I wanted a PIN of 9435, the line would read

Code:

passkey "9435";

When you’ve finished, save the file, and close Gedit. It’s then necessary to restart the background Bluetooth service. To do this, type the following into a Terminal window (Applications -> Accessores -> Terminal):

Code:

sudo /etc/init.d/bluetooth restart

Following this, I paired my Ubuntu test PC to a Nokia 6680 mobile phone. It’s easiest to initiate pairing on the phone, which should then autosense the PC’s Bluetooth connection.

On my Samsung Champ, I opened the menu, and selected Connections -> Bluetooth. Then I pressed the right arrow key to select Paired Devices and selected Options -> New Paired Device -> More Devices. This made the phone autosense my Ubuntu PC, which was identified by its hostname, followed by -0. In my case, the Ubuntu PC was identified as keir-desktop-0, and I was then prompted to enter the PIN I set earlier. Following this, the two devices were paired.

In the next episode we will discuss how to transfer/receive files through bluetooth.
Till then, bbye.

1.22.2011

Logging activity to a MySQL database

Problem
Rather than logging accesses to your server in flat text files, you want to log the information directly to a database for easier analysis.

Solution

Install the latest release of mod_log_sql from http://www.outoforder.cc/projects/apache/mod_log_sql/ according to the module's directions (see Recipe 2.1), and then issue the following commands:

# mysqladmin create apache_log
# mysql apache_log < access_log.sql
# mysql apache_log
mysql> grant insert,create on apache_log.* to webserver@localhost identified by 'wwwpw';

Add the following lines to your httpd.conf file:


LogSQLLoginInfo mysql://webserver:wwwpw@dbmachine.example.com/apache_log
LogSQLCreateTables on


Then, in your VirtualHost container, add the following log directive:

LogSQLTransferLogTable access_log
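For context, here is a sketch of how that directive sits inside a VirtualHost container (the server name and paths are placeholders, not from the recipe):

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/html

    # send this vhost's access log entries to the access_log table
    LogSQLTransferLogTable access_log
</VirtualHost>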

Discussion

Replace the values of webserver and wwwpw with a less guessable username and password when you run these commands.
Consult the documentation on the referenced website to ensure that the example here reflects the version of the module that you have installed, as the configuration syntax changed with the 2.0 release of the module.

Building a Centralized Logging Server

I was just hanging around blogs until I came across one nice piece on setting up a centralized logging server. I thought I'd try it out myself, and here is the result:

Syslog is a fantastic facility for logging on Linux machines. Let's say you have a small number of servers and want to log them all to one central syslog server. Here we'll describe a simple configuration.

1) Setup the syslog server

On the system you want to use as the syslog server, edit the file /etc/sysconfig/syslog and add '-r' as follows:

# Options to syslogd
# -m 0 disables 'MARK' messages.
# -r enables logging from remote machines
# -x disables DNS lookups on messages recieved with -r
# See syslogd(8) for more details
SYSLOGD_OPTIONS="-m 0 -r"
# Options to klogd
# -2 prints all kernel oops messages twice; once for klogd to decode, and
# once for processing with 'ksymoops'
# -x disables all klogd processing of oops messages entirely
# See klogd(8) for more details
KLOGD_OPTIONS="-x"



Initially I added -x because I thought syslog would otherwise do networked DNS lookups for the remote hosts. But as I am logging only from local servers, all of which are defined in /etc/hosts, it doesn't actually go to the network for name lookups, and having the name of the system in the log file is nice, so I removed -x again.

Now, restart syslog, and confirm that syslog is listening on port 514 (the syslog port):

root@remy:/root>/etc/init.d/syslog restart
Shutting down kernel logger: [ OK ]
Shutting down system logger: [ OK ]
Starting system logger: [ OK ]
Starting kernel logger: [ OK ]
root@remy:/root>netstat -an|grep 514
udp 0 0 0.0.0.0:514 0.0.0.0:*
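If the log server runs a host firewall, remember that remote syslog arrives on UDP port 514; a sketch of an iptables rule to allow it (chain names are the stock RHEL defaults):

iptables -A INPUT -p udp --dport 514 -j ACCEPT
service iptables save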



2) Now, configure your client:

For simplicity, I added a line to the /etc/hosts file to add the name 'loghost' to the other names I am using for my logging server. This is actually beneficial, because I can move my syslog server to another host and only have to modify the hosts file.

Next, edit the /etc/syslog.conf file. I added one simple line to log all informational messages to the remote loghost:

*.info @loghost


Note: separate the columns with tab characters, not spaces.

Finally restart syslog on the client with /etc/init.d/syslog restart.

To test, you can use the command line logging facility called logger. On the client I type:

root@booker:/etc>logger foobar


And on the server I see:

root@remy:/root>tail -f /var/log/messages
...
Jun 28 21:17:29 booker bemo: foobar

Setup DNS on Linux?

I have long been thinking of writing something about DNS (the Domain Name System). DNS is a database of IP-to-name and name-to-IP mappings. I went through lots of tutorials on DNS but couldn't satisfy myself until I started typing commands and configuring files for an initial setup. After a lot of tweaking on the command line I was able to set up a simple DNS server.

Let's travel into the world of DNS.
I have a RHEL 4 machine ready with the bind packages installed. The minimal requirements are:

[root@localhost ~]# rpm -qa bind*
bind-libs-9.2.4-24.EL4
bind-utils-9.2.4-24.EL4
bind-9.2.4-24.EL4
bind-chroot-9.2.4-24.EL4
bind-devel-9.2.4-24.EL4
bind-libs-9.2.4-24.EL4
[root@localhost ~]#

The IP Details of my Machine are:
[root@localhost ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:17:C6:BE:47
inet addr:10.14.77.33 Bcast:10.14.77.127 Mask:255.255.255.128
inet6 addr: fe80::216:17ff:fec6:be47/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:36287 errors:0 dropped:0 overruns:0 frame:0
TX packets:19141 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5408275 (5.1 MiB) TX bytes:2370680 (2.2 MiB)
Interrupt:201

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:18714 errors:0 dropped:0 overruns:0 frame:0
TX packets:18714 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:10174891 (9.7 MiB) TX bytes:10174891 (9.7 MiB)

[root@localhost ~]#

The Exact Steps I followed are mentioned Below:

1. Open a file /etc/hosts and make it look like this:


[root@localhost ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost

[root@localhost ~]#

2. Edit the file /etc/resolv.conf:

[root@localhost ~]# cat /etc/resolv.conf
; generated by /sbin/dhclient-script

search tuxbuddy.logica.com
nameserver 10.14.77.33
[root@localhost ~]#

3. Check the network interface configuration (note PEERDNS=no, so the DHCP client does not overwrite resolv.conf):

[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
HWADDR=00:16:17:C6:BE:47
ONBOOT=yes
TYPE=Ethernet
PEERDNS=no
[root@localhost ~]#

4. Edit named.conf inside the chroot:

[root@localhost etc]# pwd
/var/named/chroot/etc
[root@localhost etc]# vi named.conf

//
// named.conf for Red Hat caching-nameserver
//

options {
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
/*
* If there is a firewall between you and nameservers you want
* to talk to, you might need to uncomment the query-source
* directive below. Previous versions of BIND always asked
* questions using port 53, but BIND 8.1 uses an unprivileged
* port by default.
*/
// query-source address * port 53;
};

//
// a caching only nameserver config
//
controls {
inet 127.0.0.1 allow { localhost; } keys { rndckey; };
};

zone "." IN {
type hint;
file "named.ca";
};

zone "tuxbuddy.logica.com" IN {
type master;
file "tuxbuddy.logica.com.zone";
allow-update { none; };
};

zone "33.77.14.10.in-addr.arpa" IN {
type master;
file "10.14.77.33.zone";
allow-update { none; };
};

zone "localhost" IN {
type master;
file "localhost.zone";
allow-update { none; };
};

zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
allow-update { none; };
};

zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
type master;
file "named.ip6.local";
allow-update { none; };
};

zone "255.in-addr.arpa" IN {
type master;
file "named.broadcast";
allow-update { none; };
};

zone "0.in-addr.arpa" IN {
type master;
file "named.zero";
allow-update { none; };
};

include "/etc/rndc.key";


Save the file.

5. Edit the zone database files:

[root@localhost named]# pwd
/var/named/chroot/var/named
[root@localhost named]# vi tuxbuddy.logica.com.zone

$TTL 86400
@ IN SOA station1.tuxbuddy.logica.com. root.station1.tuxbuddy.logica.com. (
2009091100; Serial
28800 ; Refresh
14400 ; Retry
3600000 ;Expire
0 ) ; Negative

@ IN NS station1.tuxbuddy.logica.com.
@ IN A 10.14.77.33

station1.tuxbuddy.logica.com. IN A 10.14.77.33
www IN A 10.14.77.33
ftp IN A 10.14.77.33
pop IN A 10.14.77.33

www1 IN CNAME station1.tuxbuddy.logica.com.
www2 IN CNAME station2.tuxbuddy.logica.com.
www.station1.tuxbuddy.logica.com IN A 10.14.77.33
Innovation2.groupinfra.com. IN A 10.14.16.215
@ IN MX 10 station1.tuxbuddy.logica.com.
station1 IN MX 10 station1.tuxbuddy.logica.com.
~


[root@localhost named]#

6. Edit this file too:

[root@localhost named]# pwd
/var/named/chroot/var/named
[root@localhost named]#

[root@localhost named]# cat 10.14.77.33.zone
$TTL 86400
@ IN SOA station1.tuxbuddy.logica.com. root.station1.tuxbuddy.logica.com. (
4 10800 3600 604800 86400 )
IN NS station1.tuxbuddy.logica.com.
33.77.14.10.IN-ADDR.ARPA. IN PTR station1.tuxbuddy.logica.com.
[root@localhost named]#

Just remember: don't miss any trailing dots during the configuration.
DNS is very sensitive to a single missing character.
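Before testing, it's worth letting BIND check its own files and then (re)starting the service; a quick sketch (the zone names and paths match the files created above):

# check named.conf syntax
named-checkconf /var/named/chroot/etc/named.conf

# check each zone file against its zone name
named-checkzone tuxbuddy.logica.com /var/named/chroot/var/named/tuxbuddy.logica.com.zone
named-checkzone 33.77.14.10.in-addr.arpa /var/named/chroot/var/named/10.14.77.33.zone

# start (or restart) the name server
service named restart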

That's ALL !!!

YOUR SIMPLE DNS SERVER IS READY.

Testing the DNS SERVER

[root@localhost named]# dig -x 10.14.77.33

; <<>> DiG 9.2.4 <<>> -x 10.14.77.33
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48322
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;33.77.14.10.in-addr.arpa. IN PTR

;; ANSWER SECTION:
33.77.14.10.in-addr.arpa. 86400 IN PTR station1.tuxbuddy.logica.com.

;; AUTHORITY SECTION:
33.77.14.10.in-addr.arpa. 86400 IN NS station1.tuxbuddy.logica.com.

;; ADDITIONAL SECTION:
station1.tuxbuddy.logica.com. 86400 IN A 10.14.77.33

;; Query time: 1 msec
;; SERVER: 10.14.77.33#53(10.14.77.33)
;; WHEN: Wed Oct 7 07:28:30 2009
;; MSG SIZE rcvd: 114

[root@localhost named]#


Just see: your IP is resolving to the hostname and vice versa.

Another way to see whether things work:

[root@localhost named]# host 10.14.77.33
33.77.14.10.in-addr.arpa domain name pointer station1.tuxbuddy.logica.com.
[root@localhost named]#

These too,
[root@localhost named]# host www
www.tuxbuddy.logica.com has address 10.14.77.33
[root@localhost named]#

How can I view Apache server performance status?

Apache server performance can be monitored using the Apache module mod_status. Server status is presented in an HTML page that gives the current server statistics in an easily readable form. The page can also be made to refresh automatically with a compatible browser. Another page gives a simple machine-readable list of the current server state.


The details given are:


•The number of children serving requests.

•The number of idle children.

•The status of each child, the number of requests that child has performed and the total number of bytes served by the child (*)

•A total number of accesses and byte count served (*).

•The time the server was started/restarted and the time it has been running for

•Averages giving the number of requests per second, the number of bytes served per second and the average number of bytes per request (*).

•The current percentage CPU used by each child and in total by Apache (*).

•The current hosts and requests being processed (*).


Details marked "(*)" are only available with ExtendedStatus On.


Activating Status Support for Apache

To activate status reports only for browsers from the mydomain.com domain add the following code to your httpd.conf configuration file:





<Location /server-status>
    SetHandler server-status
    Order Deny,Allow
    Deny from all
    Allow from .mydomain.com
</Location>



Note: The lines above may already exist in your default httpd.conf file. If this is the case, you can simply uncomment this section to use the existing configuration.


The server statistics can now be viewed with a Web browser by accessing the page http://mydomain.com/server-status.
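To expose the per-child details marked "(*)" above, and to scrape the status from scripts, a small sketch (ExtendedStatus is a standard mod_status directive; the hostname is a placeholder):

# in httpd.conf, outside any container:
#   ExtendedStatus On
# then restart Apache and fetch the machine-readable variant:
curl http://mydomain.com/server-status?auto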


Try accessing and get the result likewise:

Apache Server Status for 10.14.236.98
Server Version: Apache/2.0.52 (Red Hat)
Server Built: Aug 7 2007 05:01:09

--------------------------------------------------------------------------------

Current Time: Sunday, 11-Oct-2009 21:38:25 IST
Restart Time: Sunday, 11-Oct-2009 21:38:19 IST
Parent Server Generation: 0
Server uptime: 5 seconds
Total accesses: 0 - Total Traffic: 0 kB
CPU Usage: u0 s0 cu0 cs0
0 requests/sec - 0 B/second -
1 requests currently being processed, 7 idle workers
_W______........................................................
................................................................
................................................................
................................................................

Scoreboard Key:
"_" Waiting for Connection, "S" Starting up, "R" Reading Request,
"W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
"C" Closing connection, "L" Logging, "G" Gracefully finishing,
"I" Idle cleanup of worker, "." Open slot with no current process

Srv PID Acc M CPU SS Req Conn Child Slot Client VHost Request
1-0 25523 0/0/0 W 0.00 0 0 0.0 0.00 0.00 158.234.236.15 relay.groupultra.com GET /server-status HTTP/1.1



--------------------------------------------------------------------------------
Srv Child Server number - generation
PID OS process ID
Acc Number of accesses this connection / this child / this slot
M Mode of operation
CPU CPU usage, number of seconds
SS Seconds since beginning of most recent request
Req Milliseconds required to process most recent request
Conn Kilobytes transferred this connection
Child Megabytes transferred this child
Slot Total megabytes transferred this slot

--------------------------------------------------------------------------------
SSL/TLS Session Cache Status:
cache type: SHMCB, shared memory: 512000 bytes, current sessions: 0
sub-caches: 32, indexes per sub-cache: 133
index usage: 0%, cache usage: 0%
total sessions stored since starting: 0
total sessions expired since starting: 0
total (pre-expiry) sessions scrolled out of the cache: 0
total retrieves since starting: 0 hit, 0 miss
total removes since starting: 0 hit, 0 miss


--------------------------------------------------------------------------------

Apache/2.0.52 (Red Hat) Server at 10.14.236.98 Port 80

How can I rotate my application logs periodically in Red Hat Enterprise Linux?

Red Hat Enterprise Linux has a daily cron job named logrotate which runs /usr/sbin/logrotate /etc/logrotate.conf every day. It is designed to ease administration of systems that generate large numbers of log files.


This job first reads all the policy files in the /etc/logrotate.d/ directory and acts on each as its config file asks. For example, /etc/logrotate.d/syslog defines /var/log/messages to be rotated weekly.


Suppose there is a log file /var/log/app_log which needs to be rotated every day; this is an example config file to make that happen (/etc/logrotate.d/app_log):


[root@dhcp-0-130 ~]# cat /etc/logrotate.d/app_log
/var/log/app_log {
rotate 2
daily
}


rotate N

means log files are rotated N times before being removed (here, two old copies are kept).

daily

means log files are rotated every day.
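To check the new policy without waiting for cron, logrotate can be run by hand; a quick sketch (the -d flag is a dry run, -f forces a rotation):

# dry run: show what logrotate would do, without touching any files
logrotate -d /etc/logrotate.conf

# force an immediate rotation of the new policy and look at the result
logrotate -f /etc/logrotate.d/app_log
ls -l /var/log/app_log*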

Hope you got a glimpse of power of LinuX.

Happy Logging !!

How to configure Directory Indexing in Apache?

Quick Tips:

Edit the /etc/httpd/conf/httpd.conf file :

Look for (or add) a Directory block for the folder you want indexed:

<Directory "/var/www/html/pdfs">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

Restart Apache.
Try browsing http://localhost/pdfs

Setting up Nagios on RHEL 5.3

Last week I thought of setting up Nagios on my Linux box. I installed a fresh copy of RHEL on my VirtualBox and everything went fine. I thought of putting the complete setup on my blog, and here it is: "A Complete Monitoring Tool for your Linux Box".

Here is my Machine Configuration:

[root@irc ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.3 (Tikanga)
[root@irc ~]#

[root@irc ~]# uname -arn
Linux irc.chatserver.com 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
[root@irc ~]#

1) Create Account Information

Become the root user.


su -l


Create a new nagios user account and give it a password.


/usr/sbin/useradd -m nagios

passwd nagios


Create a new nagcmd group for allowing external commands to be submitted through the web interface. Add both the nagios user and the apache user to the group.


/usr/sbin/groupadd nagcmd

/usr/sbin/usermod -a -G nagcmd nagios

/usr/sbin/usermod -a -G nagcmd apache

2) Download Nagios and the Plugins

Create a directory for storing the downloads.


mkdir ~/downloads

cd ~/downloads


Download the source code tarballs of both Nagios and the Nagios plugins (visit http://www.nagios.org/download/ for links to the latest versions). These directions were tested with Nagios 3.2.0 and Nagios Plugins 1.4.11.


wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.0.tar.gz

wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.11.tar.gz


3) Compile and Install Nagios

Extract the Nagios source code tarball.


cd ~/downloads

tar xzf nagios-3.2.0.tar.gz

cd nagios-3.2.0


Run the Nagios configure script, passing the name of the group you created earlier like so:


./configure --with-command-group=nagcmd


Compile the Nagios source code.


make all


Install binaries, init script, sample config files and set permissions on the external command directory.


make install

make install-init

make install-config

make install-commandmode


Don't start Nagios yet - there's still more that needs to be done...

4) Customize Configuration

Sample configuration files have now been installed in the /usr/local/nagios/etc directory. These sample files should work fine for getting started with Nagios. You'll need to make just one change before you proceed...

Edit the /usr/local/nagios/etc/objects/contacts.cfg config file with your favorite editor and change the email address associated with the nagiosadmin contact definition to the address you'd like to use for receiving alerts.


vi /usr/local/nagios/etc/objects/contacts.cfg


5) Configure the Web Interface

Install the Nagios web config file in the Apache conf.d directory.


make install-webconf


Create a nagiosadmin account for logging into the Nagios web interface. Remember the password you assign to this account - you'll need it later.


htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin


Restart Apache to make the new settings take effect.


service httpd restart


Note: Consider implementing the enhanced CGI security measures described in the Nagios documentation to ensure that your web authentication credentials are not compromised.

6) Compile and Install the Nagios Plugins

Extract the Nagios plugins source code tarball.


cd ~/downloads

tar xzf nagios-plugins-1.4.11.tar.gz

cd nagios-plugins-1.4.11


Compile and install the plugins.


./configure --with-nagios-user=nagios --with-nagios-group=nagios

make

make install


7) Start Nagios

Add Nagios to the list of system services and have it automatically start when the system boots.


chkconfig --add nagios

chkconfig nagios on


Verify the sample Nagios configuration files.


/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg


If there are no errors, start Nagios.


service nagios start


8) Modify SELinux Settings

Red Hat Enterprise Linux, like Fedora, ships with SELinux (Security Enhanced Linux) installed and in Enforcing mode by default. This can result in "Internal Server Error" messages when you attempt to access the Nagios CGIs.

See if SELinux is in Enforcing mode.


getenforce


Put SELinux into Permissive mode.


setenforce 0


To make this change permanent, you'll have to modify the settings in /etc/selinux/config and reboot.

Instead of disabling SELinux or setting it to permissive mode, you can use the following command to run the CGIs under SELinux enforcing/targeted mode:


chcon -R -t httpd_sys_content_t /usr/local/nagios/sbin/

chcon -R -t httpd_sys_content_t /usr/local/nagios/share/


For information on running the Nagios CGIs under Enforcing mode with a targeted policy, visit the Nagios Support Portal or Nagios Community Wiki.

9) Login to the Web Interface

You should now be able to access the Nagios web interface at the URL below. You'll be prompted for the username (nagiosadmin) and password you specified earlier.


http://localhost/nagios/


Click on the "Service Detail" navbar link to see details of what's being monitored on your local machine. It will take a few minutes for Nagios to check all the services associated with your machine, as the checks are spread out over time.

10) Other Modifications

Make sure your machine's firewall rules are configured to allow access to the web server if you want to access the Nagios interface remotely.

Configuring email notifications is out of the scope of this documentation. While Nagios is currently configured to send you email notifications, your system may not yet have a mail program properly installed or configured. Refer to your system documentation, search the web, or look to the Nagios Support Portal or Nagios Community Wiki for specific instructions on configuring your system to send email messages to external addresses. More information on notifications can be found here.

11) You're Done

Congratulations! You successfully installed Nagios. Your journey into monitoring is just beginning.


Example:

Say your Nagios server is 10.14.236.140 and you need to monitor a Linux machine with the IP 10.14.236.70. You would proceed like this:

[root@irc objects]# pwd
/usr/local/nagios/etc/objects
[root@irc objects]#
[root@irc objects]# ls
commands.cfg localhost.cfg printer.cfg switch.cfg timeperiods.cfg
contacts.cfg localhost.cfg.orig remotehost.cfg templates.cfg windows.cfg
[root@irc objects]#

The file should look like this:


# HOST DEFINITION
#
###############################################################################
###############################################################################

# Define a host for the local machine

define host{
        use                     linux-server            ; Name of host template to use
                                                        ; This host definition will inherit all variables that are defined
                                                        ; in (or inherited by) the linux-server host template definition.
        host_name               localhost
        alias                   localhost
        address                 127.0.0.1
        }

define host{
        use                     linux-server            ; Name of host template to use
                                                        ; This host definition will inherit all variables that are defined
                                                        ; in (or inherited by) the linux-server host template definition.
        host_name               ideath.logic.com
        alias                   ideath
        address                 10.14.236.70
        }


###############################################################################
###############################################################################
#
# HOST GROUP DEFINITION
#
###############################################################################
###############################################################################

# Define an optional hostgroup for Linux machines

define hostgroup{
hostgroup_name linux-server ; The name of the hostgroup
alias Linux Servers ; Long name of the group
members localhost ; Comma separated list of hosts that belong to this group
}



###############################################################################
###############################################################################
#
# SERVICE DEFINITIONS
#
###############################################################################
###############################################################################


# Define a service to "ping" the local machine

define service{
use local-service ; Name of service template to use
host_name localhost
service_description PING
check_command check_ping!100.0,20%!500.0,60%
}

define service{
use local-service ; Name of service template to use
host_name ideath.logic.com
service_description PING
check_command check_ping!100.0,20%!500.0,60%
}

# Define a service to check the disk space of the root partition
# on the local machine. Warning if < 20% free, critical if
# < 10% free space on partition.

define service{
        use                     local-service         ; Name of service template to use
        host_name               localhost
        service_description     Root Partition
        check_command           check_local_disk!20%!10%!/
        }

define service{
        use                     local-service         ; Name of service template to use
        host_name               ideath.logic.com
        service_description     Root Partition
        check_command           check_local_disk!20%!10%!/
        }

# Define a service to check the number of currently logged in
# users on the local machine. Warning if > 20 users, critical
# if > 50 users.

define service{
use local-service ; Name of service template to use
host_name localhost
service_description Current Users
check_command check_local_users!20!50
}

define service{
use local-service ; Name of service template to use
host_name ideath.logic.com
service_description Current Users
check_command check_local_users!20!50
}


# Define a service to check the number of currently running procs
# on the local machine. Warning if > 250 processes, critical if
# > 400 users.

define service{
use local-service ; Name of service template to use
host_name localhost
service_description Total Processes
check_command check_local_procs!250!400!RSZDT
}


define service{
use local-service ; Name of service template to use
host_name ideath.logic.com
service_description Total Processes
check_command check_local_procs!250!400!RSZDT
}
# Define a service to check the load on the local machine.

define service{
use local-service ; Name of service template to use
host_name localhost
service_description Current Load
check_command check_local_load!5.0,4.0,3.0!10.0,6.0,4.0
}

define service{
use local-service ; Name of service template to use
host_name ideath.logic.com
service_description Current Load
check_command check_local_load!5.0,4.0,3.0!10.0,6.0,4.0
}

# Define a service to check the swap usage the local machine.
# Critical if less than 10% of swap is free, warning if less than 20% is free

define service{
use local-service ; Name of service template to use
host_name localhost
service_description Swap Usage
check_command check_local_swap!20!10
}

define service{
use local-service ; Name of service template to use
host_name ideath.logic.com
service_description Swap Usage
check_command check_local_swap!20!10
}

# Define a service to check SSH on the local machine.
# Disable notifications for this service by default, as not all users may have SSH enabled.

define service{
use local-service ; Name of service template to use
host_name localhost
service_description SSH
check_command check_ssh
notifications_enabled 0
}

define service{
use local-service ; Name of service template to use
host_name ideath.logic.com
service_description SSH
check_command check_ssh
check_period 24x7
notifications_enabled 0
is_volatile 0
max_check_attempts 4
normal_check_interval 5
retry_check_interval 1
contact_groups admins
notification_options w,c,u,r
notification_interval 960
notification_period 24x7
check_command check_ssh
}



# Define a service to check HTTP on the local machine.
# Disable notifications for this service by default, as not all users may have HTTP enabled.

define service{
use local-service ; Name of service template to use
host_name localhost
service_description HTTP
check_command check_http
notifications_enabled 0
}

define service{
use local-service ; Name of service template to use
host_name ideath.logic.com
service_description HTTP
check_command check_http
notifications_enabled 0
is_volatile 0
max_check_attempts 4
normal_check_interval 5
retry_check_interval 1
contact_groups admins
notification_options w,c,u,r
notification_interval 960
notification_period 24x7
check_command check_http
}


ideath.logic.com is the hostname of 10.14.236.70.
Make an entry in /etc/hosts if Nagios is unable to resolve the IP (or else check the DNS). After editing the object files, verify and restart Nagios as sketched below.
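A quick sketch of the verify-and-reload step (note: if you put the remote host in a new file such as remotehost.cfg instead of localhost.cfg, it must also be referenced with a cfg_file line in nagios.cfg):

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

service nagios restart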

How do I rotate log files?

I had a hectic time playing with logs yesterday. I have a VirtualBox setup on my Dell laptop and was exploring logging in more depth this time. In one of my interviews I was asked about logging and thought to give it a little more exploration. Here is what I got to know about logrotate.

Happy Reading !!


The rotation of log files can be done with logrotate.


logrotate is designed to ease administration of systems that generate large numbers of log files. It allows automatic rotation, compression, removal, and mailing of log files. Each log file may be handled daily, weekly, monthly, or when it grows too large.


/etc/logrotate.conf is the main configuration file for log rotation. This file is pretty self-explanatory. Some important values to keep in mind:




# rotate log files weekly
weekly

# keep 4 weeks worth of backlogs
rotate 4


Normally, logrotate is run as a daily cron job. It will not modify a log multiple times in one day unless the criteria for a log is based on the logs size and logrotate is being run multiple times each day.


For example, to change the log setting for CUPS, follow the steps below:


Edit /etc/logrotate.d/cups file and add the following lines:


rotate N
# Log files are rotated N times before being removed. If the count is 0, old versions are
# removed rather than rotated.

size SIZE
# Log files are rotated when they grow bigger than SIZE bytes. If SIZE is followed by M,
# the size is assumed to be in megabytes. If k is used, the size is in kilobytes. So size
# 100, size 100k, and size 100M are all valid.

compress
# Old versions of log files are compressed with gzip by default.

The file should look similar to the following:


/var/log/cups/*_log {
missingok
notifempty
size 100k # log files will be rotated when they grow bigger that 100k.
rotate 5 # will keep the logs for 5 weeks.
compress # log files will be compressed.
sharedscripts
postrotate
/etc/init.d/cups condrestart >/dev/null 2>&1 || true
endscript
}

Save the file and exit.
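To confirm the new CUPS policy behaves as expected without waiting for the size threshold, a forced run can be sketched like this:

logrotate -f /etc/logrotate.d/cups
ls -l /var/log/cups/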

How to setup RAID 1 on Ubuntu?

RAID 1 creates a mirror on the second drive. You will need to create RAID-aware partitions on your drives before you can create the RAID, and you will need to install mdadm on Ubuntu (see the separate tutorial on that).

You may have to create the RAID device node first, using the block major number 9 and a minor number that matches the md device number. Increment the minor number by one each time you create an additional RAID device.

# mknod /dev/md1 b 9 1

This creates the device node for /dev/md1 if you have already used /dev/md0.

Create RAID 1


# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb7 /dev/sdb8

--create
This will create a RAID array. The device that you will use for the first RAID array is /dev/md1.

--level=1
The level option determines what RAID level you will use for the RAID.


--raid-devices=2 /dev/sdb7 /dev/sdb8
Note: for illustration or practice this shows two partitions on the same drive. This is NOT what you want to do, partitions must be on separate drives. However, this will provide you with a practice scenario. You must list the number of devices in the RAID array and you must list the devices that you have partitioned with fdisk. The example shows two RAID partitions.
mdadm: array /dev/md0 started.


Verify the Create of the RAID


# cat /proc/mdstat

Personalities : [raid0] [raid1]

md2 : active raid1 sdb8[1] sdb7[0]

497856 blocks [2/2] [UU]

[======>..............] resync = 34.4% (172672/497856) finish=0.2min speed=21584K/sec



md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks



unused devices: <none>




# tail /var/log/messages

You can also verify that RAID is being built in /var/log/messages.

May 19 09:21:45 ub1 kernel: [ 5320.433192] md: raid1 personality registered for level 1

May 19 09:21:45 ub1 kernel: [ 5320.433620] md2: WARNING: sdb7 appears to be on the same physical disk as sdb8.

May 19 09:21:45 ub1 kernel: [ 5320.433628] True protection against single-disk failure might be compromised.

May 19 09:21:45 ub1 kernel: [ 5320.433772] raid1: raid set md2 active with 2 out of 2 mirrors

May 19 09:21:45 ub1 kernel: [ 5320.433913] md: resync of RAID array md2

May 19 09:21:45 ub1 kernel: [ 5320.433926] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.

May 19 09:21:45 ub1 kernel: [ 5320.433934] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.

May 19 09:21:45 ub1 kernel: [ 5320.433954] md: using 128k window, over a total of 497856 blocks.




Create the ext3 File System
You have to place a file system on your RAID device. The ext3 journaling file system is placed on the device in this example.


# mke2fs -j /dev/md1

mke2fs 1.40.8 (13-Mar-2008)

Filesystem label=

OS type: Linux

Block size=1024 (log=0)

Fragment size=1024 (log=0)

124928 inodes, 497856 blocks

24892 blocks (5.00%) reserved for the super user

First data block=1

Maximum filesystem blocks=67633152

61 block groups

8192 blocks per group, 8192 fragments per group

2048 inodes per group

Superblock backups stored on blocks:

8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409




Writing inode tables: done

Creating journal (8192 blocks): done

Writing superblocks and filesystem accounting information: done


This filesystem will be automatically checked every 35 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.



Mount the RAID on the /raid Partition

In order to use the RAID array you will need to mount it on the file system. For testing purposes you can create a mount point and test. To make a permanent mount point you will need to edit /etc/fstab.

# mount /dev/md1 /raid

# df
The df command will verify that it has mounted.

Filesystem 1K-blocks Used Available Use% Mounted on

/dev/sda2 5809368 2699256 2817328 49% /

varrun 1037732 104 1037628 1% /var/run

varlock 1037732 0 1037732 0% /var/lock

udev 1037732 80 1037652 1% /dev

devshm 1037732 12 1037720 1% /dev/shm

/dev/sda1 474440 49252 400691 11% /boot

/dev/sda4 474367664 1738024 448722912 1% /home

/dev/md1 482090 10544 446654 3% /raid


You should be able to create files on the new partition. If this works then you may edit /etc/fstab and add a line that looks like this:

/dev/md1   /raid   ext3   defaults   0   2

Be sure to test and be prepared to enter single user mode to fix any problems with the new RAID device.
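On Ubuntu it is also worth recording the array in mdadm's configuration so it is assembled automatically at boot; a sketch (the config path is /etc/mdadm/mdadm.conf on Ubuntu, /etc/mdadm.conf on some other distributions):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf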


Create a Failed RAID Disk

In order to test your RAID 1 you can fail a disk, remove it and reinstall it. This is an important feature to practice.

# mdadm /dev/md1 -f /dev/sdb8
This will deliberately make the /dev/sdb8 faulty.

mdadm: set /dev/sdb8 faulty in /dev/md1

root@ub1:/etc/network# cat /proc/mdstat

Personalities : [raid0] [raid1]

md2 : active raid1 sdb8[2](F) sdb7[0]

497856 blocks [2/1] [U_]

md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks

unused devices: <none>


Hot Remove the Failed Disk
You can remove the faulty disk from the RAID array.

# mdadm /dev/md1 -r /dev/sdb8

mdadm: hot removed /dev/sdb8


Verify the Process

You should be able to see the process as it is working.

# cat /proc/mdstat

Personalities : [raid0] [raid1]

md2 : active raid1 sdb7[0]

497856 blocks [2/1] [U_]


md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks

unused devices: <none>

Add a Replacement Drive HOT

This will allow you to add a device into the array to replace the bad one.
# mdadm /dev/md1 -a /dev/sdb8

mdadm: re-added /dev/sdb8




Verify the Process.

# cat /proc/mdstat

Personalities : [raid0] [raid1]

md2 : active raid1 sdb8[2] sdb7[0]

497856 blocks [2/1] [U_]

[=====>...............] recovery = 26.8% (134464/497856) finish=0.2min speed=26892K/sec

md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks

unused devices: <none>
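Once the resync completes, it is common on Ubuntu to record the array in mdadm's configuration and refresh the initramfs so the array is assembled automatically at boot (a short sketch using the default Ubuntu paths):

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u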

How to set up RAID 0 on Ubuntu Linux?

RAID 0 stripes data across disks to increase read/write speeds, since data can be read from and written to separate disks at the same time. This level of RAID is what you want if you need to increase the speed of disk access, but note that it provides no redundancy. You will need to create RAID-aware partitions on your drives before you can create the array, and you will need to install mdadm on Ubuntu.

These commands must be done as root or you must add the sudo command in front of each command.
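If mdadm is not installed yet, you can pull it in and confirm that the partitions are marked for RAID use before creating the array (a brief sketch; the device names match the example that follows):

# apt-get install mdadm
# fdisk -l /dev/sdb

The partitions sdb5 and sdb6 should show partition type "fd" (Linux raid autodetect).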

# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb5 /dev/sdb6

--create
This will create a RAID array. The device that you will use for the first RAID array is /dev/md0.

--level=0
The level option determines what RAID level you will use for the RAID.


--raid-devices=2 /dev/sdb5 /dev/sdb6
Note: for illustration or practice this shows two partitions on the same drive. This is NOT what you want in production; the partitions must be on separate drives. However, it does provide a practice scenario. You must give the number of devices in the RAID array and list the devices that you have partitioned with fdisk. The example shows two RAID partitions.

When the command succeeds you will see:

mdadm: array /dev/md0 started.


Check the status of the RAID.

# cat /proc/mdstat

Personalities : [raid0]

md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks
unused devices: <none>

You can also verify that the RAID is being built in /var/log/messages:

# tail /var/log/messages

May 19 09:08:51 ub1 kernel: [ 4548.276806] raid0: looking at sdb5

May 19 09:08:51 ub1 kernel: [ 4548.276809] raid0: comparing sdb5(497856) with sdb6(497856)

May 19 09:08:51 ub1 kernel: [ 4548.276813] raid0: EQUAL

May 19 09:08:51 ub1 kernel: [ 4548.276815] raid0: FINAL 1 zones

May 19 09:08:51 ub1 kernel: [ 4548.276822] raid0: done.

May 19 09:08:51 ub1 kernel: [ 4548.276826] raid0 : md_size is 995712 blocks.

May 19 09:08:51 ub1 kernel: [ 4548.276829] raid0 : conf->hash_spacing is 995712 blocks.

May 19 09:08:51 ub1 kernel: [ 4548.276831] raid0 : nb_zone is 1.

May 19 09:08:51 ub1 kernel: [ 4548.276834] raid0 : Allocating 4 bytes for hash.




Create the ext3 File System
You have to place a file system on your RAID device. The journaling file system ext3 is used in this example.


# mke2fs -j /dev/md0

mke2fs 1.40.8 (13-Mar-2008)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

62464 inodes, 248928 blocks

12446 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=255852544

8 block groups

32768 blocks per group, 32768 fragments per group

7808 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376

Writing inode tables: done

Creating journal (4096 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.




Create a Place to Mount the RAID on the File System

In order to use the RAID array you will need to mount it on the file system. For testing purposes you can create a mount point and test. To make a permanent mount point you will need to edit /etc/fstab.

# mkdir /raid

Mount the RAID Array

# mount /dev/md0 /raid

You should be able to create files on the new partition. If this works then you may edit the /etc/fstab and add a line that looks like this:

/dev/md0 /raid ext3 defaults 0 2

Be sure to test and be prepared to enter single user mode to fix any problems with the new RAID device.
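Since the whole point of RAID 0 is throughput, a crude way to sanity-check write speed on the new mount is a large sequential write (purely illustrative; the file name and size are arbitrary):

# dd if=/dev/zero of=/raid/ddtest bs=1M count=256 oflag=direct
# rm /raid/ddtest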

Hope you find this article helpful.

Integrating an Active Directory Server with Linux

Linux and Windows system administrators often have a tough time working in integrated environments. Sometimes Linux admins are required to integrate a Linux environment with a Windows environment, while at other times Windows administrators struggle with the Linux command line.

One of my colleagues, a Linux system administrator, was asked to join a Linux box to Active Directory. Since his three years of experience were purely in Linux administration, he faced some issues configuring the Windows side of things. He shared the experience with me and I collected the steps. Hope it is helpful for anyone who wants to try it.
Here it goes..


Prerequisite:

The following Samba client RPMs must be pre-installed on the server:

samba-client-3.0.33-3.7.el5
samba-common-3.0.33-3.7.el5

01) Configuring Linux networking

a) Make sure that /etc/hosts has a proper entry for your server (if it uses a static IP).

b) Configure the DNS client properly. Entries for the /etc/resolv.conf file:

search sap.com
nameserver 10.210.1.252
nameserver 10.219.1.252

02) Synchronize the time using NTP.

a) Remove all public server IPs/names from /etc/ntp.conf and replace them with the company NTP server IP:

server 10.222.1.252

b) Synchronize the time with the company time server:

#ntpdate -u 10.222.1.252

c) Start the NTP daemon:

# service ntpd restart

d) Set the NTP service to start at boot time:

#chkconfig --level 234 ntpd on
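To confirm the daemon is actually talking to the configured server, ntpq can list its peers:

# ntpq -p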

03) Configuring PAM and NSS

a) Run system-config-authentication in the GUI, or the setup command (Authentication configuration) on the CLI.

# system-config-authentication

Check the Winbind option on both the User Information tab (which configures /etc/nsswitch.conf) and the Authentication tab (which modifies the system-auth file).

Click the Configure Winbind button and enter the domain, realm, and security settings (the same values that appear in the smb.conf excerpt below).

b) Open the /etc/pam.d/system-auth file, scroll down toward the bottom, and insert the pam_mkhomedir line shown below just before the last line. This creates a home directory for a user if one does not exist.

session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid

session required pam_mkhomedir.so skel=/etc/skel/ umask=0022

session required pam_unix.so

04) Open /etc/samba/smb.conf and add or edit the following entries in the [global] section of the file.

[global]
#--authconfig--start-line--

# Generated by authconfig on 2010/02/13 11:48:48

workgroup = sap
password server = dellads2
realm = SAP.COM
security = ads
idmap uid = 16777216-33554431
idmap gid = 16777216-33554431
idmap backend = rid
template shell = /bin/bash
template homedir = /home/%U
winbind use default domain = true
winbind offline logon = false
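Before moving on, it is worth letting testparm confirm that the edited smb.conf parses cleanly:

# testparm /etc/samba/smb.conf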


05) Domain Join and Logging In

a) Add the machine to the company domain:

# net ads join -U

Note: this requires the NT ID of a company IT member who has privileges to add machines to the domain; supply it as the argument to -U.

b) Start the winbind service and set it to start at boot time:

# service winbind restart
# chkconfig --level 234 winbind on
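Before trying an interactive login, a few quick winbind checks can confirm the join worked (standard Samba tools; replace <domain-user> with a real AD account):

# wbinfo -t
# wbinfo -u | head
# getent passwd <domain-user>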

Try logging into the server using your NTID.

That's done.

1.15.2011

How to install Cron on CentOS Linux?

I had a CentOS box without the cron package installed. When I tried to run the crond service it said "unrecognized service". I first tried installing it with:

# yum install cron

But that didn't help. So I explored and came up with the solution below:

[root@localhost graphs]# service crond restart
crond: unrecognized service

[root@localhostgraphs]# rpm -qa | grep cron
crontabs-1.10-8
[root@localhostgraphs]# rpmquery --whatprovides $(which crontab)
file /usr/bin/crontab is not owned by any package
[root@localhost tl_graphs]# rpmquery --whatprovides $(which crontab)
file /usr/bin/crontab is not owned by any package
[root@localhost graphs]# yum install vixie-cron
Setting up Install Process
Parsing package install arguments
Resolving Dependencies
--> Running transaction check
---> Package vixie-cron.i386 4:4.1-76.el5 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================
Package Arch Version Repository Size
=============================================================================
Installing:
vixie-cron i386 4:4.1-76.el5 base 78 k

Transaction Summary
=============================================================================
Install 1 Package(s)
Update 0 Package(s)
Remove 0 Package(s)

Total download size: 78 k
Is this ok [y/N]: y
Downloading Packages:
(1/1): vixie-cron-4.1-76. 100% |=========================| 78 kB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing: vixie-cron ######################### [1/1]

Installed: vixie-cron.i386 4:4.1-76.el5
Complete!
[root@localhost graphs]# service crond restart
Stopping crond: cannot stop crond: crond is not running. [FAILED]
Starting crond: [ OK ]
[root@localhost graphs]#

So I have crond service running on my CentOS.
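To make sure crond also starts after the next reboot, and to give it a throwaway job that proves it is firing (the job and log file below are only an illustration):

# chkconfig crond on
# echo '*/5 * * * * root date >> /tmp/cron-test.log' >> /etc/crontab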
Happy CronDing !!!

How to install a Linux PAE-enabled kernel?

If you are testing 32-bit RHEL5, make sure you use the PAE kernel instead of the default kernel that is installed. With the default kernel, 32-bit RHEL5 will not recognize more than 4GB of guest memory even if you configure more on the VM.
So please verify the following before you do the testing.

a) Check the kernel that is installed when you install the distribution
(RHEL5 32-bit & 64-bit)


[root@localhost ~]# uname -r
2.6.18-8.el5

The above output indicates that the installed kernel is neither the PAE nor the hugemem kernel.


You can find the list of installed kernel using:

# rpm -qa | grep -i kernel

kernel-2.6.18-8.el5
kernel-headers-2.6.18-8.el5
kernel-devel-2.6.18-8.el5

b) For the guest to use more than 4GB, we need to install the PAE kernel. You will find it on RHEL5 disc 1:

rhel-5-server-i386-disc1.iso


c) Mount the cdrom

# mount /dev/cdrom /mnt


d) You will see something similar to this:

# ls /mnt/Server/ | grep -i 2.6.18-8
kernel-2.6.18-8.el5.i686.rpm
kernel-devel-2.6.18-8.el5.i686.rpm
kernel-doc-2.6.18-8.el5.noarch.rpm
kernel-headers-2.6.18-8.el5.i386.rpm
kernel-PAE-2.6.18-8.el5.i686.rpm
kernel-PAE-devel-2.6.18-8.el5.i686.rpm
kernel-xen-2.6.18-8.el5.i686.rpm
kernel-xen-devel-2.6.18-8.el5.i686.rpm



At a shell prompt, change to the directory that contains the kernel RPM packages. Use the -i argument with the rpm command so that the old kernel is kept installed.

Caution: Do not use the -U option, since it overwrites the currently installed kernel, which creates boot loader problems.

For example:

[root@localhost]# rpm -ivh kernel-PAE-2.6.18-8.el5.i686.rpm

warning: kernel-PAE-2.6.18-8.el5.i686.rpm: Header V3 DSA signature: NOKEY,
key ID 37017186
Preparing... ########################################### [100%]
1:kernel-PAE ########################################### [100%]


Verify the initial RAM disk image.

To verify that an initial RAM disk already exists, use the command ls -l /boot to make sure the initrd-<version>.img file was created (the version should match the kernel you just installed).

# ls -l /boot/ | grep img

-rw------- 1 root root 2318314 Jun 2 06:41 initrd-2.6.18-8.el5.img
-rw------- 1 root root 2319154 Jun 4 08:18 initrd-2.6.18-8.el5PAE.img

This shows the new image that is created...

Verifying the boot loader..

[root@localhost]# cat /boot/grub/grub.conf | grep PAE
title Red Hat Enterprise Linux Server (2.6.18-8.el5PAE)
kernel /vmlinuz-2.6.18-8.el5PAE ro root=/dev/VolGroup00/LogVol00 rhgb
quiet
initrd /initrd-2.6.18-8.el5PAE.img



Done. Restart the guest and, at the GRUB menu, select the PAE kernel entry.

[root@localhost ~]# uname -r
2.6.18-8.el5PAE

[root@localhost ~]# cat /proc/meminfo | grep -i memtotal

MemTotal: 4934652 kB (I assigned only 5GB to this guest; with the PAE kernel it can address up to 64GB)


If you want to always boot the PAE kernel, make the change in /etc/grub.conf.

Edit /etc/grub.conf and set default=0 (on my server the new kernel is the 0th entry).

This is how it looks like:

default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.18-8.el5PAE)
root (hd0,0)
kernel /vmlinuz-2.6.18-8.el5PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
initrd /initrd-2.6.18-8.el5PAE.img

title Red Hat Enterprise Linux Server (2.6.18-8.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
initrd /initrd-2.6.18-8.el5.img
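If you are not sure which index the PAE entry has, remember that the title lines appear in grub.conf in order, counted from 0, so you can list them quickly:

# grep ^title /boot/grub/grub.conf
title Red Hat Enterprise Linux Server (2.6.18-8.el5PAE)
title Red Hat Enterprise Linux Server (2.6.18-8.el5)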

1.08.2011

Understanding Git - Part- II

What is Git?

Git is a distributed revision control system with an emphasis on speed. Git was initially designed and developed by Linus Torvalds for Linux kernel development.

Every Git working directory is a full-fledged repository with complete history and full revision tracking capabilities, not dependent on network access or a central server.

Git’s current software maintenance is overseen by Junio Hamano. Distributed under the terms of version 2 of the GNU General Public License, git is free software.

Git is a free & open source, distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

How Does Git Work?


Git is a decentralized version control system, which means there is no central repository (as in SVN). Instead, every user has their own full repository and is free to create branches and experiment locally, rather than creating branches on a central repo (as in SVN).

How to install Git on Windows and Linux?

Windows

1. Download Cygwin.
2. Put setup.exe in a folder of its own in your documents.
3. Launch setup.exe.
4. While installing Cygwin, pick these packages:
* git from the DEVEL category
* nano (if you’re wimpy) or vim (if you know it), both in the EDITORS category

You’ll now have a shortcut to launch Cygwin, which brings up something like the Linux terminal.

Linux

Install the git package using your preferred method (package manager or from source).

yum install git-core

Introduce Yourself to Git

Fire up your Cygwin/Linux terminal, and type:

git config --global user.name "Joey Joejoe"
git config --global user.email "joey@joejoe.com"

You only need to do this once.

Start Your Project

Start your project using the Sphere editor, or from a ZIP file, or just by making the directory and adding files yourself.

Now cd to your project directory:

cd myproject/

Tell git to start giving a damn about your project:

git init

… and your files in it:

git add .

Wrap it up:

git commit

Now type in a “commit message”: a reminder to yourself of what you’ve just done, like:

Initial commit.

Save it and quit (type Ctrl+o Ctrl+x if you’re in nano, :x if you’re in vim) and you’re done!

When dealing with git, it’s best to work in small bits. Rule of thumb: if you can’t summarise it in a sentence, you’ve gone too long without committing.

This section is your typical work cycle:

1. Work on your project.
2. Check which files you’ve changed:

git status

3. Check what the actual changes were:

git diff

4. Add any files/folders mentioned in step 2 (or new ones):

git add file1 newfile2 newfolder3

5. Commit your work:

git commit

6. Enter and save your commit message. If you want to back out, just quit the editor.

Repeat as much as you like. Just remember to always end with a commit.

What have you done so far?

To see what you’ve done so far, type:

git log

To just see the last few commits you’ve made:

git log -n3

Replace 3 with whatever you feel like.

For a complete overview, type:

git log --stat --summary

Browse at your leisure.

What changes did you make?

To view changes you haven’t committed yet:

git diff

If you want changes between versions of your project, first you’ll need to know the commit ID for the changes:

git log --pretty=oneline

6c93a1960072710c6677682a7816ba9e48b7528f Remove persist.clearScriptCache() function.
c6e7f6e685edbb414c676df259aab989b617b018 Make git ignore logs directory.
8fefbce334d30466e3bb8f24d11202a8f535301c Initial commit.

The 40 characters at the front of each line are the commit ID. You'll also see them when you run git commit. You can use them to show differences between commits.

To view the changes between the 1st and 2nd commits, type:

git diff 8fef..c6e7

Note how you didn’t have to type the whole thing, just the first few unique characters are enough.

To view the last changes you made:

git diff HEAD^..HEAD
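The same range syntax also works for a single file, which is handy in larger projects (reusing the example commit IDs from above):

git diff 8fef..c6e7 -- myfile.txt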

Troubleshoot?

Haven’t committed yet, but don’t want to save the changes? You can throw them away:

git reset --hard

You can also do it for individual files, but it’s a bit different:

git checkout myfile.txt

Messed up the commit message? This will let you re-enter it:

git commit --amend

Forgot something in your last commit? That’s easy to fix.

git reset --soft HEAD^

Add that stuff you forgot:

git add forgot.txt these.txt

Then write over the last commit:

git commit

Don’t make a habit of overwriting/changing history if it’s a public repo you’re working with, though.

Hope you found this post relevant and useful.

In the next article we shall dig deeper into workflows, which make Git flexible enough to be used as both a decentralized and a centralized version control system.

Please go through these presentations before we proceed with the different workflows.

Git Vs SVN

Introduction to Git

1.06.2011

Setting up a Simple Samba Share - Part II

In the previous article we saw a practical implementation of setting up a Samba share. In this article we will explore Samba further, again with a practical approach.

Aim:

Connecting to the Samba server through its own client software (smbclient) on the same machine

Implementation:

[root@rhel samba]# smbclient //localhost/ -U jen
Password:
Domain=[rhel] OS=[Unix] Server=[Samba 3.0.25b-0.4E.6]
tree connect failed: NT_STATUS_BAD_NETWORK_NAME
[root@rhel samba]# smbclient //localhost/jen -U jen
Password:
Domain=[rhel] OS=[Unix] Server=[Samba 3.0.25b-0.4E.6]
smb: \>


Notice that the first attempt failed because no share name was given; connecting to //localhost/jen succeeds. Once connected, type ? at the smb: \> prompt and you will see lots of commands:

smb: \> ?
? altname archive blocksize cancel
case_sensitive cd chmod chown close
del dir du exit get
getfacl hardlink help history lcd
link lock lowercase ls mask
md mget mkdir more mput
newer open posix posix_open posix_mkdir
posix_rmdir posix_unlink print prompt put
pwd q queue quit rd
recurse reget rename reput rm
rmdir showacls setmode stat symlink
tar tarmode translate unlock volume
vuid wdel logon listconnect showconnect
!

smb: \>

Example:

Let's copy the file called text from the share into the /tmp directory.
Here it goes:

[root@rhel samba]# cd /home/jen/
[root@rhel jen]# ls
[root@rhel jen]# touch text <<----- Lets create a file called text
[root@rhel jen]# vi text
[root@rhel jen]# smbclient //localhost/jen -U jen
Password:
Domain=[rhel] OS=[Unix] Server=[Samba 3.0.25b-0.4E.6]
smb: \> ls
. D 0 Mon Aug 3 17:16:20 2009
.. D 0 Mon Aug 3 17:02:24 2009
.bash_logout H 24 Mon Aug 3 17:02:24 2009
.kde DH 0 Mon Aug 3 17:02:24 2009
.gtkrc H 120 Mon Aug 3 17:02:24 2009
.bash_profile H 191 Mon Aug 3 17:02:24 2009
text 6 Mon Aug 3 17:16:20 2009 <<--- Here is the file
.bashrc H 124 Mon Aug 3 17:02:24 2009

50521 blocks of size 262144. 27714 blocks available
smb: \>

This time we move to the /tmp directory first, so that is where the file will be downloaded:

[root@rhel jen]# cd /tmp
[root@rhel tmp]# smbclient //localhost/jen -U jen
Password:
Domain=[rhel] OS=[Unix] Server=[Samba 3.0.25b-0.4E.6]
smb: \> ls
. D 0 Mon Aug 3 17:16:20 2009
.. D 0 Mon Aug 3 17:02:24 2009
.bash_logout H 24 Mon Aug 3 17:02:24 2009
.kde DH 0 Mon Aug 3 17:02:24 2009
.gtkrc H 120 Mon Aug 3 17:02:24 2009
.bash_profile H 191 Mon Aug 3 17:02:24 2009
text 6 Mon Aug 3 17:16:20 2009
.bashrc H 124 Mon Aug 3 17:02:24 2009

50521 blocks of size 262144. 27714 blocks available
smb: \> get text
getting file \text of size 6 as text (60000.0 kb/s) (average inf kb/s)
smb: \>

Now, when I browse the /tmp directory I can see:

[root@rhel tmp]# ls
mapping-root text
[root@rhel tmp]#

Aim:

Setting up a Samba server that makes documents and a printer available only to the system's regular users, and not to anyone outside.

Implementation:

1. Share Point ==> /export
2. All files owned by user called Ajeet Raina

Let's create the user:

[root@rhel tmp]# useradd -c "Ajeet Raina" -m -g users -p Oracle9ias ajeetr
[root@rhel tmp]# mkdir /export
[root@rhel tmp]# chmod u+rw,g+rw,o+rw /export
[root@rhel tmp]# chown ajeetr.users /export
[root@rhel tmp]#

Copy the files that should be shared to the /export directory.
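The post stops here before showing the share definition itself, so here is a minimal sketch of what the matching section of /etc/samba/smb.conf could look like (the share name and exact options are assumptions on my part, not from the original):

[export]
comment = Shared documents
path = /export
valid users = @users
guest ok = no
read only = no

Restart Samba afterwards (service smb restart) so the new share is picked up.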

Setting up a Simple Samba Share - Part I

The best definition I found for Samba goes this way: "Samba is an important component to seamlessly integrate Linux/Unix Servers and Desktops into Active Directory environments using the winbind daemon". In simple words, Samba is the standard Windows interoperability suite of programs for Linux and Unix. Since way back in 1992, Samba has provided secure, stable and fast file and print services for all clients using the SMB/CIFS protocol, such as all versions of DOS and Windows, OS/2, Linux and many others.

Today we are going to setup a simple samba share.

Setting up a simple Samba share which can be accessed by anyone who has an account on the machine.

Back up the smb.conf file
------------------------------------------

Locating the Correct Samba configuration File:

[root@rhel samba]# smbd -b | grep smb.conf
CONFIGFILE: /etc/samba/smb.conf
[root@rhel samba]#


[root@rhel ~]# cd /etc/samba/
[root@rhel samba]# cp smb.conf smb.conf.orig
[root@rhel samba]# > smb.conf
[root@rhel samba]# vi smb.conf

Add a simple Homes Share in smb.conf
------------------------------------------------------

[root@rhel samba]# cat smb.conf

[global]
workgroup = MIDEARTH
[homes]
guest ok = no
read only = no

[root@rhel samba]#

[root@rhel samba]# service smb restart
Shutting down SMB services: [FAILED]
Shutting down NMB services: [FAILED]
Starting SMB services: [ OK ]
Starting NMB services: [ OK ]
[root@rhel samba]#

Add a user called Jen
-----------------------------

[root@rhel samba]# useradd jen
[root@rhel samba]# passwd jen
Changing password for user jen.
New UNIX password:
BAD PASSWORD: it is WAY too short
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

Provide the user with SMB credentials (these are different from the normal user/password credentials):


[root@rhel samba]# smbpasswd -a jen
New SMB password:
Retype new SMB password:
Added user jen.
[root@rhel samba]#
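You can confirm that the account made it into Samba's password database with pdbedit:

# pdbedit -L | grep jen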


From a Windows client, go to Start > Run and enter \\MachineIP
Log in with the user/password.
Successful !!!

You can see the [homes] share and jen's own home directory.


Testing Your Samba Share
-------------------------

[root@rhel samba]# testparm /etc/samba/smb.conf
Load smb config files from /etc/samba/smb.conf
Processing section "[homes]"
Loaded services file OK.
Server role: ROLE_STANDALONE
Press enter to see a dump of your service definitions

[global]
workgroup = MIDEARTH

[homes]
read only = No
[root@rhel samba]#

List Shares Available on the Server
-----------------------------------------------

[root@rhel samba]# smbclient -L rhel -U jen
Password:
Domain=[rhel] OS=[Unix] Server=[Samba 3.0.25b-0.4E.6]

Sharename Type Comment
--------- ---- -------
homes Disk
IPC$ IPC IPC Service (Samba 3.0.25b-0.4E.6)
jen Disk Home directory of jen
Domain=[rhel] OS=[Unix] Server=[Samba 3.0.25b-0.4E.6]

Server Comment
--------- -------

Workgroup Master
--------- -------
MIDEARTH BL07DL380G5


Done. Your first samba share is Ready !!!

Linux Kernel 2.6.37 Announced

The Linux kernel 2.6.37 was finally announced by Linus Torvalds on 4 January 2011. The kernel is now stable and can be downloaded from http://kernel.org.

The kernel includes several SMP scalability improvements for Ext4 and XFS, removal of the BKL (Big Kernel Lock) from the core code, a network block device based on the Ceph cluster filesystem, new Btrfs capabilities (namely the Btrfs space cache option), more efficient static probes, perf support for probing modules and listing accessible local and global variables, PPP over IPv4 support, and various other improvements and new drivers.

The Linux 2.6.37 kernel has already shipped with the upcoming Ubuntu 11.04 Alpha 2 release. Detailed information on the advancements and new features in this kernel release can be explored at http://kernelnewbies.org/Linux_2_6_37

1.03.2011

Need to add extra swap space? Read this

"Can Linux be installed without swap space?". This question might sound confusing for those who listen it for the first time and never tried out. But the reality is "Yes". But if you did this, you should be ready to cope up with your Linux box next time you put extra load on your box.It will crash someday.
Its always recommended to provide extra space for swap partition.Swap is only used when you have maximum load.

This article discusses how to increase swap space by adding a swap file on a Linux machine.

Let's proceed with the requisite steps to add the swap file, as shown below.

We will use the dd command to create the swap file, and then the mkswap command to set up a Linux swap area on it.

a) Log in as root user.

b) Run this command to create a 512MB swap file (512 MB = 524288 blocks of 1024 bytes):

# dd if=/dev/zero of=/swapf1 bs=1024 count=524288

c) Set up a Linux swap area:

# mkswap /swapf1

d) Activate the /swapf1 swap space immediately:

# swapon /swapf1

e) To activate /swapf1 after a Linux system reboot, add an entry to the /etc/fstab file. Open this file using a text editor such as vi:

# vi /etc/fstab

Then append the following line:

/swapf1 swap swap defaults 0 0

So next time Linux comes up after reboot, it enables the new swap file for you automatically.
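One extra precaution worth taking: a swap file should not be readable by other users, and newer kernels will warn about insecure permissions, so it is common to tighten them before (or right after) enabling the file:

# chmod 600 /swapf1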

Verify whether the swap is activated:

$ free -m
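Besides free -m, swapon -s (or cat /proc/swaps) gives a per-area listing in which /swapf1 should now appear:

$ swapon -s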

Hope the article proves useful for everyone who needs extra swap space.

1.02.2011

How to enable USB 3.0 support for Fedora 14

USB establishes communication between devices and a host controller. It connects computer peripherals such as mice, keyboards, digital cameras, printers, personal media players, flash drives, network adapters, and external hard drives.

The original USB 1.0 specification had a data transfer rate of 12 Mbit/s. USB 2.0 raised that to 480 Mbit/s. The newly launched USB 3.0 specifies 4.8 Gbit/s. It promises increased maximum bus power and device current draw to better accommodate power-hungry devices, new power management features, full-duplex data transfers, and support for new transfer types.

Fedora 14 includes support for USB 3.0, but it is disabled by default because it prevents users from being able to suspend their laptops. USB 2.0 ports (ehci_hcd) work fine as expected.

For anyone who needs USB 3.0 support and does not need suspend/resume, there is a workaround. Just pass the kernel parameter:

xhci.enable=1
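One way to make the parameter permanent (a sketch assuming the GRUB legacy bootloader that Fedora 14 ships with; the kernel version and root device below are placeholders, not real values) is to append it to the kernel line in /boot/grub/grub.conf:

kernel /vmlinuz-<version> ro root=<your-root-device> rhgb quiet xhci.enable=1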

This allows the xHCI support to load. There is also a workaround which will allow you to enable USB 3.0 support and still suspend successfully.


To do so, create a file named /etc/pm/config.d/xhci with the contents:

SUSPEND_MODULES="xhci"

With this in place, you can suspend and resume the system successfully.

Hope it helps !!!