Archive for the 'unix' Category

Find out where a Unix/Linux executable binary is located

There are two commands that can help you find where an executable binary is located, regardless of whether it's a Unix or Linux system: whereis and type. The first locates the binary, source, and manual sections for the specified files; the second tells you what exactly the shell executes when you type a certain command.

[Screenshot: example runs of whereis and type]
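As a quick demo of the difference (using ls, which should exist on any system):

```shell
# whereis searches the standard directories for the binary, its source,
# and its man page; type asks the shell what it would actually run.
whereis ls     # e.g. binary and man page paths
type ls        # a path, or a note that it's a builtin/alias
```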


Secure shell (ssh) session timeout

I've noticed that when I leave open ssh sessions untouched for some period of time (say 30 minutes), they freeze, and as a result I have to close the ssh terminal and start a new connection. To prevent this I've found several tips:

1) Start some utility that updates the screen before leaving the ssh session untouched. I usually use watch -n 1 'date', which shows the current date every second. Another simple way is to send ICMP requests to some host, e.g. with ping.

2) Increase the ssh session idle time via the kernel's TCP keepalive setting:

echo "7200" > /proc/sys/net/ipv4/tcp_keepalive_time

I've checked these tips on Fedora Core, CentOS, Debian and Ubuntu, but I'm fairly sure they apply to other Linux distributions as well. The first tip (watch/ping) can be used on Unix too.
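A third option I'd add (my own habit, not one of the tips above): have the ssh client itself send keepalives, so every session survives idle periods without touching the server. A sketch using OpenSSH's per-user config:

```shell
# Append a client-side keepalive to the user's ssh config: ssh will send
# a probe every 60 seconds, keeping NAT/firewall state from expiring.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host *
    ServerAliveInterval 60
EOF
```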


Silent and unattended large file download with wget

If you'd like to download a large file, such as an HD video or a Linux ISO image, while working at your Linux PC or laptop, I recommend the following command:

wget -c -b "http://file/you/wish/to/download.ext" --limit-rate=100k

Here, -c continues getting a partially-downloaded file (if the connection was dropped or something similar), -b puts the wget download into the background, and --limit-rate=100k caps the download speed at 100 kilobytes per second (KB/s).
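Since -b detaches wget from the terminal, progress goes to a file named wget-log in the directory where you started it; a safe way to peek at it:

```shell
# Show the last lines of the background download's log, if there is one
if [ -f wget-log ]; then
    tail -n 20 wget-log
else
    echo "no wget-log in this directory"
fi
```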

P.S. man wget still works 🙂

Can't find a usable shell script encryption solution…

It sounds a bit cheesy, but I can't find any usable solution for encrypting a Linux/Unix shell script to protect its source code while keeping it executable.

There was a post here about how to protect shell scripts, but I've found an SHC bug: just run a script that was encrypted with shc, execute ps ax, and you'll see the full source code in ps's output (newline characters replaced with "?" symbols). Because of this I can't use SHC anymore and am looking for an alternative solution.

I'd very much appreciate any suggestions on how to solve this. THANK YOU, mates.

Quick shell change for user in Unix or Linux

To change the shell for a particular user on a Unix/Linux system without editing the /etc/passwd file manually, just use the command:

chsh -s /path/to/shell username

For example, to change user 'viper' from 'bash' to 'sh':

chsh -s /bin/sh viper
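To double-check the change, read the seventh (shell) field of the user's passwd entry; getent also works when accounts come from NIS/LDAP. Shown here for the current user rather than viper:

```shell
# Print the login shell recorded for the current user
getent passwd "$(id -un)" | cut -d: -f7
```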

Move linux to another hard drive (dump, restore, backup)

There are several ways to move a running Linux system to another hard drive in the same server, but I used the Unix dump/restore utilities to do it…

First of all, it's necessary to partition the new hard drive in the same way as the old one (the drive Linux is running on). I usually use the 'fdisk' utility. Let's assume the old drive is /dev/hda and the new one is /dev/hdb. To view hda's partition table, run 'fdisk -l /dev/hda', which should show something like this:

Disk /dev/hda: 60.0 GB, 60022480896 bytes
255 heads, 63 sectors/track, 7297 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          15      120456   83  Linux
/dev/hda2              16         276     2096482+  82  Linux swap
/dev/hda3             277        7297    56396182+  83  Linux

After this, run 'fdisk /dev/hdb' and create the same partitions on it. fdisk's interactive mode is well documented and quite intuitive, so the partitioning shouldn't be difficult.

Once this is done, create new filesystems on the partitions:

mkfs -t ext3 /dev/hdb1
mkfs -t ext3 /dev/hdb3
mkswap /dev/hdb2

When that's done, it's NECESSARY to label the newly created filesystems the same way as the old ones. To check a filesystem's volume name, run 'tune2fs -l /dev/hda1 | grep volume' and so on for each partition. You'll see something like this:

Filesystem volume name: /boot

This means we should label the new hdb1 with /boot, which can be done with:

tune2fs -L "/boot" /dev/hdb1

The same should be done for all partitions except the swap one. In my case I should label hdb3 with:

tune2fs -L "/" /dev/hdb3

At this point the new hard drive is prepared and we can proceed with moving Linux onto it. Mount the new filesystem and change into it (creating the mount point first if needed):

mkdir -p /mnt/hdb1
mount /dev/hdb1 /mnt/hdb1
cd /mnt/hdb1

Once there, perform the move with:

dump -0uan -f - /boot | restore -r -f -

And the same for the / partition:

mkdir -p /mnt/hdb3
mount /dev/hdb3 /mnt/hdb3
cd /mnt/hdb3
dump -0uan -f - / | restore -r -f -

When the dump/restore procedures are done, we should install the boot loader onto the new HDD. Run the 'grub' utility and execute in its console:

root (hd1,0)
setup (hd1)

If everything is done carefully and correctly (I've tested this method myself), you can boot from the new hard drive and have your 'old' Linux running on the new disk.
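The copy stage above can be condensed into one loop. This dry-run sketch only prints the commands it would run; the device names follow the hda/hdb example, so treat them as placeholders for your own layout:

```shell
# Print the mount + dump|restore command for each
# old-filesystem:new-partition pair (dry run, nothing is executed)
for pair in "/boot:/dev/hdb1" "/:/dev/hdb3"; do
    fs=${pair%%:*}        # filesystem to dump, e.g. /boot
    dev=${pair##*:}       # target partition, e.g. /dev/hdb1
    mnt=/mnt/${dev##*/}   # mount point for the target, e.g. /mnt/hdb1
    echo "mkdir -p $mnt && mount $dev $mnt && cd $mnt && dump -0uan -f - $fs | restore -r -f -"
done
```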

Good luck!

Automatic blog post translation into many languages with Altavista's Babelfish engine

I think everybody will agree that a blog translated into many different languages may attract a few more visitors and some extra traffic. The idea isn't new; I've seen it somewhere on the Internet (sorry, I don't remember where), and I'm sure it makes sense. In this post I'd like to share my observations on the popular web page translation engine Babelfish.

There are many posts in my blog that aren't translated yet, and it would take a lot of time to translate them manually (go to Babelfish's site, copy/paste the post's contents, choose the proper translation direction, and so on). That's why I prefer automatic page translation that happens for me while I have a cup of coffee or do whatever else, instead of sitting in front of the monitor copying and pasting text into Babelfish's textareas.

All we need is a Linux or Unix distribution (Ubuntu rocks for me) and the 'wget' utility, which usually comes with any Linux/Unix distribution. The idea is to pull the links to the posts from the blog's RSS feed. I've chosen Google's Blogger service, so my RSS feed can be found here. Any other blog engine provides feeds too, I'm sure.

So, create a shell script somewhere and make it executable (the script name is up to you; translate.sh here is just an example):
echo "#!/bin/bash" > /tmp/translate.sh
chmod +x /tmp/translate.sh

My script looks as follows:


\rm -rf /tmp/tr_output
# put your blog's rss feed URL here
wget -O /tmp/rss.xml "http://your.blog/feeds/posts/default?alt=rss"
mkdir -p /tmp/tr_output
cat /tmp/rss.xml | sed "s/></>\n</g" | grep "<link>" | awk -F '<link>' '{print$2}' | awk -F '</link>' '{print$1}' | grep "2007" | while read link; do
    # post slug: the filename part of the URL without .html
    save=$(echo $link | awk -F '.html' '{print$1}' | awk -F '/' '{print$6}')
    echo "Translating $save..."
    mkdir -p /tmp/tr_output/$save
    echo -e "nl\nfr\nde\nel\nit\nja\nko\npt\nru\nes" | while read lang; do
        # the babelfish address was lost from this post; $BABELFISH_URL
        # stands for its translate endpoint up to the language parameter
        wget "$BABELFISH_URL$lang&url=$link" -U "Mozilla" --wait=10 -O /tmp/tr_output/$save/$save.$lang.html
        # wait 10 minutes between requests so babelfish doesn't ban us
        sleep 600
    done
done
\rm -f /tmp/rss.xml

This downloads the rss.xml file (don't forget to change the URL to your own rss feed), parses it, and sends every post to Babelfish's input; after each translation the script saves the output to the /tmp/tr_output directory, waits 10 minutes, and proceeds with the next language. Translation is performed into 10 languages. The 10-minute waiting period is needed because Babelfish may take your script for a bot and ban you.
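The link-extraction pipeline can be tried on its own against a made-up feed snippet (the blog URLs below are invented for the demo):

```shell
# Build a tiny fake rss feed and run the same sed/grep/awk chain on it
cat > /tmp/sample_rss.xml <<'EOF'
<rss><channel><item><link>http://blog.example.com/2007/05/first-post.html</link></item><item><link>http://blog.example.com/2007/06/second-post.html</link></item></channel></rss>
EOF
# split tags onto their own lines, keep <link> lines, strip the tags
sed "s/></>\n</g" /tmp/sample_rss.xml | grep "<link>" | awk -F '<link>' '{print$2}' | awk -F '</link>' '{print$1}' | grep "2007"
# prints the two post URLs
```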

After some time you'll find a pretty large amount of data in /tmp/tr_output which you can copy/paste into your blog seamlessly. I recommend not publishing these posts to the main page and keeping them only for googlebot 🙂

Good luck!

P.S. If anybody knows how to perform automatic post translation with "google translate" instead of Babelfish, I'd very much appreciate your comments on it. Thanks!