
Linux Commands

A few commands collected from around the web.


# Suspend process:
Ctrl + z
# Move process to foreground:
fg
# Generate random hex number, where n is the number of characters:
openssl rand -hex n
# Execute commands from a file in the current shell:
source /home/user/file.name
# Substring, first 5 characters:
${variable:0:5}
# SSH debug mode:
ssh -vvv user@ip_address
# SSH with .pem key:
ssh user@ip_address -i key.pem
# Get complete directory listing to local directory with wget:
wget -r --no-parent --reject "index.html*" http://hostname/ -P /home/user/dirs
# Create multiple directories:
mkdir -p /home/user/{test,test1,test2}
# List process tree with child processes:
ps axwef
# Make .war file:
jar -cvf name.war file
# Test disk write speed:
dd if=/dev/zero of=/tmp/output.img bs=8k count=128k conv=fdatasync; rm -rf /tmp/output.img
# Test disk read speed:
hdparm -Tt /dev/sda
# Get md5 hash from text:
echo -n "text" | md5sum
# Check .xml syntax:
xmllint --noout file.xml
# Extract tar.gz into a new directory:
tar zxvf package.tar.gz -C new_dir
# Get HTTP headers with curl:
curl -I http://www.example.com
# Modify timestamp of a file or directory (YYMMDDhhmm):
touch -t 0712250000 file
# Download from FTP using wget:
wget -m ftp://username:password@hostname
# Generate random password (16 characters long in this case):
LANG=C < /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c${1:-16}; echo
# Quickly create a backup of a file:
cp some_file_name{,.bkp}
# Access Windows share:
smbclient -U "DOMAIN\user" //dc.domain.com/share/test/dir
# Run command from history (here the one at line 100):
!100
# Unzip to directory:
unzip package_name.zip -d dir_name
# cat multiline text (Ctrl + d to exit):
cat > test.txt
# Create empty file or empty an existing one:
> test.txt
# Update date from Ubuntu NTP server:
ntpdate ntp.ubuntu.com
# netstat: show all tcp4 listening ports:
netstat -lnt4 | awk '{print $4}' | cut -f2 -d: | grep -o '[0-9]*'
# Convert image from qcow2 to raw:
qemu-img convert -f qcow2 -O raw precise-server-cloudimg-amd64-disk1.img precise-server-cloudimg-amd64-disk1.raw
# Run command repeatedly, displaying its output (default every two seconds):
watch ps -ef
# List all users:
getent passwd
# Mount root in read/write mode:
mount -o remount,rw /
# Mount a directory (for cases when symlinking will not work):
mount --bind /source /destination
# Send dynamic update to DNS server:
nsupdate <<EOF
update add $HOST 86400 A $IP
send
EOF
# Test IOPS:
ioping -RLD /dev/vda
# Recursively grep all directories:
grep -r "some_text" /path/to/dir
# List ten largest open files:
lsof / | awk '{ if($7 > 1048576) print $7/1048576 "MB "$9 }' | sort -n -u | tail
# Show free RAM in MB:
free -m | grep cache | awk '/[0-9]/{ print $4" MB" }'
# Open Vim and jump to the end of file:
vim + some_file_name
# Print out the last cat command from history:
!cat:p
# Run your last cat command from history:
!cat
# Find all empty subdirectories in /home/user:
find /home/user -maxdepth 1 -type d -empty
# Get lines 50 to 60 of test.txt:
< test.txt sed -n '50,60p'
# Run last command with sudo (if it was mkdir /root/test, this runs sudo mkdir /root/test):
sudo !!
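Several of the idioms above compose nicely into a short script. Below is a minimal sketch, not from the original list: the settings.conf name and the directory layout are placeholders, and a scratch directory is used so nothing real is touched.

#!/usr/bin/env bash
# Sketch: combine brace expansion, the {,.bkp} backup trick and substring expansion.
set -euo pipefail
workdir="$(mktemp -d)"          # scratch directory (placeholder location)
cd "$workdir"
mkdir -p {test,test1,test2}     # create multiple directories with brace expansion
conf="settings.conf"
echo "port=8080" > "$conf"      # placeholder config file
cp "$conf"{,.bkp}               # quick backup: settings.conf -> settings.conf.bkp
prefix="${conf:0:5}"            # substring expansion: first 5 characters ("setti")
echo "backed up $conf (prefix: $prefix) in $workdir"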
# Create temporary RAM filesystem - ramdisk (first create the /tmpram directory):
mount -t tmpfs tmpfs /tmpram -o size=512m
# Grep whole words:
grep -w "name" test.txt
# Append text to a file that requires raised privileges:
echo "some text" | sudo tee -a /path/file
# List all supported kill signals:
kill -l
# Generate random password (16 characters long in this case):
openssl rand -base64 16
# Do not log your last session in bash history:
kill -9 $$
# Scan network to find open port:
nmap -p 8081 172.20.0.0/16
# Move all files with "txt" in name to /home/user:
find -iname "*txt*" -exec mv -v {} /home/user \;
# Put file lines side by side:
paste test.txt test1.txt
# Progress bar in shell:
pv data.log
# Send data to Graphite server with netcat:
echo "hosts.sampleHost 10 `date +%s`" | nc 192.168.200.2 3000
# Convert tabs to spaces:
expand test.txt > test1.txt
# Skip bash history:
<space>cmd
# Go to the previous working directory:
cd -
# Split large tar.gz archive (100MB each) and put it back together:
split -b 100m /path/to/large/archive /path/to/output/files
cat files* > archive
# Get HTTP status code with curl:
curl -sL -w "%{http_code}\\n" www.example.com -o /dev/null
# Set root password and secure MySQL installation:
/usr/bin/mysql_secure_installation
# When Ctrl + c does not work:
Ctrl + \
# Get file owner:
stat -c %U file.txt
# List block devices:
lsblk -f
# Find files with trailing spaces:
find . -type f -exec egrep -l " +$" {} \;
# Find files with tab indentation:
find . -type f -exec egrep -l $'\t' {} \;
# Print horizontal line with "=":
printf '%100s\n' | tr ' ' =
# Check if remote port is open with bash:
echo >/dev/tcp/8.8.8.8/53 && echo "open"
# Generate pass and get first 20 characters:
echo -n "test" | sha512sum | base64 | cut -c 1-20
# Using xargs and variables:
command | awk '{print $1}' | xargs --verbose -I IFACE ip link set IFACE mtu 9000
# Set MTU:
ip link set eth0 mtu 9000
# Lines of text in all files in a directory:
find deployment -type f -exec wc -l {} \; | awk '{total += $1} END{print total}'
# Find common path for a list of files and directories:
cat test.txt | sed -e 's,$,/,;1{h;d;}' -e 'G;s,\(.*/\).*\n\1.*,\1,;h;$!d;s,/$,,'
# List dirs by used space:
du -a /var | sort -n -r | head -n 10
# Create empty partition on whole disk:
parted -sa optimal /dev/xvdb mklabel gpt mkpart primary 0% 100%
# Trim http or https from URL variable with echo:
echo ${URL#*//}
# Find all files starting with dot:
find . -name ".[^.]*"
# Echo as sudo (tee -a to append):
echo "/dev/xvdb1 /data ext4 defaults 0 2" | sudo tee -a /etc/fstab
# Get IP address of the host:
hostname --ip-address
# Convert mkv audio to ac3:
ffmpeg -i infile.mkv -vcodec copy -acodec ac3 -b:a 640k outfile.mkv
# Check limits for process:
cat /proc/<pid>/limits
# Postgres dump and restore:
pg_dump dbname > /tmp/dbname.sql
pg_dump -h localhost -d dbname -U dbuser > /tmp/dbname.sql
psql dbname < /dbname/sar.sql
# Fio IOPS test:
# Random 4k read:
fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=randread -size=1G -filename=/tmp/fio-testing -name="test" -iodepth=32 -runtime=200
# Random 4k write:
fio -ioengine=libaio -bs=4k -direct=1 -thread -rw=randwrite -size=1G -filename=/tmp/fio-testing -name="test" -iodepth=32 -runtime=200
# Grep and replace hostname:
grep -rl --include "config.xml" test.example.com . | sudo xargs sed -i 's/test.example.com/test2.example.com/g'
# Sort IP addresses:
sort -t . -k 1,1n -k 2,2n -k 3,3n -k 4,4n
# Do not run the same script if it is already running:
flock -n /tmp/test.lock -c "/home/sysadmin/test.sh > test.log"
# Add ssh key to known hosts:
ssh-keyscan github.com >> ~/.ssh/known_hosts
# Get encoding of file:
file -i file.txt
# Change encoding:
iconv -f utf8 -t WINDOWS-1250 org_file > new_file
# Delete all files older than 14 days:
find /home/archive/mysql -type f -mtime +14 -delete
# Bandwidth monitor:
bwm-ng -I eth0 -d
# Clear cache:
echo 3 > /proc/sys/vm/drop_caches
# Kill all active screen sessions:
screen -X quit
# Check fragmentation on XFS:
xfs_db -c frag -r /dev/sdf1
# List dirs and sort by timestamp in dir name:
find test/* -type d -exec basename {} \; | sort -r | head -1
ls -1 test/ | sort -r | head -1
# Check if there are dirs starting with dir-:
ls test/dir-* &>/dev/null
# Convert date to seconds:
date -d"2014-02-14T12:30" +%s
date -d"20140214" +%s
# Pkill bash script:
pkill -9 -f "test.sh"
# Memory usage by processes (with cache):
ps -e -o pid,vsz,comm= | sort -n -k 2
# Print human readable time format in dmesg log:
dmesg -T
# Virsh send remote command:
virsh -c qemu+tcp://testnode.example.com/system
# List targets in systemd:
systemctl --all -t target
# Check domain records with dig:
dig techbar.me +nostats +nocomments +nocmd
# Jinja2: find and replace -%} with %}:
grep -rl "\-\%\}" . | xargs sed -i 's/\-\%\}/\%\}/g'
# Add a newline to the end of each file if it does not exist:
find files/. -type f | xargs -I FILE sed -i -e '$a\' FILE
# Resize qcow2 image:
qemu-img resize ubuntu-server.qcow2 +10GB
# Stress test:
stress-ng --cpu 16 --io 4 --vm 1 --vm-bytes 1G -v --timeout 60s --metrics-brief
# Sysbench CPU test:
sysbench --num-threads=48 --test=cpu --cpu-max-prime=200000 run
# Find with regex:
find /path -regextype posix-extended -regex '.*img.*lz4'
# Find PCI bus for NVIDIA card:
lspci | grep NVIDIA
# Check bus info:
lspci -n -s 05:00
# Show available packages on a RHEL system:
yum --showduplicates list ceph | expand
# Zap disk:
sgdisk --zap /dev/sda
# Test if jumbo packets are enabled/working with ping:
ping -M do -s 8972 <ip_address>
# Check interface info:
ethtool eth1
# Force NFS umount:
umount -f -l /mnt/nfs
# Check if SSL crt, key and csr match (see the sketch after this list):
openssl x509 -noout -modulus -in certificate.crt | openssl md5
openssl rsa -noout -modulus -in privateKey.key | openssl md5
openssl req -noout -modulus -in CSR.csr | openssl md5
# Export SSL private key from jks file:
keytool -importkeystore -srckeystore server.jks -destkeystore server.p12 -srcstoretype jks -deststoretype pkcs12
openssl pkcs12 -in server.p12 -nodes -nocerts -out server.pem
openssl rsa -in server.pem -out server.key
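As a worked example of the SSL modulus check above, the hashes can be compared in a small script. A minimal sketch, assuming the same certificate.crt and privateKey.key placeholder file names used in the list:

#!/usr/bin/env bash
# Sketch: confirm that certificate.crt and privateKey.key belong together
# by comparing their modulus hashes (same openssl commands as above).
set -euo pipefail
crt_md5="$(openssl x509 -noout -modulus -in certificate.crt | openssl md5)"
key_md5="$(openssl rsa -noout -modulus -in privateKey.key | openssl md5)"
if [ "$crt_md5" = "$key_md5" ]; then
    echo "certificate and private key match"
else
    echo "MISMATCH: certificate and private key do not belong together" >&2
    exit 1
fi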
