Wednesday, 18 March 2020

IP calculations in BASH with bitwise operations

This simple BASH script takes a network CIDR and two reserved IPs and calculates the pool range for a DHCP configuration. The two reserved IPs can be any valid addresses within the CIDR. The script splits the network at the reserved IPs, selects the range with the largest number of available addresses that contains neither reserved IP, and returns the lowest and highest IPs of that range as the pool.

The script uses ipcalc to read the network values from the network CIDR: Network, Netmask, Address, Broadcast, HostMin, HostMax and the number of host addresses in the network.

Conversion from decimal to binary is done using bc. BASH itself can perform the bitwise AND, OR and XOR operations, and its shift operators are also used in this script.
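Here is a quick illustration of the idea, separate from the script below; the sample address is arbitrary. An IP is packed into a single integer with left shifts and OR, then unpacked again with right shifts and an AND mask:

IP="192.168.1.10"
IFS=. read -r A B C D <<< "$IP"

# pack the four octets into one integer
INT=$(( (A << 24) | (B << 16) | (C << 8) | D ))
echo "$INT"                  # 3232235786
echo "obase=2; $INT" | bc    # the same value in binary, via bc

# unpack with right shifts and an AND mask of 255
printf '%d.%d.%d.%d\n' $(( (INT >> 24) & 255 )) $(( (INT >> 16) & 255 )) \
   $(( (INT >> 8) & 255 )) $(( INT & 255 ))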

As always, you can download this code, along with all the others from this blog, from GitLab.com: Linux Code Snippets


#!/bin/bash

# Given the network CIDR and two static IPs, calculate the lower and upper IPs for the DHCP pool.
# This script uses ipcalc that may need to be installed.


# https://stackoverflow.com/questions/40667382/how-to-perform-bitwise-operations-on-hexadecimal-numbers-in-bash
# https://unix.stackexchange.com/questions/223338/convert-a-value-into-a-binary-number-in-a-shell-script
# echo "obase=2; 34" | bc

# https://unix.stackexchange.com/questions/65280/binary-to-hexadecimal-and-decimal-in-a-shell-script
# echo $((2#101010101))
# printf "%x\n" "$((2#101010101))"
# echo $((0xf8 ^ 0x1f)) XOR
# echo $((0xf8 & 0x1f)) AND
# echo $((0xf8 | 0x1f)) OR
# echo $((0xf8 >> 3)) shift right 3 bits
# echo $((0xf8 << 4)) shift left 4 bits

# AddOne: return the dotted-quad address one above the named reserved IP.
function AddOne {
   IPCI=
   [ "$1" = "NS_ADDR" ] && IPCI=$(( NS_ADDR_INT + 1 ))
   [ "$1" = "GW_ADDR" ] && IPCI=$(( GW_ADDR_INT + 1 ))
   IPCB=$(echo "obase=2; $IPCI" | bc)
   Bin2Addr $(( 2#$B_ADDRESS_MASK | 2#$IPCB ))
}

# SubOne: return the dotted-quad address one below the named reserved IP.
function SubOne {
   IPCI=
   [ "$1" = "NS_ADDR" ] && IPCI=$(( NS_ADDR_INT - 1 ))
   [ "$1" = "GW_ADDR" ] && IPCI=$(( GW_ADDR_INT - 1 ))
   IPCB=$(echo "obase=2; $IPCI" | bc)
   Bin2Addr $(( 2#$B_ADDRESS_MASK | 2#$IPCB ))
}

# Bin2Addr: convert a decimal address value to dotted-quad notation by
# masking out each octet with AND and shifting right 8 bits at a time.
function Bin2Addr {
   GA=$(echo "obase=2; $1" | bc)
   D_BLOCK=$(( 2#$GA & 2#11111111 ))
   GA=$( echo "obase=2; $(( 2#$GA >> 8 ))" | bc)
   C_BLOCK=$(( 2#$GA & 2#11111111 ))
   GA=$( echo "obase=2; $(( 2#$GA >> 8 ))" | bc)
   B_BLOCK=$(( 2#$GA & 2#11111111 ))
   GA=$( echo "obase=2; $(( 2#$GA >> 8 ))" | bc)
   A_BLOCK=$(( 2#$GA & 2#11111111 ))
   echo "built IP = '${A_BLOCK}.${B_BLOCK}.${C_BLOCK}.${D_BLOCK}'"
}

# NetCalcs: read the network values with ipcalc, size the three candidate
# pools (below, between and above the reserved IPs) and report the largest.
function NetCalcs {
   echo
   echo --------------------------------------------------
   # set -x
   echo "NAMESERVER_IP=$NAMESERVER_IP, GATEWAY_IP=$GATEWAY_IP"
   NAMESERVER_NET=$(ipcalc -nb ${NETWORK_CIDR} | grep ^Network: | awk '{print $2}' )
   #echo "NAMESERVER_NET=$NAMESERVER_NET"
   NETWORK_MASK=$(ipcalc -nb ${NETWORK_CIDR} | grep ^Netmask: | awk '{print $2}' )
   #echo "NETWORK_MASK=$NETWORK_MASK"
   B_ADDRESS_MASK=$(ipcalc -n ${NETWORK_CIDR} | grep ^Address: | sed -e 's/\.//g' | awk '{print $(NF-1)$NF}' )
   #echo "Binary ADDRESS_MASK=$B_ADDRESS_MASK"
   NETWORK_BROADCAST=$(ipcalc -nb ${NETWORK_CIDR} | grep ^Broadcast: | awk '{print $2}' )
   #echo "NETWORK_BROADCAST=$NETWORK_BROADCAST"
   NET_MIN_IP=$(ipcalc -nb ${NETWORK_CIDR} | grep ^HostMin: | awk '{print $2}' )
   #echo "NET_MIN_IP=$NET_MIN_IP"
   NET_MAX_IP=$(ipcalc -nb ${NETWORK_CIDR} | grep ^HostMax: | awk '{print $2}' )
   #echo "NET_MAX_IP=$NET_MAX_IP"
   NET_MIN_BIN=$(ipcalc -n ${NETWORK_CIDR} | grep ^HostMin: | sed -e 's/\.//g' | awk '{print $NF}')
   #echo "NET_MIN_BIN=$NET_MIN_BIN"
   NET_MAX_BIN=$(ipcalc -n ${NETWORK_CIDR} | grep ^HostMax: | sed -e 's/\.//g' | awk '{print $NF}')
   #echo "NET_MAX_BIN=$NET_MAX_BIN"
   NS_ADDR_BIN=$(ipcalc -n ${NAMESERVER_IP}/${NETWORK_CIDR##*/} | grep ^Address: | sed -e 's/\.//g' | awk '{print $NF}')
   NS_ADDR_INT=$(( 2#$NS_ADDR_BIN ))
   #echo "NS_ADDR_BIN=$NS_ADDR_BIN"
   GW_ADDR_BIN=$(ipcalc -n ${GATEWAY_IP}/${NETWORK_CIDR##*/} | grep ^Address: | sed -e 's/\.//g' | awk '{print $NF}')
   GW_ADDR_INT=$(( 2#$GW_ADDR_BIN ))
   #echo "GW_ADDR_BIN=$GW_ADDR_BIN"

   read LOWEST HIGHEST <<< $( printf "%d " $(printf "%d\n" $(( 2#$GW_ADDR_BIN )) $(( 2#$NS_ADDR_BIN ))|sort -n))
   # Lower IP pool size
   LOWCOUNT=$(( LOWEST - 2#$NET_MIN_BIN ))
   #echo "LOWCOUNT=$LOWCOUNT"

   # Mid IP pool size
   MIDCOUNT=$(( HIGHEST - LOWEST ))
   #echo "MIDCOUNT=$MIDCOUNT"

   # Upper IP pool size
   HICOUNT=$(( 2#$NET_MAX_BIN - HIGHEST ))
   #echo "HICOUNT=$HICOUNT"


   read JUNK LOWNET JUNK HIGHNET JUNK <<< $( echo $( (echo "$GW_ADDR_INT GW_ADDR"; echo "$NS_ADDR_INT NS_ADDR"; ) | sort -n) )
   read JUNK POOL JUNK <<< $( (echo "$LOWCOUNT LOWCOUNT"; echo "$MIDCOUNT MIDCOUNT"; echo "$HICOUNT HICOUNT";) | sort -nr)


   #echo "NAMESER_IP=$NAMESERVER_IP GATEWAY_IP=$GATEWAY_IP LOWNET=$LOWNET HIGHNET=$HIGHNET POOL=$POOL"

   case $POOL in
      LOWCOUNT)
         echo "From $NET_MIN_IP to $(SubOne $LOWNET)";;
      MIDCOUNT)
         echo "From $(AddOne $LOWNET) to $(SubOne $HIGHNET)";;
      HICOUNT)
         echo "From $(AddOne $HIGHNET) to $NET_MAX_IP";;
   esac

}


clear
NETWORK_CIDR="172.16.0.0/20"
echo "NETWORK_CIDR=$NETWORK_CIDR"

NAMESERVER_IP="172.16.1.2"
GATEWAY_IP="172.16.1.1"
NetCalcs


NAMESERVER_IP="172.16.14.200"
GATEWAY_IP="172.16.15.9"
NetCalcs


NAMESERVER_IP="172.16.1.22"
GATEWAY_IP="172.16.1.22"
NetCalcs


NETWORK_CIDR="192.168.1.0/24"
echo "NETWORK_CIDR=$NETWORK_CIDR"

NAMESERVER_IP="192.168.1.2"
GATEWAY_IP="192.168.1.1"
NetCalcs


NAMESERVER_IP="192.168.1.120"
GATEWAY_IP="192.168.1.121"
NetCalcs


NAMESERVER_IP="192.168.1.2"
GATEWAY_IP="192.168.1.1"
NetCalcs

Sunday, 16 February 2020

Using Jq to work with JSON

While we were starting to work with Packer & Terraform, our deployment server needed some information about the AMIs being built. This sounds like a job for jq.


#!/bin/bash

TMP=$(mktemp /dev/shm/JQ_XXXXXXXXXXXXXXX)
echo $TMP

echo Create a new JSON data set.
sleep 1
jq -n '{ "firewall": { amiId: "ami-1234", base: "amazon-linux-2", built: "'$(date +%F_%T)'" }}' > $TMP
cat $TMP | jq .
sleep 5

echo Append to a JSON data set.
sleep 1
cat $TMP | jq '.["webserver"].amiId="ami-5678"' > ${TMP}.swap && mv ${TMP}.swap $TMP
cat $TMP | jq '.["webserver"].base="centos-7"' > ${TMP}.swap && mv ${TMP}.swap $TMP
cat $TMP | jq .
sleep 5

echo Get all values for an entry by name
sleep 1
cat $TMP | jq '.["firewall"]'
sleep 5

echo Get the amiId by name
sleep 1
cat $TMP | jq '.["firewall"].amiId'
sleep 5

echo Set the amiId by name
sleep 1
cat $TMP | jq '.["firewall"].amiId="AMI-9876"' # > ${TMP}.swap && mv swap $TMP
sleep 5

echo Select a data set by exact match
sleep 1
cat $TMP | jq 'with_entries(select(.value.base=="centos-7"))'
sleep 5

echo Select a data set by partial match
sleep 1
cat $TMP | jq 'with_entries(select(.value.base | startswith("amazon")))'
sleep 5

rm -f $TMP
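
One more jq flag worth knowing when feeding values back into shell scripts: -r prints raw strings without the surrounding JSON quotes. A small example, where amis.json stands in for wherever you saved the data set:

AMI_ID=$(jq -r '.["firewall"].amiId' amis.json)
echo "Deploying $AMI_ID"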


Tuesday, 11 February 2020

The One Line Python Directory Web Server

While testing some adjustments to iptables I needed a simple way to generate web requests. Serving a directory with a one-line Python script is a handy way to see whether the firewall is working.

On the server side, create the directory content that will be hosted by a one-line Python script.
mkdir /tmp/fakeserver

cd /tmp/fakeserver

git clone https://github.com/torvalds/linux.git

sudo python -m SimpleHTTPServer 80
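
Note that SimpleHTTPServer is the Python 2 module name. If the server only has Python 3, the equivalent one-liner is:

sudo python3 -m http.server 80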


On the client side, use wget to spider and download the entire content of the server.
Replace <TEST SERVER> with the server's IP or resolvable name.
mkdir /tmp/fakeserver

cd /tmp/fakeserver

wget --mirror --convert-links --adjust-extension \
--page-requisites --no-parent http://<TEST SERVER>/



Sunday, 8 September 2019

nmon on your headless Proxmox server

Proxmox is great: after it is up and running you can control it from the web page or ssh in, so what do you do with the monitor connected to it? Here is a one-line command to get nmon to show you what the server is doing.

I set my script up in cron so the display always comes up. Since nmon will not refresh the entire screen by itself, cron can also kill the old nmon and start a new one.
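
For reference, an /etc/cron.d entry along these lines would do it. The script path and interval are placeholders; the script itself would hold the one-liner shown below.

# m h dom mon dow user command
*/30 * * * * root pkill -x nmon; /usr/local/sbin/console-nmon.sh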

I called this headless because you don't need to log in to have this work. The pgrep command finds the agetty process that holds the console login prompt.



( sleep 2;                       # give the console a moment to settle
export NMON=cmVdn;               # preselect the nmon views (CPU, memory, Virtual memory, disks, network)
export TERM=xterm-color;
export NCURSES_NO_UTF8_ACS=1;    # stop ncurses garbling the console line-drawing characters
nmon > /proc/$(pgrep -f agetty)/fd/1 ) &   # send nmon's output to the terminal agetty holds open

And this is what it looks like.


Thursday, 6 June 2019

The Perl Negative Modulus Bug

This is a bug that has been in Perl for all the years I have used it. There is a workaround, but I choose to avoid using Perl for anything beyond simple math.

The bug only shows up when you calculate the modulus of negative values.


#!/usr/bin/perl

$A = 106;
printf("Division\n");
printf("106 / 10 = 10\t<- expected\n");
printf("%d / 10 = %01d\t<- correct\n", $A, $A / 10);

printf("\n");
printf("Modulus\n");
printf("106 %% 10 = 6\t<- expected\n");
printf("%d %% 10 = %d\t<- correct\n", $A,  $A % 10);

printf("\n\nNegative values\n");

$A = -106;
printf("Division\n");
printf("%d / 10 = 10\t<- expected\n", $A);
printf("%d / 10 = %01d\t<- correct\n", $A, $A / 10);

printf("\n");
printf("Modulus\n");
printf("%d %% 10 = -6\t<- expected\n", $A);
printf("%d %% 10 = %d\t<- WHAT THE!!!\n", $A,  $A % 10);

This is the output.

Division
106 / 10 = 10 <- expected
106 / 10 = 10 <- correct

Modulus
106 % 10 = 6 <- expected
106 % 10 = 6 <- correct


Negative values
Division
-106 / 10 = -10 <- expected
-106 / 10 = -10 <- correct

Modulus
-106 % 10 = -6 <- expected
-106 % 10 = 4 <- WHAT THE!!!
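
The workaround mentioned above is the integer pragma, which switches % to the C-style behaviour where the result takes the sign of the left operand. For what it is worth, perlop documents the default behaviour: the result of % takes the sign of the right operand, so -106 % 10 is 4 because -106 = (-11 * 10) + 4. A quick check from the shell:

perl -le 'print -106 % 10'              # prints 4, the default Perl result
perl -Minteger -le 'print -106 % 10'    # prints -6, the C-style result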


Friday, 24 May 2019

Simple Docker Example

This is a continuation of the Ultra minimal docker Node.JS example.

The code can be downloaded from GitLab.com

This is a very simple example of Docker running a simple NodeJS app.

Things to know: containers are not VMs, they are more like a chroot jail. They use the host OS kernel, and processes running in the container are visible to the host OS. Docker is just one of many ways to use containers; Proxmox is another, and it can also run VMs.

This example requires a Linux system with Docker installed: Installing Docker for Debian, Installing Docker for CentOS.

For more information see this cheat sheet.

Almost all of these steps require permission to execute privileged commands. In a production environment, a special group would be created with permissions to run these commands. If you are new to Linux the sudo command is commonly used to run commands as the root administrator, just prefix the commands with sudo. It is recommended that you practice this on a test system.

Image Files

Image files are a collection of Linux OS files needed to run in a container. They contain a directory tree like /bin/, /lib/, /usr/, /var/ and others, everything needed to run the required programs.

One popular image is BusyBox, where hundreds of Linux programs such as ls, grep, find and cat are all just one binary. This keeps the image very small.
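
You can see this with a quick throwaway container; this assumes the busybox image from Docker Hub, and --rm deletes the container when it exits.

docker run --rm busybox sh -c 'ls -l /bin | head -5'

Every entry in /bin is just a link back to the single busybox binary.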

This sample project uses alpine-node, a small image with a functional NodeJS.

Building A Simple Project

Three files are included with this project.

service.js

This is a simple JavaScript file to be run by NodeJS.

package.json

This file tells NodeJS about the code to run.
It tells NodeJS:
  • the name of the script to start
  • dependencies that are needed
  • how to start the script

Dockerfile

This tells Docker how to build an image that contains your project.
It tells Docker
  • what image to source the build from, this one uses mhart/alpine-node
  • where to install code files
The source image mhart/alpine-node is a very small Linux that includes a NodeJS service.
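
The real files are in the GitLab repo linked above. As a rough sketch of what the three files might contain (the names, port and content here are illustrative assumptions, not the repo's exact code), service.js could be a minimal HTTP service:

// reply to every request on port 3000
const http = require('http');
http.createServer((req, res) => res.end('Hello from the container\n')).listen(3000);

package.json could name the script and how to start it:

{
  "name": "simple-docker-example",
  "version": "1.0.0",
  "scripts": { "start": "node service.js" }
}

and the Dockerfile could build on the alpine-node image and copy the code in:

FROM mhart/alpine-node
WORKDIR /app
COPY . /app
EXPOSE 3000
CMD ["node", "service.js"]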

Building Images

On a Linux system where you have a running Docker, change to the working code directory where the Dockerfile is, then build a new image.

docker build -t <new image title> .

This will copy the mhart/alpine-node image and add the project code to it as instructed by the contents of the Dockerfile.

If your docker does not have the mhart/alpine-node image yet, the build will automatically download it. This may take extra time but after it has been downloaded, future builds will be much faster.

Running Containers

Once the image is built, it is just an image. It is not yet an instance of a container; for that, it needs to be run. The container is created and initialized when it is first run.

Then run a container with this new image, this will create a new container.
docker run -d -p 3000:3000 <new image title>

The -p 3000:3000 publishes port 3000 inside the container as port 3000 on the host OS.

The -d sets the container to run detached so it will fork to the background.


Monitoring

At this point, the docker service is running and should respond to web requests.
Open a browser to the IP address of the Linux host OS at port 3000
http://<linux>:3000

The node process can be seen running in the host OS.
From your host Linux system, run these commands.
ps -ef | grep node
netstat -naptu |grep 3000.*LISTEN

List the status of running containers

docker ps 
docker container ls

List the status of all containers.

docker ps -a
docker container ls -a

Status

Get a top like status of the running containers.
docker stats

To just get a one time report of stats.
docker stats --no-stream

Stopping Containers

Containers must be stopped before they can be deleted.
docker stop <container ID>

Re-Starting Containers

Once a container has been run, it has been initialized and can be restarted.
docker start <container ID>

Deleting Containers

List the containers; use -a to also see the ones that are not running.
docker container ls -a

Containers must be deleted before the associated image can be deleted.
docker rm <container ID>

Deleting Images

First, list the images.
docker image ls

Then use the image name or ID to delete it.

docker rmi <image name or ID>

Monday, 4 February 2019

Get your data together!

Analyzing logs is fun, kind of like going to the dentist is fun. You stare at lon-n-n-n-g pages of data and your brain goes numb from the overwhelming amount of information.

GBT is a Perl script that can condense and extract meaningful information from time-indexed logs and numbers. GBT stands for Group-By-Time: rather than sporadic bursts of data separated by black holes of emptiness, it produces a consistent flow of data that is easily graphed.

By default, GBT condenses data into ten-minute blocks of maximum values for each column given. The size of the time block can be changed using the -t argument, and the output format can be changed to provide min, max, mean, sum, delta or count.

The main use for GBT is to pipe data into GNU Plot to produce graphs.
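
As a sketch of that kind of pipeline (the gbt invocation is illustrative; only the -t argument is described above, and the column layout depends on your log):

gbt -t 5 < access.log > blocks.dat
gnuplot -e "set terminal png; set output 'requests.png'; plot 'blocks.dat' using 1:2 with lines"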


Here is the gnuplot PNG created from the sample data.