04 July 2015

Monitoring Traffic

So, as a scientific exercise, I set out to find out what would be involved in monitoring traffic going from the internal network to the outside world. This article assumes quite a bit of prior knowledge and is based on a specific use case with lots of assumptions.

There are many ways to approach this problem. Some alternative options would be (in no particular order):

  • Have a hub on the network (before the internet connection) and monitor all traffic coming through it
  • Build a router setup on the linux machine with two network cards (one on the internal side and one on the internet side)
  • Hack the Internet router to support packet inspection
  • Install a hardware solution to monitor traffic

The option I decided to explore was to route all traffic through a linux server before it goes to the router. These were the rough steps to achieve this:

  1. Install a DHCP server
  2. Set up the DHCP server to give out the linux server's IP as the gateway
  3. Use tcpdump to record all traffic
  4. Use investigative tools to analyse the collected traffic

Basic Network layout

For the sake of all the material that follows, here is a summary of what my network looks like:

192.168.0.1 - I have a router (Cable modem) that provides access to the greater Internet.
192.168.0.2 - Linux Server (Gentoo distribution)
192.168.0.3 - 192.168.0.9 - Other servers, devices and appliances (Printer, SIP Server, TV, etc)
192.168.0.10 - 192.168.0.99 - Computer devices (PCs, Laptops, mobile phones, tablets, Kindle, etc)

DHCP Server

Depending on your linux distribution, you can pick the DHCP server you are most comfortable with; I use ISC DHCP. Your configuration will vary with the server you have chosen, but I've included some snippets from mine to demonstrate the setup I settled on.

In my installation, the file lives in /etc/dhcp/dhcpd.conf

# option definitions common to all supported networks...
option domain-name-servers 208.67.220.220, 208.67.222.222, 8.8.8.8, 8.8.4.4;

default-lease-time 360000;
max-lease-time 3600000;

# If this DHCP server is the official DHCP server for the local
# network, the authoritative directive should be uncommented.
authoritative;

# Use this to send dhcp log messages to a different log file (you also
# have to hack syslog.conf to complete the redirection).
log-facility local7;

# This is a very basic subnet declaration.
subnet 192.168.0.0 netmask 255.255.255.0 {
  # Assign IPs between 192.168.0.10-192.168.0.99
  range 192.168.0.10 192.168.0.99;
  # Address of server to route traffic through
  option routers 192.168.0.2;
}

Some explanations -
option domain-name-servers defines my DNS resolving servers. I use public servers for both speed and independence from my ISP's DNS (the 8.8.* addresses are Google's free DNS, while the other two are part of OpenDNS).
To decide which of the free DNS servers to put first, I went by speed. To measure which is the fastest in terms of latency, I used the following command:

parallel -j0 --tag dig @{} "$*" ::: 208.67.222.222 208.67.220.220 198.153.192.1 198.153.194.1 156.154.70.1 156.154.71.1 8.8.8.8 8.8.4.4 | grep Query | sort -nk5

It requires GNU parallel. Essentially, it queries each of the known free DNS servers in parallel and sorts the results by query time. The top is the fastest.
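
The "$*" in the middle expands to the script's arguments, so the one-liner is really meant to live inside a small wrapper script that takes the domain to look up. A minimal sketch (the file name dnsbench.sh is just my choice):

#!/bin/bash
# Usage: ./dnsbench.sh example.com
# Queries each public DNS server in parallel, tags each answer with the
# server it came from, and sorts by query time (fastest first)
parallel -j0 --tag dig @{} "$*" ::: 208.67.222.222 208.67.220.220 198.153.192.1 198.153.194.1 156.154.70.1 156.154.71.1 8.8.8.8 8.8.4.4 | grep Query | sort -nk5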

max-lease-time sets the upper limit on how long a lease can last before a new one is required. I set this to a very long time, since I don't have a congested network and I would like devices to keep their IPs for as long as practical.

range 192.168.0.10 192.168.0.99; identifies the IPs I'd like dynamically allocated

option routers 192.168.0.2; - this is the most critical part for what I was trying to achieve. I want all traffic on the network to be routed through the Linux server machine, so I set that as the router (gateway) address.
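
After editing the config, the DHCP server needs a restart to pick up the changes; on my Gentoo box (OpenRC) that is:

/etc/init.d/dhcpd restart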

After this step, any device that was getting its address automatically was also routing all its internet access through the linux server (and thus packets could be intercepted and inspected on the way in and out).
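
One detail the DHCP change alone doesn't cover: the linux server will only pass packets along if IP forwarding is enabled. A minimal sketch of what that takes (the MASQUERADE rule and the redirects setting are optional, but without them the router and the clients may start talking to each other directly, bypassing the capture; enp2s0 is my network card, as in the capture script below):

# Enable IPv4 forwarding (persist via net.ipv4.ip_forward = 1 in /etc/sysctl.conf)
sysctl -w net.ipv4.ip_forward=1
# Don't tell clients to talk to the router directly - we want to stay in the path
sysctl -w net.ipv4.conf.all.send_redirects=0
# NAT outgoing traffic, so replies also come back through this box
iptables -t nat -A POSTROUTING -o enp2s0 -j MASQUERADE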

Record all network traffic

To record all the traffic, I settled on using tcpdump. It's one of the most established and reliable solutions. Instead of doing real-time inspection, I set up a running log that dumps everything to a file.

My shell script was:

#!/bin/bash
# Where to record the capture, the server's own IP, the internal network
# range and the network card to listen on
export FILE=/var/log/internet.log
export INTIP=192.168.0.2
export INTNET=192.168.0.0/16
export INTDEV=enp2s0
# -s0 captures full packets, not just headers
tcpdump -s0 -i $INTDEV -w $FILE "(ip src net $INTNET and not ip src $INTIP) or (ip dst net $INTNET and not ip dst $INTIP)"

The variables at the top set where the log file should be recorded, the server's IP, the intranet IP range and the network card's device name (please note yours could be something like "eth0" instead). The condition at the end ensured that traffic from the server itself, or internal traffic between the server and the local area network, was not recorded.
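
One caveat: on a busy network that file grows quickly. tcpdump has a built-in ring buffer that can cap the disk usage; a sketch using the same variables (roughly 100 MB per file, keeping the 10 most recent):

# -C rotates the file every ~100 million bytes, -W keeps 10 files,
# overwriting the oldest (a numeric suffix is appended to the name)
tcpdump -s0 -i $INTDEV -C 100 -W 10 -w $FILE "(ip src net $INTNET and not ip src $INTIP) or (ip dst net $INTNET and not ip dst $INTIP)"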

And just like that, I had all traffic between the computers on the network and the outside world being logged to a file. It turns out that was the easy part.

Analysing the traffic

As I soon found out, there are many tools to make sense of what was captured by tcpdump in what's known as a pcap (packet capture) file. Most of the tools I ran into were quite "mature" (old) and very technical (not very easy or straightforward to use). I'll discuss only a few of them, based on what I found useful.

wireshark

Most people would have heard of this one. It is capable of monitoring and investigating traffic. It is very powerful and can allow you to do a lot of things, but with that power also comes the drawback of it being quite hard to use. There are newer features that make some things easy (like exporting all HTTP objects to files), but it is still somewhat of a cumbersome tool. In summary, if you know what you are doing, this may be the only tool you need. But if you want a quick look and something easy to use, you probably want something simpler.
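
If you'd rather stay on the command line, recent versions of wireshark's terminal counterpart, tshark, can do the HTTP object export too; a sketch (the output directory name is my choice):

# Dump every HTTP object in the capture into ./http-objects
mkdir http-objects
tshark -r /var/log/internet.log --export-objects http,http-objects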

tcpflow

This tool is quite simple to use and can give you a lot of readable information quickly.

tcpflow -aCJ -e http -r /var/log/internet.log

This is the basic usage scenario and it shows the raw ASCII data of the requests and responses between the local machines and remote servers.

tcpflow -aCJ -e http -r /var/log/internet.log | grep -E "(GET|POST|Host)"

Similar to the above command, but instead of showing the complete ASCII packets, it only shows the HTTP requests and the hosts the requests are to. It can help with getting a good idea about what users are generally doing on the web, without going into details.

tcpflow -aCJ -e http -r /var/log/internet.log | grep -E "Host" | sort -u | awk '{ print $2 }'

Another variation, which will give you just a list of the websites users have gone to.
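
A small extension along the same lines that I found handy: rank those websites by how many requests went to each.

tcpflow -aCJ -e http -r /var/log/internet.log | grep -E "Host" | awk '{ print $2 }' | sort | uniq -c | sort -rn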

chaosreader

Now we are getting a bit more user friendly. This simple tool merges the TCP streams and extracts the relevant files. It stores the images and HTML files that have been downloaded and shows a complete log of all the established connections. It generates HTML reports that are very easy to use.

The usage is simple:

chaosreader /var/log/internet.log

Just make sure you run the command inside an empty directory - it will generate a lot of files as part of the export.
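
In practice that looks like this (the directory name is my choice):

# chaosreader writes its report and extracted files to the current directory
mkdir /tmp/chaos-report && cd /tmp/chaos-report
chaosreader /var/log/internet.log
# then open the generated index.html in a browser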

NetworkMiner

I finally found a tool I am pretty happy with. Some parts of its interface are not as straightforward as I would like, but by and large, it's the most comprehensive tool to serve the needs I had. It lets you see all the remote connections, all the HTTP streams, all the files and images that have been downloaded, and a couple of views to investigate the raw text that has passed through. All in all, it can give you all the information you need about what has happened on your network.
It's a GUI, so you pick your capture file from the interface. It takes a bit of time to do its processing, but it is relatively quick. It also stores all the downloaded files locally.
It's written in .NET, but it is designed to run under Mono as well. 
It is open source and I couldn't help doing a couple of modifications. The main things I changed are storing session data in a sub-folder with the hosts' names instead of their IPs, and no longer storing certificate files - they were just creating a lot of useless clutter. Further modifications I'm considering are exporting data to DB/SQL files, to be easier to analyse and search through, as well as some UI changes to allow looking at HTTP packets quickly.
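
For reference, launching it on linux is just a matter of pointing Mono at the executable; a sketch, assuming you've unpacked it in /opt/NetworkMiner (a capture file passed on the command line is opened on startup):

cd /opt/NetworkMiner
mono NetworkMiner.exe /var/log/internet.log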

Conclusions

All in all, after a few trials and errors and some investigation, it was fairly straightforward to get network monitoring up and running. On the downside, this approach would have worked a lot better a few years ago, before most major sites and communication applications started forcing exclusive SSL-encrypted traffic. Now, as expected, the only information to be gained from an HTTPS connection is the address of the site - none of the content is visible. The same goes for messaging applications. If you need to monitor network traffic on your own network, you are probably better off installing software locally on the machines/devices you are interested in; fiddler for PCs and mSpy for mobile devices would be a good place to start. There are ways to do it through a router, but they get complex and rely on playing with trusted certificates. If you decide to go down that path, this article would get you only halfway there.

12 September 2012

VirtualBox Console Commands

These are some notes about commands I had to dig out to operate my VirtualBox image through the command line (since the GUI for it segfaults on start).
For the purpose of this article, I am using [machine_name] as the name you have given during the creation of the virtual machine. Also, this article does not describe the actual creation of the virtualbox machine and image for attached storage, as those can vary quite a bit, depending on your needs and situation. This just gives some tips about how to operate your machines when you need to work with them.

Before we start, all the virtualbox modules should be loaded in. The following command makes sure that is done:
for m in vbox{drv,netadp,netflt}; do modprobe $m; done

Create Virtual Machine

VBoxManage createvm --name "[machine_name]" --register --ostype Windows7 --basefolder /mnt/disks

Adjust settings

VBoxManage modifyvm [machine_name] --memory 2048

Set network in bridged mode

VBoxManage modifyvm [machine_name] --nic1 bridged --bridgeadapter1 eth0
or, for a running OS:
VBoxManage controlvm [machine_name] nic1 bridged eth0

Attach Storage controller

VBoxManage storagectl [machine_name] --name "SATA Controller" --add sata --controller IntelAHCI --bootable on

Create and attach disk

VBoxManage createhd --filename /mnt/disks/[machine_name]/root.vdi --format VDI --size 20480
VBoxManage storageattach [machine_name] --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium /mnt/disks/[machine_name]/root.vdi



Attach usb devices

VBoxManage list usbhost
VBoxManage controlvm [machine_name] usbattach [UUID]



Attach usb devices permanently

VBoxManage list usbfilters
VBoxManage usbfilter add 1 --target [machine_name] --name Mouse --vendorid [linux vendor id] --productid [linux product id]
VBoxManage controlvm [machine_name] usbattach [UUID]




To start the virtual machine (in headless mode), you can use this command:

VBoxManage startvm [machine_name] --type headless

If you want to access a GUI during the booting process, you can enable the built-in RDP server with:
VBoxManage controlvm [machine_name] vrde on

And then connect to the server with any Remote Desktop application (at the default port 3389).
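
For example, from another linux box (rdesktop is one option; any RDP client will do):

rdesktop yourserver:3389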

When installing/modifying an OS, it is often handy to change the boot device.

This will let you set the DVD to an ISO image of your choice (the name of my storagectl is "SATA Controller", but that is something you could have set to anything):
VBoxManage storageattach [machine_name] --storagectl "SATA Controller" --port 1 --device 0 --type dvddrive --medium /mnt/video/disks/VBoxGuestAdditions_4.1.20.iso

To mount the system's CD-ROM:
VBoxManage storageattach [machine_name] --storagectl "SATA Controller" --port 1 --device 0 --type dvddrive --medium Host:/dev/cdrom

To "eject" the ISO:
VBoxManage storageattach [machine_name] --storagectl "SATA Controller" --port 1 --device 0 --type dvddrive --medium emptydrive

This command helps set the DVD as the boot drive:
VBoxManage modifyvm [machine_name] --boot1 dvd

And change it back to HDD:
VBoxManage modifyvm [machine_name] --boot1 disk


modifyvm is also the command to change a lot of the virtual machine's settings, like how much RAM is allocated, what OS is going to be installed on it and other similar hardware settings. The virtual machine has to be powered off for most of those changes to be made.


To shut down the machine, the best option would be to emulate pressing the power button, which should trigger the OS shutdown procedures:
VBoxManage controlvm [machine_name] acpipowerbutton

If that doesn't do the job, you can pull the (virtual) power plug with:
VBoxManage controlvm [machine_name] poweroff

To see the settings of a machine:
VBoxManage showvminfo [machine_name]

Show running virtual machines
VBoxManage list runningvms

To monitor the operation of a virtual machine, you can enable metrics:
VBoxManage metrics setup --period [refresh seconds] --samples [samples to keep] [machine_name]
for example:
VBoxManage metrics setup --period 1 --samples 1 [machine_name]

Show all statistics:
VBoxManage metrics query [machine_name]
Look at CPU stats:
VBoxManage metrics query [machine_name] CPU/Load/User
Check RAM stats:
VBoxManage metrics query [machine_name] RAM/Usage/Used
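
To see which metric names are available to query in the first place:

VBoxManage metrics list [machine_name]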


This is all I've needed so far to operate virtual machines through the command line for all intents and purposes.

11 September 2012

Handling disconnected NFS mounts

If you use NFS and the network/server hosting the share gets disconnected, the mount is left unresponsive and impossible to unmount. Some programs (including KDE/Gnome) sometimes freeze trying to access the drive.

To fix it, mount the NFS share with the "soft" option (as in the -o soft parameter for mount, or in fstab).
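
For example, as an fstab entry (server:/export and /mnt/share are placeholders for your own share and mount point):

# /etc/fstab - soft means I/O errors are returned instead of hanging forever
server:/export  /mnt/share  nfs  soft  0  0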

Enable URL rewriting on a default linux setup

A few distributions don't have URL rewriting in apache enabled by default. Here is how to turn it on in Ubuntu:

Make a symbolic link for the module:

cd /etc/apache2/mods-enabled
ln -s ../mods-available/rewrite.load
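
Alternatively, Ubuntu ships a helper that does the same thing in one step:

sudo a2enmod rewrite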


Enable .htaccess to override the server-set behavior - edit /etc/apache2/sites-enabled/000-default and change

<Directory /var/www>
        ...
        AllowOverride None

to

<Directory /var/www>
        ...
        AllowOverride All

or instead of All, you can allow only whatever you need.

Restart Apache

/etc/init.d/apache2 restart


Edit your .htaccess in the folder you are serving, where you want to enable URL rewriting:

RewriteEngine On

RewriteCond %{REQUEST_FILENAME} ^.*\.(gif|jpg|png|css|js|ico)$
RewriteRule ^(.*)$ $1 [L]

RewriteRule ^.*$ index.php [L]

This setup forwards any request that is not a resource to a central index.php page.

That's it folks.

09 September 2012

Converting archives

How To Batch Convert Files From ACE Compression to ZIP

So recently I realized I had a bunch of ACE-compressed files that I couldn't open on a lot of other computers without having to install additional software. That was becoming less and less ideal over time, so in the end I gave up and decided to recompress all these files to something a little more generic, like .zip.

As background history, I must say the reason I had so many .ace files is that a few years ago (after some intensive testing) I determined that the ACE format was giving me the best compression results. Props go to WinACE for consistently beating all other windows-based compression algorithms I could get my hands on at the time. I don't know if that is still true, as 7zip and a few others have progressed by leaps and bounds since then, but it seems like winace is still a decent choice as far as file size is concerned.
Unfortunately, it seems like the format never got huge public support and didn't spread enough to become ubiquitous and easy to share files with. Thus my decision to recompress the files into something more accessible.

After I started converting a few files, I decided it was taking too long to do them all one by one, so I wrote the following script to take a directory full of *.ace files and recompress them all, one by one, to their .zip equivalents.


#!/bin/bash
mkdir tmp
for i in *.ace
do
        echo "Recompressing $i"
        mv "$i" "tmp/$i"
        cd tmp
        # extract the archive, then remove it
        unace x "$i"
        rm "$i"
        # zip everything that was extracted (-m deletes the files as it adds them)
        zip -mr "${i%.ace}.zip" *
        mv "${i%.ace}.zip" ../
        cd ..
        echo "Done $i"
done
rm -r tmp



A rehash of what it is doing:

  1. Create a temporary directory
  2. For each file with .ace extension, do the following
  3. Move the .ace file to the temporary directory
  4. Extract the files
  5. Delete the .ace file
  6. Compress the files to .zip (and delete them)
  7. Move the ready .zip file to the original directory
  8. Continue with the next .ace file
  9. When done, delete the temporary directory.
Please note that with a small change to this script, it can be modified to batch-convert files from any supported format to any other supported format (change the unace and zip lines to their equivalents, as in the sketch below).
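
For example, a .rar to .zip conversion swaps just these lines (assuming unrar is installed):

for i in *.rar               # was: for i in *.ace
unrar x "$i"                 # was: unace x "$i"
zip -mr "${i%.rar}.zip" *    # was: zip -mr "${i%.ace}.zip" *
mv "${i%.rar}.zip" ../       # was: mv "${i%.ace}.zip" ../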

Also note, I'm using the unace program, which typically does not come pre-installed with distributions, but most distros would have it in their repositories.

01 May 2011

Install Ubuntu 11.04 and migrate

This is my story of installing Ubuntu 11.04 Natty Narwhal on an SSD partition and migrating my existing Ubuntu 10.10 setup to the new system for the applications I use.

I use my machine primarily for web development and the majority of my tools are web-based. With that in mind, my migration was easier than what it could have been.

Installation


There are many guides out there that will lead you step by step through the installation process. For the most part it's a breeze.

After the installation, I ran into one major hurdle - the UI wouldn't boot properly. After looking through the log files, I identified the problem to be related to the nvidia drivers. After some experimentation, I ended up uninstalling the driver that came with the distribution and installing the latest nvidia drivers from the repository and that fixed my problem.


sudo apt-get remove nvidia-173
sudo apt-get install nvidia-current
reboot


SSD Related tweaks


I made a few basic changes to optimize the performance of the file system for the SSD drive.

First, change disk scheduling to noop.

sudo apt-get install sysfsutils

Then add

block/sda/queue/scheduler = noop

to /etc/sysfs.conf
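
You can check (and change) the active scheduler at runtime too - the bracketed entry is the one in use:

cat /sys/block/sda/queue/scheduler
echo noop | sudo tee /sys/block/sda/queue/scheduler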

Next, add the noatime option to the SSD drive in /etc/fstab
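
For example (the UUID is a placeholder for your own root partition's):

# noatime avoids a write for every file read
UUID=0123-abcd  /  ext4  noatime,errors=remount-ro  0  1
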
Finish off by mapping the /tmp folder into RAM. That way a lot of the random, unimportant writes go directly to RAM instead of wearing out the disk. This also has the side effect of clearing the /tmp folder on reboot. Add the following line to the /etc/fstab file.

tmpfs /tmp tmpfs nodev,nosuid,noexec,mode=1777 0 0


Migration


This is what it took to migrate my applications and settings I needed to resume work.

SSH keys


A lot of systems I log in to authenticate me via my SSH key. To carry that forward, I copied the ~/.ssh folder over to the new home directory.

Sudo


I like sudo to do its thing without pestering me for a password all the time. To set that up, edit /etc/sudoers (via sudo visudo) and replace

%admin ALL=(ALL) ALL

with

%admin ALL=NOPASSWD: ALL


NFS


I have a few NFS shares mapped to my machine. I had to install NFS and copy the shares information from the old system.

sudo apt-get install nfs-common

Copy the remote shares' entries from the old system's /etc/fstab.

Compiz Settings


Unity is a compiz plugin, but even though the system runs compiz, the compiz settings manager was not installed by default.

sudo apt-get install compizconfig-settings-manager

Once I had that installed, I searched for compiz and found the config manager.
One of the features I was missing was the enhanced zoom shortcut. To enable it, open the enhanced zoom plugin and enable the shortcuts for zoom in and zoom out.

Another point of interest in the compiz configuration is the Unity section, where you can adjust the Ubuntu Unity Plugin setting.

Applications


I am used to vim and have grown to expect it when I type the vi command. To fix this, just install vim:
sudo apt-get install vim

Browser


I use google chrome as my main browser. Download it from google.com/chrome (get the 64-bit .deb package).
After it's installed, enable account sync. With google chrome and account sync, all my bookmarks, saved passwords and other browser settings were migrated automatically.

FTP Client


My FTP client of choice is Filezilla, for its features, speed and SFTP support.

sudo apt-get install filezilla

copy ~/.filezilla folder over to preserve all saved FTP accounts

Code Editor / IDE



sudo apt-get install geany

I also install a few geany plugins (geany-plugin-* - codenav, prj, webhelper, treebrowser)

Source Control



sudo apt-get install git


Remote Desktop


tsclient is installed by default. To carry over your Remote Desktop server configurations, copy the ~/.tsclient folder to your home directory.

Terminal


I find drop-down consoles quite handy, so I use guake (a quake-like console).

sudo apt-get install guake

It is advised to set up guake to start on login. In unity, search for "Startup Applications" and select "Guake Terminal".

29 January 2009

System Administration with Webmin

Whether you have a linux server somewhere you would like to keep an eye on, or you are doing shared hosting for other people, webmin is a great tool that lets you do a lot of the common administration tasks via an easy-to-use web application.

To install, get the latest version from http://www.webmin.com/download.html (or install it from your distro's repository, though that might not be up to date).
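
On a Debian/Ubuntu system, that boils down to something like this (I'm assuming the webmin-current.deb link here, which the download page offers as a pointer to the latest release):

wget http://www.webmin.com/download/deb/webmin-current.deb
sudo dpkg -i webmin-current.deb
sudo apt-get install -f   # pull in any missing dependencies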

Then, depending on the OS you have, you may have to run '/usr/libexec/webmin/setup.sh'

After that, you are ready to go. To access webmin, head off to http://yourserver:10000/ (the default configuration).

You can install a number of additional modules available here: http://www.webmin.com/third.html
Some that I found useful:
System Information - http://swelltech.com/projects/webmin/modules/sysinfo-1.170.wbm
Custom Link - http://www.webmin.com/download/modules/link.wbm
Stats - http://downloads.sourceforge.net/webminstats/sysstats-1.2.tgz

One nice feature is that webmin allows you to install modules directly from the net, just by pasting one of the links above.

If you are running a shared hosting environment, the usermin module is a must. You might also want to rely on virtualmin for setting up all the virtual domains, but if you go that route, you have to set it up and use it from the start.