Friday, October 31, 2014

Install ntopng on CentOS 7

This is how to compile ntopng on a fresh CentOS 7 x64 installation
  • For the impatient:
    • # yum install -y subversion autoconf automake make gcc libpcap-devel libxml2-devel sqlite-devel libtool glib2-devel gcc-c++
      $ svn co https://svn.ntop.org/svn/ntop/trunk/ntopng
      $ cd ntopng
      $ ./autogen.sh
      $ ./configure
      $ make
      $ ./ntopng --help
      ntopng x86_64 v.1.1.4 (r7865) - (C) 1998-14 ntop.org
      <snip>
  • Step by step description
    • Pull the source code from the ntop svn repository. To do this, you first need to install subversion using yum as follows
      $ sudo yum -y install subversion
      
    • Now change to the directory where you want ntopng checked out and run
      $ svn co https://svn.ntop.org/svn/ntop/trunk/ntopng
      
    • Once the repository is downloaded, change into the ntopng directory and run the autogen.sh script
      $ cd ntopng
      $ ./autogen.sh
      
    • It will fail because the autoconf package is missing. To get past this, run
      $ sudo yum install -y autoconf automake
      
    • and re-run autogen.sh
      $ ./autogen.sh
      ......
      
    • Once autogen.sh completes successfully, run ./configure; it will fail again, this time due to the missing C compiler
      $ ./configure
      .....
      configure: error: no acceptable C compiler found in $PATH
      
    • Install it using
      $ sudo yum install -y gcc
      The next step is the missing libpcap development package
      $ ./configure
      ......
      Please install libpcap(-dev) (http://tcpdump.org)
      $ sudo yum install -y libpcap-devel
    • The next mandatory package is libxml2-devel, required by the RRD compilation
      $ ./configure
      .....
      Please install libxml2(-devel) package (RRD prerequisite)
      $ sudo yum install -y libxml2-devel
      The same goes for glib2-devel
      $ ./configure
      .....
      Please install libglib-2.0 (glib2-devel/libglib2.0-dev) package (RRD prerequisite)
      $ sudo yum install -y glib2-devel
    • Now configure requires another package
      $ ./configure
      SQLite 3.x missing (libsqlite3-dev): please install it and try again
      
    • Install it by running
      $ sudo yum install -y sqlite-devel
    • Now configure works
      $ ./configure
    • You are now ready to compile by typing /usr/bin/gmake (or simply make).
      But make will fail due to the missing C++ compiler
      $ make
      configure: error: Unable to find a working C++ compiler
      $ sudo yum install gcc-c++
      
    • Even with this package installed, the build will fail on the json-c compilation with the following error
      $ make
      make: *** [third-party/json-c/.libs/libjson-c.a] Error 2
      
    • To solve this, install libtool package using
      $ sudo yum -y install libtool
      
    • Then rerun make
      $ make
    • and everything should compile successfully.
      Test it by running (a quick first-run example follows below):
      $ ./ntopng --help
      ntopng x86_64 v.1.1.4 (r7865) - (C) 1998-14 ntop.org
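
      To actually run ntopng after a successful build, point it at a network interface and open the web UI. This is a minimal first-run sketch, not part of the original walkthrough: the interface name (eth0) is an assumption for your system, 3000 is the default web port, and ntopng also needs a local Redis server (available via EPEL on CentOS 7):
      $ sudo yum install -y epel-release && sudo yum install -y redis
      $ sudo systemctl start redis
      $ sudo ./ntopng -i eth0 -w 3000
      Then browse to http://<your-host>:3000 to reach the web interface.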
      
Enjoy!

Other

IMPORTANT

This directory contains nightly builds (SVN code) of 64 bit binary packages for RedHat/CentOS (latest OS version). Please use rpm-stable.ntop.org for stable builds.
In order to use the repository you need to create a file named /etc/yum.repos.d/ntop.repo containing
# cat /etc/yum.repos.d/ntop.repo
[ntop]
name=ntop packages
baseurl=http://www.nmon.net/centos/$releasever/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://www.nmon.net/centos/RPM-GPG-KEY-deri
[ntop-noarch]
name=ntop packages
baseurl=http://www.nmon.net/centos/$releasever/noarch/
enabled=1
gpgcheck=1
gpgkey=http://www.nmon.net/centos/RPM-GPG-KEY-deri
and also add the EPEL extra repository by creating /etc/yum.repos.d/epel.repo
# cat /etc/yum.repos.d/epel.repo 
[epel]
name=Extra Packages for Enterprise Linux X - $basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-X&arch=$basearch
failovermethod=priority
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-X
Note: replace X with 6 (for CentOS 6) or 7 (for CentOS 7) then do:
  • yum clean all
  • yum update
  • yum install pfring n2disk nprobe ntopng ntopng-data nbox
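
For convenience, the /etc/yum.repos.d/ntop.repo file shown above can be created in a single step with a shell heredoc (run as root; quoting EOF keeps $releasever and $basearch from being expanded by the shell). The same approach works for epel.repo after substituting X as noted:
# cat > /etc/yum.repos.d/ntop.repo <<'EOF'
[ntop]
name=ntop packages
baseurl=http://www.nmon.net/centos/$releasever/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://www.nmon.net/centos/RPM-GPG-KEY-deri
[ntop-noarch]
name=ntop packages
baseurl=http://www.nmon.net/centos/$releasever/noarch/
enabled=1
gpgcheck=1
gpgkey=http://www.nmon.net/centos/RPM-GPG-KEY-deri
EOF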
PF_RING is now packaged without the ZC/DNA drivers. You can choose which family you want to install:
  • ZC: yum install pfring-drivers-zc-dkms
  • DNA: yum install pfring-drivers-dna-dkms
Most software works without licenses. However some components do need a license. They include:
  • PF_RING DNA and libzero user-space libraries
  • nProbe (NetFlow/IPFIX probe)
  • n2disk (packet to disk application)
You can find more info on the ntop site, or acquire licenses on the ntop e-shop.
We remind you that all ntop products are available at no cost to universities and research institutions.
NOTE
  • We periodically update the kernel package in order to build against a recent kernel. If you encounter issues while installing packages, make sure you have first updated the Linux kernel package.

Install Redis on CentOS 6.5

Perform an update to ensure you've got the latest of everything in the base package.

yum update

Install wget so you can download a few things.

yum install wget

Add the EPEL and Remi repositories so that yum can locate and install Redis.

wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
rpm -Uvh remi-release-6*.rpm epel-release-6*.rpm

Now, install all the prerequisites

yum install tar make automake gcc gcc-c++ git net-tools libcurl-devel libxml2-devel libffi-devel libxslt-devel tcl redis ImageMagick npm mysql-server mysql-devel nginx libyaml libyaml-devel patch readline-devel libtool bison

Enable and start MySQL

chkconfig --level 3 mysqld on
service mysqld start

Secure your MySQL installation by setting a root password. Replace 'new-password' with your own secure password.

mysqladmin -u root password 'new-password'
mysqladmin -u root -h YourHost.YourDomain.com password 'new-password'
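
Alternatively (an optional aside, not part of the original steps), the mysql_secure_installation script bundled with the mysql-server package walks you through setting the root password and also removes the anonymous users and test database:

mysql_secure_installation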

Open up the necessary firewall ports

vi /etc/sysconfig/iptables
  Copy this line:
    -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
  and add two more like it for ports 80 and 443:
    -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
    -A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT

Restart the firewall

service iptables restart
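
With the repositories, prerequisites, and firewall in place, enable and start Redis itself and check that it responds. This is a minimal sketch; the service name (redis) should match the init script installed by the EPEL/Remi package above:

chkconfig --level 3 redis on
service redis start
redis-cli ping

The last command should answer PONG if the server is up.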

Friday, October 10, 2014

How to set a default gateway on CentOS

A default gateway is a remote host or router that your Linux host forwards traffic to when the destination IP address of outgoing traffic does not match any route in your local routing table. Configuring a default gateway on CentOS is quite straightforward.
If you wish to change the default gateway temporarily at run time, you can use the ip command.
First things first. To check which default gateway you are currently using:
$ ip route show
192.168.91.0/24 dev eth0  proto kernel  scope link  src 192.168.91.128 
169.254.0.0/16 dev eth0  scope link  metric 1002 
default via 192.168.91.2 dev eth0 
According to the local routing table shown above, the default gateway is 192.168.91.2, and traffic is forwarded to it via eth0.
In order to change a default gateway to another IP address:
$ sudo ip route replace default via 192.168.91.10 dev eth0
Obviously, the default gateway's IP address should belong to the subnet of the interface facing the gateway (192.168.91.0/24 in this example). Otherwise, the command will fail with the following error.
RTNETLINK answers: No such process
Also, keep in mind that a default route change made with the ip command will be lost after a reboot.
In order to set a default gateway permanently on CentOS, you will need to update /etc/sysconfig/network accordingly.
$ sudo vi /etc/sysconfig/network
GATEWAY=192.168.91.10
Again, be aware that the IP address specified here should be within the subnet (192.168.91.0/24) of the default route interface.
Another option to set a default gateway persistently on CentOS is to edit /etc/sysconfig/network-scripts/ifcfg-<default_interface_name> and add "GATEWAY=<gateway_ip>" there. If the default interface is "eth0", you will need to edit /etc/sysconfig/network-scripts/ifcfg-eth0; a short example follows below.
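For illustration, this is roughly what the relevant part of ifcfg-eth0 could look like after the change. The interface name and address values (taken from the ip route output and the example gateway above) are assumptions; your existing file will contain other entries, and you only need to add or adjust the GATEWAY line:
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.91.128
NETMASK=255.255.255.0
GATEWAY=192.168.91.10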
Whether you edit /etc/sysconfig/network or /etc/sysconfig/network-scripts/ifcfg-ethX, don't forget to restart the network service as shown below, or reboot your CentOS box, for the change to take effect.
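$ sudo service network restart
$ ip route show
The second command lets you verify that the new default route is in place.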

phpize - Cannot find autoconf. Please check your autoconf installation and the $PHP_AUTOCONF environment variable. Then, rerun this script.

You may get the error below:


# phpize
Configuring for:
PHP Api Version:         20090626
Zend Module Api No:      20090626
Zend Extension Api No:   220090626
Cannot find autoconf. Please check your autoconf installation and the
$PHP_AUTOCONF environment variable. Then, rerun this script.


Solution: 

# yum install autoconf

Re-run the "phpize" command and the issue will be fixed.

# phpize
Configuring for:
PHP Api Version:         20090626
Zend Module Api No:      20090626
Zend Extension Api No:   220090626
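
phpize is normally only the first step when building a PHP extension from source. Assuming you are in the extension's source directory (this continuation is a generic sketch, not part of the original post), the build usually goes on with:

# ./configure
# make
# make install

Afterwards, enable the extension in php.ini (extension=<name>.so) and restart your web server or PHP-FPM.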

Wednesday, October 8, 2014

Centralized Log Management with Logstash, ElasticSearch, and Redis

Deploying a centralized log management system is very easy these days with these great tools:

+ Logstash: collects, indexes, processes, and ships logs
+ Redis: receives logs from the log shippers
+ ElasticSearch: stores logs
+ Kibana: web interface with graphs, tables...

We will implement the log management system with the following architecture:



In this tutorial, I deploy only one shipper (the nginx logs of my Django app) on one machine, and one server acting as the log indexer (redis, logstash, elasticsearch, kibana):


1. On the indexer server, install and run Redis

http://iambusychangingtheworld.blogspot.com/2013/11/install-redis-and-run-as-service.html

2. On the indexer server, install and run ElasticSearch:

$ sudo aptitude install openjdk-6-jre
$ wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.7.deb
$ sudo dpkg -i elasticsearch-0.90.7.deb
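
The .deb package installs an init script, so you can start ElasticSearch and run a quick sanity check; this assumes the default HTTP port 9200:

$ sudo service elasticsearch start
$ curl http://localhost:9200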


3. On the indexer server, download, configure, and run Logstash to get logs from Redis and store them in ElasticSearch:

+ Download Logstash:

$ sudo mkdir /opt/logstash /etc/logstash
$ cd /opt/logstash
$ sudo wget https://download.elasticsearch.org/logstash/logstash/logstash-1.2.2-flatjar.jar


+ Create the Logstash config file /etc/logstash/logstash-indexer.conf with the following content:

input {
        redis {
                host => "127.0.0.1"
                data_type => "list"
                key => "logstash"
                codec => json
        }
}
output {
        elasticsearch {
                embedded => true
        }
}


+ Run Logstash; this will also activate the Kibana web interface on port 9292:

$ java -jar /opt/logstash/logstash-1.2.2-flatjar.jar agent -f /etc/logstash/logstash-indexer.conf -- web
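
To confirm the indexer side is up, check that the expected ports are listening and that events are reaching ElasticSearch. This is just a sanity-check sketch, assuming the default ports (6379 for Redis, 9200 for ElasticSearch, 9292 for Kibana):

$ sudo netstat -tlnp | grep -E '6379|9200|9292'
$ curl 'http://localhost:9200/_search?q=*&size=1&pretty=true'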


4. On the shipper machine (my computer), download Logstash and create a config file for Logstash to ship my Django app's logs to the indexer server:

+ Download Logstash:

$ sudo mkdir /opt/logstash /etc/logstash
$ cd /opt/logstash
$ sudo wget https://download.elasticsearch.org/logstash/logstash/logstash-1.2.2-flatjar.jar

+ Create a config file at /etc/logstash/logstash-shipper.conf for Logstash to ship the log files to Redis on the indexer server:

input {
        file {
                path => "/home/projects/logs/*ecap.log"
                type => "nginx"
        }
}
output {
        redis {
                host => "indexer.server.ip"
                data_type => "list"
                key => "logstash"
        }
}



+ Run Logstash:

$ java -jar /opt/logstash/logstash-1.2.2-flatjar.jar agent -f /etc/logstash/logstash-shipper.conf
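
A quick way to check that the shipper is working is to append a line to a file matching the input path and then look at the Redis list on the indexer. The file name used here (test-ecap.log) is just a hypothetical example matching the *ecap.log glob, and the count may already be back to 0 if the indexer has drained the list:

$ echo "shipper test $(date)" >> /home/projects/logs/test-ecap.log
$ redis-cli -h indexer.server.ip LLEN logstash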


5. From any machine on my network, open a browser and access the Kibana web interface to manage all the logs:




From now on, if I want to monitor any service's logs, I just need to run a Logstash shipper instance on the server running that service.


But there is one annoying thing: the CPU usage on the indexer server is very high. This is because I'm running all the services (logstash, redis, elasticsearch, kibana) on the same server, and the Java processes consume a lot of CPU. Look at the following htop screenshots:

  • Indexer server, before running all the services:


  • Indexer server, after running all the services:



These are all listening ports on the indexer server:


Some ElasticSearch tuning may be helpful: http://jablonskis.org/2013/elasticsearch-and-logstash-tuning/




References:
[0] http://michael.bouvy.net/blog/en/2013/11/19/collect-visualize-your-logs-logstash-elasticsearch-redis-kibana/
[1] http://logstash.net/docs/1.2.2/tutorials/getting-started-centralized
[2] http://logstash.net/docs/1.2.2/tutorials/10-minute-walkthrough/