Saturday, December 27, 2014

A Bookmarklet to Tick All Checkboxes on a Webpage

Recently, I found myself on a website where I had to tick more than 100 checkboxes, and there was no "Select all" button on the page. Obviously, I wasn't going to do this manually, so I wrote the following bookmarklet.

javascript:(function(){for(var i of document.getElementsByTagName('input')){if(i.type=='checkbox') i.checked=!i.checked;}})()

Save this as a bookmark in your browser and click on it to toggle all checkboxes on the page you are on.
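For readability, here is the same logic in expanded form, with the toggle pulled out into a standalone function. (The name toggleCheckboxes is my own, not part of any API.)

```javascript
// Expanded form of the bookmarklet. The toggle logic lives in a plain
// function, so it can also be pasted into the browser's developer console.
function toggleCheckboxes(inputs) {
  for (var i = 0; i < inputs.length; i++) {
    if (inputs[i].type === 'checkbox') {
      inputs[i].checked = !inputs[i].checked;
    }
  }
}

// In a browser, call it like this:
// toggleCheckboxes(document.getElementsByTagName('input'));
```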

Sunday, November 30, 2014

Logging the Duration of Ext Ajax Requests

The following snippet shows how to log the duration of ajax requests in your Ext JS application:

Ext.Ajax.on('beforerequest', function(conn, options){
    console.log('Requesting: ' + options.url);
    console.time(options.url);
});
Ext.Ajax.on('requestcomplete', function(conn, response, options){
    console.timeEnd(options.url);
});

Note that this code uses console.time(), which is a non-standard feature and may not work in all browsers.
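If you need these timings in a browser that lacks console.time(), a rough fallback can be built on Date.now(). This is just a sketch; the names timeStart and timeEnd are my own, not a console or Ext JS API:

```javascript
// Minimal label-based timer, assuming only Date.now() is available.
// Each label maps to the timestamp at which timing started.
var timers = {};

function timeStart(label) {
  timers[label] = Date.now();
}

function timeEnd(label) {
  var elapsed = Date.now() - timers[label];
  delete timers[label];
  console.log(label + ': ' + elapsed + 'ms');
  return elapsed;
}
```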

Saturday, November 22, 2014

Ext JS - Caching AJAX Responses to HTML5 Web Storage for Better Performance

This post shows how you can improve performance of your Ext JS applications by using a "Caching AJAX Proxy". This proxy saves URL responses to HTML5 Web Storage (e.g. session storage or local storage), which means that when the same URL is requested multiple times, a cached response is returned, instead of sending a request to the server each time. This makes the application more responsive and also reduces load on the server handling the requests.

/**
 * A Caching Ajax Proxy which uses AJAX requests to get data from a server and
 * then stores the data to HTML5 Web Storage. If the storage fills up, it removes
 * entries from the cache until space is available.
 * (Compatible with Ext JS 4.2)
 */
Ext.define('App.data.proxy.CachingAjax', {
  extend: 'Ext.data.proxy.Ajax',
  alias: 'proxy.cachingajax',

  // use session storage, but can be configured to localStorage too
  storage: window.sessionStorage,

  // @Override
  doRequest: function(operation, callback, scope) {
    var cachedResponse = this.getItemFromCache(this.url);
    if (!cachedResponse) {
        this.callParent(arguments);
    }
    else {
        console.log('Got cached data for: ' + this.url);
        this.processResponse(true, operation, null, cachedResponse,
                             callback, scope, true);
    }
  },

  // @Override
  processResponse: function(success, operation, request, response,
                            callback, scope, isCached) {
    if (success === true && !isCached) {
        this.putItemInCache(this.url, response.responseText);
    }
    this.callParent(arguments);
  },

  /**
   * @private
   * Returns the data from the cache for the specified key
   * @param {String} the url
   * @return {String} the cached url response, or null if not in cache
   */
  getItemFromCache: function(key) {
    return this.storage ? this.storage.getItem(key) : null;
  },

  /**
   * @private
   * Puts an entry in the cache.
   * Removes a third of the entries if the cache is full.
   * @param {String} the url
   * @param {String} the data
   */
  putItemInCache: function(key, value) {
    if (!this.storage) return;
    try {
      this.storage.setItem(key, value);
    } catch (e) {
      // this might happen if the storage is full.
      // Remove a third of the items and retry.
      // If it fails again, disable the cache quietly.
      console.log('Error putting data in cache. CacheSize: ' + this.storage.length +
                  ', ErrorCode: ' + e.code + ', Message: ' + e.name);

      while (this.storage.length != 0) {
        var toRemove = this.storage.length / 3;
        for (var i = 0; i < toRemove ; i++) {
          var item = this.storage.key(0);
          if (item) this.storage.removeItem(item);
          else break;
        }
        console.log('Removed one-third of the cache. Cache size is now: ' + this.storage.length);
        try {
          this.storage.setItem(key, value);
          break;
        } catch (e) {
          console.log('Error putting data in cache again. CacheSize: ' + this.storage.length +
                      ', ErrorCode: ' + e.code + ', Message: ' + e.name);
        }
      }
      if (this.storage.length == 0) {
        console.log("Cache disabled");
        this.storage = null;
      }
    }
  }
});
Usage:

var store = Ext.create('Ext.data.Store', {
  model: 'User',
  proxy: {
    type: 'cachingajax',
    url : 'http://mywebsite/path'
  }
});

Obviously, you should only use this caching proxy when the server-side data is static, because if it is changing frequently your application will end up displaying stale, cached data.

This proxy can also be extended in the future to remove cached entries after specific time intervals or clear out the entire cache when the application starts up.
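As a sketch of the time-based expiry idea, each entry could be stored alongside a timestamp and discarded on read once it is too old. The helper names (putWithTimestamp, getIfFresh) and the savedAt field are purely illustrative and are not part of the proxy above:

```javascript
// Sketch of time-based cache expiry: each entry is stored as JSON
// together with the time it was written.
function putWithTimestamp(storage, key, value) {
  storage.setItem(key, JSON.stringify({savedAt: Date.now(), value: value}));
}

// Returns the cached value, or null if the entry is missing or older
// than maxAgeMs (in which case it is also removed from the storage).
function getIfFresh(storage, key, maxAgeMs) {
  var raw = storage.getItem(key);
  if (!raw) return null;
  var entry = JSON.parse(raw);
  if (Date.now() - entry.savedAt > maxAgeMs) {
    storage.removeItem(key); // expired, drop it
    return null;
  }
  return entry.value;
}
```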

Saturday, October 25, 2014

stackoverflow - 90k rep

Five months after crossing the 80k milestone, I have now reached a reputation of 90k on stackoverflow!

The following table shows some stats about my journey so far:

                     0-10k      10-20k    20-30k    30-40k     40-50k     50-60k     60-70k     70-80k     80-90k     Total
Date achieved        01/2011    05/2011   01/2012   09/2012    02/2013    07/2013    12/2013    05/2014    10/2014
Questions answered   546        376       253       139        192        145        66         58         32         1807
Questions asked      46         1         6         0          1          0          0          0          2          56
Tags covered         609        202       83        10         42         14         11         14         4          989
Badges (g, s, b)     35         14        33        59         49         65         60         50         50         418
                     (2,10,23)  (0,4,10)  (2,8,23)  (3,20,36)  (0,19,30)  (2,26,37)  (5,22,33)  (2,24,24)  (7,21,25)  (23,154,241)

I'm a bit disappointed that I only managed to answer 32 questions over the last 5 months. It's because work has been keeping me so busy! For me, stackoverflow has not simply been a quest for reputation, but more about learning new technologies and picking up advice from other people on the site. I like to take on challenging questions, rather than the easy ones, because it pushes me to do research into areas I have never looked at before, and I learn so much during the process.

Next stop, 100k!

Tuesday, September 30, 2014

Ext JS CSV Data Reader

In a previous post, I showed how you can convert CSV to JSON in JavaScript. In this post, I will show how you can display CSV data in an Ext JS grid panel using a custom Reader.

Currently, there are three kinds of Readers available in Ext JS 4.2.2: Ext.data.reader.Json, Ext.data.reader.Array and Ext.data.reader.Xml.

A CSV reader isn't available but is quite easy to create, simply by extending the JSON reader. The code for my CSV Reader is shown below. It first converts the CSV response from the server into JSON format and then invokes the parent JSON reader.

/**
 * The CSV Reader is used by a Proxy to read a server response
 * that is sent back in CSV format.
 */
Ext.define('CsvReader', {
    extend: 'Ext.data.reader.Json',
    alias : 'reader.csv',

    // converts csv into json
    toJson: function(csvData){
      var lines = csvData.split("\n");
      var colNames = lines[0].split(",");
      var records = [];
      for(var i = 1; i < lines.length; i++) {
        if (lines[i] == "") continue;
        var record = {};
        var bits = lines[i].split(",");
        for (var j = 0; j < bits.length; j++) {
          record[colNames[j]] = bits[j];
        }
        records.push(record);
      }
      return records;
    },

    // override
    getResponseData: function(response) {
        try {
            return this.readRecords(response.responseText);
        } catch (ex) {
            var error = new Ext.data.ResultSet({
                total  : 0,
                count  : 0,
                records: [],
                success: false,
                message: ex.message
            });
            this.fireEvent('exception', this, response, error);
            console.log(error);
            return error;
        }
    },

    // override
    readRecords: function(strData) {
        var result = this.toJson(strData);
        return this.callParent([result]);
    }
});

Now let's use it to display an example CSV file containing book data in a grid.

author,title,publishDate
Dan Simmons,Hyperion,1989
Douglas Adams,The Hitchhiker's Guide to the Galaxy,1979

Here is some sample code to read the CSV file into a grid using the CSV reader defined above:

Ext.define('Book', {
    extend: 'Ext.data.Model',
    fields: [
        {name: 'author', type: 'string'},
        {name: 'title', type: 'string'},
        {name: 'publishDate', type: 'string'}
    ]
});

var bookStore = Ext.create('Ext.data.Store', {
    model: 'Book',
    autoLoad: true,
    proxy: {
        type: 'ajax',
        reader: 'csv',
        url: 'data.csv'
    }
});

Ext.application({
    name: 'MyApp',
    launch: function() {
        Ext.create('Ext.Viewport', {
            layout: 'fit',
            items: [{
                xtype:'grid',
                title: 'Books',
                store: bookStore,
                columns: [
                   { text: 'Author', dataIndex: 'author' },
                   { text: 'Title', dataIndex: 'title', flex:1 },
                   { text: 'Publish Date', dataIndex: 'publishDate' }
                ],
                height: 200,
                width: 400
            }],
            renderTo: Ext.getBody()
        });
    }
});

Related posts:
Converting CSV to JSON in JavaScript

Sunday, August 31, 2014

My Git Aliases

A git alias gives you the ability to run a long or hard-to-remember git command using a simple name. Aliases are configured in your .gitconfig file.

One of my favourite aliases is git ls which lists all your commits in a nice format. In addition, git ll shows you what files were committed in each commit.

My git aliases are shown below. (For the latest version of my .gitconfig, visit my GitHub dotfiles repository):

[alias]
    st = status
    co = checkout
    br = branch
    df = diff
    ci = commit
    ca = commit -a --amend -C HEAD
    desc = describe
    rb = rebase -i master --autosquash
    cp = cherry-pick

    who = shortlog -s --
    ls = log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)[%an]%Creset' --abbrev-commit --date=relative
    ll = log --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)[%an]%Creset' --decorate --numstat
    ld = log --pretty=format:'%C(red)%h %Cgreen%ad%C(yellow)%d %Creset%s%C(bold blue) [%cn]%Creset' --decorate --date=short

    # list aliases
    la = "!git config -l | grep alias | cut -c 7-"

If you have any useful aliases, please share them in the comments section below.

You might also like:
My Bash Profile
My Bash Aliases

Saturday, August 30, 2014

Converting CSV to JSON in JavaScript

This post shows how you can convert a simple CSV file to JSON in JavaScript.

Consider the following sample CSV:

author,title,publishDate
Dan Simmons,Hyperion,1989
Douglas Adams,The Hitchhiker's Guide to the Galaxy,1979

The desired JSON output is:

[{"author":"Dan Simmons","title":"Hyperion","publishDate":"1989"},
{"author":"Douglas Adams","title":"The Hitchhiker's Guide to the Galaxy","publishDate":"1979"}]

The following JavaScript function transforms CSV into JSON. (Note that this implementation is quite naive and will not handle quoted fields containing commas!)

function toJson(csvData) {
  var lines = csvData.split("\n");
  var colNames = lines[0].split(",");
  var records=[];
  for(var i = 1; i < lines.length; i++) {
    if (lines[i] === "") continue; // skip blank lines (e.g. a trailing newline)
    var record = {};
    var bits = lines[i].split(",");
    for (var j = 0 ; j < bits.length ; j++) {
      record[colNames[j]] = bits[j];
    }
    records.push(record);
  }
  return records;
}

A simple test:

var csv = "author,title,publishDate\nDan Simmons,Hyperion,1989\nDouglas Adams,The Hitchhiker's Guide to the Galaxy,1979";
var json = toJson(csv);
console.log(JSON.stringify(json));
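If you do need to cope with quoted fields containing commas, the naive split can be replaced by a character-by-character splitter. The sketch below (splitCsvLine is a hypothetical helper of mine) also handles double-quote escaping (""), but it is still not a complete RFC 4180 parser:

```javascript
// Splits one CSV line into fields, honouring double-quoted fields that
// may contain commas, and "" as an escaped quote inside a quoted field.
function splitCsvLine(line) {
  var fields = [];
  var current = '';
  var inQuotes = false;
  for (var i = 0; i < line.length; i++) {
    var ch = line.charAt(i);
    if (inQuotes) {
      if (ch === '"') {
        if (line.charAt(i + 1) === '"') { current += '"'; i++; } // escaped quote
        else inQuotes = false;                                   // closing quote
      } else {
        current += ch;                                           // comma inside quotes is data
      }
    } else if (ch === '"') {
      inQuotes = true;
    } else if (ch === ',') {
      fields.push(current);
      current = '';
    } else {
      current += ch;
    }
  }
  fields.push(current);
  return fields;
}
```

For example, splitCsvLine('"Smith, John",25') returns ["Smith, John", "25"], whereas a plain split on "," would produce three fields.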

To read a CSV file in JavaScript and convert it to JSON:

var rawFile = new XMLHttpRequest();
rawFile.open("GET", "books.csv", true);
rawFile.onreadystatechange = function () {
  if (rawFile.readyState === 4) {
    if (rawFile.status === 200 || rawFile.status == 0) {
      var allText = rawFile.responseText;
      var result = toJson(allText);
      console.log(JSON.stringify(result));
    }
  }
}
rawFile.send(null);

You might also like:
Converting XML to CSV using XSLT 1.0

Saturday, July 19, 2014

Converting XML to CSV using XSLT 1.0

This post shows how you can convert a simple XML file to CSV using XSLT.

Consider the following sample XML:

<library>
  <book>
    <author>Dan Simmons</author>
    <title>Hyperion</title>
    <publishDate>1989</publishDate>
  </book>
  <book>
    <author>Douglas Adams</author>
    <title>The Hitchhiker's Guide to the Galaxy</title>
    <publishDate>1979</publishDate>
  </book>
</library>

This is the desired CSV output:

author,title,publishDate
Dan Simmons,Hyperion,1989
Douglas Adams,The Hitchhiker's Guide to the Galaxy,1979

The following XSL stylesheet (compatible with XSLT 1.0) can be used to transform the XML into CSV. It is quite generic and can easily be configured to handle different XML elements by changing the list of fields defined at the beginning.

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text" />

  <xsl:variable name="delimiter" select="','" />

  <!-- define an array containing the fields we are interested in -->
  <xsl:variable name="fieldArray">
    <field>author</field>
    <field>title</field>
    <field>publishDate</field>
  </xsl:variable>
  <xsl:param name="fields" select="document('')/*/xsl:variable[@name='fieldArray']/*" />

  <xsl:template match="/">

    <!-- output the header row -->
    <xsl:for-each select="$fields">
      <xsl:if test="position() != 1">
        <xsl:value-of select="$delimiter"/>
      </xsl:if>
      <xsl:value-of select="." />
    </xsl:for-each>

    <!-- output newline -->
    <xsl:text>&#xa;</xsl:text>

    <xsl:apply-templates select="library/book"/>
  </xsl:template>

  <xsl:template match="book">
    <xsl:variable name="currNode" select="." />

    <!-- output the data row -->
    <!-- loop over the field names and find the value of each one in the xml -->
    <xsl:for-each select="$fields">
      <xsl:if test="position() != 1">
        <xsl:value-of select="$delimiter"/>
      </xsl:if>
      <xsl:value-of select="$currNode/*[name() = current()]" />
    </xsl:for-each>

    <!-- output newline -->
    <xsl:text>&#xa;</xsl:text>
  </xsl:template>
</xsl:stylesheet>

Let's try it out:

$ xsltproc xml2csv.xsl books.xml
author,title,publishDate
Dan Simmons,Hyperion,1989
Douglas Adams,The Hitchhiker's Guide to the Galaxy,1979

Saturday, June 28, 2014

Parsing an Excel File into JavaBeans using jXLS

This post shows how you can use jXLS to parse an Excel file into a list of JavaBeans.

Here is a generic utility method I wrote to do that:

/**
* Parses an excel file into a list of beans.
*
* @param <T> the type of the bean
* @param xlsFile the excel data file to parse
* @param jxlsConfigFile the jxls config file describing how to map rows to beans
* @return the list of beans, or an empty list if there are none
* @throws Exception if there is a problem parsing the file
*/
public static <T> List<T> parseExcelFileToBeans(final File xlsFile,
                                                final File jxlsConfigFile)
                                                throws Exception {
  final XLSReader xlsReader = ReaderBuilder.buildFromXML(jxlsConfigFile);
  final List<T> result = new ArrayList<>();
  final Map<String, Object> beans = new HashMap<>();
  beans.put("result", result);
  try (InputStream inputStream = new BufferedInputStream(new FileInputStream(xlsFile))) {
    xlsReader.read(inputStream, beans);
  }
  return result;
}

Example:
Consider the following Excel file containing person information:

FirstName  LastName  Age
Joe        Bloggs    25
John       Doe       30

Create the following Person bean to bind each Excel row to:

package model;

public class Person {

  private String firstName;
  private String lastName;
  private int age;

  public Person() {
  }
  public String getFirstName() {
    return firstName;
  }
  public void setFirstName(String firstName) {
    this.firstName = firstName;
  }
  public String getLastName() {
    return lastName;
  }
  public void setLastName(String lastName) {
    this.lastName = lastName;
  }
  public int getAge() {
    return age;
  }
  public void setAge(int age) {
    this.age = age;
  }
}

Create a jXLS configuration file which tells jXLS how to process your Excel file and map rows to Person objects:

<workbook>
  <worksheet name="Sheet1">
    <section startRow="0" endRow="0" />
    <loop startRow="1" endRow="1" items="result" var="person" varType="model.Person">
      <section startRow="1" endRow="1">
        <mapping row="1" col="0">person.firstName</mapping>
        <mapping row="1" col="1">person.lastName</mapping>
        <mapping row="1" col="2">person.age</mapping>
      </section>
      <loopbreakcondition>
        <rowcheck offset="0">
          <cellcheck offset="0" />
        </rowcheck>
      </loopbreakcondition>
    </loop>
  </worksheet>
</workbook>

Now you can parse the Excel file into a list of Person objects with this one-liner:

List<Person> persons = Utils.parseExcelFileToBeans(new File("/path/to/personData.xls"),
                                                   new File("/path/to/personConfig.xml"));

Related posts:
Parsing a CSV file into JavaBeans using OpenCSV

Sunday, May 25, 2014

Code Formatting Shortcut in Outlook 2010

I often have to write code snippets in my emails and find it such a hassle having to move my mouse to the font selection drop-down to change my font to Consolas every time. I thought that it would be so much easier if I had a keyboard shortcut to switch the font for me (a bit like stackoverflow). After searching around, I found that this can be achieved in Microsoft Outlook by creating a macro that changes the font of your selected text and then assigning a shortcut for that macro.

Creating the macro:
  1. In Outlook, press Alt+F8 to open the Macros dialog
  2. Enter a name for the macro e.g. SetCodeFont and press the Create button
  3. Paste the following macro code into the Visual Basic Editor that opens:
    Sub SetCodeFont()
      Dim objItem As Object
      Dim objInsp As Outlook.Inspector
    
      Dim objWord As Word.Application
      Dim objDoc As Word.Document
      Dim objSel As Word.Selection
      On Error Resume Next
    
      Set objItem = Application.ActiveInspector.CurrentItem
      If Not objItem Is Nothing Then
        If objItem.Class = olMail Then
          Set objInsp = objItem.GetInspector
          If objInsp.EditorType = olEditorWord Then
            Set objDoc = objInsp.WordEditor
            Set objWord = objDoc.Application
            Set objSel = objWord.Selection
            objSel.Font.Name = "Consolas"
          End If
        End If
      End If
    
      Set objItem = Nothing
      Set objWord = Nothing
      Set objSel = Nothing
      Set objInsp = Nothing
    End Sub
    
  4. On the menu bar of the Visual Basic Editor, click on Tools > References... and tick Microsoft Word 14.0 Object Library
  5. Save (Ctrl+S) and close the Visual Basic Editor
Assigning a keyboard shortcut for the macro:
  1. Open a new mail message
  2. Click on the small drop-down arrow on the Quick Access Toolbar (usually located at the very top of the message window) and select More Commands...
  3. In the Outlook Options dialog that opens, click on the Choose commands from: drop-down list and select Macros
  4. Pick Project1.SetCodeFont and press Add >> to add it to the toolbar
  5. Press OK and you should now see the SetCodeFont macro button appear on the Quick Access Toolbar
Running the macro:

You can run the macro by using Alt+NUM, where NUM is the position of the macro button on the toolbar. For example, if the macro button is the first button on the toolbar, use Alt+1 to run it. Try it out by typing some text in your email message, selecting it and pressing Alt+1 to change the font to Consolas. You can use Ctrl+Space to switch back to the default font.

Note: You may get a security popup when you run the macro, asking you to Allow or Deny access. You can change your security settings by going into File > Options > Trust Center > Trust Center Settings... > Macro Settings, but this will require Administrator privileges (I haven't tried this).

Reference:

Use Word Macro to Apply Formatting to Outlook Email by Diane Poremsky [www.slipstick.com]

Saturday, May 24, 2014

glibc detected - double free or corruption (fasttop)

I've been bashing my head against the wall trying to solve this error for days now!

*** glibc detected *** double free or corruption (fasttop): 0x0000002aa63dca00 ***

I've finally found the answer: setting the MALLOC_CHECK_ environment variable to 0 in my startup script resolved the issue.

export MALLOC_CHECK_=0

According to the GNU C Library Reference Manual, MALLOC_CHECK_ is used to check for and guard against bugs in the use of malloc, realloc and free. If MALLOC_CHECK_ is set to 0, any detected heap corruption is silently ignored; if set to 1, a diagnostic is printed on stderr; if set to 2, abort is called immediately.

Saturday, May 10, 2014

stackoverflow - 80k rep

Five months after crossing the 70k milestone, I have now reached a reputation of 80k on stackoverflow!

The following table shows some stats about my journey so far:

                     0-10k      10-20k    20-30k    30-40k     40-50k     50-60k     60-70k     70-80k     Total
Date achieved        01/2011    05/2011   01/2012   09/2012    02/2013    07/2013    12/2013    05/2014
Questions answered   546        376       253       139        192        145        66         58         1775
Questions asked      46         1         6         0          1          0          0          0          54
Tags covered         609        202       83        10         42         14         11         14         985
Badges (g, s, b)     35         14        33        59         49         65         60         50         365
                     (2,10,23)  (0,4,10)  (2,8,23)  (3,20,36)  (0,19,30)  (2,26,37)  (5,22,33)  (2,24,24)  (16,133,216)

I have been very busy over the last few months and haven't had much time to go on stackoverflow. As you can see, I only answered 58 questions over the last 5 months, but my previous answers have helped keep my reputation ticking along nicely. For me, stackoverflow has not simply been a quest for reputation, but more about learning new technologies and picking up advice from other people on the site. I like to take on challenging questions, rather than the easy ones, because it pushes me to do research into areas I have never looked at before, and I learn so much during the process.

90k, here I come!

Monday, May 05, 2014

Coursera class: Programming Mobile Applications for Android Handheld Systems

A few weeks ago, I completed the "Programming Mobile Applications for Android Handheld Systems" class led by Adam Porter. This Coursera class started in January 2014 and was around 8 weeks long. It was a great class in which we learnt about the Android Platform and how to build Android applications. The course covered Android Activities, Intents, Fragments, User Notifications, Services, ContentProviders, BroadcastReceivers, Location & Maps, Alarms and much more! There were many fun assignments along the way too. Whenever I'm using apps on my phone, I now have a better idea of how they are built and what they are doing behind the scenes.

Now, I just need to find the time to build some apps!

Update (30 Aug 2014): I have committed my assignment solutions to my GitHub repository.

Related posts:
Coursera class: Principles of Reactive Programming
Coursera class: Functional Programming Principles in Scala
Stanford's Online Courses: ml-class, ai-class and db-class

Saturday, March 22, 2014

Bash Redirection and Piping Shortcuts

Redirecting both stdout and stderr

In order to redirect both standard output and standard error to a file, you would traditionally do this:

my_command > file 2>&1

A shorter way to write the same thing is by using &> (or >&) as shown below:

my_command &> file

Similarly, to append both standard output and standard error to a file, use &>>:

my_command &>> file

Piping both stdout and stderr

To pipe both standard output and standard error, you would traditionally do this:

my_command 2>&1 | another_command

A shorter way is to use |& as shown below:

my_command |& another_command

Other posts you might like:
Shell Scripting - Best Practices
All posts with label: bash

Saturday, March 08, 2014

Use sqsh, not isql!

sqsh is a SQL shell and a far superior alternative to the isql program supplied by Sybase. Its main advantage is that it allows you to combine SQL and Unix shell commands! Here are a few reasons why I love it:

1. Pipe data to other programs
You can create a pipeline to pass SQL results to an external (or unix) program like less, grep, head etc. Here are a few examples:

# pipe to less to browse data
1> select * from data; | less

# a more complex pipeline which gzips data containing a specific word
2> select * from data; | grep -i foo | gzip -c > /tmp/foo.gz

# this example shows the use of command substitution
3> sp_who; | grep `hostname`

2. Redirect output to file
Just like in a standard unix shell, you can redirect output of a sql command to file:

# write the output to file
1> sp_helptext my_proc; > /tmp/my_proc.txt

3. Functions and aliases
You can define aliases and functions in your ~/.sqshrc file for code that you run frequently. Some of mine are shown below. (Visit my GitHub dotfiles repository to see my full .sqshrc.)

\alias h='\history'

# shortcut for select * from
\func -x sf
    \if [ $# -eq 0 ]
        \echo 'usage: sf "[table [where ...]]"'
        \return 1
    \fi
    select * from $*; | less -F
\done

# count rows in a table
\func -x count
    \if [ $# -eq 0 ]
        \echo 'usage: count "[table [where ...]]"'
        \return 1
    \fi
    select count(*) from $*;
\done

You can invoke them like this:

# select * from the data table
1> sf "data where date='20140306'"

# count the rows in the employee table
2> count employee

# list aliases
3> \alias

4. History and reverse search

You can rerun a previous command by using the \history command or by invoking reverse search with Ctrl+r:

1> \history
(1) sp_who
(2) select count(*) from data
(3) select top 10 * from data

# invoke the second command from history
2> !2

# invoke the previous command
3> !!

# reverse search
4> <Ctrl+r>
(reverse-i-search)`sp': sp_who
4> sp_who

5. Customisable prompt
The default prompt is ${lineno}> , but it can be customised to include your username and database, and it even supports colours. It would be nice if there was a way to change the colour based on which database you were connected to (for example, red for a production database), but I haven't been able to figure out if this is possible yet. Here is my prompt, set in my ~/.sqshrc:

\set prompt_color='{1;33}' # yellow
\set text_color='{0;37}'   # white
\set prompt='${prompt_color}[$histnum][$username@$DSQUERY.$database] $lineno >$text_color '

6. Different result display styles
sqsh supports a number of different output styles which you can easily switch to. The ones I frequently use are csv, html and vert (vertical). Here is an example:

1> select * from employee; -m csv
123,"Joe","Bloggs"

2> select * from employee; -m vert
id:        123
firstName: Joe
lastName:  Bloggs

7. For-loops
A for-loop allows you to iterate over a range of values and execute some code. For example, if you want to delete data, in batches, over a range of dates, you can use a for-loop like this:

\for i in 1 2 3 4 5
    \loop -e "delete from data where date = '2014020$i';"
    \echo "Deleted 2014020$i"
\done

8. Backgrounding long-running commands
If you have a long-running command, you can run it in the background by putting an & at the end of the command. You can then continue running other commands, whilst this one runs in the background. You will see a message when the background command completes and you can use \show to see the results. Here is an example:

# run a command in the background
1> select * from data; &
Job #1 running [6266]

Job #1 complete (output pending)

# show the results of the backgrounded command
3> \show 1

Further information:
You can download sqsh from its project page and then read the man page for more information.
You can take a look at my .sqshrc in my GitHub dotfiles repository.

Sunday, February 23, 2014

Using "lockfile" to Prevent Multiple Instances of a Script from Running

This post describes how you can ensure that only one instance of a script is running at a time, which is useful if your script:

  • uses significant CPU or IO and running multiple instances at the same time would risk overloading the system, or
  • writes to a file or other shared resource and running multiple instances at the same time would risk corrupting the resource

In order to prevent multiple instances of a script from running, your script must first acquire a "lock" and hold on to that lock until the script completes. If the script cannot acquire the lock, it must wait until the lock becomes available. So, how do you acquire a lock? There are different ways, but the simplest is to use the lockfile command to create a "semaphore file". This is shown in the snippet below:

#!/bin/bash
set -e

# waits until a lock is acquired and
# deletes the lock on exit.
# prevents multiple instances of the script from running
acquire_lock() {
    lock_file=/var/tmp/foo.lock
    echo "Acquiring lock ${lock_file}..."
    lockfile "${lock_file}"
    trap "rm -f ${lock_file} && echo Released lock ${lock_file}" INT TERM EXIT
    echo "Acquired lock"
}

acquire_lock
# do stuff

The acquire_lock function first invokes the lockfile command in order to create a file. If lockfile cannot create the file, it will keep trying forever until it does. You can use the -r option if you only want to retry a certain number of times. Once the file has been created, we need to ensure that it is deleted once the script completes or is terminated. This is done using the trap command, which deletes the file when the script completes or when the shell receives an interrupt or terminate signal. I also like to use set -e in all my scripts, which makes the script exit if any command fails. In this case, if lockfile fails, the script will exit and the trap will not be set.

lockfile can be used in other ways as well. For example, instead of preventing multiple instances of the entire script from running, you may want to use a more granular approach and use locks only around those parts of your script which are not safe to run concurrently.

Note that if you cannot use lockfile, there are other alternatives, such as using mkdir or flock, as described in BashFAQ/045.

Other posts you might like:
Shell Scripting - Best Practices
Retrying Commands in Shell Scripts
Executing a Shell Command with a Timeout

Saturday, February 08, 2014

Retrying Commands in Shell Scripts

There are many cases in which you may wish to retry a failed command a certain number of times. Examples are database failures, network communication failures or file IO problems.

The snippet below shows a simple method of retrying commands in bash:

#!/bin/bash

MAX_ATTEMPTS=5
attempt_num=1
until command || (( attempt_num == MAX_ATTEMPTS ))
do
    echo "Attempt $attempt_num failed! Trying again in $attempt_num seconds..."
    sleep $(( attempt_num++ ))
done

In this example, the command is attempted a maximum of five times and the interval between attempts is increased incrementally whenever the command fails. The time between the first and second attempt is 1 second, that between the second and third is 2 seconds and so on. If you want, you can change this to a constant interval or random exponential backoff instead.

I have created a useful retry function (shown below) which allows me to retry commands from different places in my script without duplicating the retry logic. This function returns a non-zero exit code when all attempts have been exhausted.

#!/bin/bash

# Retries a command on failure.
# $1 - the max number of attempts
# $2... - the command to run
retry() {
    local -r -i max_attempts="$1"; shift
    local -r cmd="$@"
    local -i attempt_num=1

    until $cmd
    do
        if (( attempt_num == max_attempts ))
        then
            echo "Attempt $attempt_num failed and there are no more attempts left!"
            return 1
        else
            echo "Attempt $attempt_num failed! Trying again in $attempt_num seconds..."
            sleep $(( attempt_num++ ))
        fi
    done
}

# example usage:
retry 5 ls -ltr foo

Related Posts:
Executing a Shell Command with a Timeout
Retrying Operations in Java

Saturday, January 25, 2014

Coursera class: Principles of Reactive Programming

A few weeks ago, I completed the "Principles of Reactive Programming" class led by Martin Odersky, Erik Meijer and Roland Kuhn. This Coursera class started in November 2013 and was around 7 weeks long. It was a great class in which we learnt how to write reactive programs in Scala. The course mainly covered Futures, Promises, Observables, Rx streams and Akka Actors. It was quite challenging but the assignments were very enjoyable. We wrote a virus simulation and a wikipedia suggestions app!

Update (30 Aug 2014): I have committed my assignment solutions to my GitHub repository.

Related posts:
Coursera class: Functional Programming Principles in Scala
Stanford's Online Courses: ml-class, ai-class and db-class

Wednesday, January 01, 2014

fahd.blog in 2013

Happy 2014, everyone!
I'd like to wish everyone a great start to an even greater new year!

In keeping with tradition, here's one last look back at fahd.blog in 2013.

During 2013, I posted 22 new entries on fahd.blog. I am also thrilled that I have more readers from all over the world! Thanks for reading and especially for giving feedback.

Top 5 posts of 2013:

I'm going to be writing a lot more this year, so stay tuned for more great techie tips, tricks and hacks! :)

Related posts: