Results tagged “PostgreSQL”

Wouldn't it be nice to have a CGI script which would convert a bunch of SQL queries into an XLS file on the fly? And while we are at it, let's have multiple reports, each in its own directory?

In a sense, it's a simple REST API to SQL files on disk which produces Excel files. I first wrote something like this back in 2002, but until now I didn't have a Subversion repository for it, nor had I announced it to the world.

Each file in the current directory which ends in .sql will be converted to an Excel sheet. If you want a specific order, you can prefix the filenames with numbers, which will be stripped when creating sheet names.

Comments in SQL files (lines beginning with --) will be placed in the first row, in bold.

To specify the database on which an SQL query is executed, the \c database syntax is supported.
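For example, a hypothetical report file, say 01_customers.sql (the file, database, table and column names here are made up for illustration), could look like this:

```sql
-- Customers per country
\c dellstore2
SELECT country, count(*) AS customers
  FROM customers
 GROUP BY country
 ORDER BY customers DESC;
```

The leading comment becomes the bold first row of the sheet, \c dellstore2 selects the database, and the 01_ prefix only affects ordering since it is stripped from the sheet name.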

You can also run the script from the command line, and it will produce a sql_reports.xls file.

If run within a directory, it will use the files in it to produce the report.

When called as a CGI, a directory name can be appended to the name of the script to produce a report for any sub-directory of the directory where sql2xls.cgi is installed.


The only required file is the script itself, sql2xls.cgi.

If your server is configured to execute .cgi files, you can drop this script anywhere, but you can also add something like

   ScriptAlias /xls-reports /srv/SQL2XLS/sql2xls.cgi

in Apache's virtual host configuration to get nice URLs.

To configure the default database, user, password and other settings, create a file in the same directory as sql2xls.cgi with something like this:

  $dsn      = 'DBI:mysql:dbname=';
  $database = 'database';
  $user     = 'user';
  $passwd   = 'password';
  $path     = 'sql_reports.xls';

  $db_encoding     = 'utf-8';
  $xls_date_format = '';
  $debug = 1;


There is no security built in. Use Apache's auth modules if you need it.

Publish your data with Exhibit

As you might remember, back in 2007 I wrote about Exhibit, which has in the meantime released version 2.0 and moved to Google Code.

This time, I made a few more converters which enable you to:


This is probably the best test of JavaScript speed in your browser. Exhibit seems to work best with around 500 items in older browsers, but Firefox 3.1b2 works with 3000 objects, even on an EeePC :-)

Why does console refuse to die?
- It's because of pipes!
This post is a result of my long addiction to console applications. Somehow, when I want a quick view of things on my system, I always turn to pipes and do something with them. In the process, I have developed a few useful scripts for use within shell pipes, and I would like to introduce my readership to them.

PostgreSQL database size

When I want to see the size of all databases on my system, or the size of tables in one database, I turn to pg_size. It's a short and sweet script which does a little shell magic (take a look inside it) and displays the size of all databases on the system (without any options), or the size of all tables in a database along with the number of rows (when used as pg_size database_name), like this:
dpavlin@llin:~$ pg_size dellstore2 | grep -v sql_
4890624 customers 20000
3153920 orderlines 60350
2678784 cust_hist 60350
991232 products 10000
966656 orders 12000
450560 inventory 10000
8192 categories 16
0 reorder 0
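The shell magic inside pg_size presumably boils down to a catalog query along these lines (an assumption on my part: I'm only reconstructing the per-table listing here, and pg_total_relation_size needs PostgreSQL 8.1 or newer):

```sql
-- Approximate per-table listing: total size in bytes, name, estimated rows.
-- Note that reltuples is only an estimate, refreshed by VACUUM/ANALYZE.
SELECT pg_total_relation_size(c.oid) AS bytes,
       c.relname,
       c.reltuples::bigint AS rows
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
   AND n.nspname NOT IN ('pg_catalog', 'information_schema')
 ORDER BY bytes DESC;
```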
This is all nice and well, but it doesn't really give us the right overview, so move along for...

Nice console graphs

First, a caveat: this tool assumes that it will get a number, a space, and an optional description. The output above seems to fit this description, so let's try it:
dpavlin@llin:~$ COLUMNS=80 pg_size dellstore2 | grep -v sql_ | -h
customers 20000    4776k OOOOOOOOOOOOOOOO                                 4776k
orderlines 60350   3080k OOOOOOOOOO-----------------                      7856k
cust_hist 60350    2616k OOOOOOOOO---------------------------               10M
products 10000      968k OOO-------------------------------------           11M
orders 12000        944k OOO----------------------------------------        12M
inventory 10000     440k O-------------------------------------------       12M
categories 16      8192b ---------------------------------------------      12M
reorder 0              0 ---------------------------------------------      12M
This gives us nice output: the description (followed by the number of rows from the output above), and a running total of the size in human-readable form (if you don't like it, remove the -h flag and you will get raw numbers).
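The core of such a bar-graph filter can be sketched in a few lines of awk (this is my own minimal reconstruction, not the actual script, whose name is elided above; the function name bar is made up, and the real tool also draws dashes for the running total):

```shell
#!/bin/sh
# bar: read "number description" lines on stdin, draw a bar of O's
# proportional to the largest value seen.
bar() {
    awk '
        { n[NR] = $1 + 0            # numeric value is the first field
          sub(/^[^ ]+ */, "")       # the rest of the line is the label
          d[NR] = $0
          if (n[NR] > max) max = n[NR] }
        END {
            width = 40              # bar width of the largest value
            for (i = 1; i <= NR; i++) {
                len = (max > 0) ? int(n[i] / max * width) : 0
                b = ""
                for (j = 0; j < len; j++) b = b "O"
                printf "%-18s %8d %s\n", d[i], n[i], b
            }
        }'
}

printf '4776 customers 20000\n3080 orderlines 60350\n968 products 10000\n' | bar
```

Because it buffers all lines before printing, it can scale every bar against the maximum, which is exactly what makes the output above easy to eyeball.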

Let's take another example (if you are still reading this and are not interested in PostgreSQL database sizes). Let's see how much traffic pppd transferred over a very slow GPRS link during an 8-day vacation:

dpavlin@llin:~$ grep 'pppd.*: Sent' /var/log/messages | awk '{ print $7 + $10 " " $1 " " $2 }' | -h
May 5         0                                                               0
May 5       39k                                                             39k
May 5     7512k OO                                                        7551k
May 6     6352b --                                                        7558k
May 6       20k --                                                        7579k
May 6     1183k ---                                                       8762k
May 8     6869k OO---                                                       15M
May 8       70k -----                                                       15M
May 9     3596k O------                                                     18M
May 9     1998k -------                                                     20M
May 10      32M OOOOOOOOOOOO--------                                        53M
May 10      13k --------------------                                        53M
May 11      44M OOOOOOOOOOOOOOOOO--------------------                       98M
May 12      12M OOOO--------------------------------------                 111M
May 13    7120k OO-------------------------------------------              118M
May 13      20M OOOOOOO----------------------------------------------      139M
Much more interesting! A long time ago I had a bunch of quick one-liners which produced output from various other system counters, but somehow they got lost.

As I get only a few comments on my blog, if you find this useful, leave one. I have a few other examples, like the one which shows the top 5 memory eaters on my system:

dpavlin@llin:~$ ps v | awk '{ print $8 " " $9 " " $10 }' | sort -rn | ~/private/perl/ | head -5
10.6 /usr/lib/iceweasel/firefox-bin 165092 OOOOOOOOOOOOOO                165092
4.3 perl                             67240 OOOOOO---------------         232332
0.5 awesome                           8504 ---------------------         240836
0.4 irssi                             6632 ----------------------        247468
0.3 vi                                5888 ----------------------        253356
But if this is not interesting to my readership, tell me so, and I will stop spamming your already full RSS reader with console output! :-)

Today at Razmjena vještina I held a marathon four-hour lecture which, I hope, was at least somewhat useful. Unfortunately, we didn't manage to go into as much detail as I would have liked, but if nothing else I again used pgrestraier (which somehow indexes too slowly; I'll have to look into why) and another handy little project I wrote last year for the students in Zadar, pg-getfeed, which is really a small Perl stored procedure with which you can run SQL queries on RSS feeds.

I just finished a set of shell scripts which test failover and replication using Slony-1 and PostgreSQL.

It's somewhat Debian-specific (using pg_ctlcluster to control PostgreSQL instances), but other than that it should work on any platform. This is essentially a codification of the requirements for the HA testing I had to do, and I hope this automated demonstration will be valuable to others for testing their own specific requirements. Without this throwaway model, I wouldn't be able to test production database failover, because there are just too many steps which have to be done in a specific order.

So, my fascination with testing has expanded from development to deployment.