Gkoenig's Blog

Content is almost exclusively related to IT topics (Linux, databases, …)

pgTAP ~ Unit Tests for PostgreSQL ~ Part II ~

As promised in my previous post regarding pgTAP (aka Part I), I'll continue this topic by checking some data in the database. Setup and basic handling of pgTAP were described in Part I as well, so here I'll concentrate on the test functions, which are used in the following script frame (just replace "##put_your_tests_here##" with the test scenarios described later on):

-- Format the output for nice TAP.
\pset format unaligned
\pset tuples_only true
\pset pager
-- Revert all changes on failure.
\set ON_ERROR_ROLLBACK 1
\set ON_ERROR_STOP true
\set QUIET 1
BEGIN;
\i ./pgtap.sql
-- create two tables with referential constraint
create table tab1 (id integer not null, a_text varchar(200), dt timestamp default now(), CONSTRAINT tab1_pkey PRIMARY KEY (id));
create table tab2 (id integer not null, a_text varchar(200), id_ref integer, CONSTRAINT id_ref FOREIGN KEY (id_ref) REFERENCES tab1 (id));
-- insert some test data
insert into tab1 (id,a_text) values (1,'test entry one');
insert into tab1 (id,a_text) values (2,'test entry two');
insert into tab1 (id,a_text) values (3,'test entry three');
insert into tab2 (id,a_text,id_ref) values (1,'ref test entry one',1);
insert into tab2 (id,a_text,id_ref) values (2,'ref test entry one',2);
insert into tab2 (id,a_text,id_ref) values (3,'ref test entry one',3);
SELECT plan(10); -- declare that exactly ten tests will run
##put_your_tests_here##
-- get the results
SELECT * FROM finish();
ROLLBACK;

What do we want to test?

  • Are the inserted values really there ?
    If you just want to test plain data, the preferable way is to compare prepared statements against your expected result (or array of results).
    A very convenient function is results_eq(), easiest shown with an example:

    -- check inserted data
    -- ...with two prepared statements; a more flexible variant using cursors follows later
    PREPARE ids_fetched AS select id from tab1 where id in (1,2,3) order by id asc;
    PREPARE ids_expected AS VALUES (1),(2),(3);
    SELECT results_eq( 'ids_fetched', 'ids_expected', 'fetched the expected ids from tab1');
    PREPARE ids_fetched1 AS select id from tab1 where id in (1,2,3) order by id asc;
    PREPARE ids_fetched2 AS select id from tab2 where id in (1,2,3) order by id asc;
    SELECT results_eq( 'ids_fetched1', 'ids_fetched2');

    You don’t have to put your expected values in a prepared statement (like in the 1st example); it is also possible to pass these values as function parameters to set_eq():

    -- ...with a mixture of prepared statements and predefined values
    PREPARE text_fetched AS select a_text::text from tab1 where id in (1,2) order by id asc;
    SELECT set_eq( 'text_fetched', ARRAY[ 'test entry one','test entry two' ], 'fetched the expected text from tab1');

  • Does a function return the expected result ?
    In addition to testing the availability of expected values, you can check the return values of your own functions. Let's assume we created a function which returns all the ids from tab1 in ascending order; now let's test whether it works as expected. There are two alternatives for comparing the results record set by record set: a simple function/statement comparison, or cursors. I'll show you both:


    -- create a (simple) function
    create or replace function get_ids_tab1() returns setof integer as 'select id from tab1 order by id asc' language 'sql';
    -- check output of function
    -- ...with simple calls
    SELECT results_eq('select * from get_ids_tab1()', 'select id from tab1 order by id asc','get_ids_tab1() should return ids from relation tab1 in ascending order');
    -- ...with cursors
    DECLARE ids_function CURSOR FOR SELECT * FROM get_ids_tab1();
    DECLARE ids_relation CURSOR FOR SELECT id FROM tab1 order by id asc;
    SELECT results_eq('ids_function'::refcursor,'ids_relation'::refcursor,'Gotcha ! function returned the expected values.');

  • Raising errors intentionally and catching them

    -- simulate some errors
    PREPARE throw_error AS insert into tab1 (id,a_text) values (1,'this insert throws a duplicate key error');
    SELECT throws_ok('throw_error','23505',NULL,'Insert failed because of duplicate key violation (id)');

    But how does one know which error number Postgres will throw?
    throws_ok() will tell you🙂 . Have a look at the next example. It tries to insert a row into tab2 with a reference to id “4” in tab1, which obviously doesn’t exist, so we expect an error. Since we don’t know the error number, I put “-1” as the error number, and the function call will fail with a verbose message saying what was thrown and what we expected:


    PREPARE ref_fail AS insert into tab2 (id, a_text, id_ref) values (4,'ref test entry four',4);
    SELECT throws_ok('ref_fail','-1',NULL,'Insert failed because of (ref_id)');

    Running the test script will produce the following output (in my case this was the 6th test executed):


    # Failed test 6: "Insert failed because of (ref_id)"
    # caught: 23503: insert or update on table "tab2" violates foreign key constraint "id_ref"
    # wanted: -1

    Aaaahhhh….the violated foreign key constraint has the error code 23503, so we can replace the “-1” with “23503” and the test will succeed.
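
    Corrected, the call (reusing the prepared statement from above) now passes:

    SELECT throws_ok('ref_fail','23503',NULL,'Insert failed because of (ref_id)');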

These are just some of the available test functions pgTAP provides. A complete overview is in the documentation at http://pgtap.org/documentation.html, and it is worth a look.

Hopefully you got a first, short overview of what pgTAP is for. Please let me know if something is missing or wrong.

pgTAP ~ Unit Tests for PostgreSQL

I’ll show you a short introduction to pgTAP, a testing framework for Postgres, including basic theory and short examples. I’ll only show the “simple test script” way, not the xUnit way of testing. My test DB is PostgreSQL version 9.0.2.

The framework around your tests consists of putting your tests into one transaction and rolling it back at the end. Additionally, statements for declaring the plan and showing the results are mandatory:
BEGIN;
SELECT plan(n); -- n = the number of tests you plan to run
…your tests are here…
SELECT * FROM finish();
ROLLBACK;

Download the source package from the http://pgtap.org download section, or go directly to http://pgfoundry.org/frs/?group_id=1000389
Install it the common Linux way from the .gz (or just run pgtap.sql from the extracted sourcedir/sql in your target database):
tar -xzf pgtap-.gz
cd pgtap-
make
make install
make installcheck

This will work if the binary pg_config is in your PATH and you are a DB superuser. Most likely you’ll need additional settings like:
env PG_CONFIG=/path/to/pg_config
make installcheck PGUSER=postgres

Detailed instructions are here: http://pgtap.org/documentation.html#Installation; you’ll probably need some additional Perl modules.

Now let’s test some stuff….
If you need the pgTAP functions in a different database, just run the SQL script called pgtap.sql (you’ll find it in the subdirectory sql under the sourcedir pgtap-) or include the call to pgtap.sql in your script with \i /pgtap.sql

HelloWorld:
-- Format the output for nice TAP.
\pset format unaligned
\pset tuples_only true
\pset pager
-- Revert all changes on failure.
\set ON_ERROR_ROLLBACK 1
\set ON_ERROR_STOP true
\set QUIET 1
BEGIN;
SELECT plan(1);
SELECT pass( 'Hello World !' );
SELECT * FROM finish();
ROLLBACK;

Save this as helloworld.txt and call it: psql -U postgres -f helloworld.txt

The snipped output will look like:
1..1
ok 1 - Hello World !

What does this mean?
1..1 => you ran tests from #1 to #1 (exactly one 😉 )
ok 1 - Hello World ! => your test returned “ok”, it was test number “1”, and the string “Hello World !” is the description (== output) of the pass() function.

Yes, this test doesn’t make any sense at all, but it shows the basic format.
Now you’re free to check any detail of your database structure. Let’s start by checking some tables and columns. Create a table first with:
CREATE TABLE tab_test
(
datetime timestamp without time zone NOT NULL DEFAULT now(),
"number" integer NOT NULL,
price real NOT NULL DEFAULT 0.000000000000000,
currency_id character varying(3) DEFAULT 'eur'::character varying
);
CREATE INDEX idx_number ON tab_test USING btree (number);
CREATE INDEX idx_price ON tab_test USING btree (price);

tables_and_columns.txt:
-- Format the output for nice TAP.
\pset format unaligned
\pset tuples_only true
\pset pager
-- Revert all changes on failure.
\set ON_ERROR_ROLLBACK 1
\set ON_ERROR_STOP true
\set QUIET 1
BEGIN;
SELECT plan(3);
SELECT has_table( 'tab_test', 'table tab_test exists' );
SELECT has_column( 'tab_test', 'number' );
SELECT col_type_is( 'tab_test', 'number', 'integer' );
SELECT * FROM finish();
ROLLBACK;

Run the tests: psql -U postgres -f tables_and_columns.txt
The output will be:
1..3
ok 1 - table tab_test exists
ok 2 - Column tab_test.number should exist
ok 3 - Column tab_test.number should be type integer

The call of has_table got the additional parameter “description”, which is printed instead of the default messages you see for tests no. 2 and 3. It’s also possible to pass a schema name as the first parameter to this function call, if your table is not in the default search schema.
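
A quick sketch of that schema-qualified variant (app_schema is just a made-up schema name for illustration):

SELECT has_table( 'app_schema', 'tab_test', 'table app_schema.tab_test exists' );
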
To receive a “not ok” result, just modify e.g. the call of has_column to SELECT has_column('tab_test','id');
Now the output tells you that something failed:
1..3
ok 1 - table tab_test exists
not ok 2 - Column tab_test.id should exist
# Failed test 2: "Column tab_test.id should exist"
ok 3 - Column tab_test.number should be type integer
# Looks like you failed 1 test of 3

extend tables_and_columns.txt by:
...
SELECT plan(5);
...
SELECT has_index('tab_test','idx_number', 'Index idx_number on tab_test');
SELECT has_index('tab_test','idx_price','currency_id', 'Index idx_price on tab_test');
...

Now you get a helpful hint in the output, which tells you that the table has an index called idx_price, but it is on a different column:
1..5
ok 1 - table tab_test exists
not ok 2 - Column tab_test.id should exist
# Failed test 2: "Column tab_test.id should exist"
ok 3 - Column tab_test.number should be type integer
ok 4 - Index idx_number on tab_test
not ok 5 - Index idx_price on tab_test
# Failed test 5: "Index idx_price on tab_test"
# have: idx_price ON tab_test(price)
# want: idx_price ON tab_test(currency_id)
# Looks like you failed 2 tests of 5

There are pgTAP functions for almost everything in your Postgres DB, like triggers, functions, schemas, tablespaces, …. just check the documentation.
It is also possible to make tests conditional, i.e. to call a test function only if certain conditions are met.
Again, an example will illustrate this best:
The following code fragment checks whether the table tab_test in schema1 has a primary key. If yes, the next test checks whether the primary key consists of the two columns an_id and another_id; if no primary key is found, fail() is called. You can call several test functions within collect_tap(…) to build something like a subfunction:
...
SELECT CASE WHEN has_pk('schema1','tab_test',NULL) ~* '^ok'
    THEN collect_tap(
        col_is_pk('schema1','tab_test', ARRAY['an_id','another_id'],
                  'table "schema1.tab_test" should have a combined primary key over two columns'),
        pass('passed')
    )
    ELSE fail('did not find any primary key in table "schema1.tab_test".')
END;
...

Now you should have a basic understanding of how to create a test scenario for testing a DB structure. There are a lot more functions available than described in this post, so feel free to play with them.
In my next post I’ll show how to check data….

any hints are highly welcome

Postgres Partitioning

Sometimes it is very useful to store your data not in one big table but in several so-called partitions. A partition is just an inherited table storing the data that fits a given rule. That way you avoid scanning a huge table if you are just looking for a small part of the data, and you can move older data to slower/cheaper storage (by placing the partitions in tablespaces on that storage).
You don’t have to adapt your code; the CRUD actions are offered to the application in the same way as for one big table. The whole logic is in PostgreSQL itself: a trigger function puts the data into the desired partition, and the parameter constraint_exclusion=on tells the planner to scan only the affected partitions.
A common requirement is to split data on a given timestamp, e.g. to create one partition per month. In such a scenario you can dive into historical data with no performance impact on the current data. Additionally, you can delete old data that is no longer needed simply by dropping its partition.

It will be best to explain the stuff by an example…..

create a test db

psql -U postgres postgres
postgres=> CREATE DATABASE pgtestdb encoding='UTF-8';
postgres=>\q
psql -U postgres pgtestdb
pgtestdb=> CREATE SCHEMA part_test;

create master table

CREATE TABLE tab_test
(
datetime timestamp without time zone NOT NULL DEFAULT now(),
number integer NOT NULL,
price real NOT NULL DEFAULT 0.000000000000000,
currency_id character varying(3) DEFAULT 'eur'::character varying
);

create four partition tables (inherited from master table)

create table tab_test201011 ( CHECK (datetime >= DATE '2010-11-01' AND datetime < DATE '2010-12-01') ) INHERITS (tab_test);
create table tab_test201012 ( CHECK (datetime >= DATE '2010-12-01' AND datetime < DATE '2011-01-01') ) INHERITS (tab_test);
create table tab_test201101 ( CHECK (datetime >= DATE '2011-01-01' AND datetime < DATE '2011-02-01') ) INHERITS (tab_test);
create table tab_test201102 ( CHECK (datetime >= DATE '2011-02-01' AND datetime < DATE '2011-03-01') ) INHERITS (tab_test);

create trigger function

CREATE OR REPLACE FUNCTION test_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    IF ( NEW.datetime >= DATE '2010-11-01' AND NEW.datetime < DATE '2010-12-01' ) THEN
        INSERT INTO tab_test201011 VALUES (NEW.*);
    ELSIF ( NEW.datetime >= DATE '2010-12-01' AND NEW.datetime < DATE '2011-01-01' ) THEN
        INSERT INTO tab_test201012 VALUES (NEW.*);
    ELSIF ( NEW.datetime >= DATE '2011-01-01' AND NEW.datetime < DATE '2011-02-01' ) THEN
        INSERT INTO tab_test201101 VALUES (NEW.*);
    ELSIF ( NEW.datetime >= DATE '2011-02-01' AND NEW.datetime < DATE '2011-03-01' ) THEN
        INSERT INTO tab_test201102 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'column datetime out of range. Fix the test_insert_trigger() function!';
    END IF;
    RETURN NULL;  -- the row was redirected, so don't insert it into the master table
END;
$$
LANGUAGE plpgsql;

== UPDATE ==
to simplify the trigger function you can replace the code above with the following dynamic creation of the insert statement, which moves the row into the appropriate target table. As a further enhancement, you no longer have to extend the trigger function every time you add new partitions:

CREATE OR REPLACE FUNCTION test_insert_trigger()
RETURNS trigger AS
$BODY$
DECLARE
    date_part varchar(20);
    tablename varchar(100);
BEGIN
    -- derive the target partition name from the row's timestamp, e.g. tab_test201011
    date_part := to_char(new.datetime,'yyyy') || to_char(new.datetime,'mm');
    tablename := 'tab_test' || date_part;
    --raise notice 'target table: %', tablename;
    execute 'insert into ' || tablename::regclass || ' (datetime,number,price,currency_id) select $1,$2,$3,$4'
        using NEW.datetime, NEW.number, NEW.price, NEW.currency_id;
    RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION test_insert_trigger() OWNER TO postgres;

create trigger on master table

CREATE TRIGGER insert_test_trigger BEFORE INSERT ON tab_test FOR EACH ROW EXECUTE PROCEDURE test_insert_trigger();

create some indices (not for functionality, just for select performance)

CREATE INDEX idx_test_dt_201011 ON tab_test201011 (datetime);
CREATE INDEX idx_test_dt_201012 ON tab_test201012 (datetime);
CREATE INDEX idx_test_dt_201101 ON tab_test201101 (datetime);
CREATE INDEX idx_test_dt_201102 ON tab_test201102 (datetime);

insert some test data (100000 rows in each partition)


insert into tab_test (datetime, number, price) (select DATE('2010-11-01') + (i%30) * INTERVAL '1 days', i,i::real from (select generate_series(1,100000) as i) as q);
insert into tab_test (datetime, number, price) (select DATE('2010-12-01') + (i%30) * INTERVAL '1 days', i,i::real from (select generate_series(1,100000) as i) as q);
insert into tab_test (datetime, number, price) (select DATE('2011-01-01') + (i%30) * INTERVAL '1 days', i,i::real from (select generate_series(1,100000) as i) as q);
insert into tab_test (datetime, number, price) (select DATE('2011-02-01') + (i%28) * INTERVAL '1 days', i,i::real from (select generate_series(1,100000) as i) as q);
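
Before checking the planner, a quick sanity check: querying each table with ONLY (which skips child tables) should show that the trigger left the master table empty and routed 100000 rows into each partition:

select count(*) from only tab_test;        -- 0, every row was redirected by the trigger
select count(*) from only tab_test201011;  -- 100000
select count(*) from only tab_test201012;  -- 100000
select count(*) from only tab_test201101;  -- 100000
select count(*) from only tab_test201102;  -- 100000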

test the stuff

pgtestdb=# set constraint_exclusion=off;
SET
pgtestdb=# explain select count(number) from tab_test where datetime between '2010-10-05 00:00:00' and '2010-10-20 00:00:00';

                                      QUERY PLAN
------------------------------------------------------------------------------------------
 Aggregate  (cost=43.90..43.91 rows=1 width=4)
   ->  Append  (cost=0.00..43.89 rows=5 width=4)
         ->  Seq Scan on tab_test  (cost=0.00..10.75 rows=1 width=4)
               Filter: ((datetime >= '2010-10-05 00:00:00'::timestamp without time zone) AND (datetime <= '2010-10-20 00:00:00'::timestamp without time zone))
         ->  Index Scan using idx_test_dt_201011 on tab_test201011 tab_test  (cost=0.00..8.28 rows=1 width=4)
               Index Cond: ((datetime >= '2010-10-05 00:00:00'::timestamp without time zone) AND (datetime <= '2010-10-20 00:00:00'::timestamp without time zone))
         ->  Index Scan using idx_test_dt_201012 on tab_test201012 tab_test  (cost=0.00..8.28 rows=1 width=4)
               Index Cond: ((datetime >= '2010-10-05 00:00:00'::timestamp without time zone) AND (datetime <= '2010-10-20 00:00:00'::timestamp without time zone))
         ->  Index Scan using idx_test_dt_201101 on tab_test201101 tab_test  (cost=0.00..8.28 rows=1 width=4)
               Index Cond: ((datetime >= '2010-10-05 00:00:00'::timestamp without time zone) AND (datetime <= '2010-10-20 00:00:00'::timestamp without time zone))
         ->  Index Scan using idx_test_dt_201102 on tab_test201102 tab_test  (cost=0.00..8.28 rows=1 width=4)
               Index Cond: ((datetime >= '2010-10-05 00:00:00'::timestamp without time zone) AND (datetime <= '2010-10-20 00:00:00'::timestamp without time zone))
(12 rows)

pgtestdb=# set constraint_exclusion=on;
SET
pgtestdb=# explain select count(number) from tab_test where datetime between '2010-10-05 00:00:00' and '2010-10-20 00:00:00';

                                      QUERY PLAN
------------------------------------------------------------------------------------------
 Aggregate  (cost=10.76..10.77 rows=1 width=4)
   ->  Append  (cost=0.00..10.75 rows=1 width=4)
         ->  Seq Scan on tab_test  (cost=0.00..10.75 rows=1 width=4)
               Filter: ((datetime >= '2010-10-05 00:00:00'::timestamp without time zone) AND (datetime <= '2010-10-20 00:00:00'::timestamp without time zone))
(4 rows)

That’s it: with constraint_exclusion enabled, only the tables whose CHECK constraints can match the WHERE clause are scanned!
I tried the example on PostgreSQL 8.3 and newer (up to the latest 9.0.2); I don’t know which older versions of Postgres support partitioning….
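
Housekeeping later on follows the same pattern; here is a sketch (using the naming scheme from above) for adding the next month and purging the oldest one. With the dynamic trigger function from the UPDATE section, no trigger change is needed:

-- add a partition for March 2011
create table tab_test201103 ( CHECK (datetime >= DATE '2011-03-01' AND datetime < DATE '2011-04-01') ) INHERITS (tab_test);
CREATE INDEX idx_test_dt_201103 ON tab_test201103 (datetime);
-- purge the oldest month: dropping the partition removes the old data in one cheap operation
drop table tab_test201011;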

create virtual machines for kvm on fedora 12

In my last post I described how to enable kvm on fedora 12.
Now it’s time to create some virtual machines…..

1.) I copy the ISOs of the guest OS into the directory /var/lib/libvirt/isos to prevent SELinux from throwing errors like “cannot open XYZ.iso…permission denied”, even if the file is world-readable.
Get the installation media for the guest OS you want to install, e.g. the FreeBSD 8.0 DVD image used in the example below, or whatever system you want…

2.) create the virtual machine on your kvm server
2.1) evaluate the “virt-install” command to check the parameters for creating a virt. machine => man virt-install
2.2) create a FreeBSD 8.0 VM called “fb8” with 1 vCPU, 512 MB RAM, 12 GB HDD, the source install file under /var/lib/libvirt/isos, and the target VM image under /vm/fb8.qcow2
virt-install --connect qemu:///system -n fb8 -r 512 --vcpus=1 -s 12 \
-c /var/lib/libvirt/isos/8.0-RELEASE-i386-dvd1.iso \
--vnc --noautoconsole --os-type unix --os-variant freebsd7 \
--accelerate --network=bridge:br0 --hvm -f /vm/fb8.qcow2

3.) switch to your desktop and start virt-manager to access the virtual machine
open virt-manager

  • click File -> add connection
  • connection type (most probably) “remote tunnel over ssh”
  • type the hostname (or ip address) of the kvm server
  • click Connect

At the first connection you’ll get a dialog showing the RSA fingerprint; type yes and click OK.
In the next dialog you have to type the root password of your KVM server.

Now you should see your VM in state “running”. Click “open” and type the root password again to log in to your VM. You should have access to the graphical console, and the installer of your guest OS should have started. Now you can install your guest OS…

kvm on Fedora 12

Here are the steps of how I set up KVM on my Fedora 12 workstation. Normally you would install KVM and the virtual machines on a real server and access them from
your desktop via virt-manager. If you go this way, perform steps 1) – 5) on the server and step 6) on your client.

1) check if your CPU supports hardware virtualization
egrep '(vmx|svm)' --color=always /proc/cpuinfo
..the output should contain a highlighted string like “..vmx..”. If not, you can stop here, since your processor doesn’t support hardware virtualization.

2) install required packages on kvm server
yum install kvm qemu libvirt python-virtinst
3) start libvirt daemon
/etc/init.d/libvirtd start
4) check if the installation was successful
virsh -c qemu:///system list
..output should look like
Id Name State
———————————-

5) to access the virtual machines like a “real” physical machine we need a network bridge
yum install bridge-utils
/usr/sbin/brctl addbr br0
/sbin/ifconfig eth0 0.0.0.0 promisc up
/usr/sbin/brctl addif br0 eth0
/sbin/dhclient br0
/sbin/iptables -F FORWARD

check via: ifconfig

6) install the virt-manager on a “client” to access the virtual machines
yum install virt-manager

I’ll describe how to create a VM in a separate blog entry shortly 😉

install JBoss on CentOS 4.5

Hello,

latest developments point in a quite different direction -> Java.
Therefore we need an environment to publish those apps/servlets/beans/whatever…
Below you’ll find the steps for an initial setup of JBoss on a 64-bit CentOS 4.5 (virtual machine):

starting point: fresh centos 5.4 installation
a) install a JDK; both 1.5 and 1.6 are suitable. I did it the easiest way 😉
yum install java-1.6.0*

check the installed JDK with the command java -version. The output should look like (I’m running the server in a virtual machine):
java version "1.6.0"
OpenJDK Runtime Environment (build 1.6.0-b09)
OpenJDK 64-Bit Server VM (build 1.6.0-b09, mixed mode)

b) install JBoss.
There are 2 options: installation from a binary package, or compiling it yourself from the source package. Since I don’t need any special settings, I’ll install it via the binary package.
There are different versions of JBoss available:
JBoss Enterprise Application Platform – for highly transactional Java EE applications
JBoss Enterprise Web Platform – for mid-sized, light and rich Java applications
JBoss Enterprise Web Server – for simple Java workloads that only require enterprise Apache Tomcat.
We need the Spring functionality, therefore I use the second one, the Enterprise Web Platform edition. The currently released version is 5.1; version 6.0 isn’t finally released yet.

Download and extract
mkdir /srv
cd /srv
wget http://sourceforge.net/projects/jboss/files/JBoss/JBoss-5.1.0.GA/jboss-5.1.0.GA.zip
jar -xf jboss-5.1.0.GA.zip

set environment variable JBOSS_HOME
vi ~/.bashrc
add the following lines
export JBOSS_HOME=/srv/jboss-5.1.0.GA
export PATH=$PATH:$JBOSS_HOME/bin

and “source” it
source ~/.bashrc

Now JBoss is ready to run, but there’s one important note !!!!
By default JBoss listens only on localhost. Keep this in mind, and provide the startup with “-b 0.0.0.0” if you want your installation to be accessible from anywhere

…startup
chmod +x $JBOSS_HOME/bin/*.sh
$JBOSS_HOME/bin/run.sh -b 0.0.0.0

…and test, point a browser to
http://<your-server>:8080

create user jboss, under which jboss should be started
adduser jboss
chown -Rf jboss.jboss /srv/jboss-5.1.0.GA

and copy the init-script for starting/stopping jboss
cd /srv/jboss-5.1.0.GA/bin
cp jboss_init_redhat.sh /etc/init.d/jboss
chmod +x /etc/init.d/jboss
vi /etc/init.d/jboss

adjust the parameters:
JBOSS_HOME,JAVAPTH, JBOSS_CONF and JBOSS_HOST, like =>

JBOSS_HOME=${JBOSS_HOME:-"/srv/jboss-5.1.0.GA"}
JBOSS_USER=${JBOSS_USER:-"jboss"}
JAVAPTH=${JAVAPTH:-"/usr/bin"}
JBOSS_CONF=${JBOSS_CONF:-"default"}
JBOSS_HOST="0.0.0.0"
JBOSS_BIND_ADDR=${JBOSS_HOST:+"-b $JBOSS_HOST"}

..done, JBoss is ready to use…..but keep in mind, this was just the initial setup…..the real work isn’t done yet…..configuring and adjusting it to your needs starts right now :-))

create a read-only user in postgres for all relations in several schemas

Hi,

recently I got the task of creating a user in Postgres limited to executing SELECT statements only (a read-only user).
The problem is that you have to grant this right table by table, “manually”. Since we have several schemas with lots of relations, it would be a boring and long-running task to do this…..

A much better approach is to create a procedure which does the job for you.
Assume we have the schemas “schema1”, “schema2” and “schema3” and we want a user called “ronly”.
Here are the steps I took (within a psql session as superuser):

  • create new user ronly

    CREATE ROLE ronly NOSUPERUSER LOGIN password 'ronly';

  • create new group ro_group

    CREATE ROLE ro_group VALID UNTIL 'infinity';

  • add user ronly to group ro_group

    GRANT ro_group TO ronly;

  • grant access to the schemas in general

    GRANT USAGE ON SCHEMA schema1 TO ro_group;
    GRANT USAGE ON SCHEMA schema2 TO ro_group;
    GRANT USAGE ON SCHEMA schema3 TO ro_group;

  • now, let’s create a procedure which grants SELECT to all relations in our 3 schemas

    CREATE OR REPLACE FUNCTION grant_select_to_ro_group()
    RETURNS text AS
    $BODY$
    DECLARE
        sql text;
        rel record;
    BEGIN
        -- loop over all tables ('r'), views ('v') and sequences ('S') in the three schemas
        FOR rel IN SELECT pg_catalog.quote_ident(s.nspname) AS schema_name,
                          pg_catalog.quote_ident(t.relname) AS relation_name
                   FROM pg_class t, pg_namespace s
                   WHERE t.relkind IN ('r', 'v', 'S')
                     AND t.relnamespace = s.oid
                     AND s.nspname IN ('schema1', 'schema2', 'schema3')
                   ORDER BY s.nspname
        LOOP
            sql := 'GRANT SELECT ON ' || rel.schema_name || '.' || rel.relation_name || ' TO ro_group';
            RAISE NOTICE '%', sql;
            EXECUTE sql;
        END LOOP;
        RETURN 'OK';
    END;
    $BODY$
    LANGUAGE plpgsql VOLATILE
    COST 100;
    ALTER FUNCTION grant_select_to_ro_group() OWNER TO postgres;

  • and call it afterwards =>

    select grant_select_to_ro_group();
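
To double-check what the function actually granted, you can query information_schema (just a verification step, covering tables and views):

SELECT table_schema, table_name, privilege_type
FROM information_schema.role_table_grants
WHERE grantee = 'ro_group'
ORDER BY table_schema, table_name;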

You can imagine how easy it is to create a function which, e.g., takes a schema name as a parameter and grants SELECT on that schema only:

CREATE OR REPLACE FUNCTION grant_select_to_ro_group(character varying)
RETURNS text AS
$BODY$
DECLARE
    the_schema ALIAS FOR $1;
    sql text;
    rel record;
BEGIN
    FOR rel IN SELECT pg_catalog.quote_ident(s.nspname) AS schema_name,
                      pg_catalog.quote_ident(t.relname) AS relation_name
               FROM pg_class t, pg_namespace s
               WHERE t.relkind IN ('r', 'v', 'S')
                 AND t.relnamespace = s.oid
                 AND s.nspname = the_schema
               ORDER BY s.nspname
    LOOP
        sql := 'GRANT SELECT ON ' || rel.schema_name || '.' || rel.relation_name || ' TO ro_group';
        RAISE NOTICE '%', sql;
        EXECUTE sql;
    END LOOP;
    RETURN 'OK';
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION grant_select_to_ro_group(character varying) OWNER TO postgres;

…and call it like =>

select grant_select_to_ro_group('schema3');
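
As an aside: starting with PostgreSQL 9.0 there is a built-in bulk grant, so on newer servers you can skip the helper function entirely:

-- PostgreSQL 9.0+ only
GRANT SELECT ON ALL TABLES IN SCHEMA schema1 TO ro_group;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA schema1 TO ro_group;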

hth……GERD…..

install latest OTRS (ticketing system) with postgres on CentOS 5.4

Hello,

again one more installation guide for CentOS users. I’m currently looking for a ticketing system, and I’ve heard a lot about OTRS.
Since I want to use the latest version I didn’t use the yum-based installation, except to cover the Perl dependencies 😉

As prerequisites you should have a running apache2 and postgresql (I’m running 8.4).

First of all, OTRS needs a lot of Perl modules. To resolve the dependencies I performed a

yum install otrs to install all of them in this step. Afterwards a
yum erase otrs removes the “old” version of OTRS again, while the Perl modules pulled in as dependencies stay installed.

Now we can start installing otrs itself.

  1. get latest rpm
    http://otrs.org/download/ -> CentOS -> OTRS 2.4.6 for RHEL 4,5,6 + CentOS 4,5,6 RPM (copy it to e.g. /tmp/ )
  2. install the rpm without dependencies, since I don’t want to use mysql
    rpm --nodeps -Uvh /tmp/otrs-2.4.6-01.noarch.rpm
  3. create database stuff
    • user and database

      psql -U postgres
      #psql>create role otrs LOGIN password '' nosuperuser;
      #psql>create database otrs owner otrs;
      #psql>\q
    • create tables and insert initial data

      cd /opt/otrs/scripts/database
      psql -U otrs otrs -f otrs-schema.postgresql.sql
      psql -U otrs otrs -f initial_insert.sql
      psql -U otrs otrs -f otrs-schema-post.postgresql.sql
  4. set file permissions (I’m running otrs as user “otrs” and group “apache”)

    /opt/otrs/bin/SetPermissions.sh /opt/otrs otrs apache apache apache
  5. modify the default parameters, since mysql is the default database
    open the file /opt/otrs/Kernel/Config.pm and

    • comment the line
      $Self->{DatabaseDSN}= "DBI:mysql:database=$Self->{Database};host=$Self->{DatabaseHost};";
      to disable mysql, and
    • uncomment the line
      $Self->{DatabaseDSN} = "DBI:Pg:dbname=$Self->{Database};";
    • and don’t forget to set the DB user and password to the ones you created a few steps ago
    • edit start script of otrs
      open the file /etc/sysconfig/otrs and disable again the mysql stuff and enable postgres


      #OTRS_USED_DB=mysqld
      #OTRS_USED_DB_TEST="ps --pid $(cat /var/lib/mysql/$HOST.pid)"
      #OTRS_USED_DB_TEST="/sbin/service $OTRS_USED_DB status | grep 'is running'"
      # --- The Wonderfull redhat's mysql init script does not have a status check.. sucks
      OTRS_USED_DB=postgresql
      OTRS_USED_DB_RCSCRIPT="service $OTRS_USED_DB status| grep 'is running' > /dev/null 2>&1"

  6. let’s start otrs…
    service otrs start
  7. the first login…
    open http://localhost/otrs/index.pl

    and login with user "root@localhost", password "root" !! Change the password afterwards !!
    You have to replace the hostname "localhost" if you’ve installed OTRS on a remote server, of course 😉

Now you can start creating users, groups and a lot more. Check it out: OTRS is a great tool, and with the ITSM addon you can easily extend it with ITIL features like a CMDB …

any comments welcome….GERD….

install vlc on CentOS 5.4 (libdvdread.so.3 libcucul missing)

I wanted to use vlc as a client for multimedia files, but ran into some dependency errors with the libraries libdvdread.so.3 and libcucul.so.0.
Installation was initially started by

yum install vlc

but finished with the errors described above.

I wondered why the installation could not find the libraries, since they were installed.

ll /usr/lib | grep libdvd*
ll /usr/lib | grep libcu*

showed me both libraries. I tried manually setting a symbolic link for libdvdread.so.3 (since there was only a link for the .so.4 version), but without success.

The solution to this problem was to resolve the mixture of libraries installed from different repositories.
In this case the libdvdread packages from epel and rpmforge caused the confusion. You can check this (if both repositories are enabled) by

yum provides libdvdread

Both repositories offer a libdvdread, but completely different ones.
Therefore I performed the following steps to get a working vlc installation.

  • uninstall the current libdvdread and libcaca (the package that provides libcucul)

    yum erase libdvdread
    yum erase libcaca

  • cleanup yum

    yum clean all

  • install vlc and dependencies from rpmforge

    yum --disablerepo \* --enablerepo base,updates,rpmforge install vlc

finished….25 packages installed.

Check again which repository provided libdvdread, just to be sure 😉

yum list libdvdread

prints the following output (on my box)

...
Installed Packages
libdvdread.i386 0.9.7-1.el5.rf installed
Available Packages
libdvdread.i386 4.1.3-1.el5 epel

Besides this, the preferable method is to use priorities for your yum repositories in the /etc/yum.repos.d/*.repo files, to avoid confusion about different versions of files installed from different repositories.
Details can be found in the documentation of the yum-priorities plugin.
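
A minimal sketch, assuming the yum-priorities plugin is installed (yum install yum-priorities on CentOS 5); when the same package exists in several repositories, the one with the lowest priority number wins:

# /etc/yum.repos.d/CentOS-Base.repo (excerpt)
[base]
priority=1

# /etc/yum.repos.d/rpmforge.repo (excerpt)
[rpmforge]
priority=90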

any comment highly appreciated….

install tracks (project/task management tool) on centos 5.4 with postgres backend

Hello,
if you’re looking for a very nice and flexible tool to manage tasks/projects, you should give Tracks a try.
It runs as a Ruby app with its built-in web server (others are possible, too) and is very easy to set up.
As a starting guide I took an existing blog post and modified some parts (especially the database settings, since I’m using Postgres).

get latest release (used version is 1.7)

prepare database

  • open a postgres session, e.g. “psql -U postgres”
  • create user tracks LOGIN password 'tracks';
  • create database tracks owner tracks encoding 'UTF-8';

configure the database setting

  • cd /opt/tracks-1.7 (or wherever you’ve installed it)
  • vi config/database.yml
  • # enter the credentials for db connection to the recently created database/user
    production:
    adapter: postgresql
    database: tracks
    host: localhost
    username: tracks
    password: tracks
    encoding: utf8

create the db structure by running

  • RAILS_ENV=production rake db:migrate --trace

start Tracks with the built-in web server, listening on port 3030 (since the default port 3000 is busy with the redmine project management tool 😉 ).

  • cd /opt/tracks-1.7
  • script/server --port=3030 -e production

That’s it…..now point your browser to http://localhost:3030 and log in
