Archive for February, 2009

installing TIBCO TRA 5.6 on a 64-bit Debian

Recently I got a hardware upgrade so I could finally switch to a 64-bit environment. To fully use that machine I wanted to install TIBCO in 64-bit mode. After starting the installation I got this message:

TIBINS202527: Error: ERROR: You are running a 64-bit product installer on a 32-bit system.
This is not supported.

The problem: I actually was running a 64-bit OS with a 64-bit kernel:

 uname -a
Linux client1 #1 SMP PREEMPT Tue Feb 17 17:42:33 CET 2009 x86_64 GNU/Linux

So I tried to find the problem. First I needed some more output about what the installer is actually doing, so I used the logging option to get all the debug output.

 ./TRA.5.6.0-suite_linux24gl23_x86.bin -console -is:log output

When you look closer at this log you can see that the command which runs the installer looks like this:

Executing launch script command: "/tmp/isjI8lFYy/bin/java" -cp "":"TRA.5.6.0-suite_linux24gl23_x86.jar":"TRA.5.6.0-simple_linux24gl23_x86.jar":"tibrv.8.1.1-simple_linux24gl23_x86.jar":"jre.1.5.0-simple_linux24gl23_x86_64.jar":"Designer.5.6.0-simple_linux24gl23_x86_64.jar":"tpcl.5.6.0-simple_linux24gl23_x86.jar":"hawk.4.8.1-simple_linux24gl23_x86_64.jar":"/tmp/isjA5jEsB/TRA.5.6.0-suite_linux24gl23_x86.jar":"" -Dtemp.dir="/tmp" -Dis.jvm.home="/tmp/isjI8lFYy" -Dis.jvm.temp="1" "/tmp/isjA5jEsB/TRA.5.6.0-suite_linux24gl23_x86.jar" -Dis.launcher.file="/home/jens/tmp/TIB_tra-suite_5.6.0_linux24gl23_x86_64/./TRA.5.6.0-suite_linux24gl23_x86.bin" -Dis.jvm.file="/tmp/isjI8lFYy/jvm" -Dis.external.home="/home/jens/tmp/TIB_tra-suite_5.6.0_linux24gl23_x86_64/." -Xms20m -Xmx128m run -home TRA.5.6.0-suite_linux24gl23_x86.jar "-console"

Now that I had the command which starts the installer, I began to trace what this process is doing. To strace it properly you just have to prepend 'strace' and then redirect the error output to a file (because there is quite a lot of it). After doing this I found something interesting in the log.
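In concrete terms, the tracing step could look something like this (a sketch only: the log filename is my own choice, and -f additionally follows forked child processes):

```shell
# prepend strace and redirect the very verbose error output to a file
strace -f ./TRA.5.6.0-suite_linux24gl23_x86.bin -console 2> strace.log
# then search the trace for interesting system calls
grep uname strace.log
```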

[pid 32651] execve("/bin/uname", [0xffffffffdf47dc88, "-p"], [/* 1757 vars */]) = 0

As you can see, it runs the command 'uname -p'. On a default Debian system this command returns 'unknown'. You will also have this problem if you are running a self-compiled kernel. The TIBCO-supported systems (SUSE and Red Hat) return something different: after trying the same command on an openSUSE system I found that 'x86_64' should be the correct string. After a bit of trial and error I found out that the result of this command is written to a file named 'kernelbits_jens.txt' (i.e. kernelbits_<username>.txt) in the temp directory.

So here is the simple solution to the problem.
You just need to create the arch file manually and make it read-only so the installer can't overwrite it. Here are the commands:

echo 'x86_64' > kernelbits_`whoami`.txt
chmod a-w kernelbits_`whoami`.txt

Now the installer worked absolutely fine for me.
I have already notified TIBCO support about the problem. As of now there will be no fix, but I hope they will correct this behavior in future installers.



copy a table across databases via dblink

Recently I ran into the situation that I needed to copy a large subset of data from one database to another. Normally I would say: make a dump and then re-import the data into the new schema. But this solution has some serious drawbacks. First, you have to copy the complete database. Second, you have to maintain the structure of the data. A third problem is that you have to copy the complete dump to the target location (in case it is not the same machine and your database is a bit larger, e.g. some gigabytes). With these drawbacks in mind I started searching for an alternative solution to my problem.

Here are some facts to describe my situation more precisely:

  • a database containing multiple tables
  • only one table has relevant data
  • only a one-month subset of that data is needed

ddl for the original table:

create table realtime (
    name varchar(10),
    date timestamp,
    bid numeric,
    ask numeric
);

ddl for the target table:

create table realtime (
    symbol varchar(10),
    date timestamp,
    price numeric,
    "day" char(5),
    max numeric,
    avg numeric,
    atr numeric
);

here the mapping:

name            -> symbol
date            -> date
(bid + ask) / 2 -> price
other columns filled by trigger

To get this task done I decided to use a dblink between those two database instances.

So here is the select I used to transfer the month of January to the new db:

insert into realtime (symbol,date,price)
select * from dblink('dbname=stocks',
              'select name,date,(bid+ask)/2 as price
              from realtime
              where date > to_date(''20081231'',''yyyyMMDD'') and date < to_date(''20090201'',''yyyyMMDD'')')
         as t1 (name character varying,date timestamp,price numeric);

As you can see, this approach is pretty straightforward. You basically write an insert statement for the new table and use a dblink as the source. In the dblink definition you can apply any SQL criteria you like.

This solution has one real drawback: because of the way dblink operates, it is pretty slow. Here is what the postgres documentation has to say about it:

dblink fetches the entire remote query result before returning any of it to the local system. If the query is expected to return a large number of rows, it's better to open it as a cursor with dblink_open and then fetch a manageable number of rows at a time.

For me the performance was OK because I just copied several hundred megabytes.
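For larger transfers, the cursor-based variant the documentation suggests could look roughly like this (a sketch only: the connection name, cursor name, and batch size of 10000 are my own choices, and in a real run you would repeat the fetch until it returns no more rows):

```sql
-- open a named connection and a cursor on the remote database
select dblink_connect('conn', 'dbname=stocks');
select dblink_open('conn', 'cur',
    'select name, date, (bid+ask)/2 as price
       from realtime
      where date > to_date(''20081231'',''yyyyMMDD'')
        and date < to_date(''20090201'',''yyyyMMDD'')');

-- fetch a manageable batch of rows (repeat until it returns nothing)
insert into realtime (symbol, date, price)
select * from dblink_fetch('conn', 'cur', 10000)
    as t1 (name character varying, date timestamp, price numeric);

-- clean up
select dblink_close('conn', 'cur');
select dblink_disconnect('conn');
```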



bringing the yahoo finance stream to the shell

A little while ago I posted a primitive way to get at the yahoo finance streaming data. As you can guess, this was just the beginning. To raise the bar I tried to parse the received data and bring it to the shell. To get this done I needed several tools.

  • curl – to send the HTTP request and receive the streaming response
  • transform – a primitive tool to do streaming operations within one line
  • spidermonkey shell – a JavaScript shell which can parse and reformat the data

The complete logic will be done in JavaScript, so let's start with the curl command line:

curl -s -o - -N ',MSFT&k=l10&callback=parent.yfs_u1f&mktmcb=parent.yfs_mktmcb&gencallback=parent.yfs_gencb'

Let's see what we have here. First we call the yahoo streaming api and ask for the current price (field l10) of the Sun and Microsoft stocks. The callback part cannot be changed; if you change it the whole request will not succeed. It is also important to write the output to STDOUT so that we can pipe it to the next application.

The second part of the work is just to call the transform application.

The third part is to pipe the output of the transform process into the JavaScript shell. I started the shell with the following command:

js -f script.js

The script script.js looks like this:

yfs_u1f = function(tmp) {
    if (tmp.MSFT) print("msft: " + tmp.MSFT.l10);
    if (tmp.JAVA) print("java: " + tmp.JAVA.l10);
};

yfs_mktmcb = function(tmp) {
    /* ignore timestamp */
};

var parent = this;
parent.yfs_u1f = yfs_u1f;
parent.yfs_mktmcb = yfs_mktmcb;

var t = readline();
while (t != null) {
    if (t.substr(0, 3) == "try") {
        eval(t);
    }
    t = readline();
}
First we have to implement the callback functions which will be called from the HTTP response. Then we construct an object called parent and map these functions into it. Now we have a working construct to receive the data and we are able to work with it in our shell. What we still need is a little while loop to continuously read from STDIN and wait for new data. By the way, accessing the tmp variable in the callback functions seems somewhat complicated to me. I'm sure there is an easier way to access it, but I have no clue how. If you have an idea how to do it better, please post it in the comments.
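To make the dispatch mechanism concrete: each line the stream delivers that starts with "try" is itself a small JavaScript snippet calling one of the parent callbacks, so it can simply be eval'd. Here is a standalone sketch of that step (the sample line and its price are invented for illustration, not captured data):

```javascript
// collect the callback output so we can inspect it afterwards
var output = [];

var parent = {
    yfs_u1f: function (tmp) {
        if (tmp.MSFT) output.push("msft: " + tmp.MSFT.l10);
        if (tmp.JAVA) output.push("java: " + tmp.JAVA.l10);
    },
    yfs_mktmcb: function (tmp) { /* ignore timestamp */ }
};

// one line roughly as the stream might deliver it (invented sample)
var line = 'try{parent.yfs_u1f({"MSFT":{"l10":"17.83"}})}catch(e){}';
if (line.substr(0, 3) == "try") {
    eval(line); // runs the snippet, which calls parent.yfs_u1f
}

console.log(output.join("\n"));
```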

The complete bash statement would look like this:

curl -s -o - -N ',MSFT&k=l10&callback=parent.yfs_u1f&mktmcb=parent.yfs_mktmcb&gencallback=parent.yfs_gencb' | /tmp/transform | js -f script.js

If you run this you should get output like this:

msft: 17.83
java: 4.47
msft: 17.84
msft: 17.86
msft: 17.81
java: 4.46

Now you can use whatever tools you want to work with that data. For me this will be piped directly into my postgres db for further processing.
