Sunday 2 October 2016

How Essbase pushes data into a relational table – Deep diving into errors and database administration



In this blog post, I will talk about how Essbase pushes data into a relational table when we use the DATAEXPORT command. This will involve deep diving into the application logs, database administration, and some SQL queries.
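
For context, a minimal sketch of the kind of calculation script involved is shown below; the DSN name, table name, credentials, and FIX members are placeholders I have made up for illustration.

    /* Minimal DATAEXPORT sketch - the DSN, table, credentials and FIX
       members below are illustrative placeholders */
    SET DATAEXPORTOPTIONS
    {
        DataExportLevel "LEVEL0";
    };
    FIX ("Actual", "Working", "FY16")
        DATAEXPORT "DSN" "essdemo_dsn" "ESSDEMO_TARGET" "db_user" "db_pwd";
    ENDFIX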

This is a snapshot of the data that is exported into a relational table called “ESSDEMO_TARGET” from an Essbase application called “ESSDEMO”.




Now, I will show one of the reasons why I do not like exporting data directly into the relational table. I updated the application to add a couple more members to the Entity hierarchy, and when I ran the DATAEXPORT once again, the output was as shown in the table below.

Observe that Jan, Feb and Mar are now in the Account column, Version is mapped to Account, the Month column holds the fiscal year, the Entity column holds the Scenario, the Year column also holds the Scenario, and the Actuals column now represents the data for Entity “E-A”. Too much mess all around. I would have expected some control over which exported fields get mapped to which columns of the relational table, but DATAEXPORT in Essbase does not give a lot of control.

(Actually, the fault lies with me as well. I have not defined a dense dimension in SET DATAEXPORTOPTIONS to be the column focus of my data export. If this is defined, I can have some control over which columns are exported. However, my personal take is that DATAEXPORT is implemented in a tightly coupled manner: a change in the source system (read Essbase) can have an impact on my downstream system, i.e. the relational table.)
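
For reference, the relevant option is DataExportColHeader, which pins a dense dimension to the columns of the export. A sketch, assuming the dense dimension is named "Period":

    SET DATAEXPORTOPTIONS
    {
        DataExportLevel "LEVEL0";
        /* Pin a dense dimension to the columns; "Period" is an assumed name */
        DataExportColHeader "Period";
    };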

Now, let us mess around with the system a bit. I drop the database table, as shown in the next snapshot and in the SQL sketch below.
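
Dropping it is a single statement:

    DROP TABLE ESSDEMO_TARGET;
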
I then run the data export once again from the EAS console, using the DATAEXPORT command in the calculation script. This is shown in the snapshot below. Observe that the status of the calculation script is reported as successful in the message below.


Now, if we check the log file, it documents the error, which is actually raised at the ODBC layer, saying that the table or view does not exist. So far so good.

When I run the DATAEXPORT against a file instead, the contents of the file are as shown in the snapshot below.


Observe that the Entity members are spread across the first line as column headers. This is not something I would like. I would prefer to have the periods on the columns, firstly because most source systems are laid out that way and secondly because it is a more compact export format. To achieve this, we need to change the dense and sparse settings of the application.

The current application settings are shown in the snapshot below. Note that Entity is dense, which is why it was spread across the first line.


I updated the settings a couple of times; the changes are shown in the next couple of snapshots.


The new extract is visible in the snapshot below. This is somewhat closer to the format I want, since the periods are now spread across the columns. I have still not used SET DATAEXPORTOPTIONS properly, so that is one cross against me.






I now define a table matching the file format I obtained above. This is shown in the snapshot below.
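
As a hedged sketch of that DDL, assuming the export lays out Entity, Version, Scenario and Year as row fields with the periods across the columns (the column names and data types here are my assumptions, and I show only three period columns for brevity):

    CREATE TABLE ESSDEMO_TARGET
    (
        entity   VARCHAR2(80),
        version  VARCHAR2(80),
        scenario VARCHAR2(80),
        year     VARCHAR2(80),
        jan      NUMBER,
        feb      NUMBER,
        mar      NUMBER
        -- ...one NUMBER column per remaining period
    );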

On running the calculation script to export data into the relational table, the data is exported successfully, as shown in the snapshot below.

Now let us modify the table to add an extra column called insert_time, which has a default value of the current timestamp. This is shown in the snapshot below.
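
Something along these lines does the trick (a sketch; using SYSTIMESTAMP as the default is my assumption):

    ALTER TABLE ESSDEMO_TARGET ADD (insert_time TIMESTAMP DEFAULT SYSTIMESTAMP);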

On running the export, it fails, as shown in the log file snapshot below.


Observe that the error says “not enough values”. Since the export has n fields but the table now has n+1 columns, the SQL INSERT statement fails and nothing is loaded into the table.

Let us check what kind of insert statement is used to export the data from Essbase into a table.

If you are using Oracle, there is a view called V$SQLAREA which lists statistics on SQL statements that are in memory, parsed, and ready for execution. I query this view to see the SQL statements for the schema into which Essbase is pushing data. This is shown in the snapshot below.
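
A sketch of the kind of query I mean; the schema name here is a placeholder:

    SELECT sql_text, executions, first_load_time
    FROM   v$sqlarea
    WHERE  parsing_schema_name = 'ESSDEMO_USER'
    AND    sql_text LIKE 'INSERT INTO ESSDEMO_TARGET%'
    ORDER  BY first_load_time DESC;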


I check one of the INSERT statements for the table ESSDEMO_TARGET, since the only way I have inserted data into this table is via the DATAEXPORT command. This is shown in the snapshot below.


If you observe the snapshot above, you will notice that no column names are specified for the target table. It is an all-or-nothing insert. This is why the export fails whenever the number of fields in the export does not match the number of columns in the table.
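
To illustrate, the insert takes the positional form shown first below; had DATAEXPORT generated the second form with an explicit column list, a column such as insert_time could simply have fallen back to its default. The literal values and column names here are made up:

    -- Positional insert (no column list): every table column must be
    -- matched by an exported field, or the statement fails
    INSERT INTO ESSDEMO_TARGET
    VALUES ('E-A', 'Working', 'Actual', 'FY16', 10, 20, 30);

    -- With an explicit column list, unlisted columns take their defaults;
    -- DATAEXPORT does not generate this form
    INSERT INTO ESSDEMO_TARGET (entity, version, scenario, year, jan, feb, mar)
    VALUES ('E-A', 'Working', 'Actual', 'FY16', 10, 20, 30);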


2 comments:

  1. Nice post. In your example, you can force a dense dimension to be the columns by setting dataexportcolheader. This is why you need to pick a dimension like Periods that does not change or the export will start failing when you add a new member to the outline.

    It should be noted that the order of dimensions is the same as what would be in a regular export. Sparse dimensions in order, then dense dimensions. That means if you change the order of dimensions in the outline, you need to adjust the column order.
    Often people want to export with just a single data column. In my blog, I describe how this can be done.

    1. Hi Glenn,

      I have read your blog post on DATAEXPORT. It was really cool. Actually, I have seen DATAEXPORT fail too many times because a dense dimension is not defined as the column header, and it always errors out saying either too many values or too few values. But once you set DATAEXPORTCOLHEADER you are golden.
      I actually wanted to see the SQL queries fired in the backend. I agree on the dimension order as well. It is basically Essbase parsing the records as it reads them: index entries -> block entries.
