Wednesday 30 March 2016

Validating files loaded in Essbase application using report scripts



Well, in this blog I am going to be talking about something that we have all had to do at one point or the other… Whenever I was testing data files loaded into Essbase or Planning, I used to get frequent requests asking whether there is a way to verify the data sitting in the system. Many a time, records get kicked out due to some issue that was not checked (it happens in production environments where you do not maintain the system and are only called in for a review), or someone may have tried to load data into a dynamic calc member… In a typical file we are talking about thousands of rows for a given month, and all of it has to be validated… You could do it manually, but then you would probably be reading this blog on scrolls of papyrus… So let us see if we can fashion a solution to help with this.

BSOSAMP is the application I am going to be working on…

This is the file that I need to load into my application… It is a free-form load, but it could very well be a file loaded through a rules file. (One small thing to note: if you are doing manipulations in the rules file, like splitting fields or adding a prefix, you will need some extra work to make this solution fit.)



So I load the file into the application and it loads successfully as shown in the below snapshots.




 
So now I have loaded the file and the data rests in Essbase… Although the system shows that the data loaded successfully, I may still need to verify that it is present at level 0. (Generally, you will also have to verify that the data rolls up correctly, but once you understand the example, it is a simple matter of changing the members in the report script that I will show later… On a side note, there is a way to do rollup verification as well… I will show it in a later post… It requires the outline XML and some PL/SQL coding… But it is beautiful!!!)

So now I will show one of the features that I personally feel is not used a lot: Essbase report scripts. Below is the report script that I have used to extract data from Essbase.

 

If you check the row specification you will find that the columns are in the following order:-
Account, Entity, Year, Period 

This is the same order in which the data appears in the source file. If your file is in a different format, all you have to do is map the row specification to the layout of your file and you are golden.

On running the above report script, you will get a file as shown below (I have used the console here for this demo; you can redirect the output to a file as well using MaxL).
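Before getting to the console output, here is a rough sketch of that MaxL redirect for anyone who wants the extract written straight to a file. Everything in it is a placeholder from my side — the server, credentials, database name, report script name and output path — and the exact MaxL grammar can vary by release, so do check the MaxL reference for your version.

# Rough sketch: run the report script and send its output to a file via MaxL.
# The server, credentials, database ('BSOSAMP'.'BSOSAMP'), report script name
# ('lev0val') and output path below are placeholders -- adjust for your setup,
# and confirm the export grammar against the MaxL reference for your release.
import os
import subprocess
import tempfile

maxl_script = """
login 'admin' 'password' on 'localhost';
export database 'BSOSAMP'.'BSOSAMP'
    using server report_file 'lev0val'
    to data_file 'C:/temp/bsosamp_extract.txt';
logout;
exit;
"""

# Write the MaxL statements to a temporary .msh file and hand it to essmsh,
# which must be available on the PATH of the machine running this script.
with tempfile.NamedTemporaryFile("w", suffix=".msh", delete=False) as fh:
    fh.write(maxl_script)
    script_path = fh.name

try:
    subprocess.run(["essmsh", script_path], check=True)
finally:
    os.remove(script_path)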


 
This data is in the same format as the file that you loaded… Now you can either use a VLOOKUP or load both data sets into Oracle and run a MINUS query to check for mismatches between Essbase and the source file… To each his own hammer and nails, but the data can now be validated far more easily.
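If Oracle or Excel is not handy, even a small script can do the minus for you. Here is a minimal sketch, assuming both files are plain whitespace-delimited text with the member combination first and the data value as the last field on each line; the file names are just placeholders.

# Minimal sketch of a "minus query" between the source file and the report
# script extract. Assumes both files are whitespace-delimited, one record per
# line, with the member combination first and the data value as the last field.
# The file names are placeholders.
import shlex

def read_records(path):
    """Return a dict of {member-combination tuple: value} from a data file."""
    records = {}
    with open(path) as fh:
        for line in fh:
            fields = shlex.split(line)        # handles quoted member names
            if not fields:
                continue                      # skip blank lines
            members, value = tuple(fields[:-1]), float(fields[-1])
            records[members] = value
    return records

source = read_records("source_file.txt")      # the file you loaded
extract = read_records("report_extract.txt")  # the report script output

# Rows present in the source but missing (or different) in Essbase.
for members, value in sorted(source.items()):
    if members not in extract:
        print("Missing in Essbase:", members, value)
    elif abs(extract[members] - value) > 1e-6:
        print("Mismatch:", members, "source =", value, "essbase =", extract[members])

Swapping the two dictionaries gives you the other direction of the minus, i.e. intersections that exist in Essbase but not in the source file.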

[On a side note, I once worked on a system where we were getting variances at year end for the HQ values. The business managers were not sure why the data was not tallying, and more so at the HQ level. A lot of digging led us to a level-0 data combination that for some reason had doubled values… That was the crisis that gave birth to this idea, and it has helped me ever since… using report scripts for focused data validation.]

Tuesday 29 March 2016

Hyperion Essbase Log file parser on GitHub

After some tinkering, I have finally been able to get the Hyperion Essbase log file parser onto GitHub...

This is the download link for the parser... It is a jar file and you will require a JVM to run it...

https://github.com/TechNinja47/EssbaseLogFileParser/releases/tag/v1



The way to run the jar file is shown in the below snapshots... (It is pretty basic and I am sure most of you have done this before as well...)



Sunday 27 March 2016

Aggregate Storage Outline conversion wizard



In this blog, I am going to be talking about the Aggregate Storage Outline Conversion Wizard. This wizard basically takes a BSO cube outline and converts it into an ASO cube outline (in the best case… there is a worst-case scenario as well, but let us not get ahead of ourselves and spoil the fun).

On a personal note, I have used this utility a couple of times, but I find it easier to build the hierarchies from rules files than through a wizard. Old school perhaps, but that is my personal take.

To launch the wizard, you need to log in to the EAS console and then go to File -> Wizards -> Aggregate Storage Outline Conversion, as shown in the below snapshot.



The next step is to select the source BSO outline. In the below snapshot, I have chosen the BSOSAMP cube as my source. Once the outline is chosen, hit the Next button.



 
In the next step, the wizard shows the corrections that need to be made to the outline. It basically verifies that the outline conforms to the ASO outline rules; in short, it runs an outline verification and reports any errors.
The thing to note is the errors shown in the below snapshot.

 
The reason for the outline verification errors is that ASO differs slightly from BSO in that three different types of hierarchies are possible in an ASO cube:-

  • Stored
  • Dynamic
  • Multiple Hierarchies Enabled

Each hierarchy type has its own set of rules, and those rules lead to quite a few outline verification errors.
(BSOSAMP is the most basic outline I have created, and I thought it would pass through the utility without any outline verification errors. It did not. I once ran this utility on a huge outline and it simply reported that verification had failed, and I had to fix everything manually. So much for having a wizard.)

The screenshot shows a high level summary of the members that were fixed. 



Once this is done and Next is clicked, the wizard asks for a destination for the converted outline. You can either choose an existing ASO application or create a new one. In the below snapshot, I have converted the outline and saved it to a new application called ASOSAMP.




 
The key points that I would like to highlight are as follows:-

The utility is meant for converting BSO cube outlines to ASO outlines. However, I would put it more broadly: the utility can show you which members and hierarchies are likely to cause issues if you convert an application from BSO to ASO. This lets you focus your efforts on those hierarchies so that they can be rebuilt easily. Once you know the pitfalls, you can design the ASO application in the traditional way.

For example, in the case above, I now know that my Account hierarchy is not ideal and I may have to look at it, while all the other dimensions can be migrated as is. This is quite useful when you have a real system with thousands of members spread across eight to ten dimensions.

Do let me know what you think…

Friday 25 March 2016

Hyperion ASO Application level 0 data analysis



In one of my previous blogs, I did some basic analysis of the structure of a BSO level-0 data extract. Well, now it is time to analyze the data structure of an ASO Essbase application. (I am an ardent ASO fan for some reason. Maybe it has to do with the fact that when I started working in Hyperion, it was on an ASO application. It was a very cool application, and ASO is a bit more technical than BSO, with its tablespaces, bits and whatnot.)

The application that I am going to load data into and then analyze is called ASOSAMP (no surprises there). The structure is as shown in the below snapshot.






I loaded some dummy placeholder data into this application as shown in the below snapshot.





Now, let us start with the export of level 0 data. This is an interesting snapshot.





Now, if you observe the dialog, you will see that for an ASO application you cannot export data in columnar format, nor can you export all-level data. This is a tradeoff. Traditionally, ASO applications are built for inherently sparse data sets with more than ten dimensions. If you tried to export such a huge, sparse data domain in columnar format, the file would be enormous, and most of that precious space would be wasted: the data stays the same, and it is the repetition of member names on every row that takes up the space.

(This is the concept of a tradeoff, or as my statistician friends would call it, cost-benefit analysis. Does doing something actually lead to a benefit or not? If the effort of doing something outweighs the benefit, there is no point in doing the activity, except perhaps as a research curiosity.)
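Just to put a rough number on that repetition, here is a tiny back-of-the-envelope sketch comparing a fully columnar layout with the compact layout the export actually uses (discussed in detail further down). The sample rows use the members from this demo and are purely illustrative.

# Back-of-the-envelope comparison: repeating every member name on every row
# (columnar) versus listing only what changes from the previous row (compact).
# The sample rows below are illustrative and use the members from this demo.
columnar_rows = [
    '"Sales" "Jan" "E1" "FY16" 999',
    '"Sales" "Feb" "E1" "FY16" 880',
    '"Cost of Goods Sold" "Jan" "E1" "FY16" 308',
]
compact_rows = [
    '"Sales" "Jan" "E1" "FY16" 999',
    '"Feb" 880',
    '"Cost of Goods Sold" "Jan" 308',
]
columnar_size = sum(len(row) + 1 for row in columnar_rows)   # +1 for the newline
compact_size = sum(len(row) + 1 for row in compact_rows)
print("columnar:", columnar_size, "bytes vs compact:", compact_size, "bytes")
# Three rows barely matter, but multiply this by millions of intersections
# across ten or more sparse dimensions and the columnar file balloons quickly.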


Now the data export is in the format as shown below:-




The ASO data export is very easy to read, and in another blog I plan to put together a working hypothesis on how this structure is derived and how the data is stored in the tablespaces.

This is nerd talk, so you will have to pardon me for it. The encoding scheme followed for the ASO level-0 data is essentially the technique used for encoding audio files, called Differential Pulse Code Modulation (DPCM), which is applied on top of Pulse Code Modulation (PCM). DPCM is explained as follows in the book Multimedia: Computing, Communications and Applications by Ralf Steinmetz and Klara Nahrstedt:

It is not necessary to store the whole number of bits of each sample. It is sufficient to represent only the first PCM-coded sample as a whole and all following samples as the difference from the previous one.

So, the first record in the level 0 export is an entire row with all the dimensions and data represented.
“Sales” -> “Jan” -> “E1” -> “FY16” = 999

The second line in the extract is Feb=880. This gets interpreted as:
“Sales” -> “Feb” -> “E1” -> “FY16” = 880

The third line in the extract is "Cost of Goods Sold" "Jan" 308. This gets interpreted as follows:-
“Cost of Goods Sold” -> “Jan” -> “E1” -> “FY16” = 308

Thus, each data line records only the members that differ from the previous line; whatever is not mentioned is carried forward from the line above. By coding only what changed between the previous line and the current one, you get a highly compressed data extract.
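To make that carry-forward idea concrete, here is a small sketch that expands a compact export of this shape back into fully qualified rows. The member-to-dimension lookup stands in for the outline (in reality you would derive it from the outline itself), and the parsing is deliberately naive, so treat it purely as an illustration of the principle.

# Illustrative sketch of the "only what changed" idea from the quote above.
# A tiny member -> dimension lookup stands in for the outline; a real decoder
# would derive this from the outline. The sample lines mirror the
# Sales / Feb / Cost of Goods Sold example discussed above.
import shlex

DIMENSION_OF = {
    "Sales": "Account", "Cost of Goods Sold": "Account",
    "Jan": "Period", "Feb": "Period",
    "E1": "Entity",
    "FY16": "Year",
}
DIMENSION_ORDER = ["Account", "Period", "Entity", "Year"]

export_lines = [
    '"Sales" "Jan" "E1" "FY16" 999',
    '"Feb" 880',
    '"Cost of Goods Sold" "Jan" 308',
]

current = {}                                     # members carried forward line to line
for line in export_lines:
    fields = shlex.split(line)                   # handles the quoted member names
    value = float(fields[-1])
    for member in fields[:-1]:
        current[DIMENSION_OF[member]] = member   # overwrite only what changed
    row = [current[dim] for dim in DIMENSION_ORDER]
    print(" -> ".join(row), "=", value)

# Expected output:
#   Sales -> Jan -> E1 -> FY16 = 999.0
#   Sales -> Feb -> E1 -> FY16 = 880.0
#   Cost of Goods Sold -> Jan -> E1 -> FY16 = 308.0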



Hope you like this analysis of the structure... In one of my next blogs, I will talk about how the export actually happens...Do stay tuned...