In this blog post, I will be talking about the Hyperion Essbase application's black box, namely the Essbase application log file. If I check Wikipedia for what a black box is, I find, I quote, "an electronic device placed in an aircraft for the purpose of facilitating the investigation of aviation accidents and incidents". The Hyperion Essbase application log file does much the same thing, but thankfully we do not have to deal with accidents, only incidents, and being able to read an application log file gives us a wealth of information about the overall nature of the system we have.
Well, the first thing to know about the Essbase application log file is that these files can be huge. I have personally worked with Essbase application log files that ran into gigabytes, and trying to dig information about an outage out of them on a production system, where multiple users may be connecting while data loads run on a daily basis, is particularly time consuming.
Also, the information is not presented in exactly the format we are looking for, and some parsing is required to work with it. The best thing Hyperion system administrators can do is archive the log files on a weekly, monthly, or quarterly basis so that the parsing effort is minimized.
Now, the Essbase application log file generally looks as shown in the snapshot below:-
The general format (speaking in computer science parlance) follows this syntax:-
[Timestamp information][Essbase application/database operation high-level details along with code]
[Essbase application/database operation details]
The timestamp information is usually in this format:- <Day of the week> <Month> <Day of the month> <Time in 24-hour format> <Year>
Another thing to note is that there can be multiple
entries for a single timestamp. This is generally the nature of any logging
software since the system would be doing multiple operations simultaneously.
For example, in the above snapshot, most of the entries are for November 16
2015 11:31:55.
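As a quick sketch, a timestamp in the layout described above can be parsed with Python's `datetime.strptime`. The sample string here is a hypothetical one I constructed from that layout, not a line copied from a real log:

```python
from datetime import datetime

# Hypothetical timestamp following the described layout:
# <Day of week> <Month> <Day of month> <24-hour time> <Year>
raw = "Mon Nov 16 11:31:55 2015"

# %a = abbreviated weekday, %b = abbreviated month, %H:%M:%S = 24-hour time
ts = datetime.strptime(raw, "%a %b %d %H:%M:%S %Y")
print(ts.isoformat())  # 2015-11-16T11:31:55
```

Once parsed into a proper `datetime` object, the timestamps can be sorted, bucketed, or compared, which is much harder to do on the raw strings.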
The Essbase application/database operation high level
details basically contains the following information:-
<Hardcoded value: Local>/<Application name>/<Database name>/<User name>/<opcode>/<operation logging details>
The first field has a value of "Local" in most of the log files that I have seen. If any of you has ever seen a different value here, do mail me a snapshot so I can analyze it.
Since every Essbase application has a single log file, the high-level details include the application name and cube name to distinguish among the different databases within the application. In addition, they include the name of the user who performed the operation.
The opcode is a number whose significance I could not determine; it appears to be some kind of internal code.
The operation logging details field indicates whether the operation caused an error, a warning, or was executed successfully, and has the following three values that I have commonly seen:-
· Warning
· Info
· Error
The below snapshot has all three types of information
shown in a single file:-
I have observed that the database name and user name
entries are optional and can be blank in the log file.
Also note that each operation comes with a corresponding message code tagged to it.
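To make the field breakdown concrete, here is a small Python sketch that splits a header line into the fields described above using a regular expression. The sample line (including the opcode value and message code) is hypothetical, constructed from the format in this post rather than copied from an actual log file:

```python
import re

# Pattern for: [Timestamp]<Local>/<App>/<Db>/<User>/<opcode>/<level>(<message code>)
HEADER = re.compile(
    r"\[(?P<timestamp>[^\]]+)\]"   # [Mon Nov 16 11:31:55 2015]
    r"(?P<location>[^/]*)/"        # usually "Local"
    r"(?P<application>[^/]*)/"
    r"(?P<database>[^/]*)/"        # may be blank
    r"(?P<user>[^/]*)/"            # may be blank
    r"(?P<opcode>[^/]*)/"
    r"(?P<level>\w+)"              # Info / Warning / Error
    r"\((?P<code>\d+)\)"           # message code
)

# Hypothetical header line built from the described format
line = "[Mon Nov 16 11:31:55 2015]Local/Plan1/Plan1/admin/2088/Info(1013210)"
m = HEADER.match(line)
print(m.groupdict())
```

Because the database name and user name can be blank, the `[^/]*` groups deliberately allow empty matches.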
The next section is the operation details with a detailed
message on what exactly the Essbase system did. Some examples are as shown
below:-
· Essbase needs to retrieve [1] Essbase Kernel blocks in order to calculate the top dynamically-calculated block.
· The Dyn.Calc.Cache for database [Plan1] can hold a maximum of [22] blocks.
· The Dyn.Calc.Cache for database [Plan1], when full, will result in [allocation from non-Dyn.Calc.Cache memory].
· Writing Parameters For Database [Plan1]
· Reading Parameters For Database [Plan1]
I am currently working on designing a parsing system to
get useful information out of this log file and build a reporting application
around these files. For the time being I have managed to get the data to be
loaded into an Oracle database in a format that has some meaning. Below is a
snapshot of the parsed log file details:-
I am now trying to design a system that can actually sift
through this data and generate some useful statistics.
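As a rough sketch of what such a parser might look like, the following Python groups each header line with the free-text detail lines that follow it, yielding one record per operation in a shape that could be bulk-loaded into a database table. The field names and the sample input are my own assumptions for illustration, not the actual schema of my Oracle table:

```python
import re

# Same header pattern as described earlier in the post
HEADER = re.compile(
    r"\[([^\]]+)\]([^/]*)/([^/]*)/([^/]*)/([^/]*)/([^/]*)/(\w+)\((\d+)\)"
)

def parse_log(lines):
    """Yield one dict per log entry: header fields plus the detail message."""
    record = None
    for line in lines:
        m = HEADER.match(line)
        if m:
            if record:
                yield record  # previous entry is complete
            ts, loc, app, db, user, opcode, level, code = m.groups()
            record = {"timestamp": ts, "application": app, "database": db,
                      "user": user, "opcode": opcode, "level": level,
                      "code": code, "message": ""}
        elif record is not None:
            # Non-header lines are the detail message for the current entry
            record["message"] += line.strip() + " "
    if record:
        yield record

# Hypothetical input assembled from the format described in this post
sample = [
    "[Mon Nov 16 11:31:55 2015]Local/Plan1/Plan1/admin/2088/Info(1013210)",
    "Writing Parameters For Database [Plan1]",
    "[Mon Nov 16 11:31:55 2015]Local/Plan1/Plan1/admin/2088/Info(1013210)",
    "Reading Parameters For Database [Plan1]",
]
for rec in parse_log(sample):
    print(rec["level"], rec["message"].strip())
```

Because it is a generator, this approach streams through the file line by line, which matters when the log runs into gigabytes.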
I will keep you updated on the same… Do watch this blog for more…