Kevin Corlett

Essbase Performance Tuning

Updated: Nov 19, 2018

Tips and tricks to optimize Essbase ASO and BSO performance.

The approach to resolving query performance issues is to examine the problematic queries and restructure the database to achieve optimal performance for those queries, with the least possible impact on the other types of reporting and analytics the cube supports.

Optimizing ASO

The following is a list of items to consider when addressing ASO optimization:

  • Compression. A compression dimension is not mandatory, but it helps performance. The compression dimension is dynamic, and its members typically appear as the column headers in a data load file. Ideal compression is achieved when the leaf-level member count is evenly divisible by 16.

  • Accounts. Accounts is a dynamic dimension that allows non-additive unary operators (minus signs in a hierarchy still make it dynamic). The only reason to tag a dimension as Accounts in ASO is time balancing; expense flags are handled through UDAs and member formulas.

  • Time. A good candidate for the compression dimension. It should be stored; use multiple hierarchies if formulas are necessary. Prior to 9.3.1, to-date functionality was best performed in the Time dimension; in 9.3.1 and later, use an "analytic dimension" with to-date members such as MTD, QTD, and YTD.
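For example, a YTD member in such an analytic dimension could carry a member formula along these lines (a sketch only; the dimension names and the generation number are assumptions about the outline):

```
/* Hypothetical [YTD] member in an analytic "View" dimension:
   sum the Time members from the start of the year through the
   current period */
Sum(PeriodsToDate([Time].Generations(2), [Time].CurrentMember))
```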

  • Metadata vs. data. Don't evaluate data when a metadata check will suffice. For instance, IIF(Is([Scenario].CurrentMember, [Actual]), ...) is faster than IIF([Scenario].CurrentMember = [Actual], ...) because the latter actually compares cell values rather than member identity.
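As a sketch (the member names [Scenario], [Actual], and [Sales] are illustrative), the two forms of the test look like this in a member formula:

```
/* Faster: Is() checks member identity only (a metadata test) */
IIF(Is([Scenario].CurrentMember, [Actual]),
    [Sales],
    Missing)

/* Slower: = forces Essbase to evaluate and compare cell values */
IIF([Scenario].CurrentMember = [Actual],
    [Sales],
    Missing)
```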

  • MDX optimization. Don't use the MDX Round() function: rounding is a formatting concern for the reporting tool, not the database. Remove CurrentMember where possible, because it is already what is being calculated. Use LastPeriods() instead of Lag() when working with a range of Time periods. Don't use a function where a direct reference will do (call out specific members instead of functions, for instance). Only perform a calculation when data to support the math exists (i.e., start with Case When Not IsEmpty()). In 11.1.1, there is a new NonEmptyMember directive to calculate only when data exists.
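Two of these tips can be combined in one hypothetical member formula; a sketch assuming a [Sales] member and a three-period trailing average:

```
/* Guard the math first, then use LastPeriods() for the trailing
   range instead of chaining Lag() calls */
CASE WHEN NOT IsEmpty([Sales])
    THEN Avg(LastPeriods(3, [Time].CurrentMember), [Sales])
END
```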

  1. However, one drawback with ASO is that in some of the more complex ASO cubes I have found that retrievals can be quite slow. This is because members with formulas are always dynamically calculated in ASO, unlike BSO, where a calculated member can be stored. Usually this is not a problem, but with a very large cube, a relatively complex piece of MDX as a member formula, and a large retrieval, performance can be quite bad.

  2. Because ASO does not offer many options for optimization, the usual workarounds in these cases are making the formulas less complex or using much more focused retrievals. However, a colleague of mine recently alerted me to an MDX feature which can, in the right cases, make these types of retrievals much, much faster. So thanks to Johan Hammes; all the credit for this goes to him.

  3. MDX provides two functions called NONEMPTYMEMBER and NONEMPTYTUPLE. Basically, you put one of them as the first line of a member formula (you can use them in MDX scripts as well, but this post focuses on member formulas). As an argument you specify either a single member or a member list (for NONEMPTYMEMBER), or a cross-dimensional member specification (for NONEMPTYTUPLE). This acts as a filter: it checks the specified member or member combination, and if it is empty, it stops execution of the rest of the member formula. It almost acts like an IF statement where you test a member and do nothing if it is empty, yet it speeds things up far more than an IF. An example of such a member formula follows:


o NONEMPTYMEMBER [Sales]
  SUM(Children([Entity].CurrentMember),[Sales])

o So if Sales is empty, the SUM is not executed. The challenge with these functions is choosing the right member(s) for the filter, but they can make a huge difference. In one case where this was used, a large retrieval which took 10 minutes now takes 2 minutes, with the only difference being the usage of the NONEMPTYMEMBER function.
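The NONEMPTYTUPLE variant works the same way but filters on a member combination rather than a single member; a sketch, assuming [Sales] and [Actual] exist in the outline:

```
/* Skip the whole SUM unless the ([Sales], [Actual]) intersection
   has data */
NONEMPTYTUPLE ([Sales], [Actual])
SUM(Children([Entity].CurrentMember), ([Sales], [Actual]))
```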

  • Materialization. You can turn Query Hints (Level Usage for Aggregation) on for specific members via the member information tab in the member properties in EAS. You can also specify a specific level intersection to materialize via EAS. Both types of Query Hints can only be set through EAS. Only 1,024 level intersections can be materialized, a limit rarely approached in practice.

  • Slices. Slices are the primary feature enabling Excel Lock & Send and trickle-feed functionality. Essbase creates "subcubes" alongside the primary slice of the database, and dynamic aggregations are performed across the necessary slices to provide query results.

  • Data load. Data should be loaded as additive values (rather than replace, even on an empty database). Multiple buffers can be used to load the database in parallel; this requires simultaneous MaxL processes. Ignore zeros and missing values whenever possible (a buffer setting available since 9.0).
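A minimal MaxL sketch of such a parallel buffer load, assuming the stock ASOsamp.Sample application; the buffer IDs, file paths, and error handling are illustrative:

```
/* Session 1: load file1 into buffer 1 */
alter database ASOsamp.Sample initialize load_buffer with buffer_id 1
    property ignore_missing_values, ignore_zero_values;
import database ASOsamp.Sample data from data_file '/data/file1.txt'
    to load_buffer with buffer_id 1 on error abort;

/* Session 2 (run concurrently): load file2 into buffer 2 */
alter database ASOsamp.Sample initialize load_buffer with buffer_id 2
    property ignore_missing_values, ignore_zero_values;
import database ASOsamp.Sample data from data_file '/data/file2.txt'
    to load_buffer with buffer_id 2 on error abort;

/* Once both sessions finish, commit all buffers in one pass */
import database ASOsamp.Sample data from load_buffer with buffer_id 1, 2;
```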

  • Tablespaces. Tablespaces in an aggregate storage database help optimize data and work file storage and retrieval. Locating the database (default tablespace) and work (temp tablespace) files on different physical drives improves performance. Placing the data load files on a separate drive also helps, although it does no harm if they share a location with the default tablespace.

  • 64-bit Essbase. Because processes on 64-bit have greatly increased memory addressability over 32-bit processes, the 64-bit edition of Essbase can handle larger outlines and cache sizes than 32-bit Essbase. In computing environments that support it, implementing 64-bit Essbase can improve the performance of existing applications and can sustain much larger applications.

o Cache size: on 64-bit Essbase, you can set cache sizes larger than the 32-bit limits. The maximum value you can set for the data cache, data file cache, and index cache is 4 GB. Larger values for the data cache and data file cache can be enabled using the MEMSCALINGFACTOR configuration setting. See the Oracle Essbase Technical Reference.

o Thread Settings: the maximum thread settings are higher for 64-bit Essbase than for 32-bit Essbase.

o Buffer Size: default retrieval buffer settings for 64-bit Essbase are higher than for 32-bit Essbase.


Optimizing Essbase Retrievals

The following is a list of items to consider when addressing Essbase query optimization:

  • Standard essbase.cfg settings:
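As a hedged starting point (the application/database names and every value below are illustrative placeholders, not recommendations; tune per environment), a typical essbase.cfg might include settings such as:

```
; Thread pools
AGENTTHREADS 30
SERVERTHREADS 25

; Parallel data load and calculation (can be scoped per app/db)
DLTHREADSPREPARE Sample Basic 4
DLTHREADSWRITE Sample Basic 4
CALCPARALLEL Sample Basic 4

; Caches and network tolerance
DYNCALCCACHEMAXSIZE 200M
NETDELAY 500
NETRETRYCOUNT 1000
```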
  • Some commonly followed rules are either out-of-date or wrong some of the time, yet all of them are still taught in Essbase classes around the world:

1. Hourglass order for dimension ordering

2. Accounts first, time second

3. Dynamic calculated members slow retrieval

  • Optimize memory management, caches (data cache, data file cache, index cache, dynamic calculator cache), and some other settings (like retrieval buffers and sort buffers). The old "load your entire index into RAM" rule is out-of-date; set your index cache only as large as necessary.
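Cache and buffer sizes can be set per database in MaxL; a sketch with illustrative values against the stock Sample.Basic database:

```
/* Sizes are placeholders -- size the index cache to what is
   necessary, not to the whole index */
alter database Sample.Basic set index_cache_size 20mb;
alter database Sample.Basic set data_cache_size 50mb;
alter database Sample.Basic set retrieve_buffer_size 50kb;
```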

  • Server maintenance routines (defragmentation, log purging, recycling of services to clear memory, and archiving off older data sets into things like history cubes).

  • Report optimization: the most important factor, of course, is the layout of the dimensions on the report. Two good tips:

1. Make reports symmetrical

2. Try to order your dimensions like a columnar export: dense dimensions across columns and near the data in your rows, sparse dimensions in your rows.

  • Excel retrieves: turning off preserve formulas speeds up retrieves (and just use Excel protection to keep the formulas).

  • Hyperion Reports: use the MemberProperty() function for filtering on attribute dimension associations. In some reports, it may be better to calculate within Reports instead of on the server when many rows are suppressed.


Optimizing BSO Calc Scripts

The following is a list of items to consider when addressing BSO calculation optimization:

  • To optimize calc time, always minimize reading from the hard drive

  • Make sure your database is defragmented

  • Avoid two-pass members

  • No left-handed cross-dims (a cross-dimensional operator on the left side of an assignment is notoriously slow)


  • Repeating CLEARDATA commands causes multiple database passes

  • CALCDIM vs. CALCALL vs. AGG (use AGG)

  • IF vs. FIX (it's not always true to FIX on sparse and IF on dense)

  • Avoid unnecessary IFs when, say, a FIX would do.

  • You can use @ISxxx functions outside of IF statements

  • How to check for missing or zero

  • Changing zeros to missings


  • Avoid @XREF calls in dynamically calculated members

  • Try not to use @ALLOCATE or @MDALLOCATE

  • AGGMISSG ON speeds up aggregations

  • Pivot calculations to avoid cross-dims

  • Use VARs and ARRAYs (especially when pulling info in from an XREF)

  • Compile calc scripts into default calc scripts

  • CDFs vs. built-in functions (built-in functions are faster)

  • DATAEXPORT is fastest way to export

  • Use substitution variables to improve maintenance

  • CLEARBLOCK before re-aggregating? It depends, but it can often slow down calculations

  • Use CALCPARALLEL set to max in most cases.

  • FIX arguments are reordered

  • SET CREATENONMISSINGBLK is the devil's work (second only to left-handed cross dims)

  • Use SET FRMLBOTTOMUP ON to speed AGGs (in most cases) but watch out for incorrect results

  • Avoid functions that cause Cell Mode calculations
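Several of the tips above can be illustrated in one hypothetical calc script against the stock Sample.Basic outline (the member names and parallelism value are assumptions, not recommendations):

```
/* SET commands from the tips above */
SET AGGMISSG ON;        /* skip #Missing combinations during aggregation */
SET FRMLBOTTOMUP ON;    /* faster in most cases, but verify results */
SET CALCPARALLEL 4;

/* FIX on sparse members instead of IF where possible */
FIX (@LEVMBRS ("Market", 0), "Actual")
    "Sales" = "Sales" * 1.1;   /* dense math inside the sparse FIX */
ENDFIX

/* AGG sparse dimensions rather than CALC DIM or CALC ALL */
AGG ("Market", "Product");
```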
