SSAS Tabular small database fails in Excel pivot table because of memory shortage?!

  • Question

  • Hi, 

    I have a very small SSAS database, around 35 MB. I opened it in 32-bit Excel and started dragging fields onto a pivot table, and it started failing with memory errors. On the SSAS server, memory grew very fast until it reached 8 GB (the VM's total memory), and then the error was reported in Excel.

    What might be the issue with such a small database? I would understand it with a big database, but not with this one.

    Thank you

    Thursday, May 7, 2015 4:40 PM


All replies

  • Anybody have a suggestion for Pedro?

    Thanks!


    Ed Price, Azure & Power BI Customer Program Manager (Blog, Small Basic, Wiki Ninjas, Wiki)

    Answer an interesting question? Create a wiki article about it!

    Monday, May 11, 2015 2:45 AM
  • Pedro, try setting up a trace to track the MDX queries on the SSAS instance. That way you will have a trace to review, showing exactly which MDX queries Excel is firing at the SSAS back end (a lightweight alternative using a DMV is sketched below this post).

    The size of the SSAS cube doesn't determine memory consumption; it's the amount of calculation SSAS has to do to return the data.


    Suman

    Monday, May 11, 2015 3:06 AM
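
    If a full Profiler trace is awkward to set up, one lightweight alternative is querying the SSAS dynamic management views from an MDX window in SSMS. A minimal sketch using the standard DISCOVER_SESSIONS rowset:

        -- Show each open session and the last command it executed,
        -- i.e. the exact MDX that Excel sent to the server.
        SELECT
            SESSION_SPID,
            SESSION_USER_NAME,
            SESSION_LAST_COMMAND
        FROM $SYSTEM.DISCOVER_SESSIONS

    This only captures the most recent command per session, so for full timings and error events the trace Suman describes is still the better tool.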
  • But this is a Tabular model; all the data should already be in memory, right?
    Monday, May 11, 2015 10:05 AM
  • Hi Pedro,

     This can happen if the MDX query Excel sends to the SSAS server generates a Cartesian product (cross join) with a large result set. One possibility is that the relationships between the fact and dimension tables are not set up correctly. Also, drag a calculated measure (not a calculated column) into the pivot table before adding dimension attributes (a minimal sketch of the distinction follows this post). That allows SSAS to eliminate empty cells with no data.

    Start with the trace suggested by Suman.

    Hope this helps.

    Arun

    Monday, May 11, 2015 2:43 PM
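
    To make Arun's measure-versus-column distinction concrete, a minimal DAX sketch; the FactSales table and its columns are hypothetical names, not objects from Pedro's model:

        -- Explicit measure: evaluated once per pivot-table cell, which
        -- lets SSAS skip empty combinations.
        Total Amount := SUM ( FactSales[Amount] )

        -- Calculated column: materialized for every row of the table at
        -- processing time; dragging it into Values only wraps an implicit
        -- aggregation around it.
        LineAmount = FactSales[Qty] * FactSales[Price]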
  • Can you help me with how to set up the trace?

    Thank you

    Monday, May 11, 2015 2:52 PM
  • Hi Pedro,

    According to your description, you run out of memory when working with SSAS Tabular in Excel. Right?

    In Analysis Services Tabular, the xVelocity engine loads your entire database into memory (heavily compressed), so memory errors are easy to hit. I suggest you review the server memory settings (a sketch of the relevant msmdsrv.ini fragment follows this post). Note also that 32-bit Excel has only a limited address space available when it works against an in-memory (Tabular) database. Please also refer to the article below on optimizing the Tabular model:

    Optimize memory in #powerpivot and #ssas #tabular

    Regards,



    Simon Hou
    TechNet Community Support


    Tuesday, May 12, 2015 11:48 AM
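
    For reference, the server-side memory limits Simon mentions live in msmdsrv.ini and are also exposed through the server Properties dialog in SSMS. A sketch of the relevant fragment, with illustrative values only (values of 100 or less are read as a percentage of total physical memory; defaults vary by version):

        <ConfigurationSettings>
          <Memory>
            <LowMemoryLimit>65</LowMemoryLimit>
            <TotalMemoryLimit>80</TotalMemoryLimit>
            <!-- 1 allows the in-memory (VertiPaq) store to page to disk
                 under pressure instead of failing outright -->
            <VertiPaqPagingPolicy>1</VertiPaqPagingPolicy>
            <VertiPaqMemoryLimit>60</VertiPaqMemoryLimit>
          </Memory>
        </ConfigurationSettings>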
  • Hi, I'm back on this topic.

    I checked the trace log, capturing the Query Begin, Query End and Error events.

    I could see the MDX query, and although it looked trivial, I went to SQL Server Management Studio and ran the same MDX query myself, with the same result:

    The operation has been cancelled because there is not enough memory available for the application. If using a 32-bit version of the product, consider upgrading to the 64-bit version or increasing the amount of memory available on the machine.

    The Cartesian product of these 4 dimensions would generate around 12,000 rows of data (5 columns: 4 dimension attributes and 1 measure). I checked the tabular model relationships and they seem correct, with every dimension connected to the fact table by an integer surrogate key.

    I will continue reading the articles provided, but any insight would be helpful.

    EDIT: If I do this via DAX it returns results almost immediately (a query of roughly that shape is sketched after this post).

    Best regards


    • Edited by Pedro Samuel Wednesday, May 20, 2015 11:04 AM DAX result
    Wednesday, May 20, 2015 10:14 AM
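
    For comparison, a DAX query of roughly the shape Pedro describes; the dimension and attribute names here are hypothetical, and the argument separators follow the semicolon locale of his measure definition:

        EVALUATE
        SUMMARIZE (
            Inscricoes;
            'Dim1'[Attribute1];
            'Dim2'[Attribute2];
            "NR_ESTUDANTES_INSCRITOS"; [NR_ESTUDANTES_INSCRITOS]
        )

    Unlike the pivot table's MDX, a query like this only materializes the combinations that actually exist in the fact table, which is consistent with it returning almost immediately.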
  • I found out that the issue is related to the calculated measure I'm using, because if I use a plain fact measure it returns values without problems.

    NR_ESTUDANTES_INSCRITOS:=CALCULATE (DISTINCTCOUNT ( Individuo[ID_INDIVIDUO] ); Inscricoes)

    What I'm doing is counting the number of distinct appearances of the field ID_INDIVIDUO in the table "Inscricoes".

    Can this be done in a more performant way? It works very well when aggregated, but when the granularity is at the row level, where the distinct count is 1, it consumes all the memory.

    Thank you

    Wednesday, May 20, 2015 1:12 PM
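
    One candidate rewrite for the measure above, assuming the Inscricoes fact table itself carries the ID_INDIVIDUO key (the integer surrogate-key relationship to Individuo suggests it does). Counting the distinct keys directly on the fact table avoids passing the whole Inscricoes table to CALCULATE as a filter, which is what becomes expensive at row-level granularity:

        -- Same result whenever every Inscricoes row links to a valid
        -- Individuo row; no table filter argument needed.
        NR_ESTUDANTES_INSCRITOS := DISTINCTCOUNT ( Inscricoes[ID_INDIVIDUO] )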
  • I found out that the issue is related to the calculated measure I'm using, because if I use a plain fact measure it returns values without problems.

    NR_ESTUDANTES_INSCRITOS:=CALCULATE (DISTINCTCOUNT ( Individuo[ID_INDIVIDUO] ); Inscricoes)

    What I'm doing is counting the number of distinct appearances of the field ID_INDIVIDUO in the table "Inscricoes".

    Can this be done in a more performant way? It works very well when aggregated, but when the granularity is at the row level, where the distinct count is 1, it consumes all the memory.

    Thank you

    I have the same problem; does anyone else have a fix for this?

    DISTINCTCOUNT inside CALCULATE filtered against a table...

    Monday, August 10, 2015 8:20 PM