MDX performance tips

  • Question

  • Hi,

    Following is my MDX query. Do you see any scope for improving its performance?

    The aggregate function of [Measures].[Value1] is MAX, and the underlying fact table column for this measure contains only 0 or 1 values.

    WITH SET [SET1] AS
        NonEmpty(
            [DimMD].[SID].[SID] *
            [DimMD].[MID].[MID] *
            [DimMD].[TAP].[TAP]
            , [Measures].[Value1]
        )
    MEMBER [Measures].[Y] AS
        Sum(Existing [SET1], [Measures].[Value1])
    SELECT {[Measures].[Y]} ON 0,
           Extract([SET1], [DimMD].[MID]) ON 1
    FROM [Reports]
    WHERE ([Date].[Calendar].[Year].&[2017], [Customer].[CustomerId].&[10])


    Aniruddha http://aniruddhathengadi.blogspot.com/


    Tuesday, March 20, 2018 2:33 PM

Answers

  • It really depends on the sparsity/density of your data (even at the individual attribute level) and on the data model - for example, if there's no direct relation between the measure group and DimMD (say it goes via a many-to-many relationship), or if there are extra background calculations killing performance. Technically all the attributes are from the same dimension, so the set should be resolved fairly quickly; the right attribute relationship structure may significantly boost resolution, so it doesn't have to go via the dimension key all the time if you have a bushy model. It also depends on element counts (huge cross-joins).

    Do some data/query profiling: compare the NonEmpty case to a scenario with a manually forced Exists(.., MG) - although autoexists should kick in in this case anyway - or even run different hybrid options (including NonEmpty on individual attributes).
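    As one way to set up that comparison (names taken from the query above; "Reports" is assumed here to also be the measure group name - an assumption, not something stated in the post), the NonEmpty set can be profiled side by side with an Exists variant that names the measure group explicitly:

    ```mdx
    -- Variant A: original NonEmpty over the measure
    WITH SET [SET1] AS
        NonEmpty(
            [DimMD].[SID].[SID] *
            [DimMD].[MID].[MID] *
            [DimMD].[TAP].[TAP]
            , [Measures].[Value1])
    -- Variant B: force the measure-group reduction via Exists
    SET [SET2] AS
        Exists(
            [DimMD].[SID].[SID] *
            [DimMD].[MID].[MID] *
            [DimMD].[TAP].[TAP]
            , , "Reports")
    SELECT {[Measures].[Value1]} ON 0,
           [SET1] ON 1  -- swap in [SET2] and compare durations in Profiler
    FROM [Reports]
    ```

    Note the two sets are not guaranteed identical: NonEmpty drops tuples whose measure value is empty, while Exists against a measure group keeps any tuple that has fact rows - so compare both results and timings.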

    Check whether scoping the Sum() calculation at this granularity on the cube's MDX script side makes any difference (if the business model/usage allows it, and after validating for conflicts with other reports).
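    A minimal sketch of what such a SCOPE assignment might look like in the cube's MDX script, assuming [Measures].[Y] were promoted to a cube-level calculated measure (an illustration only, not the poster's actual script):

    ```mdx
    CREATE MEMBER CURRENTCUBE.[Measures].[Y] AS NULL;

    SCOPE([Measures].[Y]);
        -- Pin the calculation at MID granularity so the engine can
        -- evaluate it in bulk rather than cell-by-cell per query
        SCOPE([DimMD].[MID].[MID].MEMBERS);
            THIS = Sum(
                NonEmpty(
                    [DimMD].[SID].[SID] *
                    [DimMD].[TAP].[TAP]
                    , [Measures].[Value1])
                , [Measures].[Value1]);
        END SCOPE;
    END SCOPE;
    ```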

    From my understanding you're trying to get custom sums/totals at a different granularity and from a different leaf (?) aggregation.

    Try a limited, forced/constrained cell-by-cell (row-by-row in this case) mode on a possibly smaller computational set - see whether it makes any difference to have just the nonempty MID members (based on the original MAX aggregation) on the axis, and Sum({MID.CurrentMember * SID * TAP} <different combinations/options>, Value1) as a calculated measure.
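    That hybrid could look like the following rewrite of the original query - only the nonempty MID members stay on the axis, and SID * TAP is pushed into a per-row calculated measure (one of several possible combinations; all names come from the original query):

    ```mdx
    WITH MEMBER [Measures].[Y] AS
        Sum(
            { [DimMD].[MID].CurrentMember } *
            [DimMD].[SID].[SID] *
            [DimMD].[TAP].[TAP]
            , [Measures].[Value1])
    SELECT {[Measures].[Y]} ON 0,
           NonEmpty([DimMD].[MID].[MID], [Measures].[Value1]) ON 1
    FROM [Reports]
    WHERE ([Date].[Calendar].[Year].&[2017], [Customer].[CustomerId].&[10])
    ```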

    In most cases it's an experimental, trial-and-error (pass/fail), comparative/iterative approach - if it's worth spending the time on it.
    Tuesday, March 20, 2018 10:46 PM

All replies

  • Hi Aniruddha,

    Thanks for your question.

    It is pretty hard to answer this without access to your cube. Please refer to the blogs below on improving MDX query performance:
    https://www.mssqltips.com/sqlservertip/4432/improve-sql-server-analysis-services-mdx-query-performance-with-custom-aggregations/
    https://sqldusty.com/2012/03/28/top-3-simplest-ways-to-improve-your-mdx-query/


    Best Regards
    Willson Yuan
    MSDN Community Support
    Please remember to click "Mark as Answer" the responses that resolved your issue, and to click "Unmark as Answer" if not. This can be beneficial to other community members reading this thread. If you have any compliments or complaints to MSDN Support, feel free to contact MSDNFSF@microsoft.com

    Wednesday, March 21, 2018 5:32 AM
  • Hi Yuri

    Thanks for your valuable feedback.

    I tried the SCOPE statement and was amazed to see that the MDX query, which used to take 14 sec even with cache, now executes in 1 sec. What does SCOPE do that improved performance so dramatically?


    Aniruddha http://aniruddhathengadi.blogspot.com/

    Thursday, March 22, 2018 4:16 PM