xVelocity engine representation

  • Question

  • Hi,

    I'm looking for a picture that shows how data flows from the sources into the storage engine, including data compression, when a PowerPivot model is processed, and then how the data is decompressed and pulled from PowerPivot into Excel when a query runs. In other words, I'd like a picture that illustrates both the PowerPivot processing phase and the PowerPivot querying phase.

    So far, I haven't found any such pictures.

    Any help would be appreciated.

    Many thanks

    Friday, October 30, 2015 9:48 PM

Answers

  • You're asking a very complex question, and I don't know of any resources that offer a complete answer. You'll have some reading to do. I'd start with the white paper linked first below, and follow up with the rest of the articles, as they talk a good deal about memory utilization and compression optimization in a Tabular model.

    White paper on using Tabular models in large-scale enterprise analysis solutions.

    Creating memory-efficient models.

    Optimizing high-cardinality columns.

    Checklist for memory optimizations.

    Understanding why these model-optimization strategies work will help you understand what's going on internally. The white paper linked in the first article is very thorough, and does have some visual representations of what the physical layout of a model looks like. Like I said, though, you've got reading ahead of you if you want to understand this more thoroughly.

    Edit: You'll want to look at the section starting on page 15 for the compression pieces, but I'd suggest reading the whole paper.
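    To give a feel for what that section of the white paper covers, here is a toy sketch of two techniques the xVelocity (VertiPaq) engine is known to use for column compression: dictionary encoding followed by run-length encoding. This is a simplified illustration, not the engine's actual implementation.

    ```python
    def dictionary_encode(column):
        """Replace each distinct value with a small integer ID into a dictionary."""
        dictionary = {}
        ids = []
        for value in column:
            if value not in dictionary:
                dictionary[value] = len(dictionary)
            ids.append(dictionary[value])
        return dictionary, ids

    def run_length_encode(ids):
        """Collapse runs of repeated IDs into [id, run_length] pairs."""
        runs = []
        for value_id in ids:
            if runs and runs[-1][0] == value_id:
                runs[-1][1] += 1
            else:
                runs.append([value_id, 1])
        return runs

    column = ["Red", "Red", "Red", "Blue", "Blue", "Red", "Green", "Green"]
    dictionary, ids = dictionary_encode(column)
    runs = run_length_encode(ids)
    print(dictionary)  # {'Red': 0, 'Blue': 1, 'Green': 2}
    print(ids)         # [0, 0, 0, 1, 1, 0, 2, 2]
    print(runs)        # [[0, 3], [1, 2], [0, 1], [2, 2]]
    ```

    The intuition is the same as in the paper: low-cardinality columns compress extremely well, which is why the optimization articles above focus so heavily on reducing cardinality.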

    GNet Group BI Consultant


    • Edited by greggyb Tuesday, November 10, 2015 4:18 PM more details
    • Proposed as answer by Michael Amadi Sunday, November 15, 2015 10:07 AM
    • Marked as answer by Michael Amadi Friday, December 11, 2015 5:43 PM
    Tuesday, November 10, 2015 3:54 PM

All replies

  • Hi Pscorca,

    Here are the data refresh steps:

    1. Check authorization: verify that the user has sufficient permissions to request updated data for the PowerPivot data source.
    2. Read the list of data sources that are scheduled for the current data refresh operation.
    3. Open a connection to each data source using the connection string stored inside the PowerPivot data source.
    4. Open and query each data source in parallel.
    5. Save the workbook to the content database.
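    The sequence above can be sketched as a small simulation. Every name and data structure below is a hypothetical stand-in for PowerPivot service internals, which are not exposed as user code.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def refresh(workbook, user, content_db):
        # 1. Check authorization.
        if user not in workbook["authorized_users"]:
            raise PermissionError("insufficient permissions to refresh")
        # 2. Read the list of data sources scheduled for this refresh.
        sources = [s for s in workbook["data_sources"] if s["scheduled"]]
        # 3-4. Open each source via its stored connection string and query in parallel.
        def query(source):
            return (source["connection_string"], source["rows"])
        with ThreadPoolExecutor() as pool:
            results = dict(pool.map(query, sources))
        # 5. Save the refreshed workbook to the content database.
        content_db[workbook["name"]] = results
        return results

    workbook = {
        "name": "sales.xlsx",
        "authorized_users": {"alice"},
        "data_sources": [
            {"connection_string": "sql://srv/db", "scheduled": True, "rows": 120},
            {"connection_string": "odata://feed", "scheduled": False, "rows": 30},
        ],
    }
    content_db = {}
    results = refresh(workbook, "alice", content_db)
    print(results)  # {'sql://srv/db': 120}
    ```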

    Reference:
    https://technet.microsoft.com/en-us/library/ee210690%28v=sql.105%29.aspx#drprocessing

    Regards,


    Charlie Liao
    TechNet Community Support

    • Proposed as answer by Charlie Liao Monday, November 9, 2015 6:46 AM
    • Marked as answer by Charlie Liao Tuesday, November 10, 2015 1:06 AM
    • Unmarked as answer by pscorca Tuesday, November 10, 2015 7:55 AM
    Wednesday, November 4, 2015 2:09 AM
  • Hi Charlie,

    the steps you describe aren't what I'm looking for.

    I'd like to find one picture that illustrates data processing for a PowerPivot data model, and a second picture that illustrates querying against that data model.

    In particular, I'd like to see the data compression phase represented during processing and the decompression phase represented during querying.

    Thanks

    Tuesday, November 10, 2015 7:58 AM
  • Hi Greg, thanks for your reply, but I'm looking for a simple image, if such a thing exists.

    Thanks

    Friday, November 20, 2015 5:00 PM
  • There is no way to show what you've asked for in a single simple image. You could have a very large and complex image, or lots of pieces. The section on compression in the white paper that I pointed out (starting on page 15) helps with the internal pieces of compression.

    You've asked for two very complex and heavily optimized query processes (source to Tabular engine, Tabular engine to Excel). The first encompasses a full understanding of every potential source. The latter touches on the Tabular storage engine, Tabular formula engine, the entirety of the DAX query optimizer, an MDX to DAX translation layer (Excel is firing off MDX queries), the entire process of accessing compressed data, materializing the appropriate columns and returning them.

    Each of those pieces is deserving of at the very least a long essay.
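    As a very rough illustration of just the last two stages of that second path (accessing compressed data and materializing columns), here is a toy sketch. Every name below is a hypothetical stand-in, not the real engine's API.

    ```python
    def run_length_decode(runs):
        """Storage-engine step: expand (value_id, run_length) pairs back into IDs."""
        ids = []
        for value_id, length in runs:
            ids.extend([value_id] * length)
        return ids

    def materialize(runs, dictionary):
        """Materialize a column: decode the IDs, then look each one up in the dictionary."""
        reverse = {v: k for k, v in dictionary.items()}
        return [reverse[value_id] for value_id in run_length_decode(runs)]

    # A compressed column segment: a value dictionary plus run-length-encoded IDs.
    dictionary = {"Red": 0, "Blue": 1, "Green": 2}
    runs = [(0, 3), (1, 2), (2, 1)]
    print(materialize(runs, dictionary))  # ['Red', 'Red', 'Red', 'Blue', 'Blue', 'Green']
    ```

    The real engine only materializes the columns a query actually touches, which is part of why column selection matters so much for query performance.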

    GNet Group BI Consultant

    Friday, November 20, 2015 7:30 PM