Cubes, partitions and performance | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


Sorry about reposting this here as a new thread, but maybe the original post was buried so deep in the old thread that nobody paid attention to it. Here it goes again. I'm trying to speed up cube processing time by means of partitions.
Let's suppose I have defined the partitions in such a way that only the most recently added records get processed.
So I process just the partition with the last added records.
The problem is with the virtual cubes that rely on the cube I have just processed. How can I avoid processing the whole virtual cube? There are no partitions in a virtual cube... All the time I gain by processing just the partition with new data is lost when I'm forced to do a full process (or refresh data) on the virtual cube. Thanks.

What are your partition criteria? Partitioning can bring great performance advantages in Analysis Services and make BI data easier to manage.
Cube processing is included in the tasks performed by Master Update, but cube processing is not the bottleneck: it runs at more than twice the speed of Master Update. Thus, the RDBMS operations performed by Master Update deserve the most careful characterization. One of the TechNet documents notes:
Another approach might be to add a clustered index on the time key field of the fact table in the Staging database, which should enable Master Update to select the rows that belong in each partition table in the Subject Matter database more quickly. A clustered index slows Master Import somewhat, but its impact can be minimized if you presort the data.
Satya SKJ
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
The criteria are simple.
Historical partition: "DimDate"."DATE" < '20041230'
Fresh partition: "DimDate"."DATE" >= '20041230'
So my fresh partition always holds today and yesterday (each day this filter is updated programmatically through DSO). When new records are added to the fact table, I just call Partition.Process(DSO.ProcessTypes.processRefreshData). I do this every 5 minutes or so, depending on whether new records were added to the fact table. So processing the new partition is fast. But the problem is with the virtual cube that depends on this one.
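To make the daily filter update concrete, here is a minimal sketch of how the two filter strings could be rebuilt each day so the fresh partition always covers yesterday and today. This is an illustration only: the function name is hypothetical, and the actual call into DSO (via COM automation) that applies the filters and triggers processRefreshData is omitted.

```python
from datetime import date, timedelta

def partition_filters(today: date) -> tuple[str, str]:
    """Build the two partition filter strings so that the 'fresh'
    partition always holds yesterday and today, and the 'historical'
    partition holds everything older."""
    # Boundary is yesterday, formatted as YYYYMMDD to match the
    # "DimDate"."DATE" literals used in the partition definitions.
    boundary = (today - timedelta(days=1)).strftime("%Y%m%d")
    historical = f"\"DimDate\".\"DATE\" < '{boundary}'"
    fresh = f"\"DimDate\".\"DATE\" >= '{boundary}'"
    return historical, fresh

# Example: on 2004-12-31 the boundary is 20041230, matching the
# filters quoted above.
hist, fresh = partition_filters(date(2004, 12, 31))
```

A job run once a day (before the 5-minute refresh loop starts) would assign these strings to the partitions' filters through DSO and then fully process only the fresh partition.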
How should I process it? I think I have no other choice than to do a full process on the virtual cube.
Is that right? Also, I don't understand what you mean by Master Update... what is a "Master Update"?

Hello Gus, I must not have read your question in the last thread. Regarding virtual cube processing, I don't think it is necessary, since virtual cubes only contain the structure of the source cubes and not the actual data; think of a virtual cube as a view in SQL Server. Only process a virtual cube the first time you create it and when you change the schema of the source cubes, e.g. calculated members, dimensions, measures, etc. Raulie
Hewlett-Packard Company
Correct me if I'm wrong, but I think that a virtual cube _must_ be processed when the underlying cubes have a change in their data or, of course, their structure. As far as I know it's not like a VIEW in SQL. If you don't process the virtual cube, how do all the calculated members defined there get processed?
You're WRONG. The first line in Books Online regarding virtual cubes clearly states that a virtual cube is similar to a view; it is a view in concept, not physically a view. Why don't you test this out for yourself, for your own future reference? On a non-production cube, for instance the FoodMart cube, create a virtual cube and then process it; notice that it doesn't ask for a storage mode, nor do any dimensions get processed. "Because virtual cubes store only their definitions and not the data of their component cubes, they require virtually no physical storage space." Then modify the data in the source cube and reprocess the source cube; notice how the data changes to the source cube are applied to the virtual cube. BTW, I did say that the virtual cube requires a process if the structure of the source cubes changes.
Hewlett-Packard Company
Ok.

I've tested it, and you're right.

There's no need to process the virtual cube. It seems it is like a view after all.

There's only one thing I don't quite understand. Why is it that when you process a cube (a cube, not a partition) in Analysis Manager (no matter whether you do a refresh data or a full process), the program automatically processes the virtual cubes that rely on the one just processed?

This was one of the things that confused me. Is it some kind of error in Analysis Manager? Why does it automatically reprocess the virtual cube?

Thanks a lot. Thanks, brother. We'll have a taco and tequila any time you like ;-)
Hey, thanks a lot. I'll take you up on that offer next time I visit Argentina, or whenever you're in the area in Guadalajara, Mexico. Raulie
Hewlett-Packard Company