Slow cubes | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


Hello, I have just moved my SQL Server across to a new dual Xeon 3.4 GHz with 2 GB of RAM, running Windows 2003 with SQL Server 2000 Standard Edition. I'm very new to SQL and cubes, but the problem is this: I run a cube that took around 8 hours to complete on the old single-processor Xeon (1 GB of RAM), where CPU utilization was around 100%. The new server seems to take the same amount of time, but the CPU utilization stays very low, around 10%. It's like the cube isn't even running. Any ideas would be great. Nat

How about rebuilding the cubes? I think the dimensions must have changed.
Does the cube give any errors while processing,
or does it process successfully after 8 hrs?
I think there can be service pack issues with Analysis Services too.
Check for any hotfixes.
Hi, Unfortunately I don't know how to build cubes at all. The cube is successful after 8 hours. I would have assumed that with a much faster machine the cube would process faster? The guy who built the cubes told me that the server I got was too powerful and that is the reason the performance is the same… this sounded absurd to me, so since I am not an expert in the field I thought I would just see what you guys had to say about the topic. I have already service-packed SQL up to Service Pack 4, and I have the latest Analysis Services SP also. Thanks for your time, Nat

Hi Nat,
As you said the cube processes successfully, so there is no problem with the cube itself.
The speed of processing depends highly on the aggregation level you chose while creating the cube.
If you choose performance, then of course there will be more aggregations, so the cube needs to go through all possible combinations (cross joins) of those shared dimensions.
One more factor can be the amount of data you are doing trend analysis on; if it is huge, processing will take time.
Anyhow, I hope you are only processing the cube monthly.
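To get a feel for the combinatorial growth ranjitjain describes, here is a rough sketch. The dimension and level counts below are hypothetical, not taken from Nat's cube; the point is only that each shared dimension with n levels contributes n + 1 choices (one per level, plus the fully summarized "all" level), so the candidate aggregation space is the product across dimensions.

```python
# Rough sketch: the candidate aggregation space of a cube grows as the
# product of (levels + 1) across its dimensions -- one choice per level
# plus the "all" (fully summarized) level. The dimension counts below
# are made up, purely to illustrate the growth.

def candidate_aggregations(levels_per_dimension):
    """Number of possible aggregation combinations for the given dimensions."""
    total = 1
    for levels in levels_per_dimension:
        total *= levels + 1  # each level, plus the "all" level
    return total

# A small cube: 3 dimensions with 2 levels each.
small = candidate_aggregations([2, 2, 2])  # 3 * 3 * 3 = 27

# A wider cube: 8 dimensions of mixed depth. The space explodes,
# which is why processing slows as dimensions are added.
wide = candidate_aggregations([2, 3, 3, 4, 2, 3, 2, 5])

print(small, wide)  # 27 vs. 51840
```

This is why the advice below about keeping the aggregation performance-gain setting well under 100% matters: the server only precomputes a subset of this space.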
Hi ranjitjain, Thanks for your replies! Unfortunately this particular cube runs every night! There are several others that take an hour or so each. The main reason I upgraded the old server was that the CPU utilization was so high, and the cubes were taking so long to run, that I never had a chance to back up the server. I guess I was just expecting too much out of the new server! I think it's time I learnt something about these cubes!
Hi Nat,
Can you check whether the cube is set for incremental processing, or whether it processes the whole cube every time? If you are running it every day, the incremental data should not be greater than 1 GB, and processing that should not take that much time.
Check the cube properties.
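To illustrate why the incremental-processing check above matters, here is a back-of-the-envelope sketch. The row counts are invented for illustration, not taken from Nat's server: a full process rereads the entire fact table every night, while an incremental process reads only the rows added since the last run.

```python
# Back-of-the-envelope comparison of full vs. incremental cube processing.
# All numbers are hypothetical, purely to show the ratio of work involved.

TOTAL_ROWS = 200_000_000   # rows in the whole fact table (assumed)
ROWS_PER_DAY = 500_000     # rows added each day (assumed)

def rows_read(mode, days_since_last_run=1):
    """Rows the processing run must reread from the source."""
    if mode == "full":
        return TOTAL_ROWS                           # whole fact table, every night
    elif mode == "incremental":
        return ROWS_PER_DAY * days_since_last_run   # only the new slice
    raise ValueError(f"unknown mode: {mode}")

full = rows_read("full")
incr = rows_read("incremental")
print(f"full reprocess reads {full // incr}x more rows than incremental")  # 400x
```

Under these assumed numbers a nightly full process does 400 times the source reads of an incremental one, which is why a cube that is accidentally set to full processing can run for hours regardless of CPU speed.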
Nat, one thing you could try: in Analysis Services, right-click on the server and choose Properties; you can then change various processing variables. Doing this is a bit of a black art, so you need to do a fair amount of testing to find the best settings, and keep a note of the original settings it was optimised with. Have a look at the link for an explanation of what each one does. It isn't really very well documented how to get the best out of your server; you have to suck it and see. Regards, Robert.

Do you have aggregations set on your cube, and if so, what percentage or number are they set to? How many dimensions does this cube contain? RGKN (nice article). Raulie
There are many reasons for slow processing, but there are usually two main causes: volume of data and number of aggregations. One way to check whether it is one rather than the other is to monitor the cube build. Basically, AS loads data from the source (usually SQL Server) and then aggregates it (to make querying faster).

If your volume of data is high, AS could be taking a while to read it, especially if it comes over a slow network or your read-ahead buffer size is still set to the default 4 MB. If this is the case, you may want to look into increasing the read-ahead buffer size or reducing the volume; strategies include partitioning, pre-aggregating in SQL Server, or reducing the amount of historic data you provide.

If the data load is adequate, you could be losing time aggregating, and this can be because of the number and size of your dimensions. The earlier link should provide details of the impact of many dimensions and the sizing issues of dimensions with many levels. Another thing to look at is the number of aggregations the cube has. When you set aggregations, you can set the performance gain up to 100%. If you set it to 100%, as ranjitjain has mentioned, you calculate every single possibility. This is rarely needed: OK, retrieval times will be quicker, but by how much? If you load a lot of data into a cube with many dimensions, you want to keep the aggregations much lower than 100% performance gain; retrieval times will be slower but probably acceptable (only tests would tell). Hope this helps. wildh
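As a rough illustration of the data-volume point above, here is a small arithmetic sketch. The data size and throughput figures are assumptions for illustration, not measurements from this thread: reading the source data over a slow link can dominate the build before any aggregation even starts.

```python
# Rough arithmetic for the data-load phase of a cube build.
# Source size and throughput figures are assumptions, for illustration only.

def load_hours(source_gb, throughput_mb_per_s):
    """Hours needed to read the source data at a sustained throughput."""
    return (source_gb * 1024) / throughput_mb_per_s / 3600

# ~50 GB of fact data over a ~10 MB/s link (roughly a 100 Mbit LAN)
over_network = load_hours(50, 10)
# the same 50 GB read from local disk at ~60 MB/s
from_local_disk = load_hours(50, 60)

print(f"{over_network:.2f} h over the network vs {from_local_disk:.2f} h locally")
```

Under these assumed figures the network read alone takes roughly six times as long as a local read, which is consistent with the advice to check the source link and the read-ahead buffer before blaming the CPU.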