
SQL Server Performance Forum – Threads Archive

Testing Performance: How can this ever happen?

Hi all, I have the same Analysis Services database on three different machines (including my local machine). Somehow the fastest machine (which is our reporting server) gives the slowest performance. I am running scripts that execute 40 different MDX queries and noting the time each takes to run (logging everything in Excel). The DEV machine, which has 1 processor and 500 MB of RAM, is the fastest.
The RPT machine has 4 processors and 4 GB of RAM and is the slowest. Note: The same database (with the same cube/aggregations/dimensions) is what I’m testing on both machines. I am running a VB application from my local box, connecting to the different boxes and running queries against them. Can anyone explain why this would ever happen and how I could make the RPT machine perform better? Thanks,
Amir

When you run your tests, is RPT under its normal production load at the time?
Luis Martin
Moderator
SQL-Server-Performance.com
The views expressed here are those of the author and no one else. There are no warranties as to the reliability or accuracy of anything presented here.
No. No one is using the cubes except my process. The RPT server is a reporting server, though, so other applications do run on it. But when I monitor CPU utilization on the machine, it is only 25%, i.e. 1 CPU being used out of the 4. Also, the memory used by msmdsrv is not very large. Is there a way to get Analysis Services to use all the CPUs and more memory? Amir

To get multi-CPU performance from AS you will need to have i) partitioned your cubes; and ii) be running queries that access data from multiple partitions. If your cube is all in one partition, then there is effectively only one entry point for the process attempting to get the data out of the cube (a quick DSO sketch for checking partition counts follows the list below). I would start looking at:
1. The physical network between your local (client caller) machine and the servers. Are you on the same switch as one box but a hop away from the other?
2. Use the performance counters on the servers to capture what is actually happening on them during the test runs.
3. How are you timing your queries? If it's not in code (i.e. you're using a stopwatch or similar), and you have the source to your VB app, write some logging in there (see the timing sketch after this list). Make sure you set your 'completed' time to be when the cellset is returned to the client, not at the point when the client has rendered the result(s). I.e. if you test using the sample MDX app, you need to modify the code, as the UI takes some time to render, especially for large result sets.
4. Look on the MSDN site for the performance docs (I think Satya has posted links to them numerous times on this forum).
5. Are you physically archiving and restoring the cubes from server to server, or are cubes being built on each one? (I.e. the server settings could be different for 'block' sizes written to disk.)
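
On the partitioning point, here is a minimal sketch for counting partitions per cube, assuming you have a reference to the Microsoft Decision Support Objects (DSO) library and OLAP admin rights on the target box; the server name "RPTSERVER" is a placeholder:

' Hedged sketch: count partitions per cube via DSO (AS 2000).
Dim svr As New DSO.Server
Dim db As DSO.MDStore
Dim cube As DSO.MDStore

svr.Connect "RPTSERVER"
For Each db In svr.MDStores            ' databases on the server
    For Each cube In db.MDStores       ' cubes in the database
        Debug.Print db.Name & "." & cube.Name & ": " & _
                    cube.MDStores.Count & " partition(s)"
    Next
Next
svr.CloseServer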
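
And for point 3, a minimal timing sketch in VB6 with ADOMD (references to ADO and ADOMD assumed); the server, catalog, and MDX query below are placeholders, not your actual setup:

Dim cn As New ADODB.Connection
Dim cs As New ADOMD.Cellset
Dim tStart As Single, tElapsed As Single

cn.Open "Provider=MSOLAP;Data Source=RPTSERVER;Initial Catalog=FoodMart 2000"

tStart = Timer                       ' seconds since midnight; coarse but fine here
cs.Open "SELECT {[Measures].DefaultMember} ON COLUMNS FROM [Sales]", cn
Debug.Print cs(0).Value              ' touch a cell so the result is materialized
tElapsed = Timer - tStart            ' stop the clock before any UI rendering

Debug.Print "Elapsed (s): " & Format(tElapsed, "0.000")
cs.Close
cn.Close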
HTH, Steve
Forgot to mention -> are you monitoring your own machine's performance too? This will definitely have an impact, as PTS (the PivotTable Service client component) is affected by what your machine is doing. Also, make sure you're flushing all caches when doing your tests. See the MSDN docs for how to do this effectively (a blunt fallback is sketched below).
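
One blunt but reliable way to clear the server-side caches between runs is to bounce the service. This assumes the default AS 2000 service name and admin rights on the box; the subtler techniques are in the docs mentioned above:

' Crude cache flush between test runs: restart Analysis Services.
Dim sh As Object
Set sh = CreateObject("WScript.Shell")
sh.Run "net stop MSSQLServerOLAPService", 0, True    ' 0 = hidden, True = wait
sh.Run "net start MSSQLServerOLAPService", 0, True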
HTH, Steve
Hi Steve, Thanks for the detailed response. I will check into the network and performance counters. I am tracking the time programmatically and the data is not being rendered at all. I just open the cellset, note the time, and then move on. So that should not be the problem. Can you tell me more about 'block sizes' written to disk? How do I find that out? How do I change it if necessary? If anyone else has any thoughts, please feel free to mention them. Thanks,
Amir

Also, in this case, which performance counters would you recommend paying attention to? Amir

I would propose this link for the detailed Analysis Services Performance Guide: http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ansvcspg.mspx HTH Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
Have you checked whether the statistics on the slow machine are up to date? Maybe it's just a bad plan selection by the optimizer. What about the disk counters? Do they look good? Gaurav
Moderator
Man thrives, oddly enough, only in the presence of a challenging environment- L. Ron Hubbard

The views expressed here are those of the author and no one else. There are no warranties as to the reliability or accuracy of anything presented here.
Gaurav, I believe the Analysis Services optimizer functions differently from SQL Server's (I think so, correct me if I'm wrong). Ta Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
As far as I know there is no way to view execution plans. The setting for 'process buffer size' affects the size of the data 'blocks' (using quotes as I'm not sure this is the technically correct name) that are written to disk. By default this setting is absurdly small. There is a good presentation, downloadable from the web, that gives an indication of a good starting size. I have posted the link(s) previously on sqlservercentral.com but will dig them up and post here also. A quick way to check the current value from code is sketched below. HTH, Steve
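
A minimal sketch for reading the current setting, using the ProcessReadSegmentSize value named in the Performance Guide. The exact registry key path below is an assumption, so verify it with regedit on your AS 2000 server before relying on it:

' Hedged sketch: read the process buffer setting from the registry.
' Value name is from the Performance Guide; key path is an assumption.
Dim sh As Object
Set sh = CreateObject("WScript.Shell")
Debug.Print sh.RegRead( _
    "HKLM\SOFTWARE\Microsoft\OLAP Server\CurrentVersion\ProcessReadSegmentSize")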
The links you should look up are:

Parallel Processing Utility:
http://www.microsoft.com/downloads/…73-6df7-4688-8211-e02cf13cbdb4&DisplayLang=en

Analysis Services Operations Guide and webcast:
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/anservog.mspx
http://www.microsoft.com/downloads/…ac-8fff-4131-ac45-712e56b2296e&DisplayLang=en
http://www.microsoft.com/downloads/…69-df72-419e-9965-a036e40aab5e&DisplayLang=en

Analysis Services Performance Guide and webcast:
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ansvcspg.mspx
http://www.microsoft.com/downloads/…0f-fd0e-49ba-935f-e1d1803e8f42&DisplayLang=en

OLAP Scribe tool:
http://www.microsoft.com/downloads/…5b-4022-429f-a9d5-dfe00e8545bd&DisplayLang=en

DSO/XML tool (includes pointer to source code):
http://www.microsoft.com/downloads/…18-a2be-4534-8092-6be753fcc7bf&DisplayLang=en

A quick excerpt from the Performance Guide, specifically on the Process Buffer and Read-Ahead Buffer.

Process Buffer
Analysis Services creates a process buffer in memory for each partition it processes. It allocates memory to each buffer as it is needed, and releases this memory from each buffer when partition processing is complete. Analysis Services uses each buffer for two separate tasks.
•First, Analysis Services loads fact data for the partition from the read-ahead buffer into the process buffer and then sorts, indexes, and writes the fact data into the fact level of the MOLAP partition file in segments. The sorting process includes as much data as can be held in the process buffer at one time.
•Second, Analysis Services uses the process buffer for calculating aggregations. If the process buffer in memory is not large enough to hold all calculations during this phase, Analysis Services supplements the process buffer with temporary files on disk to complete the calculation of aggregations.
The Process buffer size setting on the Processing tab in Analysis Manager (the ProcessReadSegmentSize value in the registry) determines the maximum size of each process buffer. By default, the maximum size of each process buffer is approximately 32 MB. For most applications, this is probably too small and should be immediately increased. A more effective setting is at least 150 to 200 MB.
If the size of each process buffer is large enough to efficiently sort and index large portions of the fact data before it is written to segments in the partition file, overall data organization and query responsiveness improve. Furthermore, if the fact table contains many duplicate records, a large process buffer allows Analysis Services to merge duplicate records in memory, saving space and improving query performance.
If Analysis Services exceeds the size of a process buffer while creating aggregations, Analysis Services changes its algorithm to use temporary files that augment the memory allocated to the process buffer. If temporary files are used, Analysis Services moves aggregations between these temporary files and the memory allocated to the process buffer as the aggregations are being calculated. Reading and writing these temporary files is much slower than in-memory calculations and very I/O-intensive. You should tune your system to eliminate the use of these temporary files by increasing the Process buffer size setting when possible. All aggregations for a partition are calculated at once and must fit into memory; otherwise temporary files are used.
When processing multiple partitions in parallel or processing an entire cube in a single transaction, you must ensure that the total memory required for the process buffers, dimensions, shadow dimensions, replicas, and other memory requirements do not exceed the Memory conservation threshold setting. If Analysis Services runs out of virtual address space for these simultaneous operations, you receive an out-of-memory error. If you have insufficient physical memory to back the virtual memory, the Windows operating system uses the virtual memory paging files to supplement the available physical memory. While the use of the paging files has performance implications if excessive paging occurs, a small amount of paging (approximately 100 to 200 MB) is generally acceptable if necessary.
On the other hand, if the process buffer setting is too large and if the number and size of the aggregates is large enough to fill the process buffer during processing, Analysis Services may exceed the memory conservation threshold (which causes the query response cache to be trimmed or dumped). Exceeding the memory conservation threshold during processing causes temporary files to start being used. Remember that if you are processing partitions in parallel, each partition uses a separate process buffer.
Tip: If you have sufficient memory, increase the Process Buffer Size setting to at least 150 to 200 MB to eliminate the use of temporary files during processing. It is not uncommon to set the process buffer size to 300 or 500 MB on servers with large cubes. To determine an appropriate process buffer size, follow the procedure in Appendix C, "How to Tune the Process Buffer Size," later in this paper.
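
To make the parallel-processing memory point concrete, a back-of-the-envelope check with assumed figures (adjust the partition count and buffer size to your own setup):

' Assumed figures for illustration only: 4 partitions processed in
' parallel, each with a 200 MB process buffer.
Dim bufferMB As Long, partitions As Long
bufferMB = 200
partitions = 4
Debug.Print "Process buffers alone: " & bufferMB * partitions & " MB"
' Dimensions, shadow dimensions, and replicas come on top of this;
' keep the total under the Memory conservation threshold.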
HTH, Steve