SSIS – An Inside View Part 4
Parallel execution improves performance on computers that have multiple physical or logical processors. To support parallel execution of different tasks in the package, SSIS uses two properties: MaxConcurrentExecutables and EngineThreads.
The MaxConcurrentExecutables property is a property of the package. It defines how many tasks can run simultaneously by specifying the maximum number of executables (threads) that SSIS can run in parallel within the package. The default value is -1, which equates to the number of physical or logical processors plus 2. Consider a package that calls ten child packages through Execute Package tasks. If MaxConcurrentExecutables has its default value of -1 and the server running the package has 8 processors, all 10 tasks (the example uses the Execute Package task, but the same applies to other tasks) are executed in parallel, as shown below:
If MaxConcurrentExecutables is changed to 4 in the above package and it is run on the same server, only 4 tasks will run in parallel at a time. (Note in the image below that the tasks are executed in batches of 4; once one batch of 4 completes, the next batch of 4 is executed.)
If SSIS runs on a dedicated server and many operations run in parallel, it can be beneficial to increase this setting when some of the operations (threads) spend a lot of time waiting for external systems to reply. However, if there is no dedicated SSIS machine and the data integration application runs alongside several other applications, reducing this setting helps avoid resource conflicts.
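The throttling behavior described above can be sketched in Python as a loose analogy (this is not SSIS code; the function names are invented for illustration): a thread pool whose size is MaxConcurrentExecutables, with -1 mapped to "number of processors + 2".

```python
import os
from concurrent.futures import ThreadPoolExecutor

def effective_concurrency(max_concurrent_executables, cpu_count):
    """Map the SSIS default of -1 to 'number of processors + 2'."""
    if max_concurrent_executables == -1:
        return cpu_count + 2
    return max_concurrent_executables

def run_package(tasks, max_concurrent_executables=-1):
    """Run 'tasks' (callables) with at most N in flight, analogous to
    how the SSIS runtime throttles executables within one package."""
    workers = effective_concurrency(max_concurrent_executables,
                                    os.cpu_count() or 1)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(t) for t in tasks]
        return [f.result() for f in futures]

# Ten tasks on an 8-processor server: the default (-1 -> 8 + 2 = 10)
# lets all ten run in parallel; a value of 4 runs them in batches of
# at most 4, mirroring the behavior shown above.
assert effective_concurrency(-1, 8) == 10
assert effective_concurrency(4, 8) == 4
```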
The EngineThreads property is a property of each Data Flow task. This property defines how many threads the data flow engine can create and run in parallel. The EngineThreads property applies equally to both the source threads that the data flow engine creates for sources and the worker threads that the engine creates for transformations and destinations. Therefore, setting EngineThreads to 10 means that the engine can create up to ten source threads and up to ten worker threads.
A source thread pulls data from the data source, whereas a worker thread performs transformations and loads data into the destination.
To understand how this property works, consider a sample package with three Data Flow tasks, each containing ten source-to-destination execution trees. If EngineThreads is set to 10 on each Data Flow task, all 30 execution trees can potentially run simultaneously (assuming all three Data Flow tasks run in parallel, which requires the package's MaxConcurrentExecutables property to be -1 or at least 3).
As discussed above, an execution tree is the unit of work for a worker thread in the SSIS process. A worker thread may execute one or more execution trees, but every transformation listed in an execution tree executes on the same thread. A thread processes one buffer at a time, running it through all the transforms in its execution tree before working on the next buffer in the flow. At that point it passes the finished buffer to the thread executing the next execution tree and pulls a new data buffer from its queue, which was filled by an upstream component (either a data source or the last asynchronous transform before this execution tree started).
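The buffer hand-off between execution trees can be pictured with a toy pipeline in Python (a simplified model, not SSIS internals): one worker thread per execution tree, each applying all of its tree's transforms to a buffer before queuing that buffer for the next tree.

```python
import queue
import threading

SENTINEL = None  # marks the end of the buffer stream

def execution_tree_worker(transforms, inbox, outbox):
    """One worker thread owns one execution tree: it applies every
    transform in the tree to a buffer, then hands the buffer on."""
    while True:
        buffer = inbox.get()
        if buffer is SENTINEL:
            outbox.put(SENTINEL)
            return
        for transform in transforms:  # all transforms in a tree share the thread
            buffer = [transform(row) for row in buffer]
        outbox.put(buffer)

def run_pipeline(buffers, trees):
    """Chain one worker per execution tree, with a queue between trees."""
    queues = [queue.Queue() for _ in range(len(trees) + 1)]
    threads = [threading.Thread(target=execution_tree_worker,
                                args=(t, queues[i], queues[i + 1]))
               for i, t in enumerate(trees)]
    for th in threads:
        th.start()
    for b in buffers:        # the "source thread" feeds the first queue
        queues[0].put(b)
    queues[0].put(SENTINEL)
    out = []
    while (b := queues[-1].get()) is not SENTINEL:
        out.append(b)
    for th in threads:
        th.join()
    return out

# Two execution trees (add 1, then double) over two buffers:
# run_pipeline([[1, 2], [3]], [[lambda r: r + 1], [lambda r: r * 2]])
# -> [[4, 6], [8]]
```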
Because an execution tree is tied to a thread, it is often recommended to set EngineThreads equal to or greater than the number of execution trees listed in the PipelineExecutionTrees log event.
The default is 5 in SQL Server 2005 and 10 in SQL Server 2008, with a minimum value of 2. Regardless of the value of this property, the engine will not use more threads than it needs; it may, however, use more threads than specified if necessary to avoid concurrency issues.
However, the general rule is to not run more threads in parallel than the number of available processors. Running more threads than the number of available processors can hinder performance because of the frequent context-switching between threads.
Parallelism is a double-edged sword. On one hand, a properly designed package noticeably improves performance by running several tasks of the package, or several execution trees of a data flow, in parallel. On the other hand, a poorly designed package can severely hurt performance: if the number of threads exceeds the number of available processors, throughput may suffer from excessive context switching. It is therefore recommended to test with the available system resources and environment in mind, including the load on the server and whether the server is dedicated to ETL or also acts as a database server.
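The rule of thumb above can be expressed as a tiny helper (a sketch; the function name is invented for illustration): cap the number of parallel workers at the number of available processors, optionally reserving some for other workloads on a shared server.

```python
import os

def recommended_threads(runnable_units, reserved_for_other_work=0):
    """Rule of thumb: don't run more parallel threads than available
    processors. 'runnable_units' is e.g. the number of execution trees
    or parallel tasks; 'reserved_for_other_work' models cores left for
    a co-hosted database server or other applications."""
    cpus = (os.cpu_count() or 1) - reserved_for_other_work
    return max(1, min(runnable_units, cpus))
```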
Normally, only the functional requirements known at design time are considered when packages are built. Performance requirements are most often addressed at the end of the development lifecycle (or sometimes after deployment to production, when performance is not what was expected), leaving limited options to tune the package because key design decisions have already been made. This section contains some performance guidelines to consider during package design, in addition to what has been discussed above.
- Avoid the “Table or View” access mode on source adapters, as it calls OPENROWSET to fetch metadata from the source and requires an additional round trip to the server. Always use a SELECT statement with only the required columns. Removing superfluous columns makes the data flow more efficient because fewer columns produce a smaller row. Smaller rows mean more rows fit into a buffer, and less work is required to process all the rows in the dataset.
- The most important improvement to make is to reduce the size of each row of data by removing unneeded columns and by configuring data types appropriately.
- When sufficient memory is available, use a small number of large buffers, rather than a large number of small buffers.
- Avoid many small buffers. Tweak the values of DefaultBufferMaxRows and DefaultBufferSize to get as many records into a buffer as possible. To optimize the pipeline, the goal is to pass as many records as possible through the fewest number of buffers, especially when dealing with large data volumes.
- DefaultBufferMaxRows and DefaultBufferSize are configured separately for each Data Flow task. When integrating data from multiple data sources using a single Data Flow task, these two settings affect only the data source components and transformations in that task.
- Do not increase buffer size to the point where paging to disk starts to occur.
- Instead of using a Sort transformation in the data flow, use ORDER BY at the source while pulling the data. Because this avoids the extra buffers that an asynchronous transformation such as Sort requires, it improves performance considerably. (When sorting at the source, set the source output's IsSorted property to True and the appropriate SortKeyPosition values so downstream components recognize the sort order.) If data must be sorted within the data flow, improve performance by designing the flow to use as few sort operations as possible rather than sorting several times.
- Design the package so that it does a full pull of data only initially or on demand; from then on it should do incremental pulls. This greatly reduces the volume of data load operations, especially when volumes are likely to grow over the lifecycle of the application. For this purpose, use the Change Data Capture (CDC) feature of SQL Server 2008; on earlier versions of SQL Server, implement the incremental pull logic yourself.
- If the destination server is SQL Server and the SSIS package runs on that same server, use the SQL Server Destination component instead of the OLE DB Destination component; it improves performance by 8-15 percent by optimizing in-memory data loading. Note that the SQL Server Destination is useful only when the SSIS data operation executes on the destination SQL Server machine; otherwise the standard OLE DB Destination component must be used, which cannot take advantage of in-memory data loading.
- It is recommended to set the Data Flow task properties BufferTempStoragePath and BLOBTempStoragePath rather than relying on the default values. The locations provided should be on fast drives, and the two paths should point to drives on separate spindles in order to maximize I/O throughput.
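The buffer-sizing guidance above can be made concrete with a little arithmetic. The following is a simplified sketch, not the engine's exact algorithm; the constants shown are the documented SSIS 2005/2008 defaults for DefaultBufferMaxRows (10,000 rows) and DefaultBufferSize (10 MB):

```python
DEFAULT_BUFFER_MAX_ROWS = 10_000          # DefaultBufferMaxRows default
DEFAULT_BUFFER_SIZE = 10 * 1024 * 1024    # DefaultBufferSize default (10 MB)

def rows_per_buffer(estimated_row_bytes,
                    max_rows=DEFAULT_BUFFER_MAX_ROWS,
                    buffer_bytes=DEFAULT_BUFFER_SIZE):
    """Approximate how many rows fit in one pipeline buffer: the row
    cap applies unless that many rows would exceed the buffer size."""
    return min(max_rows, buffer_bytes // estimated_row_bytes)

# A 200-byte row hits the 10,000-row cap, while a 5,000-byte row fits
# only ~2,097 rows per buffer: trimming unneeded columns shrinks the
# row and yields fewer, fuller buffers.
assert rows_per_buffer(200) == 10_000
assert rows_per_buffer(5_000) == 2_097
```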
SSIS on 64 bit Computers
Often SSIS packages are developed and tested in 32-bit environments before deployment to a 64-bit production environment. While this can be done, note that even though SSIS runs on 64-bit machines, some Integration Services features are available only in 32-bit versions, have limitations on 64-bit computers, or are not supported on Itanium-based operating systems.
Installing 64-bit SSIS runtime and tools
- By default SSIS 32-bit runtime and tools are not included – The SSIS 32-bit runtime and tools are not automatically included when installing 64-bit Integration Services. If packages need to run in 32-bit mode, then 32-bit runtime needs to be installed separately. To install the 32-bit versions of the tools, select either Client Tools or Business Intelligence Development Studio during setup.
- Installation location – 64-bit runtime features are installed under the Program Files directory, and 32-bit runtime features are installed separately under the Program Files (x86) directory.
- 32-bit runtime tools run by default – When both the 32-bit and 64-bit runtimes are installed on the same server, typing “dtexec” at the command prompt executes the 32-bit version by default. This occurs because the directory path of the 32-bit version appears in the PATH environment variable before the directory path of the 64-bit version. The simplest workaround is to specify the full path of the desired utility at the command prompt.
Running SSIS packages in 32-bit mode
For SSIS packages that do not benefit from a 64-bit processor, or whenever a package must run in 32-bit mode, the SSIS project property Run64BitRuntime can be set to False (by default it is True, and it is ignored if the 64-bit version of the Integration Services runtime is not installed). This instructs the designer to load the 32-bit runtime environment rather than the 64-bit one, and the packages run without any changes. The property can be found under SSIS Project Property Pages -> Configuration Properties -> Debugging.
So why run SSIS packages in 32-bit mode on a 64-bit machine instead of running them in 64-bit mode? There are several reasons, discussed below, that can make 32-bit mode necessary:
- There is no 64-bit design-time or run-time support for Data Transformation Services (DTS) packages; therefore the Execute DTS 2000 Package task can’t be used in these circumstances.
- When running in 64-bit mode, a package may be able to connect to fewer data sources than when running in 32-bit mode, because some .NET Framework data providers and native OLE DB providers are not available in 64-bit versions. For example, the Microsoft OLE DB Provider for Jet, which connects to Access databases and Excel spreadsheets, is not available in a 64-bit version.
- Packages that contain Script tasks or Script components cannot run in 64-bit mode unless the scripts have been precompiled. By default, the PreCompile property is True for both the Script task and the Script component.
- The SQL Server Profiler log provider cannot be used for package logging in 64-bit mode.
- A Script task or Script component that references other assemblies or COM components for which a 64-bit version is not available or not installed must run in 32-bit mode.
- The Execute Package Utility (dtexecui.exe) runs in 32-bit mode only, so when developing and testing packages with this utility, make sure to also test 64-bit execution by using the 64-bit version of the command-line utility (dtexec.exe).
There are some enhancements in 64-bit support from SQL Server Integration Services 2005 to 2008; for example, there is no longer a need to worry about whether Script tasks or components are precompiled.
SSIS Package Execution by SQL Server Agent Job
When configuring a SQL Server Agent job with a job step type of SQL Server Integration Services Package, the job invokes the dtexec utility. However, the version of dtexec that the job invokes depends on which versions of SQL Server and SQL Server Agent are installed: a 64-bit SQL Server Agent calls the 64-bit version of dtexec, while a 32-bit SQL Server Agent calls the 32-bit version.
This default behavior can be changed to run a package from a 64-bit SQL Server Agent job in 32-bit mode as follows:
- In SQL Server 2005 – Select a job step type of Operating system, and enter a command line or use a batch file that invokes the 32-bit version of dtexec.exe.
- In SQL Server 2008 – Select “Use 32 bit runtime” on the “Execution options” tab of the New Job Step dialog box.
For the purpose of troubleshooting SSIS issues (especially performance problems and exceptions), it is important to understand some high-level concepts of the SSIS architecture. This knowledge helps in developing better-performing packages and in tuning existing ones.
This series of articles discussed SSIS architecture, SSIS buffer management, types of transformation and how it relates to execution tree, parallel processing and considerations for SSIS on 64-bit computers.
The next series of articles will cover SSIS new features in SQL Server 2008 viz.
- Data Flow task / pipeline performance improvements
- Lookup Enhancement
- Data Profiling Task
- Pipeline Memory Limiter
- VSTA support
- Import & Export Wizard Enhancements
- A brief discussion of leveraging SQL Server 2008 features in SSIS, viz. CDC and the MERGE statement
SQL Server Books Online (BOL) / MSDN