Dear All,

As part of the Transform stage in our DWH I have to create a clinical notes table. We are sent two data extracts which store the actual note content in ntext columns; the table containing the data currently has about 5,000,000 rows.

In our existing ETL, the data extracts are stored on one server and we query across linked servers to compile a clinical notes table from the two extracts, inserting the note text for any notes dated within the last 180 days. I then convert the created ntext column to varchar(max) so that it can be exposed to a data view used by a front-end portal. This takes about 25 minutes.

We are now revamping the ETL and have a new server which hosts both the data extracts and the transform database. However, rather than a performance improvement from having the extracts and the table creation on the same server, I am getting a performance degradation: the same query takes about 50 minutes to run. I reviewed the execution plans and the key difference seems to be that with the linked servers, server A has to organise everything before it pulls the data from server B, whereas on the single server C the execution plan is much more sprawling.

I have tried rewriting the query to first create an index table without the text data (which takes a couple of minutes), and then creating a new table from that index joined back to the extracts to obtain the ntext columns. This has improved the speed on the single server to around 35 minutes, but it is slower on the linked servers (around 50 minutes). I originally thought about doing an update instead, but of course you can't use ntext in an UPDATE statement.

Has anyone got any suggestions as to how I can optimise this query?

Thanks,
Iain

(Code samples are attached; simplified sketches of the two approaches follow below.)
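First, roughly the shape of the existing linked-server load. This is a minimal sketch: the server, database, table and column names (ServerB, ExtractOne, ClinicalNote, NoteID, NoteText, etc.) are placeholders rather than our actual schema, and in the real job the conversion of the ntext column to varchar(max) happens as a separate step after the insert rather than inline:

```sql
-- Sketch of the current linked-server load (names are placeholders).
INSERT INTO Transform.dbo.ClinicalNotes (NoteID, PatientID, NoteDate, NoteText)
SELECT  n.NoteID,
        n.PatientID,
        n.NoteDate,
        -- ntext cannot be cast straight to varchar(max), so go via nvarchar(max)
        CAST(CAST(n.NoteText AS nvarchar(max)) AS varchar(max))
FROM    ServerB.ExtractOne.dbo.ClinicalNote AS n   -- four-part linked-server name
WHERE   n.NoteDate >= DATEADD(DAY, -180, GETDATE())
UNION ALL
SELECT  n.NoteID,
        n.PatientID,
        n.NoteDate,
        CAST(CAST(n.NoteText AS nvarchar(max)) AS varchar(max))
FROM    ServerB.ExtractTwo.dbo.ClinicalNote AS n
WHERE   n.NoteDate >= DATEADD(DAY, -180, GETDATE());
```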
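Second, the reworked two-step approach: build a narrow "index" table of keys and dates first (no LOB columns), then join back to the extracts to pull the note text. Again this is only a sketch with placeholder names, assuming a NoteID key exists in both extracts:

```sql
-- Step 1: narrow key/date table, no text columns (runs in a couple of minutes).
SELECT  n.NoteID,
        n.PatientID,
        n.NoteDate,
        CAST(1 AS tinyint) AS SourceExtract
INTO    Transform.dbo.ClinicalNotesIndex
FROM    ExtractOne.dbo.ClinicalNote AS n
WHERE   n.NoteDate >= DATEADD(DAY, -180, GETDATE())
UNION ALL
SELECT  n.NoteID, n.PatientID, n.NoteDate, CAST(2 AS tinyint)
FROM    ExtractTwo.dbo.ClinicalNote AS n
WHERE   n.NoteDate >= DATEADD(DAY, -180, GETDATE());

-- Step 2: join back on the key to fetch the ntext columns and convert them.
SELECT  i.NoteID,
        i.PatientID,
        i.NoteDate,
        CAST(CAST(n.NoteText AS nvarchar(max)) AS varchar(max)) AS NoteText
INTO    Transform.dbo.ClinicalNotes
FROM    Transform.dbo.ClinicalNotesIndex AS i
JOIN    ExtractOne.dbo.ClinicalNote AS n ON n.NoteID = i.NoteID
WHERE   i.SourceExtract = 1
UNION ALL
SELECT  i.NoteID, i.PatientID, i.NoteDate,
        CAST(CAST(n.NoteText AS nvarchar(max)) AS varchar(max))
FROM    Transform.dbo.ClinicalNotesIndex AS i
JOIN    ExtractTwo.dbo.ClinicalNote AS n ON n.NoteID = i.NoteID
WHERE   i.SourceExtract = 2;
```

This is the version that brought the single-server run down to around 35 minutes but made the linked-server run slower.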