Grabbing 1000 rows before and after returned data | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


Thanks for reading my post! I have a VERY large database (1.7 billion rows) with a clustered index key. I need to be able to select a row and retrieve the 1000 rows before and 1000 rows after my returned result. Unfortunately the keys are fuzzy, so I can't do any calculations to retrieve the range. Is there something a cursor can do without bringing down the server?
To know what "after" and "before" mean, you have to define an ordering criterion. Also, what is the "result" – a row ID, or a date, or …? Can you post the query that returns the "result" so we have a starting point to work with?
I don’t think you want to use a cursor for this. Using a cursor on 1.7 billion rows will have a severe performance impact on your database. Please post query and table structure, as Mirko suggested, and we can help you from there.
We define the ordering criterion from the clustered index key. A key will look something like N$.SSS/ or K$$$*9. My result would be a key, and from that row I need to grab 1000 rows before and after. I hope that explains it better?
The query is not created yet, but it would be simple: select key from table where key = keyvalue.
Assuming key is unique:

select top 1000 <columns>
from table
where key < keyValue
order by key desc

select top 1000 <columns>
from table
where key > keyValue
order by key
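The same two-query pattern can be sketched end to end with Python's sqlite3 module (used here only because it is easy to run; SQLite uses LIMIT where T-SQL uses TOP). The table name `t`, the key format, and the row counts are made-up stand-ins for the poster's real schema.

```python
import sqlite3

# Hypothetical schema standing in for the poster's 1.7-billion-row table:
# a unique text key, as in the thread, plus a payload column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (key TEXT PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("K%05d" % i, "row %d" % i) for i in range(10000)])

key_value = "K05000"
n = 1000

# 1000 rows before the key: read backwards, then reverse to restore key order.
before = conn.execute(
    "SELECT key FROM t WHERE key < ? ORDER BY key DESC LIMIT ?",
    (key_value, n)).fetchall()
before.reverse()

# 1000 rows after the key.
after = conn.execute(
    "SELECT key FROM t WHERE key > ? ORDER BY key LIMIT ?",
    (key_value, n)).fetchall()

print(len(before), before[0][0], before[-1][0])  # 1000 K04000 K04999
print(len(after), after[0][0], after[-1][0])     # 1000 K05001 K06000
```

Note the ORDER BY key DESC in the "before" half: without it, "top 1000" would not mean the 1000 rows immediately preceding the key.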
I hope that’s what you were looking for.

I guess if you want both at the same time, you could add a "UNION" statement in between what Mirko has posted already.
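One way that combination could look, again sketched with sqlite3 rather than SQL Server (table name and keys are made up): each half goes in a derived table so its own ORDER BY/LIMIT applies before the union, and the outer ORDER BY re-sorts the combined result. UNION ALL is used here instead of plain UNION only to skip an unneeded duplicate-elimination step, since the two ranges cannot overlap on a unique key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (key TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("K%04d" % i,) for i in range(100)])

# Each half is wrapped as a subselect so its ORDER BY/LIMIT is applied
# per-half; the outer ORDER BY sorts the combined 20 rows.
sql = """
SELECT key FROM (SELECT key FROM t WHERE key < ? ORDER BY key DESC LIMIT 10)
UNION ALL
SELECT key FROM (SELECT key FROM t WHERE key > ? ORDER BY key LIMIT 10)
ORDER BY key
"""
rows = [r[0] for r in conn.execute(sql, ("K0050", "K0050"))]
print(rows[0], rows[-1], len(rows))  # K0040 K0060 20
```

The selected key itself ("K0050") is excluded by both halves; add a third branch with key = ? if the anchor row should appear in the output.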
You may also need to consider having a non-clustered index on that key column, since using the TOP keyword wouldn't give SQL enough of a clue to avoid resorting to a clustered index scan, which would be dreadful for performance. A non-clustered index (ideally a covering one, if there weren't too many rows) may give you better performance even if you have to incur 2000 bookmark lookups. Cheers
Twan, not sure that's true. There is the condition key > keyValue, so I expect the query optimizer to be smart enough to jump to the first key entry after keyValue, read sequentially, and stop after the 1000th row is read. However, I don't have time right now to test it, so feedback would be highly appreciated.
Hi ya, I don't have a big enough table to work with on my laptop… but as far as I understand, the keyValue used in the query would be a parameter. SQL will have stats on the index to work out the number of rows < keyValue and the number of rows > keyValue. I think the next step it will take will be to see if this number of rows is greater than its threshold (something like 10%) and it will opt to do either multiple index seeks (with bookmark lookups for a non-clustered index) or an index scan…? If the query was where col between x and y, then I can see SQL might jump to x and then scan until it gets to y…? I agree it would be interesting to try it out on a large table to see the execution plans and the query stats. Cheers
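For anyone without a large SQL Server table handy, the seek-vs-scan question can at least be probed on another engine: SQLite's EXPLAIN QUERY PLAN is not a SQL Server execution plan, but it does show whether a range predicate like key > ? uses the index (reported as a SEARCH) or falls back to a full table SCAN. The schema below is a made-up stand-in for the thread's table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (key TEXT PRIMARY KEY, payload TEXT)")

# Ask the engine how it would execute the "1000 rows after" query.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT key FROM t WHERE key > ? ORDER BY key LIMIT 1000",
    ("K5000",)).fetchall()
for row in plan:
    print(row)
# Expect a SEARCH on the primary-key index rather than a SCAN, i.e. the
# engine jumps to the first key past the parameter and reads forward,
# which is the behaviour Mirko predicted for the SQL Server optimizer.
```

Whether SQL Server itself seeks or scans for a given parameter still depends on its statistics and thresholds, as discussed above; checking the actual execution plan there is the only definitive test.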