I have a table with 28 columns and around 10 indexes (one clustered) defined on it. The record length is around 2,900 bytes, of which 8 nvarchar fields account for about 2,700 bytes. My application continuously inserts records into it, up to 20-30 million in total, in batches of 200-300 records.

The insert speed drops drastically over time (roughly a c/x curve). I set the fill factor for all indexes to 50%, but the speed still degrades in the same fashion. Inserts initially run at around 1,700-2,000 records per second, fall to around 200 records/sec after 2.5 million rows, and drop to around 40 records/sec after 4 million.

Is there any way I can improve the inserts, avoiding the deep drops and keeping the insert speed above 300-400 records/sec out to the 20-30 million range? We need all of the indexes; without them, the applications that query the database will not work.
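For reference, the setup looks roughly like this. All table, column, and index names below are placeholders, and only a couple of the 28 columns and 10 indexes are shown:

```sql
-- Hypothetical sketch of the schema described above (names are placeholders).
CREATE TABLE dbo.MyWideTable (
    Id        BIGINT IDENTITY(1,1) NOT NULL,
    Field1    NVARCHAR(400)        NOT NULL,
    Field2    NVARCHAR(400)        NOT NULL,
    -- ... 6 more nvarchar columns (~2,700 bytes of nvarchar in total) ...
    CreatedAt DATETIME             NOT NULL
    -- ... remaining columns, ~2,900 bytes per row overall ...
);

-- One clustered index plus ~9 nonclustered indexes, all rebuilt
-- with a 50% fill factor.
CREATE CLUSTERED INDEX CIX_MyWideTable
    ON dbo.MyWideTable (Id)
    WITH (FILLFACTOR = 50);

CREATE NONCLUSTERED INDEX IX_MyWideTable_Field1
    ON dbo.MyWideTable (Field1)
    WITH (FILLFACTOR = 50);
-- ... additional nonclustered indexes, same fill factor ...
```

The application then inserts in batches of 200-300 rows at a time, continuously, until the table reaches 20-30 million rows.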