Fellow DB gurus, I have an interesting situation. I have a very large database to load: several text files containing a total of 250 million records, roughly 300 GB. Loading the data into the database is not the issue. The issue is how best to design the subsystem so that SELECT performance is optimal.

My current idea is to partition the data so that each table contains ~4-5 million rows (some more, some less). With a database this large, I was thinking of creating a filegroup for each table and placing each filegroup on its own drive (though I only have 5 drives to work with).

My questions:
a) Is the above the best solution?
b) What is the best RAID level to use? I have read that RAID 10 gives the best performance in a situation like this. Should I also consider a SAN?

Any other thoughts on this situation and how best to solve it are welcome. Thanks in advance for the advice.
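To make the filegroup-per-drive idea concrete, here is a rough T-SQL sketch of what I have in mind, using SQL Server's built-in partitioning rather than many hand-rolled tables. All names, drive letters, sizes, and boundary values are placeholders for illustration only:

```sql
-- One filegroup per physical drive (drive letters are placeholders).
ALTER DATABASE BigDB ADD FILEGROUP FG1;
ALTER DATABASE BigDB ADD FILE
    (NAME = 'BigDB_FG1', FILENAME = 'E:\Data\BigDB_FG1.ndf', SIZE = 60GB)
    TO FILEGROUP FG1;
-- ...repeat for FG2 through FG5 on the remaining drives...

-- A partition function/scheme could spread the ~250M rows across the
-- five filegroups (the boundary values here are made up):
CREATE PARTITION FUNCTION pfRecordId (BIGINT)
    AS RANGE RIGHT FOR VALUES (50000000, 100000000, 150000000, 200000000);

CREATE PARTITION SCHEME psRecordId
    AS PARTITION pfRecordId TO (FG1, FG2, FG3, FG4, FG5);

-- The table is created on the partition scheme; the clustered key must
-- include the partitioning column.
CREATE TABLE dbo.Records
(
    RecordId BIGINT       NOT NULL,
    Payload  VARCHAR(100) NULL,  -- placeholder for the real columns
    CONSTRAINT PK_Records PRIMARY KEY CLUSTERED (RecordId)
) ON psRecordId (RecordId);
```

The upside, as I understand it, is that a SELECT filtered on RecordId can be eliminated down to a single partition (and thus a single drive's filegroup) by the optimizer, instead of my application having to pick the right table by hand. Does that sound like a reasonable direction?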