This is a difficult question for me to articulate, but I'll do my best. From a SQL Server perspective, does logically placing tables and indexes in multiple filegroups potentially yield improved disk I/O throughput that can ultimately benefit query performance?

I'm aware of strategies involving data/log file placement on different drives and disk subsystems to improve throughput and accommodate special data access patterns. In my case, I'm using a SAN, so I don't have much control in that realm. However, I wonder whether more files can yield more parallelism and increase the rate at which the buffer pool is filled from disk.

Is it better to have one 1.6 TB data file, or sixteen 100 GB secondary data files in different filegroups? Or does it even matter when executing queries against a SAN backend? I'd think there may be some hardware cache benefits to accessing smaller files, but I'm more interested in how SQL Server treats multiple files as opposed to one.
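For concreteness, here is a sketch of the kind of multi-filegroup layout I'm asking about — all database, file, and path names here are hypothetical placeholders, not my actual setup:

```sql
-- Hypothetical layout: secondary data files, each in its own filegroup
-- (repeated for FG01 through FG16 in the sixteen-file scenario)
ALTER DATABASE MyDb ADD FILEGROUP FG01;
ALTER DATABASE MyDb ADD FILE
    (NAME = MyDb_FG01, FILENAME = 'E:\Data\MyDb_FG01.ndf', SIZE = 100GB)
    TO FILEGROUP FG01;

-- A table (or index) can then be placed on a specific filegroup:
CREATE TABLE dbo.BigTable
(
    Id      INT PRIMARY KEY,
    Payload VARBINARY(MAX)
) ON FG01;
```

The alternative I'm comparing against is simply leaving everything in the single PRIMARY filegroup with one large .mdf file.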