We are using FILESTREAM in Microsoft SQL Server 2008 (SP2) – 10.0.4000.0 (X64) on Windows Server 2008 to store millions of files. Because there are so many, the files are grouped into 100 filegroups and inserted sequentially, filling one filegroup after another. We currently have around 15K files in each filegroup and expect that to grow to 20K soon. Is there a maximum number of rows per filegroup for optimum performance, or any limit on the operating-system side, given that we are storing (or soon will be storing) close to 20K files in a single folder?
Any pointers to the right resources would also be very helpful.
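Since the question describes spreading millions of files across 100 filegroups by filling them sequentially, one alternative worth sketching is assigning each file to a filegroup by hashing its identifier, which keeps all filegroups growing evenly instead of one at a time. This is only an illustrative sketch; the function name, the identifiers, and the mapping scheme are assumptions, not part of any SQL Server API:

```python
import hashlib

FILEGROUP_COUNT = 100  # matches the 100 filegroups described in the question

def filegroup_for(file_id: str, count: int = FILEGROUP_COUNT) -> int:
    """Map a file identifier to a filegroup index in 0..count-1.

    A stable hash spreads files evenly across filegroups, avoiding the
    hot spot created by filling one filegroup completely before moving
    on to the next.
    """
    digest = hashlib.md5(file_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % count

# Distribute 10,000 hypothetical file ids and inspect the spread.
counts = [0] * FILEGROUP_COUNT
for i in range(10_000):
    counts[filegroup_for(f"file-{i}")] += 1
```

With 10,000 ids over 100 filegroups the counts stay close to 100 each, so every filegroup's directory grows at roughly the same rate rather than one directory absorbing all new files.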
I have a link to an MSDN blog post which states:
4. Check that FILESTREAM directory containers do not have more than 300,000 files each, as NTFS performance degradation may become an issue, especially when generation of 8.3 filenames is enabled.
Actually you can put as many files as you want in a filegroup. The filegroup feature is included in SQL Server so that you can organize your organization's data properly and query it efficiently. Performance also depends on the disk space available on the server and on which RAID level you use for data storage.
We also manage a large number of files in FILESTREAM. At 1,600,000 files in one filegroup (directory), read and write operations on the FILESTREAM data became very slow. Up to about 1,000,000 files I didn't experience problems with SQL Server or C#, but the file system was already very slow by then.
One more suggestion: if you have a lot of files in one filegroup, you may run into problems during backup. Our system hung up; http://support.microsoft.com/kb/2550552 helped us.