Best practice for log and mdf files is to split them across drives (e.g.
G:\SQL Data &
What are the best practices for memory-optimized files? Should they reside on their own drive? Can they reside on an Azure temp drive (better IO performance, but the files will be lost on redeploy; tempdb can be hosted there)?
I am still unclear on the purpose of the memory-optimized filegroup, what is actually stored there, and the rate of IO.
I think we are talking about In-Memory OLTP?
The files are necessary for durability and disaster recovery.
Once added, the memory-optimized filegroup can't be removed without dropping the database!
You need roughly 4x the size of your in-memory data in disk space for the checkpoint files; if the disk fills up, your database will go suspect.
A fast disk subsystem is recommended.
I would advise careful testing before you deploy to production.
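To make the points above concrete, here is a sketch of the DDL involved. The database name, filegroup name, and paths are hypothetical, and the monitoring query is one way to watch the footprint, assuming the `sys.dm_db_xtp_checkpoint_files` DMV; test it on your own version before relying on it.

```sql
-- Add a memory-optimized filegroup (hypothetical names/paths).
-- WARNING: once added, this filegroup cannot be removed
-- without dropping the database.
ALTER DATABASE MyDb
    ADD FILEGROUP MyDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;

-- The "file" is actually a container (a folder) holding the
-- checkpoint file pairs; put it on a fast, durable disk --
-- not a disposable temp drive.
ALTER DATABASE MyDb
    ADD FILE (NAME = 'MyDb_mod_c1', FILENAME = 'G:\SQLData\MyDb_mod_c1')
    TO FILEGROUP MyDb_mod;

-- Monitor the on-disk footprint of the checkpoint files so the
-- disk doesn't fill up and take the database suspect.
SELECT file_type_desc,
       state_desc,
       COUNT(*)                          AS file_count,
       SUM(file_size_in_bytes) / 1048576 AS size_mb
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY file_type_desc, state_desc;
```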
Just to emphasize Stephen's point:
You need to be more specific about what you mean by "memory optimization files".
It is likely that you mean memory-optimized tables (aka In-Memory OLTP). In that case, these files are your actual database files. They correspond to the mdf, ndf and ldf files for traditional data. You don't delete the mdf, ndf and ldf files for a database, I hope. Deleting the corresponding files for your memory-optimized tables is just as bad: it will make your database unusable, and you will have to restore from a good backup. I.e., storing them on some type of temp disk isn't an option.
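To illustrate the correspondence: a durable memory-optimized table persists its data to that filegroup's checkpoint files, just as traditional tables persist to mdf/ndf and ldf. The table and column names below are made up for illustration.

```sql
-- DURABILITY = SCHEMA_AND_DATA: data is written to the
-- memory-optimized filegroup's checkpoint files and survives
-- restart -- losing those files loses the data, just like
-- losing an mdf.
CREATE TABLE dbo.OrderCache
(
    OrderId   INT         NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Status    VARCHAR(20) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

The alternative, `DURABILITY = SCHEMA_ONLY`, discards table contents on restart anyway, but even then the memory-optimized filegroup itself is still required, so a disposable temp disk still puts the whole database at risk.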