Other than accidentally filling your entire C: drive with data, are there any other considerations for using the OS drive for data files? Most answers are simply “don’t do it”, but I’d like to know any technical reasons.
I’m on a dev box and will gladly trade risk for extra IO bandwidth. I’m refactoring a very large dataset, which generates 10–20 GB (simple recovery) log files and a ton of tempdb activity. I’m moving the source, read-only tables to SATA SSDs. I’d like to give the refactored data, tempdb, and the log files their own NVMe drives, but that means one of them would have to share C: with the OS.
For a production solution it’s better to put OS, data, log, and tempdb on different volumes, even if those volumes share a single disk, storage pool, or SAN array.
This limits the blast radius of running out of space, and gives you separate visibility into the different IO types through the Windows Logical Disk performance counters.
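Even with everything on one volume, you can still see per-file IO from inside SQL Server rather than per-volume from the OS counters. A minimal sketch using `sys.dm_io_virtual_file_stats` joined to `sys.master_files` (cumulative since instance start):

```sql
-- Per-file IO and stall totals since the instance last started.
SELECT
    DB_NAME(vfs.database_id)  AS database_name,
    mf.physical_name,                  -- drive letter is visible here
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.io_stall_read_ms,
    vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id     = vfs.file_id
ORDER BY vfs.io_stall_write_ms DESC;
```

This is handy on a dev box for checking whether tempdb or the log files are actually the ones generating the write stalls before you commit an NVMe to them.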
That said, putting everything on C: works fine and is absolutely supported.
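If you do decide to give tempdb its own NVMe later, relocating it is a couple of `ALTER DATABASE` statements plus a service restart. A sketch, assuming the default logical file names (`tempdev`, `templog`) and a hypothetical `D:\tempdb\` target:

```sql
-- D:\tempdb\ is a placeholder; point it at your dedicated volume.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\tempdb\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'D:\tempdb\templog.ldf');
-- tempdb is recreated at the new location on the next service restart.
```

Check your actual logical names first with `SELECT name, physical_name FROM tempdb.sys.database_files;` — instances with multiple tempdb data files will have more than one to move.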