I’m working on a project where we’re receiving data from multiple sources, which needs to be saved into various tables in our database.
I’ve tried various methods, and the fastest I’ve found so far is using a collection of table-valued parameters (TVPs): filling them up and periodically sending them to the database via a corresponding collection of stored procedures, inside a transaction.
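For readers unfamiliar with the pattern, a minimal sketch of the server-side half looks like the following. All names here (`dbo.ReadingTableType`, `dbo.InsertReadings`, the column list) are hypothetical, since the question doesn’t show its schema:

```sql
-- Hypothetical table type and procedure illustrating the TVP pattern.
-- The client fills a matching structure in memory, then passes it as
-- a single READONLY parameter, ideally inside one transaction.
CREATE TYPE dbo.ReadingTableType AS TABLE
(
    SourceId int       NOT NULL,
    ReadAt   datetime2 NOT NULL,
    Value    float     NOT NULL
);
GO

CREATE PROCEDURE dbo.InsertReadings
    @Readings dbo.ReadingTableType READONLY
AS
BEGIN
    SET NOCOUNT ON;
    -- One set-based insert per batch instead of row-by-row calls.
    INSERT INTO dbo.Readings (SourceId, ReadAt, Value)
    SELECT SourceId, ReadAt, Value
    FROM @Readings;
END
```

The client then executes one such procedure per target table, committing the whole collection of calls as a unit.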
The results are quite satisfying. However, looking at disk usage (% Idle Time in Perfmon), I can see that the disk is getting periodically ‘thrashed’: a spike down to 0% every 13–18 seconds, while in between % Idle Time sits around 90%. I’ve tried varying the batch size, but it doesn’t have much influence.
Is this ‘normal’? Given that the overall average throughput is ok, I could just leave it, but I’m a bit worried about exploding buffers.
Should I be able to get better throughput by (somehow) avoiding the spikes while decreasing the overall idle time?
- The database is in Simple recovery mode and pre-sized large; the log file is not growing
- The ldf and mdf files are on different disks (the mdf on a striped array, the ldf on a ‘normal’ single disk — all I had available)
SQL Server batches its writes: dirty pages are flushed to disk at so-called checkpoints. This is pretty much normal behavior, and it is only a problem when it actually becomes one (i.e. the checkpoints get too large and slow down the system).
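If the spikes ever do become disruptive, checkpoint behavior can be tuned rather than lived with. A sketch, assuming SQL Server 2012 or later for the first option (`MyDatabase` is a placeholder for your database name):

```sql
-- Indirect checkpoints (SQL Server 2012+): instead of one big periodic
-- flush, the background writer trickles dirty pages out continuously
-- to keep estimated recovery time under the target. This tends to
-- smooth the I/O spikes described above.
ALTER DATABASE MyDatabase SET TARGET_RECOVERY_TIME = 60 SECONDS;

-- Older instance-wide knob: the recovery interval, in minutes, which
-- controls how often automatic checkpoints run for all databases.
EXEC sp_configure 'recovery interval', 1;
RECONFIGURE;
```

Whether this helps depends on whether the spikes are actually checkpoint flushes; Perfmon’s `SQLServer:Buffer Manager\Checkpoint pages/sec` counter is a simple way to confirm that before changing anything.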