I have a problem with a SQL Server database that contains one table. The table has more than 5 million rows, and I want to run a SELECT query against it.
When I run SELECT * on this table, the server completes the query in 3 minutes and never comes close to using the full hardware.
The server has the following configuration:
- HP server
- 32 GB RAM
- 2 × 8-core CPUs
- 500 GB HDD, RAID 5
I ran a hardware test with performance-test software and the server's health is good.
My question: why doesn't SQL Server use 100% of the hardware to run the query faster?
SQL Server used only 3-5% CPU and read about 20 MB/s from disk.
I use SQL Server 2008 R2 Datacenter edition on Windows Server 2012 R2; both are fully updated.
I ran the query on the server itself using SSMS, so in this case there is no network or buffering involved.
My database is 1.5 GB and all my drives are SSDs. When I copy a file on the drive, the speed is 500 MB/s.
The table has 20 columns, all of type int or nvarchar.
First, check the execution plan. I would expect a non-parallel plan for this trivial query, so no more than one logical CPU will be used. That comes to about 6% total CPU on your 16-core system if there are no other waits.
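One way to confirm this is to capture the actual plan and check its degree of parallelism. A minimal sketch, where `dbo.YourTable` is a placeholder for your actual table name:

```sql
-- Capture the actual execution plan as XML (or press Ctrl+M in SSMS
-- before running the query to see the graphical plan).
SET STATISTICS XML ON;

SELECT * FROM dbo.YourTable;  -- placeholder table name

SET STATISTICS XML OFF;
```

In the plan XML, look for a `Parallel="true"` attribute on the root statement; for a simple full scan like this you will almost certainly see `Parallel="false"`, confirming the single-core ceiling.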
Client processing, rendering, and network time for 5M rows is significant and may explain why you're only using 3-5% CPU instead of the 6% possible. This will manifest itself as ASYNC_NETWORK_IO waits while SQL Server waits for the client to request the next buffer of data.
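You can check for these waits with the wait-stats DMV. A rough sketch; for a precise measurement, snapshot the values before and after the query and compare the deltas:

```sql
-- Cumulative ASYNC_NETWORK_IO waits since the last server restart
-- (or since the stats were last cleared).
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'ASYNC_NETWORK_IO';
```

If `wait_time_ms` grows by roughly the query's elapsed time while the SELECT runs, the client, not SQL Server, is the bottleneck.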
Although there may be IO waits as well, I doubt that will be the long pole in the tent here. I'd expect most of the IO for the full scan against a cold cache to be done with asynchronous read-ahead reads, so CPU will be the bottleneck once client/network time is eliminated. You can test this by running the query directly on the database server from SSMS with the "Discard results after execution" option enabled (Query Options → Results → Grid).
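If you prefer to eliminate client time in pure T-SQL rather than through the SSMS option, you can assign the columns to variables so no rows are streamed to the client. A sketch with hypothetical column names and types:

```sql
-- Read every row but send nothing to the client:
-- each assignment is overwritten per row, so only server-side
-- scan cost remains. Column names/types are placeholders.
DECLARE @i int,
        @s nvarchar(100);

SELECT @i = IntColumn,      -- placeholder int column
       @s = TextColumn      -- placeholder nvarchar column
FROM dbo.YourTable;         -- placeholder table name
```

Comparing this runtime against the original SELECT * from a remote client gives you a clean split between server-side scan time and client/network time.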