How do you see which database on a server uses the most resources?

Question :

I have a database server with a couple of databases on it. How can I see where any resource pressure may come from?

I would like to get a table of:

  • Database Name
  • Batch Requests per second
  • CPU Time
  • Logical Reads
  • Logical Writes

Answer :

Only some resources can be tracked back to a specific database. I/O and buffer pool usage clearly can be.

Query execution statistics (like logical reads) can be attributed to queries, but cannot be associated (correctly) with a database, because queries can (and often do) span databases. sys.dm_exec_query_stats is the main entry point for these stats.
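As a quick illustration (this query is mine, not from the original answer), you can rank individual cached queries straight from that DMV; the totals are cumulative since each plan entered the cache:

    -- Top 10 cached queries by logical reads
    SELECT TOP (10)
        st.text AS query_text,
        qs.execution_count,
        qs.total_worker_time,
        qs.total_logical_reads,
        qs.total_logical_writes
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;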

I’ve cobbled together this query from a few other queries. It’s not exactly what you’re looking for but can give you an idea of what sort of resource consumption you may be looking at.

    DECLARE @total_buffer INT;

    SELECT @total_buffer = cntr_value
    FROM sys.dm_os_performance_counters
    WHERE RTRIM([object_name]) LIKE '%Buffer Manager'
        AND (counter_name = 'Total Pages'       -- this is 2008
          OR counter_name = 'Database pages');  -- this is 2012

    ;WITH src AS (
        SELECT database_id, DB_Buffer_Pages = COUNT_BIG(*)
        FROM sys.dm_os_buffer_descriptors
        --WHERE database_id BETWEEN 5 AND 32766 -- uncomment to exclude system databases
        GROUP BY database_id
    )
    SELECT database_name = DB_NAME(mem.database_id),
        mem.db_buffer_pages,
        mem.cached_size_mb,
        mem.db_buffer_pct,
        cpu.cpu_time,
        cpu.cpu_percent_usage,
        [io].total_io_mb,
        [io].io_percent
    FROM (
        SELECT database_id,
            --DatabaseName = CASE [database_id] WHEN 32767
            --              THEN 'Resource DB'
            --              ELSE DB_NAME([database_id]) END,
            db_buffer_pages = DB_Buffer_Pages,
            cached_size_mb = DB_Buffer_Pages / 128.0, -- 8 KB pages, so 128 pages per MB
            db_buffer_pct = CONVERT(DECIMAL(6,3),
                                DB_Buffer_Pages * 100.0 / @total_buffer)
        FROM src
    ) mem
    JOIN (
        SELECT database_id, cpu_time,
            CAST(cpu_time * 1.0 / SUM(cpu_time) OVER() * 100.0 AS DECIMAL(5, 2)) AS cpu_percent_usage
        FROM (
            SELECT database_id, SUM(total_worker_time) AS cpu_time
            FROM sys.dm_exec_query_stats AS qs
            CROSS APPLY (SELECT CONVERT(int, value) AS database_id
                            FROM sys.dm_exec_plan_attributes(qs.plan_handle)
                            WHERE attribute = N'dbid') AS F_DB
            GROUP BY database_id
        ) DB_CPU_Stats
    ) cpu
        ON mem.database_id = cpu.database_id
    JOIN (
        SELECT database_id, io_in_mb AS [total_io_mb],
            CAST(io_in_mb / SUM(io_in_mb) OVER() * 100.0 AS DECIMAL(5,2)) AS [io_percent]
        FROM (
            SELECT database_id,
                CAST(SUM(num_of_bytes_read + num_of_bytes_written) / 1048576 AS DECIMAL(12, 2)) AS io_in_mb
            FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS [DM_IO_STATS]
            GROUP BY database_id
        ) Aggregate_IO_Statistics
    ) [io]
        ON mem.database_id = [io].database_id;

Here’s something to get you information on reads and writes (cobbled from a few other queries):

    CREATE TABLE #results (
        DatabaseName SYSNAME,
        VLFcount INT
    );

    -- Create the SQL Server 2012-compatible table. We'll drop the column if it's not 2012.
    CREATE TABLE #stage (
        RecoveryUnitId INT,
        FileID INT,
        FileSize BIGINT,
        StartOffset BIGINT,
        FSeqNo BIGINT,
        [Status] BIGINT,
        Parity BIGINT,
        CreateLSN NUMERIC(38)
    );

    -- Probe the version: the insert succeeds on 2012+, fails on older versions.
    BEGIN TRY
        INSERT INTO #stage
        EXECUTE sp_executesql N'DBCC LOGINFO()';

        -- If it worked then we don't want this data twice.
        TRUNCATE TABLE #stage;
    END TRY
    BEGIN CATCH
        -- All versions between 2000 and 2008 R2 do not have RecoveryUnitId.
        ALTER TABLE #stage
            DROP COLUMN RecoveryUnitId;
    END CATCH;

    EXECUTE sp_msforeachdb N'USE [?];
        INSERT INTO #stage
        EXECUTE sp_executesql N''DBCC LOGINFO()'';

        INSERT INTO #results
        SELECT DB_NAME(), COUNT(*)
        FROM #stage;

        TRUNCATE TABLE #stage;';

    SELECT
        collection_date = SYSDATETIME(), --COALESCE(@CollectionDate, GETDATE()),
        file_type = CASE vfs.file_id WHEN 2 THEN 'Log' ELSE 'Data' END,
        database_name = DB_NAME(vfs.database_id),
        avg_read_stall_ms = CAST(io_stall_read_ms / (1.0 + num_of_reads) AS NUMERIC(10,1)),
        pct_read_stall = CAST(100. * vfs.io_stall_read_ms / (vfs.io_stall_read_ms + vfs.io_stall_write_ms) AS DECIMAL(10,1)),
        pct_num_reads = CAST(100. * vfs.num_of_reads / (vfs.num_of_reads + vfs.num_of_writes) AS DECIMAL(10,1)),
        pct_read_bytes = CAST(100. * vfs.num_of_bytes_read / (vfs.num_of_bytes_read + vfs.num_of_bytes_written) AS DECIMAL(10,1)),
        avg_write_stall_ms = CAST(io_stall_write_ms / (1.0 + num_of_writes) AS NUMERIC(10,1)),
        pct_write_stall = CAST(100. * vfs.io_stall_write_ms / (vfs.io_stall_write_ms + vfs.io_stall_read_ms) AS DECIMAL(10,1)),
        pct_num_writes = CAST(100. * vfs.num_of_writes / (vfs.num_of_reads + vfs.num_of_writes) AS DECIMAL(10,1)),
        pct_write_bytes = CAST(100. * vfs.num_of_bytes_written / (vfs.num_of_bytes_read + vfs.num_of_bytes_written) AS DECIMAL(10,1)),
        io_stalls = io_stall_read_ms + io_stall_write_ms,
        total_io = num_of_reads + num_of_writes,
        avg_io_stall_ms = CAST((io_stall_read_ms + io_stall_write_ms) / (1.0 + num_of_reads + num_of_writes) AS NUMERIC(10,1)),
        vlf = VLFcount
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        INNER JOIN sys.master_files AS mf
            ON vfs.database_id = mf.database_id
                AND vfs.[file_id] = mf.[file_id]
        INNER JOIN #results vlf
            ON DB_NAME(vfs.database_id) = vlf.DatabaseName;

    DROP TABLE #stage;
    DROP TABLE #results;

Please note that the same caveats apply here as in Remus’ answer. Additionally, much of this information is of little use as a one-off snapshot and is better suited to evaluation over time.

Short answer: no. SQL Server does not track these statistics at the database level. That also makes sense logically, since each batch request can (and often will, if you count the system databases) span multiple databases.

You can see which performance counters are available on a database level with this query:

    SELECT *
    FROM sys.dm_os_performance_counters
    WHERE [object_name] IN ('SQLServer:Databases', 'SQLServer:Catalog Metadata');
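For example, here is a sketch (mine, not from the original answer) that pulls one of those per-database counters. Treat the counter name literal as an assumption to verify against your instance, and note that despite the "/sec" in its name, the value is cumulative, so you need to sample twice and diff to get an actual rate:

    SELECT RTRIM(instance_name) AS database_name,
           cntr_value AS transactions_since_startup
    FROM sys.dm_os_performance_counters
    WHERE RTRIM([object_name]) = 'SQLServer:Databases'
        AND counter_name = 'Transactions/sec'  -- cumulative bulk counter
    ORDER BY cntr_value DESC;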

What could help you though is to do a query based analysis like this:

    SELECT
        DB_NAME(st.dbid) AS DbName,
        COUNT(1) AS QueriesInCache,
        SUM(qs.total_worker_time) AS TotalWorkerTime,
        SUM(qs.total_logical_reads) AS TotalLogicalReads,
        SUM(qs.total_logical_writes) AS TotalLogicalWrites,
        SUM(qs.execution_count) AS ExecCount
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) st
    WHERE st.dbid IS NOT NULL
    GROUP BY st.dbid;

However, this gives you only a cumulative view of the queries currently in the plan cache, so you have to do extra analysis to figure out changes over time and draw your conclusions from that.
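A minimal sketch of that over-time approach (my illustration; the #snap table names and the five-minute interval are arbitrary choices): take two snapshots a known interval apart and diff them. Plans evicted from or added to the cache between the snapshots will skew the deltas, so treat the result as indicative, not exact:

    -- First snapshot
    SELECT st.dbid,
           SUM(qs.total_worker_time) AS worker_time,
           SUM(qs.total_logical_reads) AS logical_reads
    INTO #snap1
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) st
    WHERE st.dbid IS NOT NULL
    GROUP BY st.dbid;

    WAITFOR DELAY '00:05:00'; -- sampling interval

    -- Second snapshot
    SELECT st.dbid,
           SUM(qs.total_worker_time) AS worker_time,
           SUM(qs.total_logical_reads) AS logical_reads
    INTO #snap2
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) st
    WHERE st.dbid IS NOT NULL
    GROUP BY st.dbid;

    -- Per-database delta over the interval
    SELECT DB_NAME(s2.dbid) AS DbName,
           s2.worker_time - ISNULL(s1.worker_time, 0) AS worker_time_delta,
           s2.logical_reads - ISNULL(s1.logical_reads, 0) AS logical_reads_delta
    FROM #snap2 s2
        LEFT JOIN #snap1 s1
            ON s1.dbid = s2.dbid
    ORDER BY worker_time_delta DESC;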

Edit: Using SQL Profiler, you would have to track every single request and then aggregate the trace data from time to time. That would put a substantial additional load on the server, so I wouldn’t consider it a viable solution for this question.
