I’m doing some digging into potential memory/configuration problems on a SQL Server 2012 SP2 instance, and I stumbled upon something that seems a bit strange in the performance counters as well as in the DBCC MEMORYSTATUS information.
Here’s the info from the buffer pool section of DBCC MEMORYSTATUS:
If I’m reading things correctly, the value is the number of 8 KB pages. The server, which is a VM, has 16 GB of memory total, and SQL Server is assigned a min/max server memory value of 12 GB.
The “Target” value of 32751616 gives me about 262 GB, which just so happens to be the total amount of memory in the VM host. If I check the “Target pages” performance counter on the server, I see the same number. I’m guessing this hints at the VM not being configured properly, but what I’m really wondering is how SQL Server will handle this. Will it impact performance, and will the server try to commit more memory than is actually available?
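For reference, the same number can be pulled straight from the DMV. A minimal sketch (for a named instance the object_name prefix would be MSSQL$<instance name> rather than SQLServer, hence the LIKE):

```sql
-- Read the "Target pages" counter of the Buffer Manager object and
-- convert it from 8 KB pages to gigabytes for comparison.
SELECT counter_name,
       cntr_value                 AS pages,
       cntr_value * 8 / 1048576.0 AS approx_GB
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Target pages';
```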
Looking at the memory manager section, things seem more normal:
I’ve been searching the web for information about the buffer pool target exceeding max server memory, but I haven’t been able to come up with much, so any input is much appreciated.
Edit: Adding a picture of the performance counter.
The buffer pool “Target” value is not an amount in KB but is actually a number of 8 KB pages, which you were correct to assume. So, as per your output, the target buffer pool memory would be 8 × 32751616 KB, which comes to about 249 GB. That is really absurd.
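The conversion itself, as a quick check (8 KB per page, 1 GB = 1048576 KB):

```sql
-- 32751616 pages, at 8 KB per page, expressed in GB.
SELECT 32751616                 AS target_pages,
       32751616 * 8             AS target_kb,   -- 262012928 KB
       32751616 * 8 / 1048576.0 AS target_gb;   -- ~249.9 GB
```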
Now, the reason the buffer pool target shows ~250 GB in DBCC MEMORYSTATUS while the memory manager shows 12 GB as the target committed is that the counter SQL Server Buffer Manager: Target Pages is DEPRECATED from SQL Server 2012 onwards. Microsoft did not specifically mention this anywhere, but I found it from the SQL Server Buffer Manager Object BOL document. If you read that article you won’t find the Target Pages counter, because it is deprecated; you will, however, see the same counter present in the SQL Server 2008 R2 documentation.
Another reason may be that, I assume, your system is a NUMA system with multiple NUMA nodes. I am sure that if you look at the MEMORYSTATUS output, it will have a memory allocation breakdown for the various nodes (Node 0, Node 1 … Node 64), so the target includes the target memory allocation for all NUMA nodes. If you note the Buffer Pool section in the MEMORYSTATUS output, it has a single entry and is not restricted to a particular node. Moreover, this value is calculated during startup and does not show the correct value after the instance has been running for a while, so you should not refer to it. Refer to the Memory Manager output instead.
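To get the target/committed figures the Memory Manager section reports, a sketch against sys.dm_os_sys_info (the committed_kb and committed_target_kb columns exist on SQL Server 2012 and later, where they replaced the old bpool_* columns):

```sql
-- Memory manager view: how much is committed vs. the commit target.
SELECT committed_kb        / 1024 AS committed_MB,
       committed_target_kb / 1024 AS target_committed_MB
FROM sys.dm_os_sys_info;
```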
As for your question regarding buffer pool target pages on a NUMA node: believe me, it is not showing the correct value for the buffer pool. On my system with 4 GB of RAM, Target Pages is 7979008 with almost nothing running on the system. This is totally absurd, and I would ask you not to look at this counter. I had a chat with an expert in this field to confirm the behavior, and he told me to avoid looking at the buffer pool output and Target Pages, especially on NUMA systems, because the way it is calculated is not correct. I can raise a Connect item, but it would take a few months to get a response from the MS team.
Not everything in the DBCC MEMORYSTATUS output matters to normal users; many things are internal to Microsoft, and they just won’t give out any information about them. If you want to monitor SQL Server memory usage, please use the query below.
SELECT
    (physical_memory_in_use_kb / 1024)      AS Memory_usedby_Sqlserver_MB,
    (locked_page_allocations_kb / 1024)     AS Locked_pages_used_Sqlserver_MB,
    (total_virtual_address_space_kb / 1024) AS Total_VAS_in_MB,
    process_physical_memory_low,
    process_virtual_memory_low
FROM sys.dm_os_process_memory;
If you really want to monitor SQL Server target memory, I would suggest you look at:

SQLServer:Memory Manager – Target Server Memory (KB): the amount of memory SQL Server is trying to acquire.

SQLServer:Memory Manager – Total Server Memory (KB): the amount of memory SQL Server has currently acquired.
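Both counters can also be queried from T-SQL; a sketch (again, the object_name prefix depends on whether the instance is default or named):

```sql
-- Target vs. total server memory from the Memory Manager counters, in MB.
SELECT counter_name,
       cntr_value / 1024 AS value_MB
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Target Server Memory (KB)',
                       'Total Server Memory (KB)');
```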
There are other counters as well, which you can get from This Link.
A few other interesting counters are:

Page reads/sec – the number of physical database page reads issued per second, totaled across all databases. Because physical I/O is expensive, you may be able to minimize the cost either by using a larger data cache, intelligent indexes, and more efficient queries, or by changing the database design.

Free List Stalls/sec – the number of requests per second that had to wait for a free page.
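These two can be sampled from the same DMV. Note that the /sec counters there are cumulative raw values, so to get an actual per-second rate you need to sample twice and divide the difference by the interval:

```sql
-- Raw (cumulative) Buffer Manager counters; diff two samples for a rate.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page reads/sec', 'Free list stalls/sec');
```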
PS: Please let me know if I have missed anything or if something is not clear
Edit: Here is the Connect item for your issue; let’s wait and see what the MS folks have to say about it. Please vote for it.