In Windows Server 2012, Microsoft introduced the CSV Cache for Hyper-V and Scale-Out File Server clusters. The CSV Block Cache is basically a RAM cache that lets you serve read IOPS from the memory of the Hyper-V or Scale-Out File Server cluster nodes. In Windows Server 2012 you had to set the cache size and enable the cache on every CSV volume. In Windows Server 2012 R2 the CSV Block Cache is enabled by default on every CSV volume, but its size is set to zero, which means the only thing you have to do is set the size of the cache.
# Get CSV Block Cache Size
(Get-Cluster).BlockCacheSize

# Set CSV Block Cache Size to 512MB
(Get-Cluster).BlockCacheSize = 512
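For comparison, on Windows Server 2012 you sized the cache via the SharedVolumeBlockCacheSizeInMB cluster property and then enabled it per volume. A minimal sketch, assuming a CSV named "Cluster Disk 1" (the volume name is just a placeholder):

# Windows Server 2012: size the cache cluster-wide (value in MB)
(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512
# ...then enable the block cache on each CSV volume individually
Get-ClusterSharedVolume "Cluster Disk 1" | Set-ClusterParameter CsvEnableBlockCache 1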
Microsoft recommends 512MB of cache on a Hyper-V host. On a Scale-Out File Server node things are a little different: in Windows Server 2012 Microsoft allowed a cache size of up to 20% of the server's RAM; in Windows Server 2012 R2 this was changed, so you can now use up to 80% of the RAM of a Scale-Out File Server node, with a maximum of 64GB.
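If you want to script that upper bound on a SOFS node, a minimal sketch could look like the following; the 0.8 factor and the 64GB cap simply restate the limits above, they are not an official sizing formula:

# Read the node's physical memory in MB
$totalMB = (Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1MB
# Take 80% of it, capped at 64GB (BlockCacheSize is specified in MB)
$cacheMB = [math]::Min([math]::Floor($totalMB * 0.8), 64 * 1024)
(Get-Cluster).BlockCacheSize = $cacheMB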
Back in the days of Windows Server 2012 I ran a little CSV Cache benchmark on my Hyper-V hosts.
Worth noting is that the CSV cache in a Windows Server 2012 R2 SOFS isn't used with tiered storage spaces. For more information, see the blog post Aidan Finn wrote after receiving an e-mail about this from Bart Van Der Beek.
Yes, this is known, but not everyone is using a SOFS and not everyone is using tiered Storage Spaces.
You write: "The CSV Block Cache is basically…cache write IOPS". I think you mean read IOPS; it does not cache writes. Secondly, just as related info: if you use tiered spaces in R2 with the heatmap (which you want, and which is required for auto-tiering), you cannot use the CSV cache at all… Instead you then use file caching, or the deduplication cache if you have also set the CSV volume to be deduplicated…
Sorry, writing mistake, of course it is read IOPS.
512MB seems very small. Our virtual hosts have 256GB of RAM. Is there a formula to help right-size the cache?
If you are using the CSV cache on Hyper-V hosts, 512MB is the best-value setting; adding more memory does not bring a lot more performance. If you are running the CSV cache on a Scale-Out File Server, you can use up to 80% of the server's memory for it.
Isn't the value in bytes unless you specify a unit?
PS C:\Scripts> (Get-Cluster).BlockCacheSize = 512
PS C:\Scripts> (Get-Cluster).BlockCacheSize
512
PS C:\Scripts> (Get-Cluster).BlockCacheSize = 512MB
PS C:\Scripts> (Get-Cluster).BlockCacheSize
536870912
PS C:\Scripts>
Nevermind.
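What the transcript above demonstrates (my reading of it, not something the cmdlet spells out): BlockCacheSize is interpreted in megabytes, and PowerShell expands the 512MB literal to the plain number 536870912, so the second assignment actually requests a cache of roughly 512TB:

# BlockCacheSize is in MB; 512MB expands to 536870912, i.e. ~512TB of cache
(Get-Cluster).BlockCacheSize = 512MB
# The intended 512MB cache is set with the plain number
(Get-Cluster).BlockCacheSize = 512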
I think the performance counters now work straight away in 2012 R2 (with updates), in my testing at least.
So all that's needed is to set a value for the cache size:
(Get-Cluster).BlockCacheSize = 512
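To verify the cache is actually serving reads, you can look at the CSV cache performance counters. A minimal sketch, assuming the counter set on 2012 R2 is named "Cluster CSV Volume Cache" (check Get-Counter -ListSet * if the name differs on your build):

# List the available CSV cache counters on this node
(Get-Counter -ListSet "Cluster CSV Volume Cache").Paths
# Sample all of them once, for every CSV volume
Get-Counter "\Cluster CSV Volume Cache(*)\*"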
I use 10% on 2012 R2 Hyper-V servers. Huge difference in performance. On one cluster the setting was 25GB, and all 25GB is used at times; it floats between 9GB and 25GB. 512MB did nothing.
Of course it can make a difference; it always depends on the workloads you are running. In the Microsoft tests they did not see a huge benefit from going over 512MB. Same for us: in our own tests we couldn't see a big improvement in real-world (!) performance with our test workloads when we used more than 512MB of RAM. But it is absolutely possible that you can get more out of it, especially with VDI workloads, boot storms, etc.
However, we did get some benefits from using more CSV Cache on our SOFS servers.