New D-Series of Azure VMs with 60% Faster CPUs, More Memory and Local SSD Disks

Today I’m excited to announce that we just released a new set of VM sizes for Microsoft Azure. These VM sizes are available immediately to every Azure customer.

The new D-Series of VMs can be used with both Azure Virtual Machines and Azure Cloud Services.  In addition to offering faster vCPUs (approximately 60% faster than our A-Series) and more memory (up to 112 GB), every new VM size also includes a local SSD disk (up to 800 GB) to enable much faster IO reads and writes.

The new VM sizes available today include the following:

General Purpose D-Series VMs

Name          vCores   Memory (GB)   Local SSD Disk (GB)
Standard_D1   1        3.5           50
Standard_D2   2        7             100
Standard_D3   4        14            200
Standard_D4   8        28            400
High Memory D-Series VMs

Name           vCores   Memory (GB)   Local SSD Disk (GB)
Standard_D11   2        14            100
Standard_D12   4        28            200
Standard_D13   8        56            400
Standard_D14   16       112           800

For pricing information, please see Virtual Machine Pricing Details.

Local SSD Disk and SQL Server Buffer Pool Extensions

A temporary drive on the VMs (D:\ on Windows, /mnt or /mnt/resource on Linux) is mapped to the local SSDs exposed on the D-Series VMs. It is a good option for replicated storage workloads, like MongoDB, and for significantly increasing the performance of SQL Server 2014 by enabling its unique Buffer Pool Extensions (BPE) feature.

SQL Server 2014’s Buffer Pool Extensions feature allows you to extend the SQL Engine Buffer Pool onto local SSD storage to significantly improve the performance of read-heavy SQL workloads. The Buffer Pool is a global memory resource used to cache data pages for much faster read operations.  Without any code changes in your application, you can enable the buffer pool extension on the SSD of a D-Series VM using a simple T-SQL statement:

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'D:\SSDCACHE\EXAMPLE.BPE',
     SIZE = <size> [ KB | MB | GB ]);

No code changes are required in your application, and all write operations will continue to be durably persisted to VM data drives backed by Azure Storage. More details on configuring and using BPE can be found here.
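Once the extension is enabled, you can confirm its state, file path, and size by querying the sys.dm_os_buffer_pool_extension_configuration DMV. A quick sketch (the file path assumes the example above):

```sql
-- Check whether Buffer Pool Extension is enabled, where its file lives, and its size
SELECT [path], state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;

-- To turn the extension back off:
-- ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;
```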

Start Using the D-Series VMs Today

You can start using the new D-Series VM sizes immediately.  They can be easily created and used via both the current Azure Management Portal and the Preview Portal, as well as from the Azure management command line, scripts, and APIs.
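As a sketch of the scripting route, creating a D-Series VM with the classic (service management) Azure PowerShell cmdlets might look like the following; the service name, VM name, image name, and credentials below are placeholders, not values from this post:

```powershell
# Classic Azure PowerShell (service management cmdlets).
# All names and credentials below are placeholders.
New-AzureVMConfig -Name "my-d-series-vm" -InstanceSize "Standard_D2" `
                  -ImageName $imageName |
    Add-AzureProvisioningConfig -Windows -AdminUsername "adminuser" -Password $password |
    New-AzureVM -ServiceName "my-cloud-service" -Location "West US"
```

The Standard_D* size names from the tables above are exactly what you pass to -InstanceSize.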

To learn more about the D-Series, please read this post, which has even more details about them, and check out the Azure documentation center.

Hope this helps,

Scott

22 Comments

  • That's cool. SSD drives are really important nowadays. However, I hope Microsoft is going to reverse the latest Azure SQL changes, because we really need SQL federations.

  • It doesn't really (help).
    Only a temp drive so can't really use it for our app storage the way we'd like
    Would need to rewrite other pieces to use non-persistent location just to speed up what we can
    Pretty hefty price increase on D size smells of opportunism
    But yeah I guess it's good to acknowledge that the disk blob storage on the A series is slow and striping 57 blobs together to get decent throughput isn't a realistic solution
    grumpy today I guess

  • Hmm, so only temporary data is available through SSD? This creates quite a few limitations; for example, database logs can't be stored there.

  • It could be better to create a VHD based on an SSD drive to attach to the VM as a data disk, or just increase IOPS per disk. For now it is impossible to use a SQL Server VM with a big DB that requires high performance. But I'll try SQL Server Buffer Pool Extensions - it could help me.

  • Hi,

    This is exactly what I need (the local SSD side of things), but it seems that I cannot use these VMs with a virtual network, which is a blocker for me. After a process of elimination, it was the Virtual Network which caused the deployment to fail.

    Jas

  • Yeah, this is great and all! I've googled the Buffer Pool Extension and people say it can increase perf 40%. That's better than nothing. But let's face the fact: 40% is nothing compared to what SSDs can yield.
    Here's SQLIO on a standard persisted (non-striped) Azure disk:
    F:\SQLIO>call sqlio.exe -t128 -f128 -kW -b1
    sqlio v1.5.SG
    128 threads writing for 30 secs to file testfile.dat
    using 1KB IOs over 128KB stripes with 64 IOs per run
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 516.42
    MBs/sec: 0.50

    Here's the exact same test on a D-series 200 GB SSD temporary disk:

    D:\SQLIO>call sqlio.exe -t128 -f128 -kW -b1
    sqlio v1.5.SG
    128 threads writing for 30 secs to file testfile.dat
    using 1KB IOs over 128KB stripes with 64 IOs per run
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 191372.53
    MBs/sec: 186.88

    That's 516 vs. 191373 IOPS. That's 370 times more IOPS.

    Scottgu, you're a genius. And it doesn't take a genius to know this is EXTREMELY frustrating and a total letdown. IMHO it would be better to not release the half-baked SSD support you have now and put this where it belongs - in the Azure Storage System. But in a premium tier of the Storage already! :)

  • @Scott - This is a great performance enhancement to Azure VMs! Thank you!

    @Dennis - I agree that you need to be able to partition and shard databases, and actually there are more powerful ways to do that than Federations supported, and it's not difficult. I wrote about sharding in Microsoft Azure SQL Database Step by Step which happens to be on-sale this week, https://www.microsoftpressstore.com/deals. And I also have a session that discusses sharding at the upcoming http://www.AzureConf.net. Feel free to contact me if you'd like to discuss further.

  • @Mikael Koskinen, @Oleg, @Hans - Storing Buffer Pool Extensions and TempDB on SSDs will significantly improve the performance of read workloads whose working set doesn't fit in memory, and of operations that use temp data heavily (e.g. index rebuilds or queries handling large recordsets). With regards to log writes (persistent storage), there's ongoing work to raise the IOPS per disk and reduce write latency. In the meantime, you can achieve higher log throughput by configuring a storage pool with 2-3 disks (or just using the SQL Server image optimized for OLTP, which implements multiple perf best practices).
    Feel free to contact me to discuss further.

  • Just did some testing on the D-series with 50 GB of BPE. This might help a good bit with read-intensive workloads, but with write-intensive workloads I didn't see any performance improvements.

    I guess that's what I should have expected, but anyways, I spent a few hours testing it, so why not post my results here...

  • @Luis - I tried this, if the OLTP image is similar to what is generated by the high_IO powershell script avail on TechNet. This is no panacea. I chose A4 size and thus the "optimized" VM you get is one drive consisting of 16 striped blobs. This presents problems of its own. For one, it's not really the configuration I'd be after. For two, the selling point to this customer was the ability to downsize the VM off peak and save some $$. Can't go back to A3 (or lower) with 16 blobs. What a mess.

  • @Luis, I'd love to contact you and make my case. I can be reached at hans.olav (at) linkmobility.com

    Right now I'm migrating away from SQL Server Standard on VM towards using Azure Table Storage instead. In Table Storage I get around ~14K IOPS for inserting <1K sized records, in SQL Server on VM with 28 GB RAM and 8 striped disks I get 300 IOPS.

    I really love Azure, and I really love SQL Server, but:
    * Azure is two years behind when it comes to provisioned IOPS. Let's face it: Amazon is 2 years ahead here.
    * SQL Server is a great product, but without SSD it's not usable for OLTP with high write-loads.
    * SQL Server Standard is too expensive & has features I don't need.
    * SQL Server Web isn't even highly available, so that's only for stupid people. But why not increase price and have availability??
    Who can live with a single-point-of-failure database in a cloud environment? No one or stupid people.

    The Azure SQL Database and Azure SQL Server VM story would improve to stellar heights if you had:
    * SSD-backed Azure Storage (Premium Tier or whatever) for SCREAMING SQL Server perf.
    * Azure SQL Database D-series which runs on SSD - Man, I just gave myself a nerdgasm right now thinking about it..

    'Hans Olav.

  • I already made a feedback suggestion for provisioned IOPS on Azure Storage:
    http://feedback.azure.com/forums/217298-storage/suggestions/4900829-provisioned-iops

    And also one for SSD-backed Azure SQL Databases:
    http://feedback.azure.com/forums/217321-sql-database/suggestions/6296025-add-possibility-for-ssd-based-high-iops-database

    Please go check them out and vote up if you agree. It's good karma!

    'Hans Olav.

  • @Scott: Will the higher CPU Performance also be available for WebSites? (hopefully with the possibility to upgrade existing installations :-) )

  • Hello, Hans
    We are listening to customers' feedback, and we are continuously improving our service performance. Please stay tuned.

  • I'm considering moving my stuff to Azure in the near future. Your post is really helpful; thank you for the information.

  • Hi,

    I think Azure also offers a free trial. Let's give it a try; I hope I will now experience better speed for my websites.

  • This is great news. Are we able to use these new vm sizes in Visual Studio yet?

  • I already installed some MongoDB clusters with a 3-node ReplicaSet running on D-Series VMs with Linux, and they're really fast! This is a great improvement! Keep up the great work!

  • Standard_D1 VMs have really slow disk performance, with many glitches, not usable for anything :(
    Any idea how to fix it?

  • Hey Ratko, thanks for raising the concerns. I would like to dig in to your issues a bit and understand what you are seeing and how we might be able to resolve and/or publish better guidance. If you wouldn't mind, please send me a mail so we can dig in (corey.sanders@microsoft.com).

  • This is awesome!

  • With MS not recommending putting TEMPDB on these SSDs, this temporary storage becomes quite useless to regular SQL users.
