In addition to the consumer release of Windows 10 build 20206, Microsoft has also released a new server build with the same version number to Insiders. Windows Server Insider Preview build 20206 brings a few improvements to the Storage Migration Service and the SMB protocol.
Build 20206 comes from the Dev channel, so it includes changes that may or may not make it into the final release of Windows Server 21H1. It is a preview of the Long-Term Servicing Channel (LTSC) release.
File Services: SMB improvements
We’ve expanded the SMB 3.1.1 protocol in Windows Server vNext with a number of security and performance capabilities, including:
- AES-256 – Windows Server now supports the AES-256-GCM and AES-256-CCM cryptographic suites for SMB encryption and signing. Windows will automatically negotiate this more advanced cipher method when connecting to another computer that supports it, and the cipher can also be mandated through Group Policy. Windows Server still supports AES-128 for down-level compatibility.
- Compression – You can now copy files over SMB with compression using the Robocopy /compress and Xcopy /compress flags. If the destination computer supports SMB compression and the files being copied are compressible, you should see significant performance improvements. For more information and a demo of this behavior, visit the ITOps Talk Blog. Any patched Windows Server 2019 and Windows 10 computers already support compression; now you have command-line tools to make use of it.
- RDMA encryption – SMB Direct over RDMA networks now supports encryption. Previously, enabling SMB encryption would disable direct data placement, making RDMA performance as slow as TCP. Now data is encrypted before placement, leading to relatively minor performance degradation while adding AES-256-protected packet privacy.
- East-West storage encryption – Windows Server failover clusters now support granular control of encrypting and signing intra-node storage communications for Cluster Shared Volumes (CSV) and the storage bus layer (SBL). This means that when using Storage Spaces Direct, you can decide to encrypt or sign east-west communications within the cluster itself for higher security.
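As a rough illustration of the first two items, the new cipher preference and the compression flag can be exercised from an elevated prompt. This is a sketch: the -EncryptionCiphers parameter, the cipher-suite strings, and the share/server names below are as they appear in recent preview builds or are placeholders, so verify them on your own build first.

```powershell
# Prefer the new AES-256 suites for SMB encryption/signing, keeping AES-128 for down-level clients
# (parameter and suite names per recent Insider builds; check Get-Help on your build)
Set-SmbServerConfiguration -EncryptionCiphers "AES_256_GCM, AES_256_CCM, AES_128_GCM, AES_128_CCM"

# Require encryption on a specific share ("Data" is a placeholder)
Set-SmbShare -Name "Data" -EncryptData $true

# Copy with SMB compression; the flag is a request, honored only if the destination supports it
robocopy \\server1\data C:\data /compress
xcopy \\server1\data\*.* C:\data /s /compress
```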
Storage Migration Services improvements
Today marks the introduction of the third generation of Storage Migration Service improvements, including:
- AFS tiering support preview – SMS now supports migrating data to a server configured with Azure File Sync cloud tiering, which allows you to overprovision storage while dehydrating data to Azure Files in the cloud. SMS now understands this scenario and can slow or pause transfers to let AFS catch up on its tiering to the cloud. This feature also brings changes to the SMS extension in Windows Admin Center, which will release separately to the feed at a later date. Please follow the Microsoft FileCab blog for updates.
- Scenarios backported to Windows Server 2019 are included – Cluster migration, Samba (Linux) migration support, local security principal migration, and inter-network migration were all added to Windows Server 2019 as backported features after its release. If you had not patched, you did not have access to them. They are now included out of the box in Windows Server vNext.
You can find extra details in the official announcement.
Windows Server Insider Preview also received a batch of new features with the release of build 20201 on August 26, 2020. The key changes in build 20201 include:
CoreNet: Data Path and Transports
- MsQuic – an open source implementation of the IETF QUIC transport protocol powers both HTTP/3 web processing and SMB file transfers.
- UDP performance improvements – UDP is becoming a very popular protocol, carrying more and more network traffic. With the QUIC protocol built on top of UDP, and the increasing popularity of RTP and custom UDP streaming and gaming protocols, it is time to bring the performance of UDP to a level on par with TCP. Server vNext includes the game-changing UDP Segmentation Offload (USO), which moves most of the work required to send UDP packets from the CPU to the NIC's specialized hardware. Complementing USO, Server vNext also includes UDP Receive Side Coalescing (UDP RSC), which coalesces packets and reduces CPU usage for UDP processing. Alongside these two enhancements, hundreds of improvements have been made to the UDP data path, both transmit and receive.
- TCP performance improvements – Server vNext uses TCP HyStart++ to reduce packet loss during connection startup (especially in high-speed networks) and SendTracker + RACK to reduce Retransmit TimeOuts (RTO). These features are enabled in the transport stack by default and provide a smoother network data flow with better performance at high speeds.
- PktMon support in TCPIP — The cross-component network diagnostics tool for Windows now has TCPIP support providing visibility into the networking stack. PktMon can be used for packet capture, packet drop detection, packet filtering and counting for virtualization scenarios, like container networking and SDN.
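The UDP offloads and PktMon can both be inspected from the command line. The cmdlet names, flags, and the adapter name and subnet below match recent preview builds or are placeholders; they may vary across builds, so treat this as a sketch:

```powershell
# Check which adapters support UDP Segmentation Offload, and toggle it per adapter
Get-NetAdapterUso
Disable-NetAdapterUso -Name "Ethernet"
Enable-NetAdapterUso -Name "Ethernet"

# Capture packets for a container subnet with PktMon, then convert the log for analysis
pktmon filter add -i 10.244.0.0/16
pktmon start --capture
# ... reproduce the traffic you want to inspect ...
pktmon stop
pktmon etl2txt PktMon.etl
```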
(Improved) RSC in the vSwitch
RSC in the vSwitch has been improved for better performance. First released in Windows Server 2019, Receive Segment Coalescing (RSC) in the vSwitch enables packets to be coalesced and processed as one larger segment upon entry in the virtual switch. This greatly reduces the CPU cycles consumed processing each byte (Cycles/byte).
However, in its original form, once traffic exited the virtual switch, it would be re-segmented for travel across the VMBus. In Windows Server vNext, segments will remain coalesced across the entire data path until processed by the intended application. This improves two scenarios:
- Traffic from an external host, received by a virtual NIC
- Traffic from a virtual NIC to another virtual NIC on the same host
These improvements to RSC in the vSwitch are enabled by default; there is no action required on your part.
Direct Server Return (DSR) load balancing support for Containers and Kubernetes
DSR is an implementation of asymmetric network load distribution in load-balanced systems, meaning that request and response traffic use different network paths. Using different paths avoids extra hops and reduces latency, which not only speeds up response times between the client and the service but also removes some load from the load balancer.
Using DSR is a transparent way to achieve increased network performance for your applications with little to no infrastructure change.
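On the Kubernetes side, DSR is surfaced through kube-proxy on Windows nodes. As a hypothetical sketch, the WinDSR feature gate and flags below come from the upstream Kubernetes Windows networking documentation, and the CIDR is a placeholder; check support in your Kubernetes version:

```powershell
# Start kube-proxy on a Windows node with DSR enabled (requires a Windows build with DSR support)
.\kube-proxy.exe --proxy-mode=kernelspace `
  --feature-gates="WinDSR=true" `
  --enable-dsr=true `
  --cluster-cidr=10.244.0.0/16 `
  --hostname-override=$(hostname)
```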
Introducing Virtual Machine (Role) Affinity/AntiAffinity rules with Failover Clustering
In the past, we have relied on the group property AntiAffinityClassNames to keep roles apart, but it had no site-specific awareness. If one domain controller needed to be in one site and another in a second site, that placement was not guaranteed. You also had to remember to type the correct AntiAffinityClassNames string for each role.
The following PowerShell cmdlets are available:
- New-ClusterAffinityRule = Creates a new affinity or anti-affinity rule. There are four rule types (-RuleType):
- DifferentFaultDomain = keep the groups on different fault domains
- DifferentNode = keep the groups on different nodes (note: they could be on different or the same fault domain)
- SameFaultDomain = keep the groups on the same fault domain
- SameNode = keep the groups on the same node
- Set-ClusterAffinityRule = This allows you to enable (default) or disable a rule
- Add-ClusterGroupToAffinityRule = Add a group to an existing rule
- Get-ClusterAffinityRule = Display all or specific rules
- Add-ClusterSharedVolumeToAffinityRule = This is for storage Affinity/AntiAffinity where Cluster Shared Volumes can be added to current rules
- Remove-ClusterAffinityRule = Removes a specific rule
- Remove-ClusterGroupFromAffinityRule = Removes a group from a specific rule
- Remove-ClusterSharedVolumeFromAffinityRule = Removes a specific Cluster Shared Volume from a specific rule
- Move-ClusterGroup -IgnoreAffinityRule = This is not a new cmdlet, but it does allow you to forcibly move a group to a node or fault domain that a rule would otherwise prevent. In PowerShell, Cluster Manager, and Windows Admin Center, the group would be shown as in violation as a reminder.
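Putting the cmdlets together, the domain-controller scenario discussed in this section could be wired up roughly as follows. The group, rule, node, and CSV names are illustrative, and parameter names follow the preview documentation, so verify them against Get-Help on your build:

```powershell
# Keep the two domain controller VMs in different fault domains (sites)
New-ClusterAffinityRule -Name "DC-AntiAffinity" -RuleType DifferentFaultDomain
Add-ClusterGroupToAffinityRule -Name "DC-AntiAffinity" -Groups "DC1","DC2"

# Keep each DC together with its own CSV (storage affinity)
New-ClusterAffinityRule -Name "DC1-Storage" -RuleType SameFaultDomain
Add-ClusterGroupToAffinityRule -Name "DC1-Storage" -Groups "DC1"
Add-ClusterSharedVolumeToAffinityRule -Name "DC1-Storage" -ClusterSharedVolumes "Cluster Disk 1"

# Review the rules, and override one explicitly if a move is truly required
Get-ClusterAffinityRule
Move-ClusterGroup -Name "DC1" -Node "Node3" -IgnoreAffinityRule
```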
Now you can keep things together or apart. When moving a role, the affinity object ensures that the move is allowed. The object also looks at other associated objects and verifies those as well, including disks, so you can have storage affinity between virtual machines (or roles) and Cluster Shared Volumes if desired. You can also add multiple roles to a rule, domain controllers for example: set an anti-affinity rule so the DCs remain in different fault domains, then set an affinity rule between each DC and its specific CSV drive so they stay together. If you have SQL Server VMs that need to sit in each site with a specific DC, you can set a same-fault-domain affinity rule between each SQL VM and its respective DC. Because rules are now cluster objects, an attempt to move a SQL VM from one site to another checks all cluster objects associated with it: it sees the pairing with the DC in the same site, sees that the DC has its own rule, verifies it, sees that the DC cannot share a fault domain with the other DC, and disallows the move.
There are built-in overrides so that you can force a move when necessary. You can also easily enable or disable rules, unlike AntiAffinityClassNames with ClusterEnforcedAffinity, where you had to remove the property entirely to get a group to move and come online. Drain operations have also been extended: if a role must move to another fault domain and an anti-affinity rule would prevent it, the rule is bypassed. Any rule violations are exposed in both Cluster Manager and Windows Admin Center for your review.
Flexible BitLocker Protector for Failover Clusters
BitLocker has been available for Failover Clustering for quite some time, with the requirement that all cluster nodes be in the same domain, since the BitLocker key is tied to the Cluster Name Object (CNO). However, for clusters at the edge, workgroup clusters, and multi-domain clusters, Active Directory may not be present, and with no Active Directory there is no CNO. Those cluster scenarios had no at-rest data security. Starting with this Windows Server Insider build, we introduced our own BitLocker key, stored locally (encrypted) for the cluster to use. This additional key is only created when clustered drives are BitLocker-protected after cluster creation.
New Cluster Validation network tests
Networking configurations continue to get more and more complex. A new set of cluster validation tests has been added to help verify that these configurations are set up properly. The tests include:
- List Network Metric Order (driver versioning)
- Validate Cluster Network Configuration (virtual switch configuration)
- Validate IP Configuration
- Validate Network Communication
- Switch Embedded Teaming Configurations (symmetry, vNIC, pNIC)
- Validate Windows Firewall Configuration
- Validate that QoS (PFC and ETS) has been configured
(Note on the QoS test above: it does not verify that the settings are valid, only that settings are implemented. These settings must match your physical network configuration, so we cannot validate that they are set to the appropriate values.)
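These tests run as part of the standard validation wizard, and they can also be invoked selectively from PowerShell with Test-Cluster. The node names and -Include values below are illustrative, so match test names against your own validation report:

```powershell
# Run only the network-related validation tests against two nodes
Test-Cluster -Node "Node1","Node2" -Include "Network"

# Run a single named test (use the name exactly as it appears in the validation report)
Test-Cluster -Include "Validate Windows Firewall Configuration"
```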
Server Core Container images are 20 percent smaller
In what should be a significant win for any workflow that pulls Windows container images, the download size of the Windows Server Core container Insider image has been reduced by 20%. This was achieved by optimizing the set of .NET pre-compiled native images included in the Server Core container image. If you use the .NET Framework with Windows containers, including Windows PowerShell, use a .NET Framework image instead; it includes additional .NET pre-compiled native images to maintain performance for those scenarios while still benefiting from the reduced size.
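The smaller image is pulled the same way as before. The tag below is illustrative only, so check the Insider container registry for the current tag:

```powershell
# Pull the Server Core Insider image (tag shown is an example, not a guaranteed tag)
docker pull mcr.microsoft.com/windows/servercore/insider:10.0.20206.1000
```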
What’s new with the SMB protocol
Raising the security bar even further, SMB now supports AES-256 encryption. Performance is also improved when using SMB encryption or signing with SMB Direct on RDMA-enabled network cards. SMB can now also compress data to improve network performance.