Storage Spaces Direct webinar

Your choice:
File-Based Storage | Block Storage | SAN Alternative | Hyper-Converged Cloud Fabric | SMB3
Hyper-converged: a Hyper-V cluster with local storage.
Disaggregated: Hyper-V cluster(s) connected over an SMB3 storage network fabric to a Scale-Out File Server cluster.
• Software-Defined Storage Design Calculator
• Software-Defined Storage Design Considerations Guide
• When compute + storage needs grow together
• Great for small clusters
Disaggregated example: a 3-node Windows Server Scale-Out File Server connected by SAS cables to shared SAS storage enclosures.
Storage Spaces Direct example: four nodes with local disks, joined by an SMB network w/RDMA.
Shared SAS configurations:

Servers | Enclosures | Disks/Enclosure | Resiliency | Raw/Net Capacity (TB) | $/GB
2 | 1 | 4x 800 GB SSD, 8x 6 TB HDD | 2-way mirror | 50 / 23 | 1.41
2 | 2 | 12x 1.6 TB SSD, 48x 8 TB HDD | 3-way mirror | 770 / 230 | 1.14
4 | 4 | 12x 1.6 TB SSD, 48x 8 TB HDD | 3-way w/ EA | 1500 / 450 | 1.18

Storage Spaces Direct configurations (each node: 4x 800 GB SSD, 12x 4 TB HDD):

Servers | Resiliency | Raw/Net Capacity (TB) | $/GB
2 | 2-way mirror | 96 / 42 | 1.02
3 | 3-way mirror | 144 / 42 | 1.53
4 | 3-way mirror/dual-parity | 192 / 90 | 0.95
8 | 3-way mirror/dual-parity | 384 / 217 | 0.79
12 | 3-way mirror/dual-parity | 576 / 368 | 0.70
16 | 3-way mirror/dual-parity | 768 / 505 | 0.68
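
The raw figures follow from the per-node disk complement (cache SSDs do not add capacity). A minimal sketch of the arithmetic for the 16-node row, using the net figure from the table:

# Raw capacity counts capacity HDDs only; the SSDs act as cache.
$nodes      = 16
$hddPerNode = 12
$hddSizeTB  = 4
$rawTB = $nodes * $hddPerNode * $hddSizeTB      # 16 * 12 * 4 = 768 TB
$netTB = 505                                    # net figure from the table
"{0} TB raw -> {1} TB net ({2:P0} efficiency)" -f $rawTB, $netTB, ($netTB / $rawTB)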
Storage options (each column deployable disaggregated or hyper-converged):

Option | Shared SAS (2012 R2) | Shared SAS w/ReFS* (2016) | Storage Spaces Direct* (2016)
Flash support | SAS SSD | SAS SSD | SAS or SATA SSD, NVMe
Flash caches | 1 GB WBC per virtual disk | 1 GB WBC per virtual disk + ReFS read cache (40% of SSD capacity) | Read/write cache
Performance tier | Mirror | Mirror | Replaced by flash cache
Capacity tier | Mirror | Parity | Mirror (~10%) + Parity (~90%)
Tier optimization | Nightly (default) | Real-time w/ReFS | Real-time w/ReFS
Flash sizing | 5-6% of raw capacity | 5-6% of raw capacity | 5-6% of raw capacity
Resiliency options:

Option | Shared SAS (2012 R2) | Shared SAS (2016) | Storage Spaces Direct (2016)
Cluster nodes | 2-4 | 2-4 | 2*-16 (64?)
Enclosures | 1-4 | 1-4 | 0-1
MPIO | Yes | Yes | No
Max pool size | 84 disks | 84 disks | 240 disks
SSDs to repair | 0-1 per pool | 0-1 per pool | 0
HDDs to repair | 0-4 per pool | 0-4 per pool | 0-2 per node
Virtual disks/node | 2-4 | 2-4 | -
Fault domains | Disk or enclosure | Disk or enclosure | Cluster node, chassis, rack
Dual-site clusters w/Storage Replica | No | 2+ nodes per site | 3-4+ nodes per site
Network options:

Option | Shared SAS (2012 R2) | Shared SAS (2016) | Storage Spaces Direct (2016)
Bandwidth | >= 1 Gbit/s | >= 10 Gbit/s | >= 10 Gbit/s
Protocol | Ethernet, IB³ | Ethernet, IB³ | Ethernet, IB³
Interfaces/server | 2 | 2 | 2
NIC teaming | Not w/RDMA | Yes | Yes
SET teaming | - | Recommended | Recommended
RSS | Recommended | Recommended | Recommended
RSC | Recommended | Recommended | Recommended
RDMA¹ | Preferred | Preferred | Recommended
PacketDirect | - | Preferred | Preferred
In-box drivers | Not recommended for production² | Not supported² | Not supported²

¹ Either iWARP or RoCE; however, RoCE requires additional server and switch configuration/support (PFC and DCB) and VLANs.
² Use the approved NIC driver from the solution vendor.
³ Don't use InfiniBand w/hyper-converged.
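
With RDMA preferred or recommended in every column, it is worth confirming that SMB Direct will actually engage. A quick check with the in-box cmdlets (available since Windows Server 2012 R2):

# Adapter-level RDMA state, then what the SMB client sees.
Get-NetAdapterRdma | Format-Table Name, Enabled
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable, Speed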
Compute options:

Option | Shared SAS (2012 R2) | Shared SAS (2016) | Storage Spaces Direct (2016)
RAM for storage caches | None | 16 GB CSV cache? | 10 GB per 1 TB of flash cache + 16 GB CSV cache?
Total RAM | 64 GB | 64 GB | 64 GB
Extra RAM for VMs | 32-128 GB if hyper-converged | 32-128 GB if hyper-converged | 32-128 GB if hyper-converged
CPU | 2 6-core CPUs typical (add for extra SSDs, Data Deduplication, hyper-converged) | 2 6-core CPUs typical | 2 6-core CPUs typical
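
As a worked example of the "10 GB per 1 TB of flash cache" rule, take the 4x 800 GB SSD per-node configuration from the capacity table:

# Hypothetical node: four 800 GB cache SSDs.
$flashTB    = 4 * 0.8           # 3.2 TB of flash cache
$cacheRamGB = 10 * $flashTB     # ~32 GB of RAM for cache metadata
"{0} TB flash -> ~{1} GB RAM" -f $flashTB, $cacheRamGB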
Solution vendors:

Shared SAS (2012 R2): Dell, DataON, RAID Inc., HP
Shared SAS (2016): Dell, DataON, RAID Inc., HP
Storage Spaces Direct (2016): Dell, Cisco, Fujitsu, HP, Intel, Lenovo, Quanta
Example Storage Spaces Direct systems:
• Quanta D51PH
• HP Apollo 2000 System
• Dell PowerEdge R730xd
• Cisco UCS C3260 Rack Server
• Intel® Server Board S2600WT-based systems
• Lenovo System x3650 M5
• Fujitsu Primergy RX2540 M1
Tuning settings:

Scope | Setting | Shared SAS (2012 R2) | Shared SAS (2016) | Storage Spaces Direct (2016)
Node | File-delete notification (TRIM) | Disable w/FSUtil | Use defaults | Use defaults
Node | Storage Spaces registry changes | Optimize Storage Spaces repair settings on Scale-Out File Servers | Use defaults | Use defaults
Node | MPIO policy | Optimize with MPClaim | Use defaults | N/A
Node | Physical disk caches | Disable with Diskcache.exe | Use defaults | Use defaults
Node | CSV read cache | Max 64 TB; bypassed w/tiers | Supported | Supported
Pool | RetireMissingPhysicalDisks | Set with Set-StoragePool | Use defaults (handled by Health Service) | Use defaults (handled by Health Service)
Volume | Column count | Calculate w/spreadsheet | Use defaults | Use defaults
Volume | Volume size | 10 TB if using Volsnap for VSS snapshots | >10 TB if using ReFS snapshots; 10 TB if using Volsnap for VSS snapshots | >10 TB if using ReFS snapshots; 10 TB if using Volsnap for VSS snapshots
Volume | UseLargeFRS | Set with Format-Volume | Use defaults | Use defaults
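
For the Windows Server 2012 R2 column, a sketch of the corresponding commands (pool name and drive letter are placeholders; on 2016 the defaults stand, as the table notes):

# Disable file-delete notification (TRIM) on each node.
fsutil behavior set disabledeletenotify 1
# Retire missing disks immediately so repairs start promptly.
Set-StoragePool -FriendlyName Pool1 -RetireMissingPhysicalDisks Always
# Format with large file record segments.
Format-Volume -DriveLetter D -FileSystem NTFS -UseLargeFRS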
Diagram: the Storage Spaces Direct stack, bottom up: Software Storage Bus → Storage Spaces storage pool → Storage Spaces virtual disks → ReFS on-disk file system → CSVFS cluster file system, topped by either (1) a Scale-Out File Server serving virtual machines, or (2) virtual machines running directly on the cluster.
Diagram: the Software Storage Bus I/O path across two nodes. On each node, an application's I/O flows from the file system through the Cluster Shared Volumes File System (CSVFS) to the virtual disks, then through SpacePort and ClusPort; ClusPort reaches the ClusBflt on every node (block over SMB) to access that node's physical devices.
Caching behavior by storage configuration:

Storage configuration | Caching devices | Capacity devices | Caching behavior
SSD + HDD | SSD | HDD | Write + read
SSD + SSD | SSD | SSD | Write only
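
Once Storage Spaces Direct has claimed the disks, the cache/capacity split can be inspected; a minimal check with the standard Storage-module cmdlet (caching devices show Usage = Journal):

# List disks grouped by usage; cache devices carry the Journal usage.
Get-PhysicalDisk | Sort-Object Usage |
    Format-Table FriendlyName, MediaType, Usage, @{ n = 'SizeGB'; e = { [math]::Round($_.Size / 1GB) } }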
Deploying with TP5, simple path (pool and volume defaults):
New-Cluster
Enable-ClusterS2D
New-Volume

Deploying with TP5, manual path (explicit pool, cache devices, and tiers):
New-Cluster
Enable-ClusterS2D
New-StoragePool
Set-PhysicalDisk -Usage Journal
New-StorageTier -ResiliencySettingName Mirror
New-StorageTier -ResiliencySettingName Parity
New-Volume
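
Fleshed out, the simple path might look like this; cluster name, node names, volume size, and the S2D* pool wildcard are illustrative assumptions:

# Form the cluster without claiming shared storage.
New-Cluster -Name S2DCluster -Node Node1,Node2,Node3,Node4 -NoStorage
# Enable Storage Spaces Direct; eligible local disks are claimed and pooled.
Enable-ClusterS2D
# Carve a clustered ReFS volume from the auto-created pool.
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Volume1 -FileSystem CSVFS_ReFS -Size 1TB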
# All-flash: mirror and parity tiers both on SSD.
$mt = New-StorageTier -FriendlyName Mirror -MediaType SSD -ResiliencySettingName Mirror
$pt = New-StorageTier -FriendlyName Parity -MediaType SSD -ResiliencySettingName Parity
New-Volume -FriendlyName SQLServer -StorageTiers $mt -StorageTierSizes 1TB
New-Volume -FriendlyName GenericVM -StorageTiers $mt,$pt -StorageTierSizes 100GB,900GB
# Hybrid: mirror tier on SSD, parity tier on HDD.
$mt = New-StorageTier -FriendlyName Mirror -MediaType SSD -ResiliencySettingName Mirror
$pt = New-StorageTier -FriendlyName Parity -MediaType HDD -ResiliencySettingName Parity
New-Volume -FriendlyName SQLServer -StorageTiers $mt -StorageTierSizes 1TB
New-Volume -FriendlyName GenericVM -StorageTiers $mt,$pt -StorageTierSizes 100GB,900GB
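
The only difference between the two snippets is the parity tier's -MediaType: SSD yields an all-flash volume, HDD a hybrid one. To confirm the tier definitions before creating volumes:

# Inspect the defined tiers.
Get-StorageTier | Format-Table FriendlyName, MediaType, ResiliencySettingName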
Potential metadata devices
Virtual disk | Mirror | Parity | Mixed-resiliency
Optimized for | Performance | Capacity | Balanced
Use case | All data is hot | All data is cold | Mix of hot and cold
Efficiency | Least (33%) | Most (57+%) | Depends on mix
File system | ReFS (or NTFS) | ReFS (or NTFS) | Requires ReFS
Node requirement | 2+ | 4+ | 4+
Diagram: a mixed-resiliency virtual disk across Server 1 through Server 4. The mirror tier holds three copies of each extent (A, A', A''; B, B', B''); the parity tier holds data symbols with local and global parity (X1, X2, Px; Y1, Y2, Py; Q).
Nodes | Mirror tier efficiency | Parity tier efficiency | Resiliency
4 | 33% | 57% | 1 node
8 | 33% | 80% | 1 node
16 | 33% | 84% | 1 node

LRC erasure coding in Storage Spaces.
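
The parity-tier numbers follow from the LRC symbol counts; for the 4-node layout above (data symbols X1, X2, Y1, Y2 against parity symbols Px, Py, Q):

# 4 data symbols out of 7 total -> ~57% efficiency, matching the table.
$data = 4; $parity = 3
"{0:P0} parity-tier efficiency at 4 nodes" -f ($data / ($data + $parity))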
Diagram: with ReFS, writes (W) land in the mirror tier and are rotated into the parity tier in real time.