What is SDDC?
SDDC refers to a data center in which the entire infrastructure (servers, networking, and storage) is virtualized and delivered as a service. In an SDDC environment, a software layer runs on top of the hardware to orchestrate the management of compute, storage, and networking services.
How VMware Implements SDDC
VMware SDDC is a fully integrated stack of hardware and software and consists of four components:
vSphere provides compute virtualization and includes the type-1 VMware ESXi hypervisor. It can also run Kubernetes workloads natively.
vSAN provides storage virtualization and is used in conjunction with vSphere to manage compute and storage in a single platform.
NSX provides networking and security virtualization and connects and protects the applications residing within the data center.
vRealize provides management, automation, self-service, intelligent operations, and financial transparency for both traditional and cloud-based application workloads.
The data center begins with the basic building blocks of any on-premises infrastructure: compute, storage and networking. VMware created a software-defined product to virtualize each: vSphere for compute, vSAN for storage and NSX for networking.
VMware had hoped to make the software-defined data center the de facto standard for many businesses. But with the shift toward cloud technologies, VMware now hopes to "cloudify" its core technology. The company wants to provide a consistent infrastructure, one vSphere-based platform, for workloads regardless of whether they run on premises or in the cloud.
VMware Cloud Foundation is an integrated software stack that bundles vSphere, VMware vSAN, and VMware NSX into a single platform through the SDDC Manager. An admin can deploy the bundle on premises as a private cloud or run it as a service within a public cloud. An administrator can provision an application immediately, without having to wait for network or storage provisioning.
Layers in the SDDC
Physical Configuration of the SDDC
Workload Domains
Compute, storage, and network resources are organized into workload domains. The physical layer also includes the physical network infrastructure and the storage setup.
Compute
The physical compute resources are delivered through ESXi, a bare-metal hypervisor that installs directly onto your physical server. With direct access and control of underlying resources, ESXi logically partitions hardware to consolidate applications and cut costs. ESXi is the base building block of the Software-Defined Data Center.
Network
VMware Validated Design can use most physical network architectures. When building an SDDC, the following considerations exist:
Layer 2 or Layer 3 transport types. This VMware Validated Design uses a Layer 3 network architecture.
A Top of Rack (ToR) switch is typically located inside a rack and provides network access to the servers inside that rack.
An inter-rack switch at the aggregation layer provides connectivity between racks. Links between inter-rack switches are typically not required. If a link failure between an inter-rack switch and a ToR switch occurs, the routing protocol ensures that no traffic is sent to the inter-rack switch that has lost connectivity.
Using quality of service tags for prioritized traffic handling on the network devices
NIC configuration on the physical servers. The VMware vSphere Distributed Switch supports several NIC teaming options. Load-based NIC teaming makes optimal use of the available bandwidth and provides redundancy if a link fails. Use a minimum of two 10-GbE connections (two 25-GbE connections recommended) for each ESXi host, in combination with a pair of top-of-rack switches.
VLAN port modes on both physical servers and network equipment. 802.1Q network trunks can support as many VLANs as required, for example, management, storage, overlay, and VMware vSphere vMotion traffic.
Because of these considerations, providing a robust physical network that supports the physical-to-virtual network abstraction is an important requirement of network virtualization.
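The VLAN trunking consideration above can be sketched as a small planning check. This is a minimal illustration, not a VMware tool: the traffic types come from the list above, but the VLAN IDs and the function itself are hypothetical.

```python
# Hypothetical VLAN plan for an ESXi host's 802.1Q trunk, covering the
# traffic types named above. The VLAN IDs are illustrative, not prescriptive.
TRAFFIC_VLANS = {
    "management": 1611,
    "vmotion": 1612,
    "storage": 1613,
    "overlay": 1614,
}

def validate_trunk(vlans: dict) -> list:
    """Return the sorted VLAN IDs to allow on the trunk, checking that
    each ID is a valid 802.1Q VLAN (1-4094) and that no ID is reused."""
    ids = list(vlans.values())
    if len(ids) != len(set(ids)):
        raise ValueError("duplicate VLAN ID in trunk plan")
    for vid in ids:
        if not 1 <= vid <= 4094:
            raise ValueError(f"VLAN {vid} outside 802.1Q range")
    return sorted(ids)

print(validate_trunk(TRAFFIC_VLANS))  # [1611, 1612, 1613, 1614]
```

Keeping each traffic type on its own VLAN is what lets a single pair of physical uplinks carry management, storage, overlay, and vMotion traffic with separate policies.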
Regions and Availability Zones
Availability Zone
Represents a fault domain of the SDDC. Multiple availability zones can provide continuous availability of an SDDC. This VMware Validated Design supports one availability zone per region.
Region
Each region is a separate SDDC instance. You use multiple regions for disaster recovery across individual SDDC instances.
In this VMware Validated Design, regions have similar physical and virtual infrastructure design but different naming.
Storage
This VMware Validated Design provides guidance for the storage of the management components. A shared storage system hosts not only the management and tenant or container workloads, but also template repositories and backup locations. Storage within an SDDC can include internal storage, external storage, or both, used as either principal or supplemental storage. For the management domain, this validated design uses internal vSAN storage as principal storage and external NFS storage as supplemental storage.
Internal Storage
vSAN is a software-based distributed storage platform that combines the internal compute and storage resources of clustered VMware ESXi hosts. By applying storage policies to a cluster, you configure multiple copies of the data. As a result, the data remains accessible during maintenance and host outages.
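The capacity cost of those multiple copies is easy to estimate. As a simplified sketch: with a vSAN RAID-1 mirroring policy, a "failures to tolerate" (FTT) setting of n keeps n + 1 full copies of each object, so raw capacity scales accordingly. This ignores witness components, metadata overhead, and erasure-coding policies.

```python
# Sketch of how vSAN RAID-1 mirroring multiplies raw capacity needs.
# With failures to tolerate (FTT) = n, a mirrored object keeps n + 1
# full data copies, so raw capacity ~= usable capacity * (n + 1).
# Simplified: witness/metadata overhead and erasure coding are ignored.
def raw_capacity_needed(usable_gb: float, ftt: int = 1) -> float:
    if ftt < 0:
        raise ValueError("FTT cannot be negative")
    copies = ftt + 1
    return usable_gb * copies

print(raw_capacity_needed(1000, ftt=1))  # 2000.0 -> two copies of 1 TB
```

This is why a cluster sized only for its usable data set runs out of space once the default FTT=1 policy doubles the footprint.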
External Storage
External storage provides non-vSAN storage by using NFS, iSCSI, or Fibre Channel. Different types of storage can provide different levels of SLA, ranging from just a bunch of disks (JBOD) using SATA drives with minimal to no redundancy, to fully redundant enterprise-class storage arrays.
Principal Storage
VMware vSAN storage is the default storage type for the SDDC management components. All design, deployment, and operational guidance is based on vSAN. Block or file storage technology for principal storage is out of scope for this design. These storage technologies are referenced only for specific use cases, such as backups to supplemental storage.
The storage devices on vSAN ready servers provide the storage infrastructure. This validated design uses vSAN in an all-flash configuration.
For workloads in workload domains, you can use vSAN, vVols, NFS, and VMFS on FC.
Supplemental Storage
NFS storage is the supplemental storage for the SDDC management components. It provides space for archiving log data and application templates.
Supplemental storage provides additional storage for backup of the SDDC. It can use NFS, iSCSI, or Fibre Channel technology. Different types of storage can provide different levels of SLA, ranging from JBODs with minimal to no redundancy, to fully redundant enterprise-class storage arrays. For bandwidth-intensive IP-based storage, the bandwidth of these pods can scale dynamically.
Virtual Infrastructure Layer
The virtual infrastructure layer of the SDDC contains ESXi, vCenter Server, vSAN, and NSX-T Data Center, which provide compute, networking, and storage resources to the management and tenant workloads.
Cluster Types
This VMware Validated Design uses the following types of clusters:
First Cluster in the Management Domain
Shared Edge and Workload Cluster in a Virtual Infrastructure Workload Domain
First Cluster in the Management Domain
Resides in the management domain and runs the virtual machines of the components that manage the data center, such as vCenter Server, NSX-T Manager, SDDC Manager, Workspace ONE Access, VMware vRealize Suite Lifecycle Manager, VMware vRealize Operations Manager, VMware vRealize Log Insight, vRealize Automation, and other management components.
The first management cluster occupies half a rack.
Shared Edge and Workload Cluster
Represents the first cluster in the virtual infrastructure workload domain and runs the required NSX-T services for north-south routing between the data center and the external network, and east-west routing inside the data center. This shared cluster also hosts the tenant workloads. As you extend your environment, you must add workload-only clusters.
Workload Cluster
Resides in a virtual infrastructure workload domain and runs tenant workloads. Use workload clusters to support a mix of workload types with different Service Level Agreements (SLAs), providing separate compute pools for each SLA tier.
vCenter Server Design
Layout of vCenter Server Clusters
vCenter Server Design Details
Dynamic Routing and Virtual Network Segments
This VMware Validated Design supports dynamic routing for both management workloads and tenant or container workloads, and also introduces a model of isolated application networks for the management components.
Virtual network segments are created on the vSphere Distributed Switch for the first cluster in the management domain and for the shared edge and workload cluster in a workload domain.
Dynamic routing support includes the following nodes:
Dynamic Routing in a Single Region
Routing Devices for a Multi-Region SDDC
NSX-T Edge cluster
Tier-0 gateway with ECMP enabled for north-south routing across all regions. You apply the no-export BGP community to all routes learned from external neighbors. Because the NSX-T SDN in the first and second regions does not have an independent path between those autonomous systems, re-advertising data center networks would give a false indication of a valid, independent path.
Tier-1 gateway for east-west routing across all regions
Tier-1 gateway for east-west routing in each region
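The no-export behavior described for the Tier-0 gateway can be modeled conceptually: a route carrying the well-known no-export community stays within the local autonomous system and is never re-advertised to external (eBGP) neighbors. The sketch below is a toy model of that filtering rule, not NSX-T or BGP implementation code; the route structures are hypothetical.

```python
# Conceptual model of the no-export BGP community: a route tagged
# "no-export" is kept within the local autonomous system and is never
# re-advertised to external (eBGP) neighbors. Toy model, not real BGP code.
NO_EXPORT = "no-export"

def routes_to_advertise(routes, neighbor_is_external: bool):
    """Filter the routes a router may advertise to a given neighbor."""
    if not neighbor_is_external:
        return list(routes)  # internal (iBGP) peers still learn everything
    return [r for r in routes if NO_EXPORT not in r.get("communities", ())]

learned = [
    {"prefix": "172.16.0.0/16", "communities": (NO_EXPORT,)},  # learned externally
    {"prefix": "10.0.0.0/8", "communities": ()},               # local data center
]
print(routes_to_advertise(learned, neighbor_is_external=True))  # only 10.0.0.0/8
```

Tagging everything learned from external neighbors this way is what prevents one region's Tier-0 gateway from advertising a path through the other region that does not actually exist independently.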
Virtual network segments provide support for limited access to the nodes of the applications through published access points.
Virtual Network Segment Design
Cross-region virtual network segment that connects the components that are designed to fail over to a recovery region.
Region-specific virtual network segment in Region A for components that are not designed to fail over.
Region-specific virtual network segment in Region B for components that are not designed to fail over.
Software-Defined Storage Design
In each region, workloads on the management cluster store their data on a vSAN datastore. The vSAN datastore spans all four ESXi hosts of the first cluster in the management domain and of the shared edge and workload cluster in a workload domain. Each host adds one disk group to the datastore.
Applications store their data according to the default storage policy for vSAN.
vRealize Log Insight uses NFS exports as supplemental storage for log archiving.
Shared Storage Logical Design