Category: vCloud Director

  • How the VCF 9 Fleet Sizer Actually Works


    A complete walkthrough of every calculation behind the tool — from raw NVMe capacity to ESA protection factors, NVMe memory tiering, and VCF licence entitlement. No black boxes.


    Table of Contents

    1. What the tool sizes
    2. Host specification inputs
    3. Management VM stack
    4. Compute sizing formula
    5. vSAN ESA storage pipeline
    6. Protection policies & PF table
    7. Final host count & limiter
    8. NVMe memory tiering
    9. External storage mode
    10. VCF licence entitlement
    11. Principal storage options (KB 416270)
    12. Assumptions & caveats

    1. What the tool sizes

    The VCF 9 Fleet Sizer calculates the minimum number of ESXi hosts required across a VMware Cloud Foundation deployment — one Management Domain and any number of VI Workload Domains. For each domain it independently determines whether CPU, memory, or storage is the binding constraint, and returns the host count driven by the most demanding dimension.

    The sizer is built specifically for VCF 9 with vSAN ESA — the Express Storage Architecture that requires NVMe-only drives and operates as a single storage tier without a separate cache/capacity split. It also models external storage mode (Fibre Channel, NFS) where hosts are sized on compute and memory only, and a disaggregated NVMe memory tiering model unique to VCF 9.

    ⚠️ Planning aid only — not an official Broadcom tool. All outputs are estimates based on the inputs you provide. Validate every design against official Broadcom documentation, the VMware HCL, and field engineering guidance before procurement or deployment. Real-world DRR and vSAN overheads vary significantly by workload.


    2. Host specification inputs

    Every domain (management and each WLD) has an independent host specification. The tool does not assume all hosts are identical across domains — a management cluster might run 2×16c hosts while a production WLD uses 2×32c AI-optimised nodes.

    Input | Default | Used in | Notes
    CPU Qty | 2 | Core count, licensing | Sockets per host
    Cores per CPU | 16 | Core count, licensing | Physical cores — no hyperthreading multiplier applied
    RAM (GB) | 1,024 | Memory sizing | Total usable host RAM
    NVMe Qty | 6 | Storage sizing | NVMe drives per host (vSAN ESA only)
    NVMe Size (TB) | 7.68 | Storage sizing | TB decimal — converted to GB via ×1,000
    CPU Oversubscription | — | Usable vCPU | vCPU:pCPU ratio — applies before reserve
    RAM Oversubscription | — | Usable RAM | 1× = no oversubscription. Rarely exceed 1× for RAM
    Compute Reserve % | 30% | Usable vCPU & RAM | Headroom withheld from placement (HA, overhead)

    Raw capacity per host formulas:

    Host Cores = CPU Qty × Cores per CPU
    Raw GB per Host = NVMe Qty × NVMe Size (TB) × 1,000

    ⚠️ No hyperthreading multiplier. The sizer deliberately does not multiply physical cores by 2 for hyperthreading. Logical thread counts are workload-specific and highly variable. Instead, the CPU oversubscription ratio gives you explicit control. A 2× ratio on a 32-core host models the same headroom as a 64-thread count at 1× — but you’re aware you’re making that choice.
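    The two raw-capacity formulas above can be sketched in a few lines of Python (function and variable names are illustrative, not the tool's actual field names):

    ```python
    def host_cores(cpu_qty: int, cores_per_cpu: int) -> int:
        # Host Cores = CPU Qty x Cores per CPU (physical cores only,
        # no hyperthreading multiplier)
        return cpu_qty * cores_per_cpu

    def raw_gb_per_host(nvme_qty: int, nvme_size_tb: float) -> float:
        # Raw GB per Host = NVMe Qty x NVMe Size (TB) x 1,000
        # (drives are marketed in decimal TB)
        return nvme_qty * nvme_size_tb * 1_000

    cores = host_cores(2, 16)          # default host: 2 sockets x 16 cores = 32
    raw_gb = raw_gb_per_host(6, 7.68)  # default host: 6 x 7.68 TB NVMe = 46,080 GB
    ```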


    3. Management VM stack

    The Management Domain hosts a fixed stack of VCF infrastructure VMs. These are not user workloads — they are the control plane. Their combined vCPU, RAM, and disk demand is the entire sizing input for the management cluster. The tool carries an accurate per-component VM stack based on current VCF 9 T-shirt sizes from Broadcom documentation.

    Component | Sizes | vCPU range | RAM range | Disk range
    vCenter Server (Mgmt) | S / M / L / XL | 4 – 24 | 21 – 58 GB | 694 – 2,283 GB
    NSX Manager | M / L / XL | 6 – 24 | 24 – 96 GB | 300 – 400 GB
    NSX Edge | S / M / L / XL | 2 – 16 | 4 – 64 GB | 200 GB
    NSX Global Manager | S / M / L / XL | 4 – 24 | 16 – 96 GB | 300 – 400 GB
    Avi Load Balancer | S / M / L | 8 – 24 | 24 – 48 GB | 128 – 512 GB
    vCenter Server (WLD) | S / M / L / XL | 4 – 24 | 21 – 58 GB | 694 – 2,283 GB
    VCF Operations (SDDC Mgr) | S / M / L / XL | 4 – 24 | 16 – 128 GB | 274 GB
    VCF Operations Collector | S / M | 2 – 4 | 8 – 32 GB | 144 GB
    VCF Operations for Logs | S / M / L | 12 – 48 | 24 – 96 GB | 1,590 GB
    VCF Operations for Networks | L / XL / XXL | 12 – 48 | 24 – 96 GB | 1,590 GB
    VCF Net. Collector | M / L / XL / XXL | 4 – 16 | 12 – 48 GB | 200 – 300 GB
    Identity Manager | Embedded / HA | 0 – 32 | 0 – 64 GB | 0 – 400 GB

    Management sizing is deterministic: configure your component sizes, and the tool sums the total vCPU, RAM, and disk demand — no workload VM estimates needed.


    4. Compute sizing formula

    For Workload Domains, tenant demand is specified as VM count × per-VM averages for vCPU, RAM, and disk. Infrastructure VMs (NSX Edges, VKS Supervisor nodes) can optionally be included in the WLD demand totals. All demands are then sized against the host specification to determine the compute host floor.

    WLD demand totals:

    Demand vCPU = (VMs × vCPU/VM) + Infra vCPU
    Demand RAM = (VMs × RAM/VM) + Infra RAM
    Demand Disk = (VMs × Disk/VM) + Infra Disk

    Usable capacity per host:

    Usable vCPU/host = Host Cores × CPU Oversub × (1 − Reserve%)
    Usable RAM/host = Host RAM × RAM Oversub × (1 − Reserve%)

    Compute host floors (evaluated independently):

    CPU Hosts = ⌈ Demand vCPU / Usable vCPU per host ⌉
    RAM Hosts = ⌈ Demand RAM / Usable RAM per host ⌉

    Example: 200 VMs × 4 vCPU = 800 vCPU demand. Host: 2×16c = 32 physical cores × 2× oversub × 0.70 reserve factor = 44.8 usable vCPU/host. CPU Hosts = ⌈ 800 / 44.8 ⌉ = 18 hosts.
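    The worked example above can be reproduced directly from the formulas (a minimal sketch; names are illustrative):

    ```python
    import math

    def usable_vcpu_per_host(host_cores: int, cpu_oversub: float,
                             reserve_pct: float) -> float:
        # Usable vCPU/host = Host Cores x CPU Oversub x (1 - Reserve%)
        return host_cores * cpu_oversub * (1 - reserve_pct)

    def compute_host_floor(demand: float, usable_per_host: float) -> int:
        # Floors are evaluated independently and always rounded up
        return math.ceil(demand / usable_per_host)

    # 200 VMs x 4 vCPU on 2x16c hosts, 2x oversub, 30% reserve
    usable = usable_vcpu_per_host(32, 2.0, 0.30)     # 44.8 usable vCPU/host
    cpu_hosts = compute_host_floor(200 * 4, usable)  # ceil(800 / 44.8) = 18
    ```

    The RAM floor uses the same shape with Host RAM and the RAM oversubscription ratio.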


    5. vSAN ESA storage pipeline

    vSAN ESA storage sizing is a sequential pipeline of capacity transformations. Each stage adds overhead for a specific reason. Starting from raw VM disk demand, the pipeline applies data reduction, swap space, protection overhead, free space reserve, and growth buffer — in that order — to arrive at the total raw capacity required and therefore the storage host floor.

    Pipeline stages:

    Step 1 — VM Capacity GB = Demand Disk GB ÷ DRR
    (DRR = Dedup Ratio × Compression Ratio)
    Step 2 — Swap GB = Demand RAM GB × VM Swap%
    (100% for mgmt, configurable for WLD)
    Step 3 — Interim GB = VM Capacity GB + Swap GB
    Step 4 — Protected GB = Interim GB × Protection Factor (PF)
    Step 5 — With Free GB = Protected GB × (1 + vSAN Free%)
    Step 6 — Total Required = With Free GB × (1 + Growth%)

    Storage host floor:

    Effective Hosts = Total Hosts − Failures to Tolerate
    Per-Host Requirement = Total Required GB ÷ Effective Hosts
    Storage Hosts = ⌈ Total Required GB / Raw GB per Host ⌉ + Failures
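    The six pipeline stages and the storage floor can be sketched as one function each (illustrative names; the demand figures in the usage line are made up):

    ```python
    import math

    def esa_total_required_gb(demand_disk_gb: float, demand_ram_gb: float,
                              drr: float, swap_pct: float, pf: float,
                              free_pct: float, growth_pct: float) -> float:
        """Steps 1-6 of the pipeline, applied strictly in order."""
        vm_capacity = demand_disk_gb / drr        # Step 1: data reduction
        swap = demand_ram_gb * swap_pct           # Step 2: VM swap space
        interim = vm_capacity + swap              # Step 3
        protected = interim * pf                  # Step 4: protection factor
        with_free = protected * (1 + free_pct)    # Step 5: vSAN free reserve
        return with_free * (1 + growth_pct)       # Step 6: growth buffer

    def storage_host_floor(total_required_gb: float, raw_gb_per_host: float,
                           failures: int) -> int:
        return math.ceil(total_required_gb / raw_gb_per_host) + failures

    # Example: 100 TB disk demand, 12.8 TB RAM, no DRR, RAID-5 2+1
    total = esa_total_required_gb(100_000, 12_800, drr=1.0, swap_pct=1.0,
                                  pf=1.5, free_pct=0.25, growth_pct=0.20)
    hosts = storage_host_floor(total, 46_080, failures=1)  # default host raw GB
    ```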

    Data Reduction Ratio (DRR)

    The tool splits DRR into two separate inputs: Dedup Ratio and Compression Ratio. DRR = Dedup × Compression. Both default to 1.0 (no reduction) because real-world ratios depend entirely on data entropy — databases compress poorly, VDI golden images deduplicate extremely well. Using optimistic DRR values leads to undersized storage clusters.

    ⚠️ DRR above 2.0 is optimistic. Unless you have measured DRR from an equivalent workload in your environment, keep both ratios at 1.0. A DRR of 2.0 halves your storage host count. If the real-world ratio comes in at 1.2, you’ll need significantly more hosts than planned.

    TiB conversion

    The tool uses binary TiB throughout. NVMe drives are marketed in TB decimal (1 TB = 1,000 GB). Conversion: 1 TB = 1,000 GB = 0.9095 TiB. A 6× 7.68 TB host = approximately 41.9 TiB raw per host after conversion.
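    The TB-to-TiB conversion is a fixed ratio of decimal to binary units:

    ```python
    # 1 TB = 10^12 bytes; 1 TiB = 2^40 bytes; ratio ≈ 0.9095
    TB_TO_TIB = 1_000**4 / 1_024**4

    def host_raw_tib(nvme_qty: int, nvme_size_tb: float) -> float:
        return nvme_qty * nvme_size_tb * TB_TO_TIB

    tib = host_raw_tib(6, 7.68)   # ≈ 41.9 TiB for the default 6 x 7.68 TB host
    ```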


    6. Protection policies & PF table

    The Protection Factor (PF) is the storage overhead multiplier applied to usable data to account for redundancy. It is determined by your chosen RAID type, FTT (Failures to Tolerate), and for RAID-5, the stripe width. The tool enforces the minimum host count per policy.

    Policy | PF | Min Hosts | FTT | Notes
    RAID-5 2+1 FTT=1 | 1.50× | 3 | 1 | Default — best balance of protection and efficiency
    RAID-5 4+1 FTT=1 | 1.25× | 6 | 1 | Lower overhead but needs 6+ hosts
    RAID-6 4+2 FTT=2 | 1.50× | 6 | 2 | Two simultaneous drive failures tolerated
    Mirror FTT=1 | 2.00× | 3 | 1 | Simple mirror — highest rebuild performance
    Mirror FTT=2 | 3.00× | 5 | 2 | Three copies of every object
    Mirror FTT=3 | 4.00× | 7 | 3 | Maximum redundancy — very high storage cost
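    The policy table maps naturally to a lookup of (PF, minimum hosts, FTT) tuples; a sketch with illustrative key names:

    ```python
    # (Protection Factor, minimum host count, FTT) per vSAN ESA policy
    ESA_POLICIES = {
        "RAID-5 2+1 FTT=1": (1.50, 3, 1),
        "RAID-5 4+1 FTT=1": (1.25, 6, 1),
        "RAID-6 4+2 FTT=2": (1.50, 6, 2),
        "Mirror FTT=1":     (2.00, 3, 1),
        "Mirror FTT=2":     (3.00, 5, 2),
        "Mirror FTT=3":     (4.00, 7, 3),
    }

    pf, min_hosts, ftt = ESA_POLICIES["RAID-6 4+2 FTT=2"]
    ```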

    7. Final host count & limiter

    The final host count is the maximum across four independent floors: CPU hosts, RAM hosts, storage hosts, and the policy minimum. The tool identifies which floor is binding and labels it the Limiter.

    Final Hosts = max( CPU Hosts, RAM Hosts, Storage Hosts, Policy Min )
    Limiter | Meaning | Common cause
    Compute | CPU is the binding constraint | High vCPU density, low oversub ratio
    Memory | RAM is the binding constraint | Memory-intensive workloads, RAM oversub at 1×
    Storage | vSAN ESA capacity drives the count | Large disk demand, high PF, low DRR, insufficient NVMe
    Policy | Protection policy min host count | Small cluster — compute fine but policy enforces minimum N hosts

    When storage is the limiter, your NVMe capacity per host is insufficient to hold the protected dataset within the compute-determined host count. Solutions: increase NVMe drive count or size, relax the vSAN free% reserve, or accept a higher host count.
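    The max-of-floors logic with limiter identification is a few lines (a sketch; names are illustrative):

    ```python
    def final_host_count(cpu_hosts: int, ram_hosts: int,
                         storage_hosts: int, policy_min: int):
        # Final Hosts = max of the four floors; the binding floor is the Limiter
        floors = {"Compute": cpu_hosts, "Memory": ram_hosts,
                  "Storage": storage_hosts, "Policy": policy_min}
        limiter = max(floors, key=floors.get)
        return floors[limiter], limiter

    hosts, limiter = final_host_count(18, 12, 21, 3)  # a storage-bound cluster
    ```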


    8. NVMe memory tiering (VCF 9)

    VCF 9 introduces NVMe-backed memory tiering, where fast NVMe drives act as a memory extension. A partition of each NVMe drive is set aside as a memory tier — not storage — allowing effective RAM per host to exceed physical DRAM installed. This can reduce the host count when memory is the sizing constraint.

    Tiering formulas:

    Partition GB = min( Drive GB, DRAM × NVMe Ratio, 512 GB cap )
    NVMe Ratio Used = Partition GB ÷ Host DRAM GB
    Effective Host RAM = Host DRAM × (1 + NVMe Ratio Used)
    Tiered Demand RAM = ( Eligible Demand ÷ (1 + NVMe Ratio Used) ) + Ineligible Demand

    Key inputs: Eligibility % (what fraction of workload is not latency-sensitive), NVMe-to-DRAM ratio (GB of NVMe tier per GB of DRAM), and tier drive size (separate from vSAN data drives). The effective RAM and reduced demand figure feed back into the RAM host floor calculation.
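    The tiering formulas can be sketched as follows (illustrative names; the 512 GB partition cap is taken from the formulas above, and the usage figures are made up):

    ```python
    def effective_host_ram(dram_gb: float, tier_drive_gb: float,
                           nvme_ratio: float, cap_gb: float = 512):
        # Partition GB = min(Drive GB, DRAM x NVMe Ratio, 512 GB cap)
        partition = min(tier_drive_gb, dram_gb * nvme_ratio, cap_gb)
        ratio_used = partition / dram_gb
        # Effective RAM exceeds physical DRAM by the achieved ratio
        return dram_gb * (1 + ratio_used), ratio_used

    def tiered_demand_ram(demand_ram_gb: float, eligibility_pct: float,
                          ratio_used: float) -> float:
        # Only the non-latency-sensitive fraction benefits from tiering
        eligible = demand_ram_gb * eligibility_pct
        ineligible = demand_ram_gb - eligible
        return eligible / (1 + ratio_used) + ineligible

    eff_ram, ratio = effective_host_ram(1_024, 1_600, 0.5)  # cap-limited at 512 GB
    demand = tiered_demand_ram(10_000, 0.6, ratio)          # reduced RAM demand
    ```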

    ⚠️ Tiering caveats. NVMe tiering suits read-heavy workloads with temporal locality. It is not appropriate for latency-sensitive databases, real-time analytics, or anything where memory bandwidth consistency matters. The eligibility % input requires honest assessment of your workload mix.


    9. External storage mode

    Both the Management Domain and each WLD can be toggled to External Array mode — modelling Fibre Channel or NFS as principal storage. In this mode, the vSAN ESA storage pipeline is bypassed entirely. Host count is determined by compute only, and the user supplies an estimated array capacity for documentation.

    Final Hosts (ext) = max( CPU Hosts, RAM Hosts, Policy Min )
    — Storage floor is removed

    The Limiter can only be Compute, Memory, or Policy. No ESA capacity, PF, or per-host storage figures are calculated for external domains.

    Entitlement impact

    Every VCF core licence includes 1 TiB of vSAN raw storage entitlement. When a domain runs external storage, those cores are still licensed at the same cost but the bundled vSAN storage is unused.

    Forfeited TiB = Licensed Cores × 1 TiB/core

    For a 10-host domain with 2×32c hosts, that’s 640 TiB of vSAN entitlement forfeited — storage the customer is paying for but not using. The tool surfaces this inline, in the Fleet License Summary, and in the export report so the commercial impact is visible before procurement conversations begin.


    10. VCF licence entitlement calculation

    VCF 9 is licensed per core. The tool calculates total core count across the fleet and derives the vSAN storage entitlement bundled with those licences.

    Mgmt Cores = Mgmt Hosts × Host Cores
    WLD Cores = Σ( WLD Hosts × Host Cores )
    Entitlement (TiB) = ( Mgmt Cores + WLD Cores ) × 1 TiB/core
    Fleet vSAN Raw TiB = Σ( Hosts × NVMe Qty × NVMe TB × 0.9095 )
    Add-on Required = max( 0, Fleet Raw TiB − Entitlement TiB )

    If raw capacity exceeds entitlement, the difference is flagged as Add-on TiB Required — additional vSAN capacity licensing needed beyond what’s included in core licences. External storage domains exclude their array capacity from the fleet raw total.
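    The fleet-level licensing maths can be sketched as one pass over the domains (field names here are hypothetical, not the tool's export schema):

    ```python
    TB_TO_TIB = 1_000**4 / 1_024**4   # decimal TB -> binary TiB

    def fleet_licensing(domains):
        """domains: dicts with hosts, cores_per_host, nvme_qty, nvme_tb,
        and an 'external' flag. External-storage domains still consume
        core licences but are excluded from the fleet raw vSAN total."""
        cores = sum(d["hosts"] * d["cores_per_host"] for d in domains)
        entitlement_tib = cores * 1.0   # 1 TiB of vSAN raw per licensed core
        raw_tib = sum(d["hosts"] * d["nvme_qty"] * d["nvme_tb"] * TB_TO_TIB
                      for d in domains if not d["external"])
        addon_tib = max(0.0, raw_tib - entitlement_tib)
        return cores, entitlement_tib, raw_tib, addon_tib

    # Example: one 10-host domain of 2x32c hosts with 6 x 7.68 TB drives
    cores, ent, raw, addon = fleet_licensing([
        {"hosts": 10, "cores_per_host": 64,
         "nvme_qty": 6, "nvme_tb": 7.68, "external": False}])
    ```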


    11. Principal storage options in VCF 9 (KB 416270)

    VCF 9 supports a broader set of principal storage options than previous versions. Some are available via standard greenfield workflows; others require the Converge workflow. This distinction matters — it affects automation, LCM, and Day 2 operations.

    Storage Model | Mgmt Default | Mgmt Additional | VI WLD | Method
    vSAN ESA | Principal | Principal | Principal | 🟢 Greenfield
    vSAN OSA | Principal | Principal | Principal | 🟢 Greenfield
    Storage Cluster (disagg. vSAN) | — | Principal | Principal | 🟢 Greenfield
    Compute-Only Cluster | — | Principal | Principal | 🟢 Greenfield
    Fibre Channel (FC) | Principal | Principal + Supp | Principal + Supp | 🟢 Greenfield
    NFS v3 | Principal | Principal + Supp | Principal + Supp | 🟢 Greenfield
    iSCSI | Principal* | Principal* | Principal* | 🔄 Converge
    NFS v4.1 | Principal* | Principal* | Principal* | 🔄 Converge
    FCoE | Principal* | Principal* | Principal* | 🔄 Converge
    NVMe/FC · NVMe/TCP · NVMe/RDMA | Principal* | Principal* | Principal* | 🔄 Converge

    * Via Converge workflow: deploy ESXi 9 → configure target datastore → deploy vCenter 9 → import into VCF 9 using Converge (management) or Import vCenter (WLD).

    ⚠️ Day 2 operations constraint: For non-LCM Day 2 operations (host commissioning, adding/removing hosts or clusters), perform the operation in vCenter first, then run Sync Inventory in VCF Operations. If this step is skipped, lifecycle management in VCF Operations will be blocked for those hosts and clusters.

    Source: Broadcom KB Article 416270


    12. Assumptions & caveats

    Assumption | Detail
    Single cluster per domain | Each WLD is modelled as one cluster. Multi-cluster WLDs are not supported.
    Homogeneous hosts | All hosts within a domain use the same spec. Mixed-node clusters are not modelled.
    vSAN ESA only | The storage pipeline models ESA only. vSAN OSA has different overhead characteristics.
    Growth is a flat buffer | Growth % is applied once, not compounded year-over-year. Add headroom manually for multi-year plans.
    VM Swap fixed at 100% for mgmt | The management domain’s swap requirement is not user-configurable.
    No stretched cluster modelling | Stretched clusters double host count and require witness nodes — not currently modelled.
    Flat DRR across all data | A single DRR applies to the entire disk demand. Mixed workloads with varying compressibility are not modelled per-VM.
    No explicit vSAN CPU/RAM overhead | vSAN ESA consumes a small amount of host CPU and memory. Include this in your Compute Reserve % input.

    🚫 Not an official Broadcom tool. This sizer is an independent planning aid built by vmtechie.blog. It is not endorsed by or affiliated with Broadcom. All figures are estimates. Validate every design against official Broadcom TechDocs, VMware HCL, and field engineering guidance before procurement or deployment.

  • VCF 9 Fleet Planning Sizer


    After several VCF design sessions—navigating management domains, ESA policies, and the new core-based licensing—one thing became clear: we have plenty of docs, but we need more interactive clarity. I built the VCF 9 Fleet Planning Sizer (ESA Only) to help architects model environments quickly.

    🔷 VCF 9 Fleet Planning Sizer (ESA Only)

    👉 Try it here: https://sizer.vmtechie.blog/

    This is an independent planning calculator designed to help architects model:

    • Infrastructure VM footprint (Supervisor, Edge, etc.)
    • Management Domain sizing
    • Multiple Workload Domains
    • ESA storage behavior
    • DRR (Dedup × Compression realism)
    • Failure domain modeling (0 / N+1 / N+2)
    • Core-based licensing visibility
    • vSAN entitlement vs raw consumption

    Why I Built This Tool

    Designing VCF 9 isn’t just about adding up VMs. It’s about navigating the “Triple Constraint”: Compute, ESA Storage, and Licensing. In real architecture discussions, we constantly ask:

    • What is actually limiting this cluster?
    • CPU, Memory, or Storage?
    • How many hosts do we really need?
    • What does FTT=2 + RAID-6 really do to capacity?
    • Are we oversizing?
    • Are we license constrained?
    • What happens if I add Supervisor HA?
    • What does N-2 failure tolerance mean in practice?

    Spreadsheets can answer parts of this, but they don’t show the dynamic interaction between policy, compute, and ESA. This tool tries to do that.

    Management Domain Sizing

    The calculator starts with:

    🔹 Hardware Profile

    • CPUs per host
    • Cores per CPU
    • RAM per host
    • NVMe quantity & size
    • Minimum host count

    🔹 Policy Inputs

    • CPU oversubscription
    • Memory oversubscription
    • Host reserve %
    • FTT & RAID policy
    • vSAN free space %
    • Dedup & compression
    • VM Swap Used %
    • Failure modeling

    How It Calculates Management Hosts

    1. Compute usable vCPU per host
    2. Compute usable RAM per host
    3. Apply reserve factor
    4. Compare demand from full Management VM stack
    5. Determine limiter (Compute / Memory / Storage)
    6. Calculate ESA protected storage requirement
    7. Apply failure domain logic
    8. Final host count = max(CPU, RAM, Storage, Minimum Hosts)

    You immediately see:

    • Demand vs Capacity
    • Protection Factor
    • ESA storage breakdown
    • Core licenses required
    • Raw TiB consumed

    Full Management VM Stack Modeling

    The tool includes:

    • SDDC Manager
    • vCenter
    • NSX Manager
    • NSX Edge
    • AVI
    • VCF Operations
    • Log Insight
    • Network Insight
    • Identity
    • Custom VMs

    Each with T-shirt sizing.

    ESA Storage Model

    ESA math is often misunderstood. The calculator models:

    VM Capacity = (VM disks + infra disks) ÷ DRR
    Swap = Provisioned RAM × Swap %
    Interim Total = VM Capacity + Swap
    Protected = Interim × Protection Factor
    + Free Space Reserve
    + Growth %
    Storage Hosts = ceil( Total ÷ per-host raw capacity ) + failures

    Protection Factor examples:

    Policy | FTT | Protection Factor
    RAID-1 | 1 | 2.0
    RAID-1 | 2 | 3.0
    RAID-5 | 1 | 1.5
    RAID-5 | 2 | 1.75
    RAID-6 | 2 | 1.5

    Workload Domains (Where It Gets Interesting)

    You can add multiple WLDs.

    Each WLD has:

    🔹 Tenant Demand

    • VM count
    • vCPU per VM
    • RAM per VM
    • Disk per VM
    • Growth %

    🔹 Policy + Planning

    • CPU/Mem oversub
    • FTT + RAID
    • Reserve %
    • Free space %
    • Dedup × Compression
    • VM Swap Used %
    • Failure Domain (0 / N+1 / N+2)

    Limiter Visualization + Health Model

    Each WLD shows:

    • Compute limiter
    • Memory limiter
    • Storage limiter
    • Utilization %
    • Health badge:
      • 🟢 Healthy
      • 🟡 Tight
      • 🔵 Oversized

    This gives immediate architectural intuition.

    Licensing Visibility (Core-Based)

    The calculator also models:

    • Management core licenses
    • Workload core licenses
    • Total fleet cores
    • Entitlement (1 TiB per core)
    • Required add-on capacity

    What Makes This Different?

    This tool is:

    ✔ ESA-focused
    ✔ Policy-aware
    ✔ Failure-domain realistic
    ✔ Multi-domain capable
    ✔ Licensing visible
    ✔ Architecture-driven

    It’s not just math. It reflects real design conversations.

    ⚠️ Important Disclaimer

    This calculator is:

    • Independent
    • Not an official Broadcom / VMware tool
    • Not endorsed by my employer
    • Intended as a planning aid only

    Always validate against:

    • Official documentation
    • HCL
    • Field engineering guidance

    🧑‍💻 Who Is This For?

    • VCF Architects
    • Cloud Platform Leads
    • Infrastructure Engineers
    • Pre-sales Architects
    • Capacity planners
    • Anyone doing ESA-based VCF 9 designs

    🚀 Try It

    👉 Live here:

    https://sizer.vmtechie.blog

    If you test it, I’d love your feedback.

    Final Thoughts

    Architecture clarity reduces risk. This tool is my contribution to making VCF 9 planning:

    More transparent.
    More realistic.
    More engineer-friendly.

  • VCF Automation – Tenant Management


    In today’s multi-tenant cloud environments, VMware Cloud Foundation Automation (VCFA) offers a robust layered architecture that seamlessly bridges enterprise-grade infrastructure management with developer-ready self-service capabilities.

    By clearly separating responsibilities—from VMware Cloud Service Providers who manage the physical and virtual infrastructure, to organization administrators who allocate resources, and finally to developers who consume them—VCFA enables efficient resource governance, operational consistency, and scalability. This structured approach not only supports multi-tenancy and workload isolation but also accelerates innovation by empowering end users to deploy applications and services quickly within well-defined boundaries.

    Why Tenant Management Matters

    Tenant management is more than just dividing resources—it’s about ensuring cost efficiency, security, scalability, and compliance in a shared infrastructure. In VCFA, these capabilities allow VMware Cloud Service Providers to maximize utilization without compromising performance or governance for individual tenants.

    Key concepts to understand from both the Provider and Tenant perspectives:

    Projects

    Projects control user access to namespaces and user ownership of provisioned resources. All organizations are created with a default project. The default project is empty and does not have any namespaces or users.

    Example: A VMware Cloud Service Provider might assign a dedicated project to each customer department for clearer billing and isolation.

    Regions

    The Regions page lists all the regions in which the organization has a quota. Organizations can have a quota in one or many regions. Your provider administrator assigns the regional quota to your organization. Quota in a region can come from one or many vSphere Zones within that region.

    Example: A global enterprise hosted by a VMware Cloud Service Provider might have quotas in Asia and Europe to ensure low-latency access for local teams.

    Namespace Class

    Namespace classes are templates for namespace provisioning. These templates can be used to standardize namespace attributes such as utilization limits, reservations, VM classes, storage classes, and content libraries. Organizations come preconfigured with three default namespace classes (small, medium, and large), which are meant to serve as example templates. The only attributes that differ among these built-in templates are the CPU and memory limits. Administrators can use these templates as-is or modify them to suit their needs.

    Namespace

    Projects are the central construct for organizing and allocating infrastructure resources to tenants or teams. As the organization administrator, you manage and distribute infrastructure by assigning namespaces to projects. When configuring a project, you must add at least one namespace so that users within the project can begin provisioning workloads such as virtual machines, VMware Kubernetes Service (VKS) clusters, or other supported resources. Namespaces act as scoped resource pools, defining limits for CPU, memory, and storage to ensure fair allocation and performance consistency. Each namespace is tied to a Virtual Private Cloud (VPC) and a namespace class, which in turn is associated with at least one zone to determine placement and availability. This structure not only enforces resource governance but also enables automation workflows to deploy consistently within predefined boundaries. All organizations are created with a default project, which is initially empty and contains no namespaces or users, providing a baseline starting point for configuration.

    Example: A tenant of a VMware Cloud Service Provider might create separate namespaces for development and production to avoid accidental resource conflicts.

    Virtual Private Clouds (VPCs)

    A Virtual Private Cloud (VPC) in VMware Cloud Foundation Automation (VCFA) offers an isolated networking environment that can be associated with one or more namespaces. Organizations can create multiple VPCs and assign each to specific namespaces based on workload or isolation requirements.

    Each VPC is an independent network and supports three types of IP address spaces, each offering different levels of reachability:

    • Private CIDRs: These addresses are internal to the VPC and are not routable outside without NAT. They are managed by the VPC administrator and do not need to be globally unique, allowing reuse across multiple VPCs.
    • TGW Private IP Blocks: These IP blocks are scoped at the organization level and are advertised through the Transit Gateway (TGW) within the organization. Organization admins define these blocks, and project admins can allocate subnets from them for their VPCs. This enables direct communication between VPCs in the same organization using the TGW Private IP space.
    • External IP Blocks: Managed by the provider admin, these IPs enable outbound access through Source NAT. Organization admins can assign subnets from provider-defined external blocks, giving workloads external connectivity while still using internal addressing.

    You can choose to deploy a separate VPC per namespace for stricter isolation, or share a VPC across namespaces where network separation is not required.

    Transit Gateways

    Each organization has a transit gateway which provides connectivity to the provider gateway within the organization. One or more VPCs are connected to the transit gateway, and that connection is defined by a VPC connectivity profile. Each VPC has connected workloads and a private subnet. SNAT rules translate addresses from this private subnet to a public address in the IP spaces block. This infrastructure enables the organization and its workloads to connect to external networks.

    You can view what transit gateways are available to your organization on the Manage & Govern > Networking > Transit Gateways page.

    IP Management

    Providers can use IP Spaces to manage their IP address allocation needs. IP Spaces provide a structured approach to allocating public IP addresses to different organizations, enabling connectivity to external networks.

    An IP space consists of a set of reserved CIDR blocks; these CIDRs are dedicated for use by organization administrators as they configure services. An IP space can only be IPv4.

    Organization administrators can create and manage the private IP blocks within their organization. Tenants can view the external IP address blocks assigned to the organization by the provider. You can also create and view private TGW IP address blocks for the entire organization to use. Finally, you can view private VPC IP address blocks that apply to specific VPCs.

    In essence, VMware Cloud Foundation Automation’s tenant management capabilities provide a structured, role-based framework for organizing projects, namespaces, VPCs, transit gateways, and IP resources. By aligning provider and tenant responsibilities, VMware Cloud Service Providers ensure secure isolation, consistent governance, and streamlined automation—empowering organizations to scale efficiently while maintaining full control over infrastructure and networking resources.

  • Navigating the Shift: From VMware Cloud Director to VCF Automation in VMware Cloud Foundation 9


    VMware Cloud Foundation 9 (VCF 9) has officially launched, introducing a next-generation Cloud Management Platform — VCF Automation (VCFA). This new platform supersedes both Aria Automation and VMware Cloud Director (VCD). This blog is specifically aimed at those familiar with VCD and looking to understand how VCFA compares — what remains familiar, what’s changed, and how to navigate the shift.

    It’s important to note that VCFA is not a simple rebranding of existing tools. It is a new solution built with purpose, though it incorporates core components from its predecessors. The provider-facing layer, known as Tenant Manager, is built on the VCD codebase, so the UI and APIs will feel familiar to seasoned VCD administrators. On the other hand, the tenant experience draws heavily from Aria Automation, introducing a modernized interface and capabilities that will appear significantly different — especially for users coming from a traditional VCD background.

    Why VCFA?

    Modern enterprises and service providers are navigating increasingly complex environments — hybrid, multi-cloud, containerized, and AI-driven workloads are the new normal. VMware has responded with VCFA: a cloud automation solution tightly integrated with VCF 9 that provides:

    • Unified multi-tenant management
    • Seamless integration across compute, storage, and networking
    • Robust self-service capabilities for both providers and tenants
    • Compliance-ready, policy-driven automation

    This is not just an incremental upgrade. VCFA is a next-generation platform, built to be extensible, resilient, and future-proof.

    How VCFA Differs from VCD and Aria Automation

    Let’s break it down into provider and tenant perspectives:

    Provider Experience – Tenant Manager

    The provider-facing component of VCFA is called Tenant Manager.

    • It leverages the codebase from VCD, meaning administrators familiar with VCD will find the UI and APIs instantly recognizable.
    • Tasks such as creating tenants, managing quotas, assigning resources, and configuring networks follow a somewhat similar structure to VCD.
    • However, Tenant Manager is fully integrated with VCF’s architecture, eliminating dependency on external orchestration layers.

    In essence, Tenant Manager modernizes VCD’s core capabilities while maintaining continuity for service providers.

    Tenant Experience – VCFA UI and APIs

    For tenants, the VCFA experience is heavily influenced by Aria Automation but redesigned for simplicity and control:

    • New self-service portal tailored for tenant-level resource provisioning
    • Integrated access to IaaS, network services, Kubernetes (via VKS), and more
    • Native support for day 2 operations, approvals, cost visibility, and policy governance
    • UI/UX reflects a cloud-native mindset, empowering developers and app teams

    If you’re a tenant used to the VCD interface, the VCFA UI may initially seem unfamiliar — but it brings greater power, flexibility, and visibility.

    Provider Management

    The VCF Automation Provider Management Portal is a dedicated interface for Provider Administrators. To access it, browse to https://vcfa.example.com/provider. For the first login, use the default local administrator (admin) account with the password you set during installation.

    You can use the Quick Start wizard in VCF Automation to quickly create an organization with predefined settings, streamlining the initial setup process. This is a convenient alternative to manually configuring each component and is especially useful for setting up a test or evaluation environment to explore the platform’s capabilities.

    NOTE – In VCF Automation 9.0, only active-standby mode is supported for NSX Tier-0 Gateways. In active-standby mode, an elected active member processes the traffic. If the active member fails, a new member becomes active.

    Alternatively, you can use the manual wizard in VCF Automation to set up each component individually—Region, Organization, IP Space, Provider Gateway, and Tenant Networking—giving you full control and customization over your environment. In this blog post, I’ll walk you through that step-by-step process to help you understand how to configure a tenant from the ground up.

    Region

    In VCFA, a region represents a logical grouping of compute, storage and networking resources, typically associated with one or more vCenter Server instances and a shared NSX instance.

    NSX Local Manager – provides software-defined networking for the region. Select the NSX Manager instance that integrates with the vCenter instances you want to use for the region.

    Note: A single NSX Manager instance must be integrated with all vCenter instances within a region.

    Supervisor(s) – a region contains one or more Supervisors, which provide the compute infrastructure for the region. The list shows all Supervisors available for the NSX Manager instance you chose in the previous step.

    Storage Class(es) – shows all storage classes across the selected Supervisors.
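    Put together, a region groups these three inputs. The sketch below is illustrative only — the field names are assumptions for clarity, not the VCFA API schema:

```python
# Illustrative only: field names are assumptions, not the VCFA API schema.
region = {
    "name": "region-01",
    "nsx_manager": "nsx-mgr-01",            # one NSX Manager backs the whole region
    "supervisors": ["sup-01", "sup-02"],    # compute for the region
    "storage_classes": ["gold", "silver"],  # aggregated across the selected Supervisors
}

def validate_region(r):
    # A single NSX Manager instance must be integrated with all vCenter
    # instances (and hence all Supervisors) within a region.
    return bool(r["nsx_manager"]) and len(r["supervisors"]) >= 1
```

    The key constraint to remember is the one noted above: every vCenter in the region must hang off the same NSX Manager instance.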

    Organizations

    In VMware Cloud Foundation Automation (VCFA), Organizations are foundational constructs used to separate and manage tenants and providers in a multi-tenant private cloud environment. These organizations define the boundaries for resource allocation, identity management, policies, and service consumption.

    VCFA introduces two main types of organizations:

    Provider Consumption Organization

    A Provider Consumption Organization (PCO) is an organization the provider can use to share blueprint catalogs and workflows with tenant organizations. To enable it, go to Administration > Feature Flags and turn on the PCO Organization feature flag.

    Tenant Organization

    Each tenant/customer is onboarded into VCFA as a separate organization. Tenants get:

    • Isolated access to their own VMs, networks, storage, Kubernetes clusters, etc.
    • Self-service portal and/or API access
    • Resource limits defined by the provider
    • Option to integrate with their own identity providers (IdP) (e.g., SAML, LDAP)
    • Custom catalogs or services if published by the provider

    When onboarding a new customer in VCFA:

    • You (the provider) create a Tenant Organization.
    • Allocate region, supervisor and zones (resources – e.g., 10 GHz, 10 GB RAM).
    • Assign VM classes and storage classes
    • Configure access control (create local users)
    • Let the customer use VCFA UI or API to deploy/manage their workloads.

    VCFA Organizations are essential to enabling multi-tenancy, isolation, and governance. They help service providers manage multiple customers securely and efficiently. Each org has its own identity, resource limits, users, services, and policies.

    IP Space

    IP spaces offer a structured approach for providers to allocate IP addresses to different organizations, enabling connectivity to external networks. You can use quotas to control usage. For internal organization communications, organizations can self-manage their own IP address blocks.

    Go to Networking > IP spaces to create a new IP Space and set quotas. IP Blocks are created in NSX. IP Blocks represent IPs used in this local datacenter, south of the Provider Gateway. IPs within this scope are used for configuring services and networks.

    External Reachability represents the IPs used outside the datacenter, north of the Provider Gateway.
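    The quota mechanics can be sketched as a simple model. This is purely illustrative — the class and its behaviour are assumptions for explanation, not the VCFA API:

```python
from ipaddress import ip_network

class IpSpaceQuota:
    """Illustrative model of IP Space quota enforcement (not the VCFA API)."""

    def __init__(self, internal_scopes, quota_ips):
        self.scopes = [ip_network(s) for s in internal_scopes]
        self.quota_ips = quota_ips   # maximum IPs an organization may consume
        self.allocated = {}          # organization name -> IPs consumed

    def allocate(self, org, cidr):
        net = ip_network(cidr)
        # an allocation must fall inside the IP Space's internal scope
        if not any(net.subnet_of(scope) for scope in self.scopes):
            raise ValueError(f"{cidr} is outside the IP Space scope")
        used = self.allocated.get(org, 0) + net.num_addresses
        if used > self.quota_ips:
            raise ValueError(f"quota exceeded for {org}")
        self.allocated[org] = used
        return net
```

    For example, an IP Space scoped to 10.100.0.0/16 with a 512-address quota would accept two /24 allocations from an organization and reject a third.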

    Provider Gateway

    A Provider Gateway in VCFA is the logical network boundary between the provider-managed infrastructure and external environments. It serves as the entry/exit point for all traffic coming in and going out of tenant environments.

    A provider gateway leverages VCF Networking T0s or T0 VRFs, and associates them with IP addresses from IP spaces that can be advertised from those gateways. A provider gateway can be assigned to one or more organizations.

    To add a provider gateway, first you must create an Active Standby tier-0 gateway in the NSX Manager associated with the region to back it. You can create the tier-0 gateway in the NSX Manager UI or by using the NSX Policy API.

    If you want to add a tier-0 gateway that is backed by a VRF gateway in NSX, you must also create a VRF gateway that is linked to the tier-0 gateway.

    • Enter a name and, optionally, a description for the new provider gateway.
    • From the drop-down menu, select the region of the tier-0 gateway, and click Next.
    • Select a tier-0 gateway from the list, and click Next.
    • Select one or more IP spaces to associate with the provider gateway, and click Next.
    • Review the network settings and click Create.
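    Since the backing tier-0 can also be created through the NSX Policy API, here is a minimal sketch of the gateway body. The field names follow the NSX Policy API Tier0 resource, but treat the exact endpoint path and values as assumptions to verify against your NSX version:

```python
import json

def tier0_payload(name, failover_mode="NON_PREEMPTIVE"):
    # ha_mode must be ACTIVE_STANDBY: VCF Automation 9.0 only supports
    # active-standby tier-0 gateways behind a provider gateway.
    return {
        "display_name": name,
        "ha_mode": "ACTIVE_STANDBY",
        "failover_mode": failover_mode,  # NON_PREEMPTIVE avoids a failback flap
    }

# The body would be sent as (path is an assumption to verify):
#   PATCH https://<nsx-manager>/policy/api/v1/infra/tier-0s/<tier0-id>
body = json.dumps(tier0_payload("vcfa-provider-t0"))
```

    Creating the gateway in the NSX Manager UI achieves the same result; the API route is useful when automating region bring-up.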

    Region Network Settings (Tenant Networking)

    When you configure networking for a Region in VCFA, you’re defining how tenant workloads in that region will connect—both internally and externally.

    Clicking START takes you to the Organization page. There, select the organization you want to configure networking for and click CONFIGURE:

    • Select the Region – choose the appropriate region where this organization’s resources will be provisioned, then click Next.
    • Choose a Provider Gateway – select a provider gateway to connect the organization’s virtual network to external networks (e.g., internet or upstream services), then click Next.
    • Assign an Edge Cluster – Pick the Edge cluster where the VPC services for this organization will operate. (You may choose the same cluster associated with the Tier-0 provider gateway, or a different Edge cluster depending on your resource planning)
    • Review and Confirm – Review all configured network settings. Once validated, click Create to complete the network setup for the organization.

    This blog post provides a comprehensive, step-by-step walkthrough of manually onboarding a tenant in VMware Cloud Foundation Automation (VCFA) by configuring its key components: Regions, Organizations, IP Spaces, Provider Gateways, and Tenant Networking. Compared with the Quick Start option, this approach gives cloud providers and administrators deeper control and customization—ultimately enabling a flexible, scalable, and secure multi-tenant private cloud environment built on VCF 9.

  • From Virtualization to Cloud Service Delivery with VMware Cloud Foundation & VCSPs

    From Virtualization to Cloud Service Delivery with VMware Cloud Foundation & VCSPs

    The IT landscape is undergoing a massive transformation. Traditional virtualization, which once revolutionized data centers, is now evolving into full-fledged cloud service delivery. Organizations are no longer just managing VMs; they are delivering scalable, secure, and AI-ready cloud platforms.

    The Shift from Virtualization to Cloud Services

    Virtualization has been the backbone of IT infrastructure for over a decade, enabling efficiency, consolidation, and improved resource utilization. However, as digital transformation accelerates, enterprises require more than just virtual machines. They need scalable, automated, and AI-powered cloud platforms that can meet the growing demands of modern workloads.

    This shift is being powered by VMware Cloud Foundation (VCF)—the cornerstone of modern cloud infrastructure. With VCF, enterprises and Cloud Service Providers (CSPs) can move beyond virtualization to build multi-cloud, hybrid, and sovereign cloud environments with automation, security, and AI-driven capabilities at their core.

    Key Benefits of VMware Cloud Foundation

    Unified Platform: Compute, storage, networking, and management are integrated into a single solution.
    Hybrid & Multi-Cloud Operations: Seamlessly run workloads across private, public, and hybrid cloud environments.
    Built-in Security & Compliance: Ensure data sovereignty and regulatory compliance with sovereign cloud initiatives.
    AI-Ready Infrastructure: GPU acceleration and private AI capabilities empower AI/ML workloads.
    Accelerated Cloud Service Delivery: Enable Cloud Providers & VMware Cloud Service Providers (VCSPs) to deliver next-gen cloud offerings.

    The Significance of VMware Cloud Providers (VCSPs)

    VMware Cloud Providers (VCSPs) play a pivotal role in enabling organizations to seamlessly transition from virtualization to cloud services. They extend the capabilities of VMware Cloud Foundation by offering:

    🔹 Managed Cloud Services: Helping enterprises offload infrastructure management with fully managed VMware-based cloud environments.
    🔹 Sovereign and In-Country Cloud Solutions: Ensuring compliance with regional data sovereignty laws while delivering cloud scalability.
    🔹 Multi-Tenant Cloud Platforms: Empowering service providers to offer flexible, cost-effective cloud solutions with secure tenant isolation.
    🔹 AI and GPU-Powered Cloud Services: Providing enterprises with AI-ready infrastructure to support next-gen workloads.
    🔹 Disaster Recovery & Business Continuity: Offering reliable DRaaS (Disaster Recovery as a Service) to ensure business resilience.

    Future of Cloud with VMware Cloud Foundation

    As enterprises and service providers embrace cloud-first and AI-driven strategies, VCF is enabling them to deliver next-generation cloud services with agility, resilience, and efficiency. This evolution is not just about technology; it’s about unlocking new business opportunities, enhancing innovation, and driving digital transformation at scale.

    With cloud-native applications, AI/ML workloads, and security-first cloud strategies becoming the new normal, the role of VMware Cloud Foundation is more critical than ever.

    VMware Cloud Foundation is transforming the way cloud services are delivered, from the traditional virtualization model to highly flexible, customer-tailored cloud services. With the support of VCSPs, businesses are empowered to adopt cutting-edge cloud solutions faster and more efficiently than ever before.

  • Enhancing Firewall Flexibility in VMware Cloud Director 10.6.1

    With VMware Cloud Director 10.6.1, service providers gain greater flexibility and control over firewall configurations, ensuring compliance with licensing entitlements while delivering scalable, high-value security services. This update aligns with VMware Cloud Foundation (VCF) networking licensing, enabling providers to selectively offer the VMware Advanced Networking & Security (ANS) Add-On to customers based on their needs and cost agreements.

    Impact of VMware NSX Licensing Changes

    Recent changes to VMware’s NSX licensing model have significantly altered how firewall features are provisioned. Under the new structure:

    • Stateless Firewall is included in VMware Cloud Foundation (VCF)
    • Stateful Firewall now requires an additional, separate license, documented here

    This change impacts how service providers manage network security within VMware Cloud Director environments. To address these shifts, Cloud Director 10.6.1 introduces new controls that give providers flexibility in defining which firewall type—stateless or stateful—is available to their tenants. This ensures security policies align with business needs while optimizing costs associated with VMware licensing.

    VMware Cloud Director with NSX supports both stateful and stateless firewalls, each serving different security needs:

    What is a Stateless Firewall?

    A stateless firewall inspects traffic on a per-packet basis without maintaining the state of active connections. Unlike stateful firewalls, which track the context of traffic flow, stateless firewalls apply predefined rules to each packet independently.

    💡 Key Benefits:
    ✔ Faster packet processing for high-performance workloads.
    ✔ Ideal for perimeter protection and edge security use cases.
    ✔ Lower resource consumption compared to stateful firewalls.
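    To make the distinction concrete, here is a toy packet filter — purely illustrative, and not how NSX implements either mode:

```python
from ipaddress import ip_address, ip_network

# One rule: allow HTTPS from the internal range. (action, source prefix, dest port)
RULES = [("allow", "10.0.0.0/8", 443)]

def stateless_allow(src, dport):
    # Stateless: every packet is matched against the rules independently;
    # nothing is remembered between packets.
    return any(action == "allow"
               and ip_address(src) in ip_network(net)
               and dport == port
               for action, net, port in RULES)

class StatefulFirewall:
    # Stateful: a rule match opens a flow, and reply traffic for an
    # established flow is allowed even without a matching rule.
    def __init__(self):
        self.flows = set()

    def allow(self, src, dst, sport, dport):
        if (dst, src, dport, sport) in self.flows:   # reply to a known flow
            return True
        if stateless_allow(src, dport):              # rule match opens a flow
            self.flows.add((src, dst, sport, dport))
            return True
        return False
```

    The reply packet of an established session illustrates the difference: the stateful firewall admits it via its flow table, while the stateless rules would need an explicit reverse rule.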

    Stateful vs. Stateless Firewalls in Cloud Director

    Feature | Stateful Firewall | Stateless Firewall
    Connection Tracking | ✅ Maintains connection state | ❌ No connection awareness
    Security Context | ✅ Applies rules based on traffic flow | ❌ Evaluates each packet independently
    Performance | Higher resource usage | Lightweight, optimized for speed

    Configuring in Cloud Director

    This feature is designed to help cloud service providers who wish to control which tenants can access Stateless/Stateful Firewall services. The goal is to enforce better governance over the consumption of advanced network services, such as Stateful Firewall and Distributed Firewall.

    The license selection is made at the Edge Cluster level in VCD. The service provider determines which type of firewall can be applied to a specific Edge Cluster. Consequently, all Provider/Organization and vApp Edge Gateways utilizing that cluster will have firewall rules configured as either stateful or stateless, depending on the selection.

    This has corresponding changes in NSX, while the firewall rule configuration remains the same in VCD. Below is the VMware Cloud Director (VCD) view of an Org VDC Edge Gateway firewall configuration deployed on an Edge Cluster designated with the stateless firewall option in NSX Manager.

    NOTE: Changing an Edge Cluster from Stateful to Stateless or vice versa will not impact existing deployed gateways.

    Gateway Firewall Enforcement Control in VCD

    One key use case is when a service provider or tenant is using an appliance-based third-party firewall instead of the NSX-integrated firewall in Cloud Director. In such cases, they may not require NSX-based firewall enforcement and prefer to manage security through their own solution. This feature allows them to disable the NSX firewall, ensuring flexibility in security architecture without unnecessary conflicts.

    Now with this release both service providers and tenants can disable or enable the firewall at the Provider or Org Gateway level without removing existing firewall rules. A new “Active” switch has been introduced in the Firewall UI (top right corner), allowing users to toggle firewall enforcement as needed while preserving the configured rules.

    Conclusion

    The new firewall flexibility in Cloud Director 10.6.1 ensures that service providers can:

    • Optimize licensing costs by choosing stateless or stateful firewall options.
    • Align security offerings with customer needs.
    • Enhance governance and compliance around advanced network security services.
    • Seamlessly integrate third-party firewall solutions into their cloud environments.

    By leveraging these new capabilities, Cloud Director providers can deliver scalable, efficient, and cost-effective security solutions while adapting to the evolving VMware NSX licensing model.

    The Cloud Director 10.6.1 Release Notes are published here.

  • Integrating VMware Data Services Manager with VMware Cloud Director

    Integrating VMware Data Services Manager with VMware Cloud Director

    Integrating VMware Data Services Manager (DSM) with VMware Cloud Director lets providers offer Database-as-a-Service (DBaaS) to tenants. Key benefits:

    Self-Service DBaaS: Tenants can easily provision and manage databases like MySQL, PostgreSQL, etc., without admin intervention.

    Centralized Management: Service providers maintain full control and visibility over all database services provisioned by tenants.

    Scalability: Easily scale database instances as per tenant demand, with seamless multi-tenant support.

    Overview of VMware Data Services Manager (DSM)

    The following data services are available:

    • VMware Data Services Manager MySQL
    • VMware RabbitMQ
    • VMware SQL with Postgres and VMware SQL with MySQL
    • Mongo DB Enterprise Advanced and Community editions
    • Apache Kafka – Confluent Platform
    • VMware Data Services Manager Postgres

    Prerequisites

    Before you begin the integration, ensure you have the following:

    • A deployed VMware Cloud Director instance.
    • VMware Data Services Manager installed and configured.
    • A prepared Tanzu Kubernetes Grid (TKG) cluster.

    Steps for Integration

    Install and Configure VMware DSM

    VMware DSM simplifies data services management by offering a platform for tenants to provision, manage, and monitor their databases. Here’s how to set it up:

    • Deploy DSM:
      • Deploy the VMware DSM appliance in your VMware environment.
      • Ensure DSM is connected to your Cloud Director and vSphere environment, with access to required resources for provisioning database instances.
    • Configure Data Services:
      • Within DSM, configure the data services you wish to offer to tenants, such as MySQL, PostgreSQL, MongoDB, etc.
      • Define database service policies, such as backup policies, storage configurations, and high availability options.
    • Create Tenant-specific Database Templates:
      • Create database templates or pre-configured service offerings for different tenants, specifying the parameters such as CPU, memory, storage, and network configurations.

    For more details on how to set up DSM, including the infrastructure policy and backup locations in the VMware Data Services Manager portal, see the VMware Data Services Manager documentation.

    Data Solution Extension Integration with DSM

    The VMware Cloud Director Extension for Data Solutions is a powerful plug-in designed to enhance VMware Cloud Director by adding data and messaging services to its portfolio. This extension enables cloud providers to offer a variety of on-demand data services to their tenants, including:

    • VMware SQL with MySQL
    • VMware SQL with PostgreSQL
    • RabbitMQ
    • Kafka
    • Mongo DB

    Now to Integrate DSE with DSM follow below steps:

    • Access your VMware Cloud Director instance and navigate to the Data Solutions Extension.
    • Go to Settings > DSM Integration within the Data Solutions Extension interface
    • Choose the TKG cluster (this is a provider-hosted K8s cluster) where you want to deploy the data services operator and click Next.
    • Follow the prompts to install the Data Solutions operator. This process typically takes a few minutes.
    • Enter the necessary details to connect VMware DSM with the Data Solutions Extension and click Connect
    • Define infrastructure policies and backup locations within the VMware DSM portal to ensure data protection and compliance.

    Publish to Tenants

    Once the integration is complete, you can publish VMware DSM PostgreSQL and MySQL solutions to tenant organizations.

    Tenant Self Service

    • In a Web browser, navigate to the VMware Cloud Director tenant portal URL. For example, https://vcloud.example.com/tenant/myOrg.
    • Enter your user name and password, and click Sign In.
    • In the primary left navigation panel, click More > Data Solutions.
    • Select version required and enter required details to deploy a database

    Tenant Self Service – Backup/Restore

    You can protect your data solution instances by backing them up to an S3 location and restoring them to a new instance.

    You can backup solution instances on-demand or by using a custom backup schedule. You can back up and restore VMware SQL with Postgres, VMware SQL with MySQL, VMware Data Services Manager MySQL, and VMware Data Services Manager Postgres instances.

    Tenant Self Service – Upgrade

    You can upgrade the available data solutions and their instances within the VMware Cloud Director extension for Data Solutions.

    Upgrade a solution

    Select the upgrade version and Acknowledge that you have read and completed the pre-upgrade actions and click Upgrade.

    Integrating VMware Data Services Manager with VMware Cloud Director and the Data Solutions Extension is a strategic move for cloud providers looking to enhance their service offerings. By following the steps outlined above, you can streamline your data management processes, improve scalability, and deliver a superior experience to your tenants.

  • Why VMware VCSP Partners Should Embrace vSAN Now: A Powerhouse for Private Cloud Offerings with the New Licensing Advantage

    In today’s dynamic business environment, enterprises are increasingly seeking agile and scalable private cloud solutions. VMware partners are uniquely positioned to capitalize on this trend, and vSAN, VMware’s software-defined storage solution, is a powerful tool to add to your private cloud arsenal. Let’s delve into why vSAN, with its new licensing model and advanced architecture, is a strategic asset for building compelling private cloud offerings.

    The Private Cloud Imperative

    Organizations are looking to migrate to private clouds (on-prem, partner-hosted, or partner-managed) to gain greater control, flexibility, and security over their IT infrastructure. Private clouds offer several advantages:

    • Security and Compliance: Maintain control over data and applications within a secure, private environment.
    • Improved Resource Utilization: Consolidate resources and eliminate silos, leading to more efficient allocation and utilization.
    • Enhanced Agility: Rapidly provision and scale resources to meet changing business demands.

    vSAN: The Bedrock of a Robust Private Cloud

    vSAN plays a critical role in building a feature-rich private cloud solution. Here’s how it empowers partners with a new licensing model and advanced architecture:

    • Simplified Infrastructure Management: vSAN integrates seamlessly with existing VMware tools, streamlining provisioning, deployment, and management of private cloud infrastructure.
    • Scalability on Demand: Effortlessly scale storage and compute resources within your private cloud to accommodate business growth.
    • Reduced Operational Costs: The software-defined nature of vSAN eliminates the need for expensive, dedicated storage hardware, leading to significant cost savings.
    • New vSAN Licensing Model: The recent shift to a per-core consumption model offers predictable pricing (VCF provides 1 TiB of vSAN entitlement for each VCF core purchased), allowing you to accurately forecast costs and deliver competitive private cloud solutions.
    • vSAN Express Storage Architecture (ESA): The ESA is optimized to exploit the full potential of the very latest in hardware and unlocks new capabilities, simplifies deployment and streamlines management for private cloud environments.
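    The per-core entitlement noted above is easy to model: 1 TiB of vSAN capacity per licensed VCF core. The sketch below treats the 16-core minimum per CPU as an assumption — confirm the minimum against your actual licensing terms:

```python
# Assumption: a 16-core minimum per CPU socket; verify with your licensing terms.
MIN_CORES_PER_CPU = 16

def licensed_cores(hosts, cpus_per_host, cores_per_cpu):
    # Each CPU is billed at its physical core count, or the minimum if lower.
    billable = max(cores_per_cpu, MIN_CORES_PER_CPU)
    return hosts * cpus_per_host * billable

def vsan_entitlement_tib(hosts, cpus_per_host, cores_per_cpu):
    # 1 TiB of vSAN entitlement per licensed VCF core (per the note above).
    return licensed_cores(hosts, cpus_per_host, cores_per_cpu)
```

    For example, a 4-host cluster with dual 32-core CPUs carries 256 licensed cores and therefore 256 TiB of vSAN entitlement.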

    Power of ESA

    The Express Storage Architecture in vSAN 8 stands on the shoulders of much of the architecture found in the OSA of previous vSAN versions. vSAN had already solved many of the great challenges associated with distributed storage systems, and the ESA builds on those capabilities while optimizing the data path to reflect the capabilities of today’s hardware.

    The advances in architecture primarily come from two areas, as illustrated in the figure below:

    An optimized log-structured object manager and data structure.  This layer is a new design built around a new high-performance block engine and key value store that can deliver large write payloads while minimizing the overhead needed for metadata.  The new design was built specifically for the capabilities of the highly efficient upper layers of vSAN to send data to the devices without contention.  It is highly parallel and helps us drive near device-level performance capabilities in the ESA. 

    A new patented log-structured file system.  This new layer in the vSAN stack – known as the vSAN LFS – allows vSAN to ingest new data fast and efficiently while preparing the data for a very efficient full stripe write.  The vSAN LFS also allows vSAN to store metadata in a highly efficient and scalable manner.

    For more info on ESA, check here – https://core.vmware.com/blog/introduction-vsan-express-storage-architecture

    vSAN MAX: Powering Mission-Critical Private Clouds

    vSAN Max is a distributed scale-out storage system for vSphere clusters.  It is powered by the vSAN ESA, so it offers the capabilities that are a part of the ESA, but serves as a storage-only cluster.  It uses vSAN’s native protocol and data path for cross-cluster communication, which preserves the management experience and provides the highest levels of performance and flexibility for a distributed storage system.

    For more details check here : https://core.vmware.com/blog/introducing-vsan-max

    Beyond Efficiency: The vSAN Advantage

    vSAN offers more than just operational benefits. Here’s how it elevates your private cloud proposition:

    • Faster Time to Market: The rapid deployment capabilities of vSAN, allow you to deliver private cloud solutions to clients quickly and efficiently.
    • Improved Service Delivery: vSAN’s inherent performance and scalability, coupled with vSAN MAX for demanding workloads, enable you to offer high-performance private cloud environments for any customer need.
    • Enhanced Security: vSAN integrates with VMware security features, allowing you to build private clouds that meet stringent security compliance requirements.

    Partnering for Private Cloud Success

    To maximize your success with vSAN in the private cloud domain or VMware based public cloud domain, consider these steps:

    • Develop Private Cloud Expertise: Invest in training and resources to build a team of experts proficient in designing, deploying, and managing private cloud solutions with vSAN, including the new licensing model and vSAN ESA architecture.
    • Craft Compelling Private Cloud Packages: Develop standardized or customizable private cloud packages that leverage vSAN’s strengths to address specific customer needs, including options for cost-optimized vSAN configurations or high-performance vSAN MAX deployments.
    • Showcase Customer Success Stories: Demonstrate the value proposition of vSAN-powered private clouds through successful client case studies and testimonials, highlighting the benefits of the new licensing model, vSAN ESA, and vSAN MAX for diverse private cloud requirements.

    Conclusion

    VMware partners have a tremendous opportunity to lead the private cloud charge. By embracing VMware vSAN, its new licensing model, advanced vSAN ESA architecture, and the power of vSAN MAX, you can empower businesses to thrive in the digital age. Invest in vSAN expertise, craft compelling private cloud offerings, and watch your business soar as a trusted advisor in the private cloud revolution.

  • VMware Cloud Director OIDC Integration with VMware Workspace ONE Access

    VMware Cloud Director OIDC Integration with VMware Workspace ONE Access

    Prerequisite

    • VMware Workspace ONE Access must already be deployed.
    • VMware Workspace ONE Access must be configured with a directory service source for users and groups.
    • Cloud Director must be installed and configured for provider and tenant organizations.

    Bill of Materials

    • VMware Cloud Director 10.5.1
    • VMware Workspace ONE Access 23.09.00

    Steps to Configure Workspace ONE Access for OIDC Authentication

    Workspace ONE Access uses OAuth 2.0 to enable applications to register with Workspace ONE Access and create secure delegated access to applications. In this case, we will integrate Cloud Director with Workspace ONE Access.

    • In the Workspace ONE Access console Settings > OAuth 2.0 Management page, click ADD CLIENT.
    • In the Add Client page, configure the following.
    • Click SAVE. The client page is refreshed and the Client ID and the hidden Shared Secret are displayed.
    • Copy and save the client ID and generated shared secret.
    • Note: If the shared secret is not saved or you lose it, you must generate a new secret and update Cloud Director with the regenerated value. To regenerate a secret, on the OAuth 2.0 Management page click the client ID that requires a new secret, then click REGENERATE SECRET.

    Steps to configure VMware Cloud Director to use Workspace ONE Access for Provider/Tenant users and groups

    • From the top navigation bar, select Administration.
    • In the left panel, under Identity Providers, click OIDC, or browse directly to https://[VCD Endpoint]/(provider or tenant/[orgname])/administration/identity-providers/oidcSettings
    • If you are configuring OIDC for the first time, copy the client configuration redirect URI and use it to create a client application registration with an identity provider that complies with the OpenID Connect standard, for example, VMware Workspace ONE Access. (this has already been done above)
    • Click Configure
    • Verify that OpenID Connect is active and fill in the Client ID and Client Secret you created in VMware Workspace ONE Access as above during client creation.
    • To use the information from a well-known endpoint to automatically fill in the configuration, turn on the Configuration Discovery toggle and enter the provider URL that VMware Cloud Director can use to send authentication requests to. Fill in the IDP Well-known Configuration Endpoint field with: https://[Workspace ONE Access URL]/SAAS/auth/.well-known/openid-configuration
    • Click Next.
    • If everything is correctly configured, the information below is populated automatically; note that we are using the User Info endpoint.
    • VMware Cloud Director uses scopes to authorize access to user details. When a client requests an access token, the scopes define the permissions that token has to access user information. Enter the scope information and click Next.
    • Since we are using User Info as the access type, map the claims as below and click Next.

    NOTE: At the claims mapping step, the Subject field is populated with “sub” by default, which means VCD users will have the username format “[username]@XXX”. If you want to import users to VCD with a different format, you can map Subject to “email” instead and then import users to VCD using the email address attached to the account.
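    A hedged sketch of what the Subject mapping means for the resulting VCD username — the claim names ("sub", "email") are standard OIDC, but the lookup below is an illustration, not VCD's implementation:

```python
# Claims as they might arrive in an ID token / User Info response (example values).
token_claims = {"sub": "jdoe", "email": "jdoe@example.com"}

def vcd_username(claims, subject_mapping="sub"):
    # VCD imports the user under whichever claim the Subject field is mapped to.
    return claims[subject_mapping]
```

    With the default mapping the imported username is the bare "sub" value; mapping Subject to "email" instead imports the user under the full email address.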

    This is the most critical piece of configuration. Mapping this information is essential for VCD to interpret the token/user information correctly during the login process.

    Login as an OIDC GROUP Member User

    • In the Provider/Tenant organization’s Administration page, import OIDC groups and map them to existing VCD roles.
    • NOTE: If you don’t see the IMPORT GROUPS button, refresh the page and it will appear.
    • The user goes to https://[VCD Endpoint]/(provider or tenant/[orgname])
    • The user is redirected to the Workspace ONE Access login page and can log in with any user in the group.
    • The user is redirected back to VCD and should now be fully logged in.

    After the first successful login, the organization administrator can see the newly auto-imported user.

    Login as an OIDC User

    • In the Provider/Tenant organization’s Administration page, import OIDC users and map them to existing VCD roles.
    • The user goes to https://[VCD Endpoint]/(provider or tenant/[orgname])
    • The user is redirected to the Workspace ONE Access login page and logs in there.
    • The user is redirected back to VCD and should now be fully logged in.

    If you get the SSO Failure page, double-check that you imported the correct group/user and that the username format is correct. For additional information, you can check here; for troubleshooting and configuring additional logging, see the official documentation here.

    Login without OIDC or as a Local User

    In version 10.5, if an organization in VMware Cloud Director has SAML or OIDC configured, the UI displays only the Sign in with Single Sign-On option. To log in as a local user, navigate to https://vcloud.example.com/tenant/tenant_name/login or https://vcloud.example.com/provider/login.

  • NSX Multi-Tenancy in VMware Cloud Director

    Multi-tenancy was introduced in the NSX UI starting with VMware NSX 4.1, and commencing with version 10.5.1, VMware Cloud Director supports NSX multi-tenancy, facilitating direct alignment of VCD organizations with NSX projects.

    What are NSX Projects ?

    A project in NSX functions akin to a tenant. Creating projects enables the separation of security and networking configurations among different tenants within a single NSX setup.

    Multi-tenancy in NSX is achieved by creating NSX projects, where each project represents a logical container of network and security resources (a tenant). Each project can have its set of users, assigned privileges, and quotas. Multi-tenancy serves various purposes, such as providing Networking as a Service, Firewall as a Service, and more.

    How do NSX Projects relate to Cloud Director Organizations?

    Within the VCD platform, the tenancy is established via Organizations. Each tenant receives its exclusive organization, ensuring a distinct and isolated virtual infrastructure tailored to their tasks. This organizational setup grants precise control over tenant access to resources, empowering them to oversee Users, Virtual Data Centers (VDCs), Catalogs, Policies, and other essentials within their domain.

    To clearly outline the tenant structure, VMware NSX introduced a feature known as Projects. These Projects allocate NSX users to distinct environments housing their specific objects, configurations, and monitoring mechanisms based on alarms and logs.

    With VCD 10.5.1, management functionalities tied to NSX Tenancy fall within the exclusive purview of the Provider. NSX Tenancy operates on an Organization-specific level within VCD. When activated, a VCD Organization aligns directly with an NSX Project.

    VCD drives and manages the creation of the associated NSX project, allowing the User to configure the project identifier. The NSX project is actually created during the creation of the first VDC in the organization for which you activated NSX tenancy. The name of the NSX project is the same as the name of the organization to which it is mapped.

    How to enable?

    The Cloud Provider can enable NSX Tenancy for a specific organization by going to the Cloud Director Organizations section, choosing an organization, and selecting “NSX Tenancy”. The provider can also define a Log Name, which becomes the organization’s unique identifier in the backing NSX Manager logs.


    Once NSX tenancy has been activated at the Org level, the Cloud Provider can create a new Org VDC and choose to enable “NSX Tenancy”; this is the point at which the NSX project is actually created in NSX.

    NOTE: Network Pool selection is disabled because NSX supports Project creation only in the default overlay Transport Zone. Also, make sure the default overlay Transport Zone already exists.

    Note: If you choose not to activate NSX tenancy during the creation of an organization VDC, you cannot change this setting later.
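    Since Project creation depends on the default overlay Transport Zone existing, it is worth verifying that before enabling tenancy. The sketch below filters transport-zone records of the shape the NSX Policy API returns; the field names tz_type and is_default are assumptions based on that API, so verify them against your NSX version:

    ```python
    def find_default_overlay_tz(transport_zones):
        """Return the default overlay transport zone from a list of NSX
        transport-zone records, or None if there isn't one.

        Assumes Policy-API-style fields: 'tz_type' (e.g. 'OVERLAY_STANDARD')
        and 'is_default' -- treat both as assumptions to verify.
        """
        for tz in transport_zones:
            if tz.get("is_default") and tz.get("tz_type", "").startswith("OVERLAY"):
                return tz
        return None

    # Illustrative records only -- not real API output:
    zones = [
        {"display_name": "vlan-tz", "tz_type": "VLAN_BACKED", "is_default": False},
        {"display_name": "overlay-tz", "tz_type": "OVERLAY_STANDARD", "is_default": True},
    ]
    print(find_default_overlay_tz(zones)["display_name"])
    # → overlay-tz
    ```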

    When not to choose to enable tenancy?

    Some use cases do not require organization VDC participation in NSX tenancy, for example, if the VDC only needs VLAN networks. Additionally, organization VDCs using NSX tenancy are restricted to the network pool backed by the default overlay transport zone, so if you need a different network pool, you may wish to opt out of NSX tenancy.

    There are also a few features that NSX Projects do not support today, such as NSX Federation deployments. In addition, not all Edge Gateway features are available for Networking-Tenancy-enabled VDCs, for example VPNs (IPsec/L2) and sharing segment profile templates. This is a work in progress, and more features will arrive in future releases.

    Conclusion

    Aligning NSX Projects with VCD’s Tenancy ensures customers access an extensive array of networking capabilities offered by the NSX Multi-tenancy solution. Among these crucial functionalities is tenant-centric logging for core VCD networking services like Edge Services and Distributed firewalls. Additionally, integrating NSX Projects paves the way to investigate potential enhancements, facilitating tenant self-service login capabilities within VCD features. Below, you can find more information and capabilities.

    Managing NSX Tenancy in VMware Cloud Director

    VMware Cloud Director 10.5.1 adopts NSX Projects