Although CAS does most of the heavy lifting to build and configure the infrastructure services, there are a few things you'll need to prepare beforehand. Refer to the Key Decision Points article for guidance on some of the requirements.
Google Cloud Platform
Cloud Desktop builds servers and services on Google Cloud Platform (GCP). To successfully configure a CAS deployment, you must have your GCP environment ready to go. At a minimum, this includes the following:
- A billable Google Cloud account - If you haven't already, sign up for Google Cloud. New GCP accounts in the trial period have several restrictions that will prevent a CAS deployment from being created, such as very low quotas on the number of vCPUs you can assign to resources. Be sure to configure billing in your GCP account.
- A GCP project - In Google Cloud, projects are distinct "silos" for the resources you create. It is generally a good idea to create separate projects for systems that require strong isolation from one another, but it is also useful to keep most of your systems in just a few (or even one) projects for convenience and interoperability. CAS can create only one deployment per GCP project, but the project can also be used for other resources. At a minimum, you need at least one project in your GCP account.
- Owner rights to the GCP project - When you create your CAS deployment, you must connect to Google Cloud using a Google account that has Owner permissions on the GCP project you wish to use for the deployment. CAS will perform initial configuration of the environment using your account, including the creation of a secure service account for itself that will be used for ongoing management. When you run the wizard, you must have credentials for a Google account that has Owner rights on the project (in an Advanced Deployment, you can provide a JSON key for a service credential with Owner rights rather than using Google account credentials).
- Sufficient resource quotas in the GCP project - Typically, new accounts in Google Cloud have very low quotas set for resources, both to protect users from inadvertently racking up huge bills and to protect Google from fake accounts that try to commit DDoS-style attacks by provisioning large numbers of resources. It is helpful to check the quotas on your GCP project before starting a deployment in CAS; CAS will also check these quotas before finalizing the deployment. If you are planning a large deployment (typically 100+), you may need to contact your GCP sales representative to expedite the quota increases on very new accounts. While the exact quotas you will require depend on the size of your deployment, the following are general guidelines on quota requirements:
- Compute Engine API - CPUs
- Regions: In each region that will host your deployment, you typically need a minimum of 24 CPUs to host redundant infrastructure and a single User Session Server; smaller deployments may require slightly fewer CPUs. Each additional Session Server will need 2-16 additional CPUs, depending on your sizing.
- Global: Ensure that the global quota for your CPUs is greater than or equal to the sum of the regional CPU quotas for all regions that will host your deployment. For example, if you deploy into two regions and each region needs 30 CPUs, your global CPU quota should be at least 60.
- Compute Engine API - Internal IP addresses: In some cases, Google's default quotas on IP addresses are prohibitively low. Typically, each region in your deployment will require about 10 IP addresses for infrastructure, with an additional IP needed for each Session Server.
- Compute Engine API - Static IP addresses: This quota applies to public IP addresses in your GCP project. Although most CAS deployments do not have many external IP addresses (typically one per region for end-user connectivity), we have sometimes seen that GCP quotas are set prohibitively low for very new GCP accounts.
- Compute Engine API - various GPU quotas: If you are planning to configure User Session Servers with GPU acceleration, ensure that your deployment's regions have adequate quota for the type of GPU you plan to use.
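As a sketch, you can inspect the current quotas with the gcloud CLI before starting the wizard; these commands require an authenticated gcloud session, and the project ID and region below are placeholders:

```shell
# Show global, project-wide quotas (e.g. CPUS_ALL_REGIONS, STATIC_ADDRESSES);
# "my-cas-project" is a placeholder for your project ID.
gcloud compute project-info describe --project my-cas-project

# Show per-region quotas (CPUS, IN_USE_ADDRESSES, GPU types) for one region.
gcloud compute regions describe us-east1 --project my-cas-project
```

Compare the `usage` and `limit` values in the output against the guidelines above before starting your deployment.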
Microsoft Licensing
Microsoft licensing is a complicated, ever-changing topic. If you have a relationship with an authorized Microsoft Licensing reseller, it is recommended to discuss your existing licensing rights with them to determine what you need for full compliance.
Generally speaking, Microsoft Licensing requirements for Cloud Desktop consist of the following:
- Windows OS license (per-CPU core): OS licensing is built into the cost of GCE VM instances. Unless you are using Sole-Tenant Nodes with Bring Your Own License (BYOL), you typically do not need to worry about this license.
- Windows Server Client Access License (per-user or per-device): The Windows Server client access license (CAL) applies to every user that "interacts" with a Windows Server, whether that means copying a file from a file server, authenticating via Active Directory, or logging into an interactive desktop session. While you do not need a separate CAL for each server in your organization, you do need a license for each user (or each client device, depending on the licensing model). Typically, you do not need separate CALs for your Cloud Desktop deployment if you already have Windows Server CALs for your other Windows infrastructure.
- Remote Desktop Services Subscriber Access License (per-user or per-device): In addition to the Client Access License, accessing remote desktops (and RemoteApps) via Remote Desktop Services (on which Cloud Desktop is built) also requires an RDS Subscriber Access License (SAL). The SAL authorizes users or devices to establish remote desktop sessions. If you have an existing Enterprise Agreement with Microsoft, usage rights for RDS SALs may be included; check with your Microsoft Licensing reseller.
DNS Names and SSL Certificates
When a new deployment is created, CAS assigns an External domain name using itopia's cloudvdi.net domain; the full DNS address is [code]-[region].cloudvdi.net, where [code] is the unique deployment ID for the environment and [region] identifies the GCP region in a multi-region deployment. itopia automatically secures all deployments that use this domain with a strong SSL certificate.
This external address is the connection point for your end-users; Remote Desktop client applications will use this FQDN as the RD Gateway server, or users can access their RD Web Portal or the RD Web Client using this address.
If your users access their Cloud Desktops via the MyRDP Portal, they may never see this address. However, you may wish to assign custom DNS addresses to your endpoints, perhaps to provide an easy, memorable address for your users to access the RD Web Client.
You can set the external domain name for each region to a custom value. For example, a multi-region deployment could be configured as remote.contoso.com and remote-backup.contoso.com, or even as remote.contoso.com and remote.fabrikam.com.
When configuring a custom external domain name, you will need the following:
- A public DNS domain your organization owns
- Access to create DNS A records in your domain
- Server names for each region
- An SSL certificate for the External Domain Names in PFX format. See additional information on certificate requirements below.
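As the last requirement notes, the certificate must be uploaded in PFX (PKCS#12) format. The following is a minimal OpenSSL sketch of the conversion; the file names, export password, and the throwaway self-signed certificate are all placeholders for your CA-issued files:

```shell
# For illustration only: create a throwaway self-signed certificate for
# remote.contoso.com (in practice, use the certificate issued by your CA).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=remote.contoso.com" \
    -keyout key.pem -out cert.pem

# Package the PEM certificate and private key as a single PFX file;
# the export password ("changeit") is a placeholder.
openssl pkcs12 -export -passout pass:changeit \
    -inkey key.pem -in cert.pem -out remote-contoso.pfx
```

If your CA provided an intermediate chain, include it with `-certfile chain.pem` so clients receive the full certificate chain.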
The type of SSL certificate depends on your deployment and the external domain names you are configuring:
- For a single-region deployment, a standard "single-subject" SSL certificate is adequate, with the subject matching the External Domain Name you have chosen for your deployment such as remote.contoso.com
- For a multi-region deployment where all regions will use the same DNS domain, a wildcard or subject alternative name (SAN) certificate can be used. For example, if all regions will use external domain names in the contoso.com domain, you may either use a wildcard certificate for *.contoso.com or a SAN certificate with the specific external domain name for each region, such as remote-us-east.contoso.com, remote-us-west.contoso.com, remote-europe.contoso.com
- For a multi-region deployment where regions will use different DNS domains, a subject alternative name (SAN) certificate can be used. For example, if your chosen external domain names are remote.contoso-us.com and remote.contoso-europe.com, you can use a SAN certificate with each of these names explicitly defined as either the subject or a subject alternative name.
In every scenario above, you must upload the certificate separately for each region; you could therefore use different SSL certificates for each region, rather than using wildcard or SAN certificates. For ease of management, itopia recommends using a single certificate for all regions, but this is not required.
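As a sketch, a certificate signing request covering several regional names can be generated with OpenSSL (the domain names below are placeholders, and `-addext` requires OpenSSL 1.1.1 or later):

```shell
# Generate a key and a CSR requesting a SAN certificate that covers the
# external domain name of each region (all names are placeholders).
openssl req -new -newkey rsa:2048 -nodes \
    -subj "/CN=remote-us-east.contoso.com" \
    -addext "subjectAltName=DNS:remote-us-east.contoso.com,DNS:remote-us-west.contoso.com,DNS:remote-europe.contoso.com" \
    -keyout san-key.pem -out san-req.csr
```

Submit the CSR to your certificate authority; the issued certificate will list each name as a subject alternative name.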
itopia recommends using the newer ECC SSL certificate type rather than RSA SSL certificates; ECC is a newer algorithm that is more secure and more efficient. However, very old operating systems such as Windows XP and Windows Server 2003 do not support ECC; if you may have end-users connecting from older operating systems (10+ years), you should continue to use RSA certificates. itopia recommends a minimum RSA-2048 or ECC-256 certificate.
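For reference, an ECC private key on the NIST P-256 curve (the ECC-256 minimum mentioned above) can be generated with OpenSSL; the output file name is a placeholder:

```shell
# Generate an ECC-256 (P-256 / prime256v1) private key.
openssl ecparam -name prime256v1 -genkey -noout -out ecc-key.pem
```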
NOTE: Remember that wildcard SSL certificates cover only a single level of subdomains. For example, a wildcard certificate for *.contoso.com WILL NOT be valid for remote.us.contoso.com or remote.europe.contoso.com. This is an inherent and intended limitation of wildcard certificates.
Domain Network Connectivity (Optional)
If you are creating a deployment using the Existing Active Directory domain model or you plan to create an AD trust to your existing AD domain, you will need to establish connectivity between the VPC network for your CAS deployment and the network that contains existing domain controllers for your Active Directory domain.
In standard deployments, CAS creates a new VPC network in your Google Cloud project and attaches the resources for your deployment to this network. This VPC network will not have connectivity to the domain controllers for your domain until you configure it; if you are creating a new deployment using the Existing AD model, the deployment will not be built until you establish connectivity.
The type of interconnectivity you need depends on the location of your domain controllers:
- If your AD domain has domain controllers in Google Cloud, you can create a VPC network peering to connect the VPC network for your deployment to the VPC network that contains your domain controllers
- If your AD domain only has domain controllers outside of Google Cloud (such as in an on-premises or colocated datacenter), you must create a Cloud VPN tunnel or a Cloud Interconnect connection
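For the first case, the peering setup can be sketched with the gcloud CLI; the network names and project ID are placeholders, these commands require an authenticated gcloud session, and peering is only active once it has been created from both networks:

```shell
# Peer the CAS deployment's VPC ("cas-vpc") with the VPC hosting your
# domain controllers ("dc-vpc"); both names are placeholders.
gcloud compute networks peerings create cas-to-dc \
    --project my-cas-project \
    --network cas-vpc \
    --peer-network dc-vpc

# Create the matching peering in the opposite direction.
gcloud compute networks peerings create dc-to-cas \
    --project my-cas-project \
    --network dc-vpc \
    --peer-network cas-vpc
```

If the domain controllers' VPC is in a different project, add `--peer-project` to each command. Also ensure firewall rules on both networks allow the Active Directory ports between the peered ranges.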