SAP HANA on Azure - Microsoft Cloud Workshop

Microsoft Cloud Workshop, May 01, 2019

In this hands-on lab, you will step through the implementation of single node and highly available SAP HANA deployments on Microsoft Azure virtual machines running SUSE Linux Enterprise Server.
After its completion, you will be able to perform single node and highly available SAP HANA deployments by using Terraform and Ansible, validate both types of deployments, test failover scenarios, and remove the deployed resources.

Before the Hands-on Lab

Microsoft Cloud Workshops

SAP HANA on Azure
Before the hands-on lab setup guide
May 2019

Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

The names of manufacturers, products, or URLs are provided for informational purposes only and Microsoft makes no representations and warranties, either expressed, implied, or statutory, regarding these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a manufacturer or product does not imply endorsement of Microsoft of the manufacturer or product. Links may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is not responsible for the contents of any linked site or any link contained in a linked site, or any changes or updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission received from any linked site. Microsoft is providing these links to you only as a convenience, and the inclusion of any link does not imply endorsement of Microsoft of the site or the products contained therein.

© 2019 Microsoft Corporation. All rights reserved.

Microsoft and the trademarks listed at https://www.microsoft.com/en-us/legal/intellectualproperty/Trademarks/Usage/General.aspx are trademarks of the Microsoft group of companies. All other trademarks are property of their respective owners.

Contents

SAP HANA on Azure before the hands-on lab setup guide

Requirements

  • A Microsoft Azure subscription

  • A lab computer running Windows 10 or Windows Server 2016 with:

    • Access to Microsoft Azure

    • Access to the SAP HANA installation media (requires an SAP Online Service System account)

Note: The lab does not require locally installed software. Azure CLI and Terraform tasks are performed by using Cloud Shell in the Azure portal.

Before the hands-on lab

Duration: 15 minutes

To complete this lab, you must verify that your account has sufficient permissions in the Azure subscription that you intend to use to deploy Azure VMs. You also need to verify the availability of the SUSE Linux Enterprise Server image that you will use to deploy Azure VMs.

Task 1: Validate the owner role membership in the Azure subscription

  1. Sign in to the Azure portal at https://portal.azure.com, click All services and, in the list of services, click Subscriptions.

  2. On the Subscriptions blade, click the name of the subscription you intend to use for this lab.

  3. On the subscription blade, click Access control (IAM).

  4. Review the list of user accounts, and verify that your user account has the Owner role assigned to it.

Task 2: Validate availability of the SUSE Linux Enterprise Server image

  1. In the Azure portal at https://portal.azure.com, click the Cloud Shell icon.

    In the Azure Portal, the Cloud Shell icon is selected.

  2. If prompted, in the Welcome to Azure Cloud Shell window, click Bash (Linux).

  3. If prompted, in the You have no storage mounted window, click Create storage.

  4. Once the storage account is provisioned, at the Bash prompt, run the following command, where <Azure_region> designates the target Azure region that you intend to use for this lab (e.g. eastus), and verify that the output includes an existing image:

    az vm image list --location <Azure_region> --publisher SUSE --offer SLES-SAP --sku 12-SP3 --all --output table
    

Task 3: Validate a sufficient number of vCPU cores

  1. In the Azure portal at https://portal.azure.com, in the Cloud Shell, at the Bash prompt, run the following command, where <Azure_region> designates the target Azure region that you intend to use for this lab (e.g. eastus):

    az vm list-usage --location <Azure_region> --query "[?localName == 'Standard DSv2 Family vCPUs' || localName == 'Standard ESv3 Family vCPUs'].{VMFamily:localName, currentValue:currentValue, Limit:limit}" --output table
    

    Note: To identify the names of Azure regions, in the Cloud Shell, at the Bash prompt, run az account list-locations --query '[].name' --output tsv

  2. Review the output of the command executed in the previous step and ensure that you have at least 6 available vCPUs in the Standard DSv2 Family VM family and at least 24 available vCPUs in the Standard ESv3 Family in the target Azure region.
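Here, "available" means the difference between the Limit and CurrentValue columns in the command output for each VM family. The check can be sketched as follows (the usage and limit numbers below are illustrative placeholders, not values from your subscription):

```shell
# Compare available vCPUs (limit minus current usage) against the lab
# requirements: at least 6 for DSv2 and at least 24 for ESv3.
DSV2_CURRENT=2;  DSV2_LIMIT=10
ESV3_CURRENT=0;  ESV3_LIMIT=30
if [ $((DSV2_LIMIT - DSV2_CURRENT)) -ge 6 ] && [ $((ESV3_LIMIT - ESV3_CURRENT)) -ge 24 ]; then
  RESULT='quota OK'
else
  RESULT='request a quota increase'
fi
echo "$RESULT"
```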

  3. If the number of vCPUs is not sufficient, navigate back to the subscription blade, and click Usage + quotas.

  4. On the subscription's Usage + quotas blade, click Request Increase.

  5. On the Basic blade, specify the following and click Next:

    • Issue type: Service and subscription limits (quotas)

    • Subscription: the name of the Azure subscription you will be using in this lab

    • Quota type: Compute/VM (cores/vCPUs) subscription limit increases

    • Support plan: the name of the support plan associated with the target subscription

  6. On the Quota details blade, specify the following and click Save and continue:

    • Deployment model: Resource Manager

    • Location: the target Azure region you intend to use in this lab

    • SKU family: DSv2 Series and ESv3 Series

  7. On the Problem blade, specify the following and click Next:

    • Severity: C - Minimal impact

  8. On the Contact Information blade, provide your contact details and click Create.

Note: Quota increase requests are typically completed during the same business day.

You should complete all of the steps above before performing the hands-on lab.

Hands-on Lab Guide

Microsoft Cloud Workshops

SAP HANA on Azure
Hands-on lab step-by-step
May 2019


Contents

SAP HANA on Azure hands-on lab step-by-step

Abstract and learning objectives

In this hands-on lab, you will step through the implementation of single node and highly available SAP HANA deployments on Microsoft Azure virtual machines running SUSE Linux Enterprise Server.

After its completion, you will be able to perform single node and highly available SAP HANA deployments by using Terraform and Ansible, validate both types of deployments, test failover scenarios, and remove the deployed resources.

Overview

In this hands-on lab, you are working with Contoso to develop a process for implementing an automated deployment of single node and highly available SAP HANA instances on Azure virtual machines (VMs). Your tasks will include the preparation for the deployment process, invoking the deployment, validating the outcome of the deployment, and removal of all the deployed resources.

Solution architecture

HANA single node deployment

Solution architecture for SAP HANA on Azure consisting of a single node HANA instance.

HANA highly available deployment

Solution architecture for SAP HANA on Azure consisting of a highly available HANA instance.

Requirements

  • A Microsoft Azure subscription

  • A lab computer running Windows 10 or Windows Server 2016 with:

    • Access to Microsoft Azure

    • Access to the SAP HANA software (requires an SAP Online Service System account)

Note: The lab does not require locally installed software. Azure CLI and Terraform tasks are performed by using Cloud Shell in the Azure portal.

Help references

Description Links
Automated SAP Deployments in Azure Cloud https://github.com/Azure/sap-hana/

Exercise 1: Deploy a single node HANA instance by using Terraform and Ansible

Duration: 90 minutes

In this exercise, you will implement a single-node deployment of SAP HANA on Azure virtual machines (VMs). Following initial configuration of Terraform-based templates, the deployment will be fully automated, including installation of all necessary SAP HANA components.

Task 1: Upload media files to Azure Storage

  1. From the lab computer, start a Web browser, and navigate to SAP Software Downloads at https://launchpad.support.sap.com/#/softwarecenter/ (requires an SAP Online Service System account).

  2. From the SAP Software Downloads portal, download the following software packages to the lab computer:

    • SAPCAR_1211-80000935.EXE (SAPCAR for Linux x86_64)

    • SAPCAR_1211-80000938.EXE (SAPCAR for Windows 64-bit)

    • IMDB_SERVER100_122_24-10009569.SAR (Support Package SAP HANA DATABASE 1.00 Linux on x86_64 64bit)

    • SAPHOSTAGENT36_36-20009394.SAR (SAP Host Agent)

    Note: The packages listed above might be superseded by newer versions. If so, be sure to adjust all references to the names of these packages in this task accordingly. To find appropriate packages, you can take advantage of the search functionality of the SAP Software Downloads portal. Use the first part of each package name (up to the first hyphen) as the search criterion and, in the search results, identify the type that matches the intended platform (either Linux x86_64 or Windows 64-bit).

  3. From the SAP Software Downloads portal, download the following software package to the lab computer:

    • IMC_STUDIO2_240_0-80000323.SAR (HANA STUDIO 2 for Windows 64-bit)

  4. From the lab computer, start a Web browser, and navigate to the Azure portal at https://portal.azure.com.

  5. In the Azure portal at https://portal.azure.com, click the Cloud Shell icon.

    In the Azure Portal, the Cloud Shell icon is selected.

  6. In the Cloud Shell pane, from the Bash prompt, run the following to create a resource group that will host a storage account containing media files, where <Azure_region> designates the target Azure region that you intend to use for this lab (e.g. eastus):

    MEDIA_RESOURCE_GROUP_NAME='hanaMedia-RG'
    MEDIA_RESOURCE_GROUP=$(az group create --location <Azure_region> --name $MEDIA_RESOURCE_GROUP_NAME)
    
  7. In the Cloud Shell pane, from the Bash prompt, run the following to generate a pseudo-random name you will assign to the storage account:

    MEDIA_STORAGE_ACCOUNT_NAME=hana$RANDOM$RANDOM
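$RANDOM yields an integer between 0 and 32767, so the generated name is at most 14 characters long. Azure storage account names must be 3 to 24 characters of lowercase letters and digits only; a minimal sketch that checks the generated name against that constraint:

```shell
# Generate the pseudo-random name and validate it against the Azure
# storage account naming rules (3-24 lowercase letters and digits).
MEDIA_STORAGE_ACCOUNT_NAME=hana$RANDOM$RANDOM
if echo "$MEDIA_STORAGE_ACCOUNT_NAME" | grep -Eq '^[a-z0-9]{3,24}$'; then
  echo "valid storage account name: $MEDIA_STORAGE_ACCOUNT_NAME"
else
  echo "invalid name: $MEDIA_STORAGE_ACCOUNT_NAME"
fi
```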
    
  8. In the Cloud Shell pane, from the Bash prompt, run the following to create the storage account:

    LOCATION=$(echo $MEDIA_RESOURCE_GROUP | jq .location -r)
    az storage account create --location $LOCATION --resource-group $MEDIA_RESOURCE_GROUP_NAME --name $MEDIA_STORAGE_ACCOUNT_NAME --kind Storage --sku Standard_LRS
    
  9. In the Cloud Shell pane, from the Bash prompt, run the following to retrieve the first key of the newly created storage account:

    MEDIA_STORAGE_ACCOUNT_KEY=$(az storage account keys list --account-name $MEDIA_STORAGE_ACCOUNT_NAME --resource-group $MEDIA_RESOURCE_GROUP_NAME --query '[0].[value]' --output tsv)
    
  10. In the Cloud Shell pane, from the Bash prompt, run the following to create a container named sapbits with the Blob access level in the newly created storage account:

    az storage container create --name sapbits --account-key $MEDIA_STORAGE_ACCOUNT_KEY --account-name $MEDIA_STORAGE_ACCOUNT_NAME --public-access blob
    
  11. In the Azure portal, navigate to the newly created storage account. On the storage account blade, click Blobs in the Blob service section, click sapbits entry representing the container you created in the previous step, and then, on the sapbits blade, click Upload.

  12. From the Upload blob blade, upload the HANA media files you downloaded from SAP Software Downloads at the beginning of this task.

Task 2: Prepare for a single node HANA deployment

  1. If needed, in the Azure portal, restart the Cloud Shell.

  2. In the Cloud Shell pane, from the Bash prompt, run the following to generate the SSH key pair that will be used to secure access to Linux Azure VMs deployed in this lab:

    ssh-keygen -t rsa -b 2048
    
  3. When prompted to specify the location of the file in which to save the key and to specify the passphrase protecting the content of the file, press the Enter key (three times). You should see output similar to the following:

    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/m/.ssh/id_rsa):
    Created directory '/home/m/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/m/.ssh/id_rsa.
    Your public key has been saved in /home/m/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:2gAFQbAc2QFR4miR49Hdk6zyPH7YLWvbjiINEH5WScA m@cc-87c3f8bb-f6549dcd6-8mf42
    The key's randomart image is:
    +---[RSA 2048]----+
    | .XXX== .        |
    | OoE.= =         |
    |+.B o . .        |
    |.+ + o           |
    |  + + . S        |
    |   . + +         |
    |    + = o        |
    |   . = =o.       |
    |    . ++=o       |
    +----[SHA256]-----+
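Pressing Enter three times accepts the default file location and an empty passphrase. If you prefer a fully non-interactive variant (for example, in a script), the same result can be achieved with the -f and -N flags of ssh-keygen; the scratch directory below is only for illustration:

```shell
# Generate an RSA key pair without prompts: -f sets the key file,
# -N '' sets an empty passphrase, -q suppresses the banner output.
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -f "$KEYDIR/id_rsa" -N '' -q
ls "$KEYDIR"
```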
    
  4. In the Cloud Shell pane, from the Bash prompt, run the following to create an Azure AD service principal that will be used during deployment:

    HANA_SP_NAME='hanav1snsp01'
    HANA_SP_ID=$(az ad sp list --display-name $HANA_SP_NAME --query "[0].appId" --output tsv)
    if ! [ -z "$HANA_SP_ID" ]
    then
        az ad sp delete --id $HANA_SP_ID
    fi
    HANA_SP=$(az ad sp create-for-rbac --name $HANA_SP_NAME)
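The guard around az ad sp delete makes this step idempotent: rerunning it removes a leftover service principal with the same display name before creating a fresh one. The control flow can be sketched with stand-ins for the az calls (mock_sp_list and the EXISTING_SP variable are hypothetical, used here instead of the real CLI):

```shell
# EXISTING_SP emulates the directory state; mock_sp_list stands in for
# 'az ad sp list ... --output tsv' and prints the appId, if any.
EXISTING_SP='old-app-id'
mock_sp_list() { [ -n "$EXISTING_SP" ] && echo "$EXISTING_SP"; }

SP_ID=$(mock_sp_list)
if ! [ -z "$SP_ID" ]          # same emptiness guard as the lab step
then
    EXISTING_SP=''            # stands in for 'az ad sp delete --id $SP_ID'
fi
EXISTING_SP='new-app-id'      # stands in for 'az ad sp create-for-rbac'
echo "created $EXISTING_SP"
```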
    
  5. In the Cloud Shell pane, from the Bash prompt, run the following to set the variables that will be used during deployment, representing, respectively, the identifier of the Azure subscription and its Azure AD tenant, as well as the application identifier and the corresponding password of the service principal you created in the previous step:

    export AZURE_SUBSCRIPTION_ID=$(az account show | jq -r '.id')
    export AZURE_TENANT=$(az account show | jq -r '.tenantId')
    export AZURE_CLIENT_ID=$(echo $HANA_SP | jq -r '.appId')
    export AZURE_SECRET=$(echo $HANA_SP | jq -r '.password')
    
    export ARM_SUBSCRIPTION_ID=$(az account show | jq -r '.id')
    export ARM_TENANT_ID=$(az account show | jq -r '.tenantId')
    export ARM_CLIENT_ID=$(echo $HANA_SP | jq -r '.appId')
    export ARM_CLIENT_SECRET=$(echo $HANA_SP | jq -r '.password')
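Each export extracts one field from the JSON document that the CLI returns; jq -r prints the raw value without surrounding quotes. The parsing pattern in isolation, using a sample document in the shape of az account show output (the GUIDs are placeholders, not real identifiers):

```shell
# Sample JSON shaped like 'az account show' output; not real identifiers.
ACCOUNT_JSON='{"id":"00000000-0000-0000-0000-000000000000","tenantId":"11111111-1111-1111-1111-111111111111"}'
SUBSCRIPTION_ID=$(echo "$ACCOUNT_JSON" | jq -r '.id')
TENANT_ID=$(echo "$ACCOUNT_JSON" | jq -r '.tenantId')
echo "$SUBSCRIPTION_ID $TENANT_ID"
```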
    
  6. In the Cloud Shell pane, from the Bash prompt, run the following to clone the repository hosting the Terraform and Ansible files that you will use for deployment:

    rm ~/sap-hana/ -r -f
    git clone https://github.com/polichtm/sap-hana.git
    
  7. In the Cloud Shell pane, from the Bash prompt, run the following to change the current directory to the one hosting the Terraform and Ansible files that you will use for deployment:

    cd ~/sap-hana/deploy/vm/modules/single_node_hana/
    
  8. In the Cloud Shell pane, from the Bash prompt, run the following to retrieve the location in which you created the storage account in the previous task:

    MEDIA_RESOURCE_GROUP_NAME='hanaMedia-RG'
    LOCATION=$(az group show --name $MEDIA_RESOURCE_GROUP_NAME --query location --output tsv)
    
  9. In the Cloud Shell pane, from the Bash prompt, run the following to create the resource group that will host all resources deployed in this task:

    HANA_V1_SN_RESOURCE_GROUP_NAME='hanav1sn-RG'
    az group create --location $LOCATION --name $HANA_V1_SN_RESOURCE_GROUP_NAME
    

    Note: If needed, this step can be automated as well. Terraform (as well as Azure Resource Manager) templates can be configured to create resource groups.

  10. In the Cloud Shell pane, from the Bash prompt, run the following to generate a pseudo-random name that will be used as a prefix for DNS names assigned to public IP address resources deployed in this task:

    DOMAIN_NAME=hanav1sn$RANDOM
    
  11. In the Cloud Shell pane, from the Bash prompt, run the following to specify the size of the Azure VM to be used to host the single node HANA instance deployed in this task:

    VM_SIZE='Standard_E8s_v3'
    
  12. In the Cloud Shell pane, from the Bash prompt, run the following to specify the name of the administrative user account for the Linux VM deployed in this task:

    VM_USERNAME='labuser'
    
  13. In the Cloud Shell pane, from the Bash prompt, run the following to add the values you specified to the terraform.tfvars file that contains variables used by Terraform deployment performed in this task:

    sed -i "s/VAR_RESOURCE_GROUP/$HANA_V1_SN_RESOURCE_GROUP_NAME/" ./terraform.tfvars
    sed -i "s/VAR_LOCATION/$LOCATION/" ./terraform.tfvars
    sed -i "s/VAR_DOMAIN_NAME/$DOMAIN_NAME/" ./terraform.tfvars
    sed -i "s/VAR_VM_SIZE/$VM_SIZE/" ./terraform.tfvars
    sed -i "s/VAR_VM_USERNAME/$VM_USERNAME/" ./terraform.tfvars
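Each sed -i call replaces one VAR_* placeholder in terraform.tfvars in place. The substitution pattern, demonstrated on a throwaway file with the same placeholder style (the file below is synthetic; the real terraform.tfvars contains more variables):

```shell
# Build a scratch file with placeholders, substitute, then inspect it.
TMPVARS=$(mktemp)
printf 'az_region = "VAR_LOCATION"\nvm_size = "VAR_VM_SIZE"\n' > "$TMPVARS"
LOCATION='eastus'
VM_SIZE='Standard_E8s_v3'
sed -i "s/VAR_LOCATION/$LOCATION/" "$TMPVARS"
sed -i "s/VAR_VM_SIZE/$VM_SIZE/" "$TMPVARS"
RESULT=$(cat "$TMPVARS")
rm "$TMPVARS"
echo "$RESULT"
```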
    
  14. In the Cloud Shell pane, from the Bash prompt, run the following to identify the name of the storage account containing the media files, which you configured and populated in the previous task of this exercise:

    STORAGE_ACCOUNT_NAME=$(az storage account list --resource-group $MEDIA_RESOURCE_GROUP_NAME --query "[?starts_with(name,'hana')].[name]" --output tsv)
    
  15. In the Cloud Shell pane, from the Bash prompt, run the following to specify the names of the software packages you uploaded to the storage account in the previous task of this exercise:

    SAPCAR_LINUX_NAME='SAPCAR_1211-80000935.EXE'
    SAPCAR_WINDOWS_NAME='SAPCAR_1211-80000938.EXE'
    HDBSERVER_NAME='IMDB_SERVER100_122_24-10009569.SAR'
    SAP_HOST_AGENT_NAME='SAPHOSTAGENT36_36-20009394.SAR'
    HANA_STUDIO_WINDOWS_NAME='IMC_STUDIO2_240_0-80000323.SAR'
    

    Note: The packages listed above might be superseded by newer versions. If so, be sure to adjust the names of these packages accordingly.

  16. In the Cloud Shell pane, from the Bash prompt, run the following to add the values you specified to the terraform.tfvars file that contains variables used by Terraform deployment performed in this task:

    SAPCAR_LINUX_URL='https://'$STORAGE_ACCOUNT_NAME'.blob.core.windows.net/sapbits/'$SAPCAR_LINUX_NAME
    SAPCAR_LINUX_URL_REGEX="$(echo $SAPCAR_LINUX_URL | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')"
    SAPCAR_WINDOWS_URL='https://'$STORAGE_ACCOUNT_NAME'.blob.core.windows.net/sapbits/'$SAPCAR_WINDOWS_NAME
    SAPCAR_WINDOWS_URL_REGEX="$(echo $SAPCAR_WINDOWS_URL | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')"
    HDBSERVER_URL='https://'$STORAGE_ACCOUNT_NAME'.blob.core.windows.net/sapbits/'$HDBSERVER_NAME
    HDBSERVER_URL_REGEX="$(echo $HDBSERVER_URL | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')"
    SAP_HOST_AGENT_URL='https://'$STORAGE_ACCOUNT_NAME'.blob.core.windows.net/sapbits/'$SAP_HOST_AGENT_NAME
    SAP_HOST_AGENT_URL_REGEX="$(echo $SAP_HOST_AGENT_URL | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')"
    HANA_STUDIO_WINDOWS_URL='https://'$STORAGE_ACCOUNT_NAME'.blob.core.windows.net/sapbits/'$HANA_STUDIO_WINDOWS_NAME
    HANA_STUDIO_WINDOWS_URL_REGEX="$(echo $HANA_STUDIO_WINDOWS_URL | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')"
    
    sed -i "s/VAR_SAPCAR_LINUX_URL/$SAPCAR_LINUX_URL_REGEX/" ./terraform.tfvars
    sed -i "s/VAR_SAPCAR_WINDOWS_URL/$SAPCAR_WINDOWS_URL_REGEX/" ./terraform.tfvars
    sed -i "s/VAR_HDBSERVER_URL/$HDBSERVER_URL_REGEX/" ./terraform.tfvars
    sed -i "s/VAR_SAP_HOST_AGENT_URL/$SAP_HOST_AGENT_URL_REGEX/" ./terraform.tfvars
    sed -i "s/VAR_HANA_STUDIO_WINDOWS_URL/$HANA_STUDIO_WINDOWS_URL_REGEX/" ./terraform.tfvars
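The sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g' pipeline escapes the three characters that are special on the right-hand side of a sed substitution (backslash, forward slash, and ampersand), so a URL containing slashes can be injected safely into the tfvars file. In isolation (the storage account name hana12345 is an illustrative placeholder):

```shell
# Escape characters that are special in a sed 's/…/…/' replacement:
# backslash, forward slash, and ampersand.
escape_for_sed() {
  echo "$1" | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g'
}
URL='https://hana12345.blob.core.windows.net/sapbits/file.SAR'
ESCAPED=$(escape_for_sed "$URL")
# Round trip: the escaped form survives substitution into a template line.
LINE=$(echo 'url = "VAR_URL"' | sed "s/VAR_URL/$ESCAPED/")
echo "$LINE"
```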
    
  17. In the Cloud Shell pane, from the Bash prompt, run the following to set the values of passwords for user accounts that will be used to manage the single-node HANA instance:

    SAPADMUSER_PASSWORD='Lab@pa55hv1.0'
    ADMUSER_PASSWORD='Lab@pa55hv1.0'
    DBSYSTEMUSER_PASSWORD='Lab@pa55hv1.0'
    DBXSAADMIN_PASSWORD='Lab@pa55hv1.0'
    DBSYSTEMTENANTUSER_PASSWORD='Lab@pa55hv1.0'
    DBSHINEUSER_PASSWORD='Lab@pa55hv1.0'
    WINDOWS_ADMIN_PASSWORD='Lab@pa55hv1.0'
    
  18. In the Cloud Shell pane, from the Bash prompt, run the following to add the password values to the terraform.tfvars file that contains variables used by Terraform deployment performed in this task:

    sed -i "s/VAR_SAPADMUSER_PASSWORD/$SAPADMUSER_PASSWORD/" ./terraform.tfvars
    sed -i "s/VAR_ADMUSER_PASSWORD/$ADMUSER_PASSWORD/" ./terraform.tfvars
    sed -i "s/VAR_DBSYSTEMUSER_PASSWORD/$DBSYSTEMUSER_PASSWORD/" ./terraform.tfvars
    sed -i "s/VAR_DBXSAADMIN_PASSWORD/$DBXSAADMIN_PASSWORD/" ./terraform.tfvars
    sed -i "s/VAR_DBSYSTEMTENANTUSER_PASSWORD/$DBSYSTEMTENANTUSER_PASSWORD/" ./terraform.tfvars
    sed -i "s/VAR_DBSHINEUSER_PASSWORD/$DBSHINEUSER_PASSWORD/" ./terraform.tfvars
    sed -i "s/VAR_WINDOWS_ADMIN_PASSWORD/$WINDOWS_ADMIN_PASSWORD/" ./terraform.tfvars
    

Task 3: Perform the single node HANA deployment

  1. In the Cloud Shell pane, from the Bash prompt, run the following to initialize Terraform modules and provider plugins necessary to perform Terraform-based single-node HANA deployment:

    terraform init
    
  2. In the Cloud Shell pane, from the Bash prompt, run the following to identify changes to be performed by the Terraform-based single-node HANA deployment:

    terraform plan
    
  3. In the Cloud Shell pane, from the Bash prompt, run the following to initiate Terraform-based single-node HANA deployment:

    terraform apply -auto-approve
    

Note: The deployment takes about 40 minutes to complete.

Exercise 2: Validate and remove the single node HANA deployment

Duration: 20 minutes

In this exercise, you will validate the single-node HANA deployment you performed in the previous exercise by using the Windows bastion host. Once you successfully validate the deployment, you will remove all of its resources.

Task 1: Connect to the single node HANA instance by using SAP HANA Studio

  1. From the lab computer, in the Azure portal, navigate to the blade of the hn1-win-bastion Azure VM operating as the Windows bastion host and initiate a Remote Desktop session. When prompted, sign in with the following credentials:

    • Username: bastion_user

    • Password: Lab@pa55hv1.0

    Note: The user name of the Windows bastion host is set in /deploy/vm/modules/single_node_hana/variables.tf

  2. Within the Remote Desktop session to hn1-win-bastion, start Notepad, and open the hosts file located in C:\Windows\System32\drivers\etc.

  3. Add the following entry to the hosts file, save your changes, and close the file:

    10.0.0.6	hn1-hdb0
    

    Note: 10.0.0.6 is the private IP address assigned to the network interface of the Azure VM hosting the HANA instance.

  4. Within the Remote Desktop session, start SAP HANA Studio Administration.

  5. When prompted to select a workspace, accept the default value, and click Launch.

    In the Workspace Launcher, the Workspace is defined as C:\Users\bastion_user\hdbstudio.

  6. When prompted to provide a password hint, click No.

    A Password Hint Pop-up asks if you want to provide a password hint.

  7. On the Overview page, click Open Administration Console.

    the Open Administration Console link displays on the Overview page.

  8. In the SAP HANA Administration Console, expand the Systems menu, and click Add System.

    The SAP HANA Administration Console displays the empty Systems and Properties nodes, and the Systems menu.

  9. In the Specify System dialog box, specify the following information, and click Next.

    • Host Name: hn1-hdb0

    • Instance number: 01

      The Specify System dialog box displays with the previously defined settings.

  10. In the Connection Properties dialog box, select the Authentication by database user option, specify the following information, and click Finish.

    • User Name: SYSTEM

    • Password: Lab@pa55hv1.0

      The Connection Properties dialog box displays with the previously defined settings.

  11. Once you have successfully connected to hn1-hdb0 as SYSTEM, select the HN1 (SYSTEM) node, click the Administration icon in the Systems toolbar, and then click Open Default Administration.

    On the Systems node toolbar, the Administration icon is selected.

  12. Review the Administration status on the Overview tab and ensure that all services are started.

    In the Configuration and Monitoring view, on the Overview tab, all services are started, active, and in sync.

  13. Switch to the Alerts tab, and verify that there are no alerts indicating operational issues.

    On the SAP HANA Administration Console Alerts tab, no operational issues display.

Task 2: Remove the single node HANA deployment

  1. Switch to the lab computer and, in the Cloud Shell pane, from the Bash prompt, run the following to change the current directory to the one hosting the Terraform and Ansible files that you used for the single node HANA deployment:

    cd ~/sap-hana/deploy/vm/modules/single_node_hana/
    

    Note: If needed, in the Azure portal, restart the Cloud Shell.

  2. In the Cloud Shell pane, from the Bash prompt, run the following to remove all resources provisioned by Terraform-based single-node HANA deployment:

    terraform destroy
    
  3. When prompted, type yes and press the Enter key to continue with the removal of the deployed resources.

Note: Do not wait for the completion of the removal but instead proceed to the next exercise.

Exercise 3: Deploy highly-available HANA instances by using Terraform and Ansible

Duration: 90 minutes

In this exercise, you will implement a highly available deployment of SAP HANA on Azure virtual machines (VMs). Following the initial setup of the Terraform configuration files, the deployment will be fully automated, including installation of all necessary SAP HANA components.

You will leverage a number of artifacts that you already implemented earlier in this lab, including:

  • HANA software that you uploaded to an Azure Storage account in the first task of the first exercise

  • The SSH key pair you generated in the second task of the first exercise

Task 1: Prepare for a highly-available HANA deployment

  1. If needed, in the Azure portal, start the Cloud Shell.

  2. In the Cloud Shell pane, from the Bash prompt, run the following to clone the repository hosting the Terraform and Ansible files that you will use for deployment:

    cd ~
    rm ~/sap-hana/ -r -f
    git clone https://github.com/polichtm/sap-hana.git
    
  3. In the Cloud Shell pane, from the Bash prompt, run the following to change the current directory to the one hosting the Terraform and Ansible files that you will use for deployment:

    cd ~/sap-hana/deploy/vm/modules/ha_pair/
    
  4. In the Cloud Shell pane, from the Bash prompt, run the following to create an Azure AD service principal that will be used during deployment:

    HANA_SP_NAME='hanav1hasp01'
    HANA_SP_ID=$(az ad sp list --display-name $HANA_SP_NAME --query "[0].appId" --output tsv)
    if ! [ -z "$HANA_SP_ID" ]
    then
        az ad sp delete --id $HANA_SP_ID
    fi
    HANA_SP=$(az ad sp create-for-rbac --name $HANA_SP_NAME)
    
  5. In the Cloud Shell pane, from the Bash prompt, run the following to set the variables that will be used during deployment, representing, respectively, the identifier of the Azure subscription and its Azure AD tenant, as well as the application identifier and the corresponding password of the service principal you created in the previous step:

    export AZURE_SUBSCRIPTION_ID=$(az account show | jq -r '.id')
    export AZURE_TENANT=$(az account show | jq -r '.tenantId')
    export AZURE_CLIENT_ID=$(echo $HANA_SP | jq -r '.appId')
    export AZURE_SECRET=$(echo $HANA_SP | jq -r '.password')
    
    export ARM_SUBSCRIPTION_ID=$(az account show | jq -r '.id')
    export ARM_TENANT_ID=$(az account show | jq -r '.tenantId')
    export ARM_CLIENT_ID=$(echo $HANA_SP | jq -r '.appId')
    export ARM_CLIENT_SECRET=$(echo $HANA_SP | jq -r '.password')
    
  6. In the Cloud Shell pane, from the Bash prompt, run the following to retrieve the location in which you created the storage account in the first task of the first exercise of this lab:

    MEDIA_RESOURCE_GROUP_NAME='hanaMedia-RG'
    LOCATION=$(az group show --name $MEDIA_RESOURCE_GROUP_NAME --query location --output tsv)
    
  7. In the Cloud Shell pane, from the Bash prompt, run the following to create the resource group that will host all resources deployed in this task:

    HANA_V1_HA_RESOURCE_GROUP_NAME='hanav1ha-RG'
    az group create --location $LOCATION --name $HANA_V1_HA_RESOURCE_GROUP_NAME
    
  8. In the Cloud Shell pane, from the Bash prompt, run the following to generate a pseudo-random name that will be used as a prefix for DNS names assigned to public IP address resources deployed in this task:

    DOMAIN_NAME=hanav1ha$RANDOM
    
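`$RANDOM` is a Bash variable that expands to a pseudo-random integer between 0 and 32767 each time it is referenced, which keeps the DNS prefix reasonably unique across lab runs. A quick illustration of the resulting shape:

```shell
# $RANDOM yields a fresh pseudo-random integer on every expansion.
DOMAIN_NAME=hanav1ha$RANDOM
echo "$DOMAIN_NAME"
# Confirm the expected shape: the fixed prefix followed by a numeric suffix.
case $DOMAIN_NAME in
    hanav1ha*) echo "valid prefix" ;;
    *) echo "unexpected format" ;;
esac
```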
  9. In the Cloud Shell pane, from the Bash prompt, run the following to specify the size of the Azure VMs to be used to host the highly available HANA instances deployed in this task:

    VM_SIZE='Standard_E8s_v3'
    
  10. In the Cloud Shell pane, from the Bash prompt, run the following to specify the name of the administrative user account for the Linux VMs deployed in this task:

    VM_USERNAME='labuser'
    
  11. In the Cloud Shell pane, from the Bash prompt, run the following to add the values you specified to the terraform.tfvars file that contains variables used by the Terraform deployment performed in this task:

    sed -i "s/VAR_RESOURCE_GROUP/$HANA_V1_HA_RESOURCE_GROUP_NAME/" ./terraform.tfvars
    sed -i "s/VAR_LOCATION/$LOCATION/" ./terraform.tfvars
    sed -i "s/VAR_DOMAIN_NAME/$DOMAIN_NAME/" ./terraform.tfvars
    sed -i "s/VAR_VM_SIZE/$VM_SIZE/" ./terraform.tfvars
    sed -i "s/VAR_VM_USERNAME/$VM_USERNAME/" ./terraform.tfvars
    
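The `sed -i` commands above perform simple in-place placeholder substitution. A self-contained sketch of the same technique against a throwaway file (the `/tmp/demo.tfvars` path and its contents are illustrative only):

```shell
# Create a throwaway tfvars file containing a placeholder token.
echo 'resource_group = "VAR_RESOURCE_GROUP"' > /tmp/demo.tfvars
HANA_V1_HA_RESOURCE_GROUP_NAME='hanav1ha-RG'
# In-place substitution, the same technique applied to the real terraform.tfvars.
sed -i "s/VAR_RESOURCE_GROUP/$HANA_V1_HA_RESOURCE_GROUP_NAME/" /tmp/demo.tfvars
cat /tmp/demo.tfvars
```

Note that the substituted value is expanded by the shell inside the double-quoted `sed` expression, which is why values containing `/` need the extra escaping applied later in this exercise.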
  12. In the Cloud Shell pane, from the Bash prompt, run the following to identify the name of the storage account containing the media files, which you configured and populated in the previous task of this exercise:

    STORAGE_ACCOUNT_NAME=$(az storage account list --resource-group $MEDIA_RESOURCE_GROUP_NAME --query "[?starts_with(name,'hana')].[name]" --output tsv)
    
  13. In the Cloud Shell pane, from the Bash prompt, run the following to specify the names of the software packages you uploaded to the storage account in the previous task of this exercise:

    SAPCAR_LINUX_NAME='SAPCAR_1211-80000935.EXE'
    SAPCAR_WINDOWS_NAME='SAPCAR_1211-80000938.EXE'
    HDBSERVER_NAME='IMDB_SERVER100_122_24-10009569.SAR'
    SAP_HOST_AGENT_NAME='SAPHOSTAGENT36_36-20009394.SAR'
    HANA_STUDIO_WINDOWS_NAME='IMC_STUDIO2_240_0-80000323.SAR' 
    

    Note: The packages listed above might be superseded by newer versions. If so, be sure to adjust the package names accordingly.

  14. In the Cloud Shell pane, from the Bash prompt, run the following to add the values you specified to the terraform.tfvars file that contains variables used by the Terraform deployment performed in this task:

    SAPCAR_LINUX_URL='https://'$STORAGE_ACCOUNT_NAME'.blob.core.windows.net/sapbits/'$SAPCAR_LINUX_NAME
    SAPCAR_LINUX_URL_REGEX="$(echo $SAPCAR_LINUX_URL | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')"
    SAPCAR_WINDOWS_URL='https://'$STORAGE_ACCOUNT_NAME'.blob.core.windows.net/sapbits/'$SAPCAR_WINDOWS_NAME
    SAPCAR_WINDOWS_URL_REGEX="$(echo $SAPCAR_WINDOWS_URL | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')"
    HDBSERVER_URL='https://'$STORAGE_ACCOUNT_NAME'.blob.core.windows.net/sapbits/'$HDBSERVER_NAME
    HDBSERVER_URL_REGEX="$(echo $HDBSERVER_URL | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')"
    SAP_HOST_AGENT_URL='https://'$STORAGE_ACCOUNT_NAME'.blob.core.windows.net/sapbits/'$SAP_HOST_AGENT_NAME
    SAP_HOST_AGENT_URL_REGEX="$(echo $SAP_HOST_AGENT_URL | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')"
    HANA_STUDIO_WINDOWS_URL='https://'$STORAGE_ACCOUNT_NAME'.blob.core.windows.net/sapbits/'$HANA_STUDIO_WINDOWS_NAME
    HANA_STUDIO_WINDOWS_URL_REGEX="$(echo $HANA_STUDIO_WINDOWS_URL | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')"
    
    sed -i "s/VAR_SAPCAR_LINUX_URL/$SAPCAR_LINUX_URL_REGEX/" ./terraform.tfvars
    sed -i "s/VAR_SAPCAR_WINDOWS_URL/$SAPCAR_WINDOWS_URL_REGEX/" ./terraform.tfvars
    sed -i "s/VAR_HDBSERVER_URL/$HDBSERVER_URL_REGEX/" ./terraform.tfvars
    sed -i "s/VAR_SAP_HOST_AGENT_URL/$SAP_HOST_AGENT_URL_REGEX/" ./terraform.tfvars
    sed -i "s/VAR_HANA_STUDIO_WINDOWS_URL/$HANA_STUDIO_WINDOWS_URL_REGEX/" ./terraform.tfvars    
    
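The `_REGEX` variables above escape backslashes, forward slashes, and ampersands, since all three are significant on the right-hand side of a `sed` `s///` expression (an unescaped `/` in a URL would otherwise terminate the expression prematurely). A sketch with an illustrative URL and file:

```shell
# Illustrative URL; the real one is built from the storage account name.
URL='https://example.blob.core.windows.net/sapbits/FILE.SAR'
# Escape \, / and & so the URL is safe as a sed replacement string.
URL_REGEX="$(echo "$URL" | sed -e 's/\\/\\\\/g; s/\//\\\//g; s/&/\\\&/g')"
echo 'url = "VAR_URL"' > /tmp/demo_url.tfvars
sed -i "s/VAR_URL/$URL_REGEX/" /tmp/demo_url.tfvars
cat /tmp/demo_url.tfvars
```

After the substitution, the file contains the original, unescaped URL; the escaping only protects the `sed` expression itself.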
  15. In the Cloud Shell pane, from the Bash prompt, run the following to set the values of passwords for user accounts that will be used to manage the highly available HANA instances:

    SAPADMUSER_PASSWORD='Lab@pa55hv1.0'
    ADMUSER_PASSWORD='Lab@pa55hv1.0'
    DBSYSTEMUSER_PASSWORD='Lab@pa55hv1.0'
    DBXSAADMIN_PASSWORD='Lab@pa55hv1.0'
    DBSYSTEMTENANTUSER_PASSWORD='Lab@pa55hv1.0'
    DBSHINEUSER_PASSWORD='Lab@pa55hv1.0'
    WINDOWS_ADMIN_PASSWORD='Lab@pa55hv1.0'
    HA_CLUSTER_NODES_PASSWORD='Lab@pa55hv1.0'
    
  16. In the Cloud Shell pane, from the Bash prompt, run the following to add the password values to the terraform.tfvars file that contains variables used by the Terraform deployment performed in this task:

    sed -i "s/VAR_SAPADMUSER_PASSWORD/$SAPADMUSER_PASSWORD/" ./terraform.tfvars
    sed -i "s/VAR_ADMUSER_PASSWORD/$ADMUSER_PASSWORD/" ./terraform.tfvars
    sed -i "s/VAR_DBSYSTEMUSER_PASSWORD/$DBSYSTEMUSER_PASSWORD/" ./terraform.tfvars
    sed -i "s/VAR_DBXSAADMIN_PASSWORD/$DBXSAADMIN_PASSWORD/" ./terraform.tfvars
    sed -i "s/VAR_DBSYSTEMTENANTUSER_PASSWORD/$DBSYSTEMTENANTUSER_PASSWORD/" ./terraform.tfvars
    sed -i "s/VAR_DBSHINEUSER_PASSWORD/$DBSHINEUSER_PASSWORD/" ./terraform.tfvars
    sed -i "s/VAR_WINDOWS_ADMIN_PASSWORD/$WINDOWS_ADMIN_PASSWORD/" ./terraform.tfvars
    sed -i "s/VAR_HA_CLUSTER_NODES_PASSWORD/$HA_CLUSTER_NODES_PASSWORD/" ./terraform.tfvars
    

Task 2: Perform the highly-available HANA deployment

  1. In the Cloud Shell pane, from the Bash prompt, run the following to initialize Terraform modules and provider plugins necessary to perform the Terraform-based highly-available HANA deployment:

    terraform init
    
  2. In the Cloud Shell pane, from the Bash prompt, run the following to identify changes to be performed by the Terraform-based highly-available HANA deployment:

    terraform plan
    
  3. In the Cloud Shell pane, from the Bash prompt, run the following to initiate Terraform-based highly-available HANA deployment:

    terraform apply -auto-approve
    

Note: The deployment takes about 60 minutes to complete.

Exercise 4: Validate and remove the deployment of the highly-available HANA instances

Duration: 60 minutes

In this exercise, you will validate the deployment of the highly-available HANA instances you performed in the previous exercise by using the Windows bastion host. Once you have successfully validated the deployment, you will remove all of its resources.

Task 1: Connect to the highly-available HANA instances by using SAP HANA Studio

  1. From the lab computer, in the Azure portal, navigate to the blade of the hn1-win-bastion Azure VM operating as the Windows bastion host and initiate a Remote Desktop session. When prompted, sign in with the following credentials:

    • Username: bastion_user

    • Password: Lab@pa55hv1.0

    Note: The user name of the Windows bastion host is set in /deploy/vm/modules/ha_pair/variables.tf

  2. Within the Remote Desktop session to hn1-win-bastion, navigate to the Local Server view in the Server Manager window and disable IE Enhanced Security Configuration.

  3. Within the Remote Desktop session to hn1-win-bastion, start Notepad, and open the hosts file located in C:\Windows\System32\drivers\etc.

  4. Add the following entries to the hosts file, save your changes, and close the file:

    10.0.0.13	hdbha        
    

    Note: 10.0.0.13 is the IP address assigned to the front end of the Azure Internal Load Balancer that distributes network traffic to the Azure VMs hosting highly-available HANA instances.

  5. Within the Remote Desktop session to hn1-win-bastion, start SAP HANA Studio.

  6. When prompted to select a workspace, accept the default value, and click Launch.

    In the Workspace Launcher, the Workspace is defined as C:\Users\bastion_user\hdbstudio.

  7. When prompted to provide a password hint, click No.

    A Password Hint Pop-up asks if you want to provide a password hint.

  8. On the Overview page, click Open Administration Console.

    The Open Administration Console link displays on the Overview page.

  9. In the SAP HANA Administration Console, expand the Systems menu, and click Add System.

    The SAP HANA Administration Console displays the empty Systems and Properties nodes, and the Systems menu.

  10. In the Specify System dialog box, specify the following information, and click Next.

    • Host Name: hdbha

    • Instance number: 01

      The Specify System dialog box displays with the previously defined settings.

  11. In the Connection Properties dialog box, select the Authentication by database user option, specify the following information, and click Finish.

    • User Name: SYSTEM

    • Password: Lab@pa55hv1.0

      The Connection Properties dialog box displays with the previously defined settings.

  12. Once you have successfully connected to HN1 as SYSTEM, select the HN1 (SYSTEM) node and click the System Monitor icon in the Systems toolbar.

    On the Systems node toolbar, the System Monitor icon is selected.

  13. Review the System Monitor view.

    The SAP HANA Administration Console System Monitor tab displays the System Monitor status.

    Note: It typically takes a few minutes before the operational state is fully identified.

  14. Right-click the HN1 (SYSTEM) node and, in the right-click menu, click Configuration and Monitoring followed by Open Administration.

    On the Systems node, HN1 (System) is selected. From its right-click menu, Configuration and Monitoring is selected.

  15. In the Configuration and Monitoring view, examine the Overview tab. Verify that all services are started, active, and in sync. You might need to wait a few minutes before the operational state is identified.

    In the Configuration and Monitoring view, on the Overview tab, all services are started, active, and in sync.

  16. Switch to the Alerts tab, and verify that it does not indicate any operational issues.

    On the SAP HANA Administration Console Alerts tab, no operational issues display.

Task 2: Connect to the highly-available HANA instances by using Hawk

  1. From the lab computer, in the Azure portal, navigate to the Virtual machines blade.

  2. On the Virtual machines blade, click the ellipsis (...) to the right of the hn1-hdb0 entry and, in the drop-down menu, click Connect.

  3. On the Connect to virtual machine blade, copy the command displayed under Login using VM local account.

  4. In the Azure portal, start a Bash session within the Cloud Shell and paste the command you copied in the previous step. The command will be in the following format (where <dns_name> designates the fully qualified DNS name of the public IP address assigned to the Azure VM):

    ssh labuser@<dns_name>
    
  5. When prompted whether you want to continue connecting, type yes and press Enter.

  6. Within the SSH session, reset the password of the hacluster account to Lab@pa55hv1.0 by running the following and following the prompts:

    sudo passwd hacluster
    
  7. Switch back to the Virtual machines blade, click the ellipsis (...) to the right of the hn1-hdb1 entry and, in the drop-down menu, click Connect.

  8. On the Connect to virtual machine blade, copy the command displayed under Login using VM local account.

  9. In the Azure portal, start another Bash session within the Cloud Shell and paste the command you copied in the previous step. The command will be in the following format (where <dns_name> designates the fully qualified DNS name of the public IP address assigned to the Azure VM):

    ssh labuser@<dns_name>
    
  10. When prompted whether you want to continue connecting, type yes and press Enter.

  11. Within the SSH session, reset the password of the hacluster account to Lab@pa55hv1.0 by running the following and following the prompts:

    sudo passwd hacluster
    
  12. Switch to the Remote Desktop session to hn1-win-bastion Azure VM, start Internet Explorer, and browse to https://hn1-hdb0:7630. On the SUSE Hawk Sign in page, sign in as hacluster with the password Lab@pa55hv1.0.

  13. Once you sign in, review the Resources tab on the Status page.

    On the Status page, the Resources tab displays several resources in varying states of readiness.

  14. Next, switch to the Nodes tab on the Status page.

    On the Status page, the Nodes tab is selected, and displays two nodes.

  15. Switch back to the Resources tab and use the magnifying glass icon to examine the state of the HANA resources, starting with SAPHANATopology.

    A page displays with the SAPHANATopology resource details.

  16. Close the SAPHANATopology pane and use the magnifying glass icon to examine the state of the SAPHana resource.

    A page displays with the SAPHana resource details.

Task 3: Test a failover

  1. Within the Remote Desktop session to hn1-win-bastion Azure VM, in the Internet Explorer window displaying the SUSE Hawk page, from the msl_SAPHana_HN1_HDB01 pane, identify the system currently serving the master role. Close the msl_SAPHana_HN1_HDB01 pane.

    The same page displays with the SAPHana resource details.

  2. Switch to the lab computer and, in the Azure portal, identify the Bash session that hosts the SSH session to the Azure VM you identified in the previous step.

    Note: You might need to restart the Bash session and reestablish the SSH session. If so, follow the instructions in the previous task.

  3. Within the SSH session, initiate a failover by running the following:

    sudo service pacemaker stop
    
  4. Switch back to the Remote Desktop session to hn1-win-bastion Azure VM and, in the Internet Explorer window displaying the SUSE Hawk page, observe how the status of the resource changes first to a question mark and then to a blue dot.

    On the Status page, the Resources tab displays, with a resource selected whose status has a question mark.

  5. Start Internet Explorer, and browse to https://hn1-hdb1:7630. On the SUSE Hawk Sign in page, sign in as hacluster with the password Lab@pa55hv1.0.

  6. In the Internet Explorer window displaying the SUSE Hawk resources status page, identify the system currently serving the master role for the msl_SAPHana_HN1_HDB01 resource:

    The resource selected now has a blue dot under Status.

  7. Switch to SAP HANA Administration Console, and refresh the Overview tab in the Configuration and Monitoring view. Note that SAP HANA is running at this point on the hn1-hdb1 node, and it is operational:

    In the Configuration and Monitoring view, the Overview tab displays information for the hn1-hdb1 node.

    Note: You might need to wait a few minutes before the operational state is identified.

  8. Switch to the lab computer, in the Azure portal and, in the SSH session in Cloud Shell, start the pacemaker service by running the following:

    sudo service pacemaker start
    
  9. Terminate the SSH session by running the following:

    exit
    
  10. Switch back to the Remote Desktop session to hn1-win-bastion Azure VM, switch to the SUSE Hawk Status page at https://hn1-hdb1:7630, and observe how the SAPHana clustered resource status is changing to operational on both hn1-hdb0 and hn1-hdb1 with hn1-hdb1 as the primary (you might need to wait a few minutes for the interface to refresh):

    On the Resources tab, the SAPHana line now displays a blue dot.

Task 4: Test a migration

  1. Within the Remote Desktop session to hn1-win-bastion Azure VM, ensure that you are viewing the SUSE Hawk Status page at https://hn1-hdb1:7630.

    On the Resources tab, the SAPHana line now displays a blue dot.

  2. From the SUSE Hawk Status page at https://hn1-hdb1:7630, select the Migrate option of the SAPHana clustered resource.

    On the Resources tab, the SAPHana line now displays a menu containing the Maintenance, Migrate, Cleanup, Recent events, and Edit options.

  3. From the SUSE Hawk Status page at https://hn1-hdb1:7630, select the hn1-hdb0 node as the migration target.

    In the Migrate msl_SAPHana_HN1_HDB01 dialog box, the entry hn1-hdb0 is selected.

  4. On the SUSE Hawk Status page, note that the status of SAPHana clustered resource is listed with a question mark and a couple of chain link icons representing constraints.

    On the Resources tab, the SAPHana line now displays a question mark and two constraints symbols.

  5. To remediate this, use the vertical menu on the left-hand side of the SUSE Hawk Status page to switch to the Edit Configuration page and display its Constraints tab.

    The Constraints tab is selected on the Edit Configuration page.

    Note: Ban and prefer location constraints are generated automatically during a migration operation in order to prevent an unintended failback to the original cluster node. However, such constraints should be removed once the original node is available to host cluster resources.

    Note: Make sure not to accidentally remove other constraints.

  6. From the Constraints page, delete the cli-prefer-msl_SAPHana_HN1_HDB01 Location constraint.

    Under Operations, the Delete constraint icon is selected for cli-prefer-msl_SAPHana_HN1_HDB01.

  7. From the Constraints page, delete the cli-ban-msl_SAPHana_HN1_HDB01-on-hn1-hdb1 Location constraint.

    Under Operations, the Delete constraint icon is selected for cli-ban-msl_SAPHana_HN1_HDB01-on-hn1-hdb1.

  8. Switch to the SUSE Hawk Status page, and verify the SAPHana clustered resource is operational on both nodes with hn1-hdb0 as the master (you might need to wait a few minutes for the interface to refresh).

    The Resources tab is selected on the Status page.

  9. Switch to SAP HANA Administration Console, and refresh the Overview tab in the Configuration and Monitoring view. Note that SAP HANA is running at this point on the hn1-hdb0 node and is operational.

    In the Configuration and Monitoring view, on the Overview tab, details display for the hn1-hdb0 node.

Task 5: Test fencing

  1. Switch to the lab computer, in the Azure portal, navigate to the Virtual machines blade, and stop the Azure virtual machine to which you migrated HANA resources in the previous task.

  2. Switch back to the Remote Desktop session to hn1-win-bastion Azure VM, wait until the status of the msl_SAPHana_HN1_HDB01 resource in the Internet Explorer window displaying connection to https://hn1-hdb1:7630 changes from a question mark to a blue dot, and verify that its location changed to hn1-hdb1.

    On the Resources tab, the status of the resource has a blue dot, and its location is hn1-hdb1.

  3. Switch to SAP HANA Administration Console, and refresh the Overview tab in the Configuration and Monitoring view. Note that SAP HANA is running at this point on the hn1-hdb1 node.

    In the Configuration and Monitoring view, on the Overview tab, details display for SAP HANA.

  4. In SAP HANA Administration Console, navigate to the System Replication sub-tab of the Landscape tab of the Configuration and Monitoring view. Note that replication status indicates that the communication channel is closed.

    In the Configuration and Monitoring view, on the System Replication sub-tab of the Landscape tab, details display replication status errors for SAP HANA.

    Note: You might need to wait a few minutes before the operational state is identified.

  5. Switch to the lab computer, in the Azure portal, navigate to the Virtual machines blade, start the hn1-hdb0 virtual machine and wait until it is running again.

  6. Switch back to the Remote Desktop session to hn1-win-bastion Azure VM and, on the SUSE Hawk Status page at https://hn1-hdb1:7630 note that the SAPHana clustered resource is operational on both hn1-hdb0 and hn1-hdb1 with hn1-hdb1 as the primary (you might need to wait a few minutes for the interface to refresh):

    On the Resources tab, the SAPHana line now displays a blue dot.

  7. Within the Remote Desktop session to hn1-win-bastion Azure VM, switch to the SAP HANA Administration Console and, on the System Replication sub-tab of the Landscape tab of the Configuration and Monitoring view, note that replication status is active.

    In the Configuration and Monitoring view, on the System Replication sub-tab of the Landscape tab, details display active replication status for SAP HANA.

  8. In SAP HANA Administration Console, switch to the Overview tab in the Configuration and Monitoring view. Note that SAP HANA continues running on the hn1-hdb1 node and is fully operational.

    In the Configuration and Monitoring view, on the Overview tab, details display for the hn1-hdb1 node.

Task 6: Remove the highly-available HANA deployment

  1. Switch to the lab computer and, in the first Cloud Shell pane, from the Bash prompt, run the following to change the current directory to the one hosting the Terraform and Ansible files that you used for the highly-available HANA deployment:

    cd ~/sap-hana/deploy/vm/modules/ha_pair/
    

    Note: If needed, in the Azure portal, restart the Cloud Shell.

  2. In the Cloud Shell pane, from the Bash prompt, run the following to remove all resources provisioned by the Terraform-based highly-available HANA deployment:

    terraform destroy
    
  3. When prompted to confirm, type yes and press the Enter key to continue with the removal.

Note: Wait for the completion of the removal before you proceed to the next task.

Make sure to complete all After the Hands-on lab steps below.

After the Hands-on lab

Duration: 5 minutes

After completing the hands-on lab, you will remove the resource group and any remaining resources.

Task 1: Remove the resource group containing all Azure resources deployed in this lab

  1. From the lab computer, in the Azure portal at https://portal.azure.com, click the Cloud Shell icon.

  2. If prompted, in the Welcome to Azure Cloud Shell window, click Bash (Linux).

  3. At the Bash prompt, run the following:

    if [ "$(az group exists --name hanav1sn-RG)" = 'true' ]
    then
        az group delete --name hanav1sn-RG --no-wait --yes
    fi
    if [ "$(az group exists --name hanav1ha-RG)" = 'true' ]
    then
        az group delete --name hanav1ha-RG --no-wait --yes
    fi
    if [ "$(az group exists --name hanaMedia-RG)" = 'true' ]
    then
        az group delete --name hanaMedia-RG --no-wait --yes
    fi
    cd ~
    rm -r -f ~/sap-hana/
    
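`az group exists` prints the string true or false to standard output, so a reliable way to branch on it is to capture that output with `$( )` and compare it as a string rather than placing the command directly inside `[ ]`. The pattern in isolation, with `echo true` standing in for the `az` call:

```shell
# `echo true` stands in for `az group exists --name <group>`, which prints
# the string "true" or "false" to standard output.
exists=$(echo true)
if [ "$exists" = "true" ]
then
    echo "group exists - safe to delete"
fi
```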

Attribution

This content was originally posted here:
https://github.com/Microsoft/MCW-SAP-Hana-on-Azure

License

This content is licensed under the MIT License.

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.