dialog assisted notebooks (#6564)

This commit is contained in:
Alan Ren
2019-08-05 16:01:34 -07:00
committed by GitHub
parent 2431bb8e37
commit 2bb8806da6
13 changed files with 1074 additions and 225 deletions


@@ -6,7 +6,7 @@
},
"language_info": {
"name": "python",
"version": "3.6.6",
"version": "3.7.3",
"mimetype": "text/x-python",
"codemirror_mode": {
"name": "ipython",
@@ -22,12 +22,33 @@
"cells": [
{
"cell_type": "markdown",
"source": "![Microsoft](https://raw.githubusercontent.com/microsoft/azuredatastudio/master/src/sql/media/microsoft-small-logo.png)\n \n## Create Azure Kubernetes Service cluster and deploy SQL Server 2019 CTP 3.2 big data cluster\n \nThis notebook walks through the process of creating a new Azure Kubernetes Service cluster first, and then deploys a <a href=\"https://docs.microsoft.com/sql/big-data-cluster/big-data-cluster-overview?view=sqlallproducts-allversions\">SQL Server 2019 CTP 3.2 big data cluster</a> on the newly created AKS cluster.\n \n* Follow the instructions in the **Prerequisites** cell to install the tools if not already installed.\n* The **Required information** cell will prompt you for a password that will be used to access the cluster controller, SQL Server, and Knox.\n* The values in the **Azure settings** and **Default settings** cell can be changed as appropriate.",
"source": [
"![Microsoft](https://raw.githubusercontent.com/microsoft/azuredatastudio/master/src/sql/media/microsoft-small-logo.png)\n",
" \n",
"## Create Azure Kubernetes Service cluster and deploy SQL Server 2019 CTP 3.2 big data cluster\n",
" \n",
"This notebook walks through the process of creating a new Azure Kubernetes Service cluster first, and then deploys a <a href=\"https://docs.microsoft.com/sql/big-data-cluster/big-data-cluster-overview?view=sqlallproducts-allversions\">SQL Server 2019 CTP 3.2 big data cluster</a> on the newly created AKS cluster.\n",
" \n",
"* Follow the instructions in the **Prerequisites** cell to install the tools if not already installed.\n",
"* The **Required information** cell will prompt you for a password that will be used to access the cluster controller, SQL Server, and Knox.\n",
"* The values in the **Azure settings** and **Default settings** cell can be changed as appropriate.\n",
"\n",
"<span style=\"color:red\"><font size=\"3\">Please press the \"Run Cells\" button to run the notebook</font></span>"
],
"metadata": {}
},
{
"cell_type": "markdown",
"source": "### **Prerequisites**\nEnsure the following tools are installed and added to PATH before proceeding.\n\n|Tools|Description|Installation|\n|---|---|---|\n| Azure CLI |Command-line tool for managing Azure services. Used to create AKS cluster | [Installation](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest) |\n|kubectl | Command-line tool for monitoring the underlying Kuberentes cluster | [Installation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-native-package-management) |\n|azdata | Command-line tool for installing and managing a big data cluster |[Installation](https://docs.microsoft.com/en-us/sql/big-data-cluster/deploy-install-azdata?view=sqlallproducts-allversions) |",
"source": [
"### **Prerequisites**\n",
"Ensure the following tools are installed and added to PATH before proceeding.\n",
"\n",
"|Tools|Description|Installation|\n",
"|---|---|---|\n",
"|Azure CLI |Command-line tool for managing Azure services. Used to create AKS cluster | [Installation](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest) |\n",
"|kubectl | Command-line tool for monitoring the underlying Kubernetes cluster | [Installation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-native-package-management) |\n",
"|azdata | Command-line tool for installing and managing a big data cluster |[Installation](https://docs.microsoft.com/en-us/sql/big-data-cluster/deploy-install-azdata?view=sqlallproducts-allversions) |"
],
"metadata": {}
},
{
@@ -37,7 +58,29 @@
},
{
"cell_type": "code",
"source": "import sys\r\ndef run_command():\r\n print(\"Executing: \" + cmd)\r\n !{cmd}\r\n if _exit_code != 0:\r\n sys.exit(f'Command execution failed with exit code: {str(_exit_code)}.\\n\\t{cmd}\\n')\r\n print(f'Successfully executed: {cmd}')\r\n\r\ncmd = 'az --version'\r\nrun_command()\r\ncmd = 'kubectl version --client=true'\r\nrun_command()\r\ncmd = 'azdata --version'\r\nrun_command()",
"source": [
"import pandas,sys,os,getpass,time,json,html\r\n",
"pandas_version = pandas.__version__.split('.')\r\n",
"pandas_major = int(pandas_version[0])\r\n",
"pandas_minor = int(pandas_version[1])\r\n",
"pandas_patch = int(pandas_version[2])\r\n",
"if not (pandas_major > 0 or (pandas_major == 0 and pandas_minor > 24) or (pandas_major == 0 and pandas_minor == 24 and pandas_patch >= 2)):\r\n",
" sys.exit('Please upgrade the Notebook dependency before you can proceed, you can do it by running the \"Reinstall Notebook dependencies\" command in Azure Data Studio.')\r\n",
"\r\n",
"def run_command():\r\n",
" print(\"Executing: \" + cmd)\r\n",
" !{cmd}\r\n",
" if _exit_code != 0:\r\n",
" sys.exit(f'Command execution failed with exit code: {str(_exit_code)}.\\n\\t{cmd}\\n')\r\n",
" print(f'Successfully executed: {cmd}')\r\n",
"\r\n",
"cmd = 'az --version'\r\n",
"run_command()\r\n",
"cmd = 'kubectl version --client=true'\r\n",
"run_command()\r\n",
"cmd = 'azdata --version'\r\n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 1
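For readers following along outside Azure Data Studio: the `!{cmd}` / `_exit_code` pattern in the cell above is IPython-specific. A plain-Python sketch of the same version gate and command runner (the name `pandas_is_new_enough` and the `subprocess` fallback are illustrative, not part of the notebook, and plain `X.Y.Z` version strings are assumed):

```python
import subprocess
import sys

MIN_PANDAS = (0, 24, 2)  # minimum version the notebook requires

def pandas_is_new_enough(version_string):
    # Tuple comparison replaces the chained major/minor/patch checks above.
    parts = tuple(int(p) for p in version_string.split('.')[:3])
    return parts >= MIN_PANDAS

def run_command(cmd):
    # subprocess.run stands in for IPython's !{cmd} and _exit_code magic.
    print("Executing: " + cmd)
    result = subprocess.run(cmd, shell=True)
    if result.returncode != 0:
        sys.exit(f'Command execution failed with exit code: {result.returncode}.\n\t{cmd}\n')
    print(f'Successfully executed: {cmd}')
```

Note that pre-release strings such as `0.25.0rc1` would make `int()` raise; the notebook assumes plain release versions.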
@@ -49,19 +92,49 @@
},
{
"cell_type": "code",
"source": "import getpass\nmssql_password = getpass.getpass(prompt = 'SQL Server 2019 big data cluster controller password')\nif mssql_password == \"\":\n sys.exit(f'Password is required')\nconfirm_password = getpass.getpass(prompt = 'Confirm password')\nif mssql_password != confirm_password:\n sys.exit(f'Passwords do not match.')\nprint('Password accepted, you can also use the same password to access Knox and SQL Server.')",
"source": [
"env_var_flag = \"AZDATA_NB_VAR_BDC_CONTROLLER_PASSWORD\" in os.environ\n",
"if env_var_flag:\n",
" mssql_password = os.environ[\"AZDATA_NB_VAR_BDC_CONTROLLER_PASSWORD\"]\n",
"else: \n",
" mssql_password = getpass.getpass(prompt = 'SQL Server 2019 big data cluster controller password')\n",
" if mssql_password == \"\":\n",
" sys.exit(f'Password is required.')\n",
" confirm_password = getpass.getpass(prompt = 'Confirm password')\n",
" if mssql_password != confirm_password:\n",
" sys.exit(f'Passwords do not match.')\n",
"print('Password accepted, you can also use the same password to access Knox and SQL Server.')"
],
"metadata": {},
"outputs": [],
"execution_count": 2
},
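The env-var-or-prompt logic above is what lets the same notebook run both interactively and from the deployment dialog. Isolated as a reusable sketch (the function name and the injectable `prompt` parameter are mine, added for testability):

```python
import getpass
import os

def get_controller_password(env_name='AZDATA_NB_VAR_BDC_CONTROLLER_PASSWORD',
                            prompt=getpass.getpass):
    # Prefer the value injected by the deployment dialog; otherwise prompt
    # twice and insist the two entries match, as the cell above does.
    if env_name in os.environ:
        return os.environ[env_name]
    password = prompt('SQL Server 2019 big data cluster controller password')
    if not password:
        raise ValueError('Password is required.')
    if prompt('Confirm password') != password:
        raise ValueError('Passwords do not match.')
    return password
```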
{
"cell_type": "markdown",
"source": "### **Azure settings**\n*Subscription ID*: visit <a href=\"https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade\">here</a> to find out the subscriptions you can use, if you leave it unspecified, the default subscription will be used.\n\n*VM Size*: visit <a href=\"https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes\">here</a> to find out the available VM sizes you could use. \n \n*Region*: visit <a href=\"https://azure.microsoft.com/en-us/global-infrastructure/services/?products=kubernetes-service\">here</a> to find out the Azure regions where the Azure Kubernettes Service is available.",
"source": [
"### **Azure settings**\n",
"*Subscription ID*: visit <a href=\"https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade\">here</a> to find out the subscriptions you can use. If you leave it unspecified, the default subscription will be used.\n",
"\n",
"*VM Size*: visit <a href=\"https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes\">here</a> to find out the available VM sizes you could use. \n",
" \n",
"*Region*: visit <a href=\"https://azure.microsoft.com/en-us/global-infrastructure/services/?products=kubernetes-service\">here</a> to find out the Azure regions where the Azure Kubernetes Service is available."
],
"metadata": {}
},
{
"cell_type": "code",
"source": "azure_subscription_id = \"\"\nazure_vm_size = \"Standard_E4s_v3\"\nazure_region = \"eastus\"\nazure_vm_count = int(5)",
"source": [
"if env_var_flag:\n",
" azure_subscription_id = os.environ[\"AZDATA_NB_VAR_BDC_AZURE_SUBSCRIPTION\"]\n",
" azure_vm_size = os.environ[\"AZDATA_NB_VAR_BDC_AZURE_VM_SIZE\"]\n",
" azure_region = os.environ[\"AZDATA_NB_VAR_BDC_AZURE_REGION\"]\n",
" azure_vm_count = int(os.environ[\"AZDATA_NB_VAR_BDC_VM_COUNT\"])\n",
"else:\n",
" azure_subscription_id = \"\"\n",
" azure_vm_size = \"Standard_E4s_v3\"\n",
" azure_region = \"eastus\"\n",
" azure_vm_count = int(5)"
],
"metadata": {},
"outputs": [],
"execution_count": 3
@@ -73,31 +146,73 @@
},
{
"cell_type": "code",
"source": "import time\nmssql_cluster_name = 'mssql-cluster'\nmssql_controller_username = 'admin'\nazure_resource_group = mssql_cluster_name + '-' + time.strftime(\"%Y%m%d%H%M%S\", time.localtime())\naks_cluster_name = azure_resource_group\nconfiguration_profile = 'aks-dev-test'\nconfiguration_folder = 'mssql-bdc-configuration'\nprint(f'Azure subscription: {azure_subscription_id}')\nprint(f'Azure VM size: {azure_vm_size}')\nprint(f'Azure VM count: {str(azure_vm_count)}')\nprint(f'Azure region: {azure_region}')\nprint(f'Azure resource group: {azure_resource_group}')\nprint(f'AKS cluster name: {aks_cluster_name}')\nprint(f'SQL Server big data cluster name: {mssql_cluster_name}')\nprint(f'SQL Server big data cluster controller user name: {mssql_controller_username}')\nprint(f'Deployment configuration profile: {configuration_profile}')\nprint(f'Deployment configuration: {configuration_folder}')",
"source": [
"if env_var_flag:\n",
" mssql_cluster_name = os.environ[\"AZDATA_NB_VAR_BDC_NAME\"]\n",
" mssql_controller_username = os.environ[\"AZDATA_NB_VAR_BDC_CONTROLLER_USERNAME\"]\n",
" azure_resource_group = os.environ[\"AZDATA_NB_VAR_BDC_RESOURCEGROUP_NAME\"]\n",
" aks_cluster_name = os.environ[\"AZDATA_NB_VAR_BDC_AKS_NAME\"]\n",
"else:\n",
" mssql_cluster_name = 'mssql-cluster'\n",
" mssql_controller_username = 'admin'\n",
" azure_resource_group = mssql_cluster_name + '-' + time.strftime(\"%Y%m%d%H%M%S\", time.localtime())\n",
" aks_cluster_name = azure_resource_group\n",
"configuration_profile = 'aks-dev-test'\n",
"configuration_folder = 'mssql-bdc-configuration'\n",
"print(f'Azure subscription: {azure_subscription_id}')\n",
"print(f'Azure VM size: {azure_vm_size}')\n",
"print(f'Azure VM count: {str(azure_vm_count)}')\n",
"print(f'Azure region: {azure_region}')\n",
"print(f'Azure resource group: {azure_resource_group}')\n",
"print(f'AKS cluster name: {aks_cluster_name}')\n",
"print(f'SQL Server big data cluster name: {mssql_cluster_name}')\n",
"print(f'SQL Server big data cluster controller user name: {mssql_controller_username}')\n",
"print(f'Deployment configuration profile: {configuration_profile}')\n",
"print(f'Deployment configuration: {configuration_folder}')"
],
"metadata": {},
"outputs": [],
"execution_count": 4
},
{
"cell_type": "markdown",
"source": "### **Login to Azure**\n\nThis will open a web browser window to enable credentials to be entered. If this cells is hanging forever, it might be because your Web browser windows is waiting for you to enter your Azure credentials!\n",
"source": [
"### **Login to Azure**\n",
"\n",
"This will open a web browser window so that you can enter your credentials. If this cell appears to hang, check whether your web browser window is waiting for you to enter your Azure credentials.\n",
""
],
"metadata": {}
},
{
"cell_type": "code",
"source": "cmd = f'az login'\nrun_command()",
"source": [
"cmd = f'az login'\n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 5
},
{
"cell_type": "markdown",
"source": "\n### **Set active Azure subscription**",
"source": [
"\n",
"### **Set active Azure subscription**"
],
"metadata": {}
},
{
"cell_type": "code",
"source": "if azure_subscription_id != \"\":\n cmd = f'az account set --subscription {azure_subscription_id}'\n run_command()\nelse:\n print('Using the default Azure subscription', {azure_subscription_id})\ncmd = f'az account show'\nrun_command()",
"source": [
"if azure_subscription_id != \"\":\n",
" cmd = f'az account set --subscription {azure_subscription_id}'\n",
" run_command()\n",
"else:\n",
" print('Using the default Azure subscription.')\n",
"cmd = f'az account show'\n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 6
@@ -109,7 +224,10 @@
},
{
"cell_type": "code",
"source": "cmd = f'az group create --name {azure_resource_group} --location {azure_region}'\nrun_command()",
"source": [
"cmd = f'az group create --name {azure_resource_group} --location {azure_region}'\n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 7
@@ -121,7 +239,10 @@
},
{
"cell_type": "code",
"source": "cmd = f'az aks create --name {aks_cluster_name} --resource-group {azure_resource_group} --generate-ssh-keys --node-vm-size {azure_vm_size} --node-count {azure_vm_count}' \nrun_command()",
"source": [
"cmd = f'az aks create --name {aks_cluster_name} --resource-group {azure_resource_group} --generate-ssh-keys --node-vm-size {azure_vm_size} --node-count {azure_vm_count}' \n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 8
@@ -133,7 +254,10 @@
},
{
"cell_type": "code",
"source": "cmd = f'az aks get-credentials --resource-group {azure_resource_group} --name {aks_cluster_name} --admin --overwrite-existing'\r\nrun_command()",
"source": [
"cmd = f'az aks get-credentials --resource-group {azure_resource_group} --name {aks_cluster_name} --admin --overwrite-existing'\r\n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 9
@@ -145,7 +269,13 @@
},
{
"cell_type": "code",
"source": "import os\nos.environ[\"ACCEPT_EULA\"] = 'yes'\ncmd = f'azdata bdc config init --source {configuration_profile} --target {configuration_folder} --force'\nrun_command()\ncmd = f'azdata bdc config replace -c {configuration_folder}/cluster.json -j metadata.name={mssql_cluster_name}'\nrun_command()",
"source": [
"os.environ[\"ACCEPT_EULA\"] = 'yes'\n",
"cmd = f'azdata bdc config init --source {configuration_profile} --target {configuration_folder} --force'\n",
"run_command()\n",
"cmd = f'azdata bdc config replace -c {configuration_folder}/cluster.json -j metadata.name={mssql_cluster_name}'\n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 10
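The `azdata bdc config replace -j metadata.name=...` call above patches a single dotted key inside the generated `cluster.json`. Its effect is roughly the following (a sketch only, not azdata's actual implementation; the helper name is mine):

```python
import json

def set_json_key(path, dotted_key, value):
    # Walk the dotted key and overwrite the leaf, e.g.
    # set_json_key('mssql-bdc-configuration/cluster.json',
    #              'metadata.name', 'mssql-cluster')
    with open(path) as f:
        doc = json.load(f)
    node = doc
    keys = dotted_key.split('.')
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    with open(path, 'w') as f:
        json.dump(doc, f, indent=2)
```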
@@ -157,7 +287,15 @@
},
{
"cell_type": "code",
"source": "print (f'Creating SQL Server 2019 big data cluster: {mssql_cluster_name} using configuration {configuration_folder}')\nos.environ[\"CONTROLLER_USERNAME\"] = mssql_controller_username\nos.environ[\"CONTROLLER_PASSWORD\"] = mssql_password\nos.environ[\"MSSQL_SA_PASSWORD\"] = mssql_password\nos.environ[\"KNOX_PASSWORD\"] = mssql_password\ncmd = f'azdata bdc create -c {configuration_folder}'\nrun_command()",
"source": [
"print (f'Creating SQL Server 2019 big data cluster: {mssql_cluster_name} using configuration {configuration_folder}')\n",
"os.environ[\"CONTROLLER_USERNAME\"] = mssql_controller_username\n",
"os.environ[\"CONTROLLER_PASSWORD\"] = mssql_password\n",
"os.environ[\"MSSQL_SA_PASSWORD\"] = mssql_password\n",
"os.environ[\"KNOX_PASSWORD\"] = mssql_password\n",
"cmd = f'azdata bdc create -c {configuration_folder}'\n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 11
@@ -169,7 +307,10 @@
},
{
"cell_type": "code",
"source": "cmd = f'azdata login --cluster-name {mssql_cluster_name}'\nrun_command()",
"source": [
"cmd = f'azdata login --cluster-name {mssql_cluster_name}'\n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 12
@@ -181,19 +322,38 @@
},
{
"cell_type": "code",
"source": "import json,html,pandas\nfrom IPython.display import *\npandas.set_option('display.max_colwidth', -1)\ncmd = f'azdata bdc endpoint list'\ncmdOutput = !{cmd}\nendpoints = json.loads(''.join(cmdOutput))\nendpointsDataFrame = pandas.DataFrame(endpoints)\nendpointsDataFrame.columns = [' '.join(word[0].upper() + word[1:] for word in columnName.split()) for columnName in endpoints[0].keys()]\ndisplay(HTML(endpointsDataFrame.to_html(index=False, render_links=True)))",
"source": [
"from IPython.display import *\n",
"pandas.set_option('display.max_colwidth', -1)\n",
"cmd = f'azdata bdc endpoint list'\n",
"cmdOutput = !{cmd}\n",
"endpoints = json.loads(''.join(cmdOutput))\n",
"endpointsDataFrame = pandas.DataFrame(endpoints)\n",
"endpointsDataFrame.columns = [' '.join(word[0].upper() + word[1:] for word in columnName.split()) for columnName in endpoints[0].keys()]\n",
"display(HTML(endpointsDataFrame.to_html(index=False, render_links=True)))"
],
"metadata": {},
"outputs": [],
"execution_count": 13
},
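`azdata bdc endpoint list` emits a JSON array that may be wrapped across output lines, which is why the cell joins `cmdOutput` before parsing. A self-contained sketch of the parse-and-header step (the function name is illustrative):

```python
import json

def parse_endpoint_list(output_lines):
    # Join the captured lines back into one JSON document, then derive
    # display headers by capitalizing each space-separated word of the keys,
    # as the DataFrame cell above does.
    endpoints = json.loads(''.join(output_lines))
    headers = [' '.join(word[0].upper() + word[1:] for word in key.split())
               for key in endpoints[0].keys()]
    return headers, endpoints
```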
{
"cell_type": "markdown",
"source": "### **Connect to master SQL Server instance in Azure Data Studio**\r\nClick the link below to connect to the master SQL Server instance of the SQL Server 2019 big data cluster.",
"source": [
"### **Connect to master SQL Server instance in Azure Data Studio**\r\n",
"Click the link below to connect to the master SQL Server instance of the SQL Server 2019 big data cluster."
],
"metadata": {}
},
{
"cell_type": "code",
"source": "sqlEndpoints = [x for x in endpoints if x['name'] == 'sql-server-master']\r\nif sqlEndpoints and len(sqlEndpoints) == 1:\r\n connectionParameter = '{\"serverName\":\"' + sqlEndpoints[0]['endpoint'] + '\",\"providerName\":\"MSSQL\",\"authenticationType\":\"SqlLogin\",\"userName\":\"sa\",\"password\":' + json.dumps(mssql_password) + '}'\r\n display(HTML('<br/><a href=\"command:azdata.connect?' + html.escape(connectionParameter)+'\"><font size=\"3\">Click here to connect to master SQL Server instance</font></a><br/>'))\r\nelse:\r\n sys.exit('Could not find the master SQL Server instance endpoint')",
"source": [
"sqlEndpoints = [x for x in endpoints if x['name'] == 'sql-server-master']\r\n",
"if sqlEndpoints and len(sqlEndpoints) == 1:\r\n",
" connectionParameter = '{\"serverName\":\"' + sqlEndpoints[0]['endpoint'] + '\",\"providerName\":\"MSSQL\",\"authenticationType\":\"SqlLogin\",\"userName\":\"sa\",\"password\":' + json.dumps(mssql_password) + '}'\r\n",
" display(HTML('<br/><a href=\"command:azdata.connect?' + html.escape(connectionParameter)+'\"><font size=\"3\">Click here to connect to master SQL Server instance</font></a><br/>'))\r\n",
"else:\r\n",
" sys.exit('Could not find the master SQL Server instance endpoint.')"
],
"metadata": {},
"outputs": [],
"execution_count": 14
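The connection link above relies on two layers of escaping: `json.dumps` makes the password safe inside the JSON payload, and `html.escape` makes the payload safe inside the `href` attribute of the `command:azdata.connect` URI. As a standalone sketch (the function name is mine):

```python
import html
import json

def master_connect_link(endpoints, password):
    # Find the single master endpoint, then build the escaped command: URI
    # the same way the cell above does.
    masters = [e for e in endpoints if e['name'] == 'sql-server-master']
    if len(masters) != 1:
        raise LookupError('Could not find the master SQL Server instance endpoint.')
    parameter = ('{"serverName":"' + masters[0]['endpoint'] +
                 '","providerName":"MSSQL","authenticationType":"SqlLogin",'
                 '"userName":"sa","password":' + json.dumps(password) + '}')
    return ('<a href="command:azdata.connect?' + html.escape(parameter) +
            '">Click here to connect to master SQL Server instance</a>')
```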


@@ -0,0 +1,249 @@
{
"metadata": {
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python",
"version": "3.7.3",
"mimetype": "text/x-python",
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"pygments_lexer": "ipython3",
"nbconvert_exporter": "python",
"file_extension": ".py"
}
},
"nbformat_minor": 2,
"nbformat": 4,
"cells": [
{
"cell_type": "markdown",
"source": [
"![Microsoft](https://raw.githubusercontent.com/microsoft/azuredatastudio/master/src/sql/media/microsoft-small-logo.png)\n",
" \n",
"## Deploy SQL Server 2019 CTP 3.2 big data cluster on an existing Azure Kubernetes Service (AKS) cluster\n",
" \n",
"This notebook walks through the process of deploying a <a href=\"https://docs.microsoft.com/sql/big-data-cluster/big-data-cluster-overview?view=sqlallproducts-allversions\">SQL Server 2019 CTP 3.2 big data cluster</a> on an existing AKS cluster.\n",
" \n",
"* Follow the instructions in the **Prerequisites** cell to install the tools if not already installed.\n",
"* Make sure you have the target cluster set as the current context in your kubectl config file.\n",
" The config file would typically be under C:\\Users\\(userid)\\.kube on Windows, and under ~/.kube/ for macOS and Linux for a default installation.\n",
" In the kubectl config file, look for \"current-context\" and ensure it is set to the AKS cluster that the SQL Server 2019 CTP 3.2 big data cluster will be deployed to.\n",
"* The **Required information** cell will prompt you for a password that will be used to access the cluster controller, SQL Server, and Knox.\n",
"* The values in the **Default settings** cell can be changed as appropriate.\n",
"\n",
"<span style=\"color:red\"><font size=\"3\">Please press the \"Run Cells\" button to run the notebook</font></span>"
],
"metadata": {}
},
{
"cell_type": "markdown",
"source": [
"### **Prerequisites** \n",
"Ensure the following tools are installed and added to PATH before proceeding.\n",
" \n",
"|Tools|Description|Installation|\n",
"|---|---|---|\n",
"|kubectl | Command-line tool for monitoring the underlying Kubernetes cluster | [Installation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-native-package-management) |\n",
"|azdata | Command-line tool for installing and managing a big data cluster |[Installation](https://docs.microsoft.com/en-us/sql/big-data-cluster/deploy-install-azdata?view=sqlallproducts-allversions) |"
],
"metadata": {}
},
{
"cell_type": "markdown",
"source": "### **Check dependencies**",
"metadata": {}
},
{
"cell_type": "code",
"source": [
"import pandas,sys,os,json,html,getpass,time\r\n",
"pandas_version = pandas.__version__.split('.')\r\n",
"pandas_major = int(pandas_version[0])\r\n",
"pandas_minor = int(pandas_version[1])\r\n",
"pandas_patch = int(pandas_version[2])\r\n",
"if not (pandas_major > 0 or (pandas_major == 0 and pandas_minor > 24) or (pandas_major == 0 and pandas_minor == 24 and pandas_patch >= 2)):\r\n",
" sys.exit('Please upgrade the Notebook dependency before you can proceed, you can do it by running the \"Reinstall Notebook dependencies\" command in Azure Data Studio.')\r\n",
"\r\n",
"def run_command():\r\n",
" print(\"Executing: \" + cmd)\r\n",
" !{cmd}\r\n",
" if _exit_code != 0:\r\n",
" sys.exit(f'Command execution failed with exit code: {str(_exit_code)}.\\n\\t{cmd}\\n')\r\n",
" print(f'Successfully executed: {cmd}')\r\n",
"\r\n",
"cmd = 'kubectl version --client=true'\r\n",
"run_command()\r\n",
"cmd = 'azdata --version'\r\n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 1
},
{
"cell_type": "markdown",
"source": "### **Show current context**",
"metadata": {}
},
{
"cell_type": "code",
"source": [
"cmd = 'kubectl config current-context'\r\n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 2
},
{
"cell_type": "markdown",
"source": "### **Required information**",
"metadata": {}
},
{
"cell_type": "code",
"source": [
"env_var_flag = \"AZDATA_NB_VAR_BDC_CONTROLLER_PASSWORD\" in os.environ\n",
"if env_var_flag:\n",
" mssql_password = os.environ[\"AZDATA_NB_VAR_BDC_CONTROLLER_PASSWORD\"]\n",
"else: \n",
" mssql_password = getpass.getpass(prompt = 'SQL Server 2019 big data cluster controller password')\n",
" if mssql_password == \"\":\n",
" sys.exit(f'Password is required.')\n",
" confirm_password = getpass.getpass(prompt = 'Confirm password')\n",
" if mssql_password != confirm_password:\n",
" sys.exit(f'Passwords do not match.')\n",
"print('Password accepted, you can also use the same password to access Knox and SQL Server.')"
],
"metadata": {},
"outputs": [],
"execution_count": 3
},
{
"cell_type": "markdown",
"source": "### **Default settings**",
"metadata": {}
},
{
"cell_type": "code",
"source": [
"if env_var_flag:\n",
" mssql_cluster_name = os.environ[\"AZDATA_NB_VAR_BDC_NAME\"]\n",
" mssql_controller_username = os.environ[\"AZDATA_NB_VAR_BDC_CONTROLLER_USERNAME\"]\n",
"else:\n",
" mssql_cluster_name = 'mssql-cluster'\n",
" mssql_controller_username = 'admin'\n",
"configuration_profile = 'aks-dev-test'\n",
"configuration_folder = 'mssql-bdc-configuration'\n",
"print(f'SQL Server big data cluster name: {mssql_cluster_name}')\n",
"print(f'SQL Server big data cluster controller user name: {mssql_controller_username}')\n",
"print(f'Deployment configuration profile: {configuration_profile}')\n",
"print(f'Deployment configuration: {configuration_folder}')"
],
"metadata": {},
"outputs": [],
"execution_count": 4
},
{
"cell_type": "markdown",
"source": "### **Create a deployment configuration file**",
"metadata": {}
},
{
"cell_type": "code",
"source": [
"os.environ[\"ACCEPT_EULA\"] = 'yes'\n",
"cmd = f'azdata bdc config init --source {configuration_profile} --target {configuration_folder} --force'\n",
"run_command()\n",
"cmd = f'azdata bdc config replace -c {configuration_folder}/cluster.json -j metadata.name={mssql_cluster_name}'\n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 6
},
{
"cell_type": "markdown",
"source": "### **Create SQL Server 2019 big data cluster**",
"metadata": {}
},
{
"cell_type": "code",
"source": [
"print (f'Creating SQL Server 2019 big data cluster: {mssql_cluster_name} using configuration {configuration_folder}')\n",
"os.environ[\"CONTROLLER_USERNAME\"] = mssql_controller_username\n",
"os.environ[\"CONTROLLER_PASSWORD\"] = mssql_password\n",
"os.environ[\"MSSQL_SA_PASSWORD\"] = mssql_password\n",
"os.environ[\"KNOX_PASSWORD\"] = mssql_password\n",
"cmd = f'azdata bdc create -c {configuration_folder}'\n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 7
},
{
"cell_type": "markdown",
"source": "### **Login to SQL Server 2019 big data cluster**",
"metadata": {}
},
{
"cell_type": "code",
"source": [
"cmd = f'azdata login --cluster-name {mssql_cluster_name}'\n",
"run_command()"
],
"metadata": {},
"outputs": [],
"execution_count": 8
},
{
"cell_type": "markdown",
"source": "### **Show SQL Server 2019 big data cluster endpoints**",
"metadata": {}
},
{
"cell_type": "code",
"source": [
"from IPython.display import *\n",
"pandas.set_option('display.max_colwidth', -1)\n",
"cmd = f'azdata bdc endpoint list'\n",
"cmdOutput = !{cmd}\n",
"endpoints = json.loads(''.join(cmdOutput))\n",
"endpointsDataFrame = pandas.DataFrame(endpoints)\n",
"endpointsDataFrame.columns = [' '.join(word[0].upper() + word[1:] for word in columnName.split()) for columnName in endpoints[0].keys()]\n",
"display(HTML(endpointsDataFrame.to_html(index=False, render_links=True)))"
],
"metadata": {},
"outputs": [],
"execution_count": 9
},
{
"cell_type": "markdown",
"source": [
"### **Connect to master SQL Server instance in Azure Data Studio**\r\n",
"Click the link below to connect to the master SQL Server instance of the SQL Server 2019 big data cluster."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"sqlEndpoints = [x for x in endpoints if x['name'] == 'sql-server-master']\r\n",
"if sqlEndpoints and len(sqlEndpoints) == 1:\r\n",
" connectionParameter = '{\"serverName\":\"' + sqlEndpoints[0]['endpoint'] + '\",\"providerName\":\"MSSQL\",\"authenticationType\":\"SqlLogin\",\"userName\":\"sa\",\"password\":' + json.dumps(mssql_password) + '}'\r\n",
" display(HTML('<br/><a href=\"command:azdata.connect?' + html.escape(connectionParameter)+'\"><font size=\"3\">Click here to connect to master SQL Server instance</font></a><br/>'))\r\n",
"else:\r\n",
" sys.exit('Could not find the master SQL Server instance endpoint.')"
],
"metadata": {},
"outputs": [],
"execution_count": 10
}
]
}


@@ -1,142 +0,0 @@
{
"metadata": {
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python",
"version": "3.6.6",
"mimetype": "text/x-python",
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"pygments_lexer": "ipython3",
"nbconvert_exporter": "python",
"file_extension": ".py"
}
},
"nbformat_minor": 2,
"nbformat": 4,
"cells": [
{
"cell_type": "markdown",
"source": "![Microsoft](https://raw.githubusercontent.com/microsoft/azuredatastudio/master/src/sql/media/microsoft-small-logo.png)\n \n## Deploy SQL Server 2019 CTP 3.2 big data cluster on an existing Azure Kubernetes Service (AKS) cluster\n \nThis notebook walks through the process of deploying a <a href=\"https://docs.microsoft.com/sql/big-data-cluster/big-data-cluster-overview?view=sqlallproducts-allversions\">SQL Server 2019 CTP 3.2 big data cluster</a> on an existing AKS cluster.\n \n* Follow the instructions in the **Prerequisites** cell to install the tools if not already installed.\n* Make sure you have the target cluster set as the current context in your kubectl config file.\n The config file would typically be under C:\\Users\\(userid)\\.kube on Windows, and under ~/.kube/ for macOS and Linux for a default installation.\n In the kubectl config file, look for \"current-context\" and ensure it is set to the AKS cluster that the SQL Server 2019 CTP 3.2 big data cluster will be deployed to.\n* The **Required information** cell will prompt you for password that will be used to access the cluster controller, SQL Server, and Knox.\n* The values in the **Default settings** cell can be changed as appropriate.",
"metadata": {}
},
{
"cell_type": "markdown",
"source": "### **Prerequisites** \nEnsure the following tools are installed and added to PATH before proceeding.\n \n|Tools|Description|Installation|\n|---|---|---|\n|kubectl | Command-line tool for monitoring the underlying Kuberentes cluster | [Installation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-native-package-management) |\n|azdata | Command-line tool for installing and managing a big data cluster |[Installation](https://docs.microsoft.com/en-us/sql/big-data-cluster/deploy-install-azdata?view=sqlallproducts-allversions) |",
"metadata": {}
},
{
"cell_type": "markdown",
"source": "### **Check dependencies**",
"metadata": {}
},
{
"cell_type": "code",
"source": "import sys\r\ndef run_command():\r\n print(\"Executing: \" + cmd)\r\n !{cmd}\r\n if _exit_code != 0:\r\n sys.exit(f'Command execution failed with exit code: {str(_exit_code)}.\\n\\t{cmd}\\n')\r\n print(f'Successfully executed: {cmd}')\r\n\r\ncmd = 'kubectl version --client=true'\r\nrun_command()\r\ncmd = 'azdata --version'\r\nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 1
},
{
"cell_type": "markdown",
"source": "### **Show current context**",
"metadata": {}
},
{
"cell_type": "code",
"source": "cmd = ' kubectl config current-context'\r\nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 2
},
{
"cell_type": "markdown",
"source": "### **Required information**",
"metadata": {}
},
{
"cell_type": "code",
"source": "import getpass\nmssql_password = getpass.getpass(prompt = 'SQL Server 2019 big data cluster controller password')\nif mssql_password == \"\":\n sys.exit(f'Password is required')\nconfirm_password = getpass.getpass(prompt = 'Confirm password')\nif mssql_password != confirm_password:\n sys.exit(f'Passwords do not match.')\nprint('Password accepted, you can also use the same password to access Knox and SQL Server.')",
"metadata": {},
"outputs": [],
"execution_count": 3
},
{
"cell_type": "markdown",
"source": "### **Default settings**",
"metadata": {}
},
{
"cell_type": "code",
"source": "mssql_cluster_name = 'mssql-cluster'\nmssql_controller_username = 'admin'\nconfiguration_profile = 'aks-dev-test'\nconfiguration_folder = 'mssql-bdc-configuration'\nprint(f'SQL Server big data cluster name: {mssql_cluster_name}')\nprint(f'SQL Server big data cluster controller user name: {mssql_controller_username}')\nprint(f'Deployment configuration profile: {configuration_profile}')\nprint(f'Deployment configuration: {configuration_folder}')",
"metadata": {},
"outputs": [],
"execution_count": 4
},
{
"cell_type": "markdown",
"source": "### **Create a deployment configuration file**",
"metadata": {}
},
{
"cell_type": "code",
"source": "import os\nos.environ[\"ACCEPT_EULA\"] = 'yes'\ncmd = f'azdata bdc config init --source {configuration_profile} --target {configuration_folder} --force'\nrun_command()\ncmd = f'azdata bdc config replace -c {configuration_folder}/cluster.json -j metadata.name={mssql_cluster_name}'\nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 6
},
{
"cell_type": "markdown",
"source": "### **Create SQL Server 2019 big data cluster**",
"metadata": {}
},
{
"cell_type": "code",
"source": "import os\nprint (f'Creating SQL Server 2019 big data cluster: {mssql_cluster_name} using configuration {configuration_folder}')\nos.environ[\"CONTROLLER_USERNAME\"] = mssql_controller_username\nos.environ[\"CONTROLLER_PASSWORD\"] = mssql_password\nos.environ[\"MSSQL_SA_PASSWORD\"] = mssql_password\nos.environ[\"KNOX_PASSWORD\"] = mssql_password\ncmd = f'azdata bdc create -c {configuration_folder}'\nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 7
},
{
"cell_type": "markdown",
"source": "### **Login to SQL Server 2019 big data cluster**",
"metadata": {}
},
{
"cell_type": "code",
"source": "cmd = f'azdata login --cluster-name {mssql_cluster_name}'\nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 8
},
{
"cell_type": "markdown",
"source": "### **Show SQL Server 2019 big data cluster endpoints**",
"metadata": {}
},
{
"cell_type": "code",
"source": "import json,html,pandas\nfrom IPython.display import *\npandas.set_option('display.max_colwidth', -1)\ncmd = f'azdata bdc endpoint list'\ncmdOutput = !{cmd}\nendpoints = json.loads(''.join(cmdOutput))\nendpointsDataFrame = pandas.DataFrame(endpoints)\nendpointsDataFrame.columns = [' '.join(word[0].upper() + word[1:] for word in columnName.split()) for columnName in endpoints[0].keys()]\ndisplay(HTML(endpointsDataFrame.to_html(index=False, render_links=True)))",
"metadata": {},
"outputs": [],
"execution_count": 9
},
{
"cell_type": "markdown",
"source": "### **Connect to master SQL Server instance in Azure Data Studio**\r\nClick the link below to connect to the master SQL Server instance of the SQL Server 2019 big data cluster.",
"metadata": {}
},
{
"cell_type": "code",
"source": "sqlEndpoints = [x for x in endpoints if x['name'] == 'sql-server-master']\r\nif sqlEndpoints and len(sqlEndpoints) == 1:\r\n connectionParameter = '{\"serverName\":\"' + sqlEndpoints[0]['endpoint'] + '\",\"providerName\":\"MSSQL\",\"authenticationType\":\"SqlLogin\",\"userName\":\"sa\",\"password\":' + json.dumps(mssql_password) + '}'\r\n display(HTML('<br/><a href=\"command:azdata.connect?' + html.escape(connectionParameter)+'\"><font size=\"3\">Click here to connect to master SQL Server instance</font></a><br/>'))\r\nelse:\r\n sys.exit('Could not find the master SQL Server instance endpoint')",
"metadata": {},
"outputs": [],
"execution_count": 10
}
]
}