update the notebooks for the new release (#6451)
* get ready for new release
* azdata
* header update
* comments
* update text
* remove kubernetes version option
@@ -0,0 +1,202 @@
{
"metadata": {
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python",
"version": "3.6.6",
"mimetype": "text/x-python",
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"pygments_lexer": "ipython3",
"nbconvert_exporter": "python",
"file_extension": ".py"
}
},
"nbformat_minor": 2,
"nbformat": 4,
"cells": [
{
"cell_type": "markdown",
"source": "\n \n## Create Azure Kubernetes Service cluster and deploy SQL Server 2019 CTP 3.2 big data cluster\n \nThis notebook walks through the process of creating a new Azure Kubernetes Service cluster first, and then deploys a <a href=\"https://docs.microsoft.com/sql/big-data-cluster/big-data-cluster-overview?view=sqlallproducts-allversions\">SQL Server 2019 CTP 3.2 big data cluster</a> on the newly created AKS cluster.\n \n* Follow the instructions in the **Prerequisites** cell to install the tools if not already installed.\n* The **Required information** cell will prompt you for a password that will be used to access the cluster controller, SQL Server, and Knox.\n* The values in the **Azure settings** and **Default settings** cell can be changed as appropriate.",
"metadata": {}
},
{
"cell_type": "markdown",
"source": "### **Prerequisites**\nEnsure the following tools are installed and added to PATH before proceeding.\n\n|Tools|Description|Installation|\n|---|---|---|\n| Azure CLI |Command-line tool for managing Azure services. Used to create AKS cluster | [Installation](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest) |\n|kubectl | Command-line tool for monitoring the underlying Kuberentes cluster | [Installation](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-native-package-management) |\n|azdata | Command-line tool for installing and managing a big data cluster |[Installation](https://docs.microsoft.com/en-us/sql/big-data-cluster/deploy-install-azdata?view=sqlallproducts-allversions) |",
"metadata": {}
},
{
"cell_type": "markdown",
"source": "### **Check dependencies**",
"metadata": {}
},
{
"cell_type": "code",
"source": "import sys\r\ndef run_command():\r\n print(\"Executing: \" + cmd)\r\n !{cmd}\r\n if _exit_code != 0:\r\n sys.exit(f'Command execution failed with exit code: {str(_exit_code)}.\\n\\t{cmd}\\n')\r\n print(f'Successfully executed: {cmd}')\r\n\r\ncmd = 'az --version'\r\nrun_command()\r\ncmd = 'kubectl version --client=true'\r\nrun_command()\r\ncmd = 'azdata --version'\r\nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 1
},
{
"cell_type": "markdown",
"source": "### **Required information**",
"metadata": {}
},
{
"cell_type": "code",
"source": "import getpass\nmssql_password = getpass.getpass(prompt = 'SQL Server 2019 big data cluster controller password')\nif mssql_password == \"\":\n sys.exit(f'Password is required')\nconfirm_password = getpass.getpass(prompt = 'Confirm password')\nif mssql_password != confirm_password:\n sys.exit(f'Passwords do not match.')\nprint('Password accepted, you can also use the same password to access Knox and SQL Server.')",
"metadata": {},
"outputs": [],
"execution_count": 2
},
{
"cell_type": "markdown",
"source": "### **Azure settings**\n*Subscription ID*: visit <a href=\"https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade\">here</a> to find out the subscriptions you can use, if you leave it unspecified, the default subscription will be used.\n\n*VM Size*: visit <a href=\"https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes\">here</a> to find out the available VM sizes you could use. \n \n*Region*: visit <a href=\"https://azure.microsoft.com/en-us/global-infrastructure/services/?products=kubernetes-service\">here</a> to find out the Azure regions where the Azure Kubernettes Service is available.",
"metadata": {}
},
{
"cell_type": "code",
"source": "azure_subscription_id = \"\"\nazure_vm_size = \"Standard_E4s_v3\"\nazure_region = \"eastus\"\nazure_vm_count = int(5)",
"metadata": {},
"outputs": [],
"execution_count": 3
},
{
"cell_type": "markdown",
"source": "### **Default settings**",
"metadata": {}
},
{
"cell_type": "code",
"source": "import time\nmssql_cluster_name = 'mssql-cluster'\nmssql_controller_username = 'admin'\nazure_resource_group = mssql_cluster_name + '-' + time.strftime(\"%Y%m%d%H%M%S\", time.localtime())\naks_cluster_name = azure_resource_group\nconfiguration_profile = 'aks-dev-test'\nconfiguration_folder = 'mssql-bdc-configuration'\nprint(f'Azure subscription: {azure_subscription_id}')\nprint(f'Azure VM size: {azure_vm_size}')\nprint(f'Azure VM count: {str(azure_vm_count)}')\nprint(f'Azure region: {azure_region}')\nprint(f'Azure resource group: {azure_resource_group}')\nprint(f'AKS cluster name: {aks_cluster_name}')\nprint(f'SQL Server big data cluster name: {mssql_cluster_name}')\nprint(f'SQL Server big data cluster controller user name: {mssql_controller_username}')\nprint(f'Deployment configuration profile: {configuration_profile}')\nprint(f'Deployment configuration: {configuration_folder}')",
"metadata": {},
"outputs": [],
"execution_count": 4
},
{
"cell_type": "markdown",
"source": "### **Login to Azure**\n\nThis will open a web browser window to enable credentials to be entered. If this cells is hanging forever, it might be because your Web browser windows is waiting for you to enter your Azure credentials!\n",
"metadata": {}
},
{
"cell_type": "code",
"source": "cmd = f'az login'\nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 5
},
{
"cell_type": "markdown",
"source": "\n### **Set active Azure subscription**",
"metadata": {}
},
{
"cell_type": "code",
"source": "if azure_subscription_id != \"\":\n cmd = f'az account set --subscription {azure_subscription_id}'\n run_command()\nelse:\n print('Using the default Azure subscription', {azure_subscription_id})\ncmd = f'az account show'\nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 6
},
{
"cell_type": "markdown",
"source": "### **Create Azure resource group**",
"metadata": {}
},
{
"cell_type": "code",
"source": "cmd = f'az group create --name {azure_resource_group} --location {azure_region}'\nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 7
},
{
"cell_type": "markdown",
"source": "### **Create AKS cluster**",
"metadata": {}
},
{
"cell_type": "code",
"source": "cmd = f'az aks create --name {aks_cluster_name} --resource-group {azure_resource_group} --generate-ssh-keys --node-vm-size {azure_vm_size} --node-count {azure_vm_count}' \nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 8
},
{
"cell_type": "markdown",
"source": "### **Set the new AKS cluster as current context**",
"metadata": {}
},
{
"cell_type": "code",
"source": "cmd = f'az aks get-credentials --resource-group {azure_resource_group} --name {aks_cluster_name} --admin --overwrite-existing'\r\nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 9
},
{
"cell_type": "markdown",
"source": "### **Create a deployment configuration file**",
"metadata": {}
},
{
"cell_type": "code",
"source": "import os\nos.environ[\"ACCEPT_EULA\"] = 'yes'\ncmd = f'azdata bdc config init --source {configuration_profile} --target {configuration_folder} --force'\nrun_command()\ncmd = f'azdata bdc config replace -c {configuration_folder}/cluster.json -j metadata.name={mssql_cluster_name}'\nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 10
},
{
"cell_type": "markdown",
"source": "### **Create SQL Server 2019 big data cluster**",
"metadata": {}
},
{
"cell_type": "code",
"source": "print (f'Creating SQL Server 2019 big data cluster: {mssql_cluster_name} using configuration {configuration_folder}')\nos.environ[\"CONTROLLER_USERNAME\"] = mssql_controller_username\nos.environ[\"CONTROLLER_PASSWORD\"] = mssql_password\nos.environ[\"MSSQL_SA_PASSWORD\"] = mssql_password\nos.environ[\"KNOX_PASSWORD\"] = mssql_password\ncmd = f'azdata bdc create -c {configuration_folder}'\nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 11
},
{
"cell_type": "markdown",
"source": "### **Login to SQL Server 2019 big data cluster**",
"metadata": {}
},
{
"cell_type": "code",
"source": "cmd = f'azdata login --cluster-name {mssql_cluster_name}'\nrun_command()",
"metadata": {},
"outputs": [],
"execution_count": 12
},
{
"cell_type": "markdown",
"source": "### **Show SQL Server 2019 big data cluster endpoints**",
"metadata": {}
},
{
"cell_type": "code",
"source": "import json,html,pandas\nfrom IPython.display import *\npandas.set_option('display.max_colwidth', -1)\ncmd = f'azdata bdc endpoint list'\ncmdOutput = !{cmd}\nendpoints = json.loads(''.join(cmdOutput))\nendpointsDataFrame = pandas.DataFrame(endpoints)\nendpointsDataFrame.columns = [' '.join(word[0].upper() + word[1:] for word in columnName.split()) for columnName in endpoints[0].keys()]\ndisplay(HTML(endpointsDataFrame.to_html(index=False, render_links=True)))",
"metadata": {},
"outputs": [],
"execution_count": 13
},
{
"cell_type": "markdown",
"source": "### **Connect to master SQL Server instance in Azure Data Studio**\r\nClick the link below to connect to the master SQL Server instance of the SQL Server 2019 big data cluster.",
"metadata": {}
},
{
"cell_type": "code",
"source": "sqlEndpoints = [x for x in endpoints if x['name'] == 'sql-server-master']\r\nif sqlEndpoints and len(sqlEndpoints) == 1:\r\n connectionParameter = '{\"serverName\":\"' + sqlEndpoints[0]['endpoint'] + '\",\"providerName\":\"MSSQL\",\"authenticationType\":\"SqlLogin\",\"userName\":\"sa\",\"password\":' + json.dumps(mssql_password) + '}'\r\n display(HTML('<br/><a href=\"command:azdata.connect?' + html.escape(connectionParameter)+'\"><font size=\"3\">Click here to connect to master SQL Server instance</font></a><br/>'))\r\nelse:\r\n sys.exit('Could not find the master SQL Server instance endpoint')",
"metadata": {},
"outputs": [],
"execution_count": 14
}
]
}
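
The notebook's run_command helper relies on IPython's `!{cmd}` magic and the `_exit_code` variable, so it only works inside a notebook kernel. For reference, a minimal sketch (not part of this commit) of the same check-and-fail pattern using the standard library's subprocess module, for running the steps as a plain Python script:

```python
# Minimal sketch: the notebook's run_command pattern rewritten with subprocess
# so it can run outside an IPython kernel.
import subprocess
import sys

def run_command(cmd: str) -> None:
    """Run a shell command, echo it, and exit if it fails."""
    print(f"Executing: {cmd}")
    result = subprocess.run(cmd, shell=True)
    if result.returncode != 0:
        sys.exit(f"Command execution failed with exit code: {result.returncode}.\n\t{cmd}\n")
    print(f"Successfully executed: {cmd}")

# Same dependency checks as the first code cell of the notebook.
run_command("az --version")
run_command("kubectl version --client=true")
run_command("azdata --version")
```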