{ "metadata": { "kernelspec": { "name": "python3", "display_name": "Python 3", "language": "python" }, "language_info": { "name": "python", "version": "3.6.6", "mimetype": "text/x-python", "codemirror_mode": { "name": "ipython", "version": 3 }, "pygments_lexer": "ipython3", "nbconvert_exporter": "python", "file_extension": ".py" } }, "nbformat_minor": 2, "nbformat": 4, "cells": [ { "cell_type": "markdown", "source": [ "![Microsoft](https://raw.githubusercontent.com/microsoft/azuredatastudio/main/extensions/arc/images/microsoft-small-logo.png)\n", " \n", "## Create a PostgreSQL Hyperscale - Azure Arc on an existing Azure Arc Data Controller\n", " \n", "This notebook walks through the process of creating a PostgreSQL Hyperscale - Azure Arc on an existing Azure Arc Data Controller.\n", " \n", "* Follow the instructions in the **Prerequisites** cell to install the tools if not already installed.\n", "* Make sure you have the target Azure Arc Data Controller already created.\n", "\n", "Please press the \"Run All\" button to run the notebook" ], "metadata": { "azdata_cell_guid": "e4ed0892-7b5a-4d95-bd0d-a6c3eb0b2c99" } }, { "cell_type": "markdown", "source": [ "### **Prerequisites** \n", "Ensure the following tools are installed and added to PATH before proceeding.\n", " \n", "|Tools|Description|Installation|\n", "|---|---|---|\n", "|Azure Data CLI (azdata) | Command-line tool for installing and managing resources in an Azure Arc cluster |[Installation](https://docs.microsoft.com/sql/azdata/install/deploy-install-azdata) |" ], "metadata": { "azdata_cell_guid": "20fe3985-a01e-461c-bce0-235f7606cc3c" } }, { "cell_type": "markdown", "source": [ "### **Setup and Check Prerequisites**" ], "metadata": { "azdata_cell_guid": "68531b91-ddce-47d7-a1d8-2ddc3d17f3e7" } }, { "cell_type": "code", "source": [ "import sys,os,json,subprocess\n", "def run_command():\n", " print(\"Executing: \" + cmd)\n", " output = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, 
shell=True)\n", "    if output.returncode != 0:\n", "        print(f'Command: {cmd} failed \\n')\n", "        print(f'\\t>>>Error output: {output.stderr.decode(\"utf-8\")}\\n')\n", "        sys.exit(f'exit code: {output.returncode}\\n')\n", "    print(f'Successfully executed: {cmd}')\n", "    print(f'\\t>>>Output: {output.stdout.decode(\"utf-8\")}\\n')\n", "    return output.stdout.decode(\"utf-8\")\n", "\n", "cmd = 'azdata --version'\n", "out = run_command()\n", "" ], "metadata": { "azdata_cell_guid": "749d8dba-3da8-46e9-ae48-2b38056ab7a2", "tags": [] }, "outputs": [], "execution_count": null }, { "cell_type": "markdown", "source": [ "### **Set variables**\n", "\n", "Generated by Azure Data Studio using the values collected in the 'Deploy PostgreSQL Hyperscale - Azure Arc instance' wizard." ], "metadata": { "azdata_cell_guid": "68ec0760-27d1-4ded-9a9f-89077c40b8bb" } }, { "cell_type": "markdown", "source": [ "### **Creating the PostgreSQL Hyperscale - Azure Arc instance**" ], "metadata": { "azdata_cell_guid": "90b0e162-2987-463f-9ce6-12dda1267189" } }, { "cell_type": "code", "source": [ "# Login to the data controller.\n", "#\n", "os.environ[\"AZDATA_PASSWORD\"] = os.environ[\"AZDATA_NB_VAR_CONTROLLER_PASSWORD\"]\n", "os.environ[\"KUBECONFIG\"] = controller_kubeconfig\n", "os.environ[\"KUBECTL_CONTEXT\"] = controller_kubectl_context\n", "endpoint_option = f' -e {controller_endpoint}' if controller_endpoint else \"\"\n", "cmd = f'azdata login --namespace {arc_data_controller_namespace} -u {controller_username}{endpoint_option}'\n", "out = run_command()" ], "metadata": { "azdata_cell_guid": "71366399-5963-4e24-b2f2-6bb5bffba4ec" }, "outputs": [], "execution_count": null }, { "cell_type": "code", "source": [ "print('Creating the PostgreSQL Hyperscale - Azure Arc instance')\n", "\n", "workers_option = f' -w {postgres_server_group_workers}' if postgres_server_group_workers else \"\"\n", "port_option = f' --port \"{postgres_server_group_port}\"' if postgres_server_group_port else \"\"\n", 
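"# Each *_option variable expands to its azdata flag only when the matching\n", "# wizard value was provided (for example, workers_option becomes ' -w 2' when\n", "# postgres_server_group_workers is 2) and is an empty string otherwise, so\n", "# unset values simply drop the flag from the final command.\n", 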
"engine_version_option = f' -ev {postgres_server_group_engine_version}' if postgres_server_group_engine_version else \"\"\n", "extensions_option = f' --extensions \"{postgres_server_group_extensions}\"' if postgres_server_group_extensions else \"\"\n", "volume_size_data_option = f' -vsd {postgres_server_group_volume_size_data}Gi' if postgres_server_group_volume_size_data else \"\"\n", "volume_size_logs_option = f' -vsl {postgres_server_group_volume_size_logs}Gi' if postgres_server_group_volume_size_logs else \"\"\n", "volume_size_backups_option = f' -vsb {postgres_server_group_volume_size_backups}Gi' if postgres_server_group_volume_size_backups else \"\"\n", "cores_request_option = f' -cr \"c={postgres_server_group_coordinator_cores_request},w={postgres_server_group_workers_cores_request}\"' if postgres_server_group_coordinator_cores_request and postgres_server_group_workers_cores_request else f' -cr \"c={postgres_server_group_coordinator_cores_request}\"' if postgres_server_group_coordinator_cores_request else f' -cr \"w={postgres_server_group_workers_cores_request}\"' if postgres_server_group_workers_cores_request else \"\"\n", "cores_limit_option = f' -cl \"c={postgres_server_group_coordinator_cores_limit},w={postgres_server_group_workers_cores_limit}\"' if postgres_server_group_coordinator_cores_limit and postgres_server_group_workers_cores_limit else f' -cl \"c={postgres_server_group_coordinator_cores_limit}\"' if postgres_server_group_coordinator_cores_limit else f' -cl \"w={postgres_server_group_workers_cores_limit}\"' if postgres_server_group_workers_cores_limit else \"\"\n", "memory_request_option = f' -mr \"c={postgres_server_group_coordinator_memory_request}Gi,w={postgres_server_group_workers_memory_request}Gi\"' if postgres_server_group_coordinator_memory_request and postgres_server_group_workers_memory_request else f' -mr \"c={postgres_server_group_coordinator_memory_request}Gi\"' if postgres_server_group_coordinator_memory_request else f' -mr 
\"w={postgres_server_group_workers_memory_request}Gi\"' if postgres_server_group_workers_memory_request else \"\"\n", "memory_limit_option = f' -ml \"c={postgres_server_group_coordinator_memory_limit}Gi,w={postgres_server_group_workers_memory_limit}Gi\"' if postgres_server_group_coordinator_memory_limit and postgres_server_group_workers_memory_limit else f' -ml \"c={postgres_server_group_coordinator_memory_limit}Gi\"' if postgres_server_group_coordinator_memory_limit else f' -ml \"w={postgres_server_group_workers_memory_limit}Gi\"' if postgres_server_group_workers_memory_limit else \"\"\n", "\n", "os.environ[\"AZDATA_PASSWORD\"] = os.environ[\"AZDATA_NB_VAR_POSTGRES_SERVER_GROUP_PASSWORD\"]\n", "cmd = f'azdata arc postgres server create -n {postgres_server_group_name} -scd {postgres_storage_class_data} -scl {postgres_storage_class_logs} -scb {postgres_storage_class_backups}{workers_option}{port_option}{engine_version_option}{extensions_option}{volume_size_data_option}{volume_size_logs_option}{volume_size_backups_option}{cores_request_option}{cores_limit_option}{memory_request_option}{memory_limit_option}'\n", "out=run_command()" ], "metadata": { "azdata_cell_guid": "4fbaf071-55a1-40bc-be7e-7b9b5547b886" }, "outputs": [], "execution_count": null } ] }