diff --git a/resources/xlf/en/mssql.xlf b/resources/xlf/en/mssql.xlf
index 0f9a976afb..80aed39300 100644
--- a/resources/xlf/en/mssql.xlf
+++ b/resources/xlf/en/mssql.xlf
@@ -1,58 +1,5 @@
- Copy
- Application Proxy
- Cluster Management Service
- Gateway to access HDFS files, Spark
- Metrics Dashboard
- Log Search Dashboard
- Proxy for running Spark statements, jobs, applications
- Management Proxy
- Management Proxy
- Spark Jobs Management and Monitoring Dashboard
- SQL Server Master Instance Front-End
- HDFS File System Proxy
- Spark Diagnostics and Monitoring Dashboard
- Metrics Dashboard
- Log Search Dashboard
- Spark Jobs Management and Monitoring Dashboard
- Spark Diagnostics and Monitoring Dashboard
Azure Data Studio needs to contact Azure Key Vault to access a column master key for Always Encrypted, but no linked Azure account was selected. Please retry the query and select a linked Azure account when prompted.
@@ -66,542 +13,31 @@
Azure Data Studio needs to contact Azure Key Vault to access a column master key for Always Encrypted, but no linked Azure account is available. Please add a linked Azure account and retry the query.
- Error applying permission changes: {0}
- Applying permission changes to '{0}'.
- Applying permission changes recursively under '{0}'
- Permission changes applied successfully.
- Bad Request
- Unauthorized
- Forbidden
- Not Found
- Internal Server Error
- Invalid Data Structure
- Unable to create WebHDFS client due to missing options: ${0}
- '${0}' is undefined.
- Unexpected Redirect
- Unknown Error
- Node Command called without any node passed
- Access
- Add
- Add User or Group
- Apply
- Apply Recursively
- Default
- Default User and Groups
- Delete
- Enter name
- Unexpected error occurred while applying changes : {0}
- Everyone else
- Execute
Failed to find azure account {0} when executing token refresh
Failed to find tenant '{0}' in account '{1}' when refreshing security token
- Group
- Group
- Inherit Defaults
- Location :
- Manage Access
- Named Users and Groups
- Owner
- Owner
- Owning Group
- Permissions
- Read
- Sticky Bit
{0} AAD token refresh failed, please reconnect to enable {0}
Editor token refresh failed, autocompletion will be disabled until the editor is disconnected and reconnected
- User
- User or Group Icon
- Write
- Please connect to the Spark cluster before View {0} History.
- Get Application Id Failed. {0}
- Local file will be uploaded to HDFS.
- Local file {0} does not existed.
- No SQL Server Big Data Cluster found.
- Submitting job {0} ...
- Uploading file from local {0} to HDFS folder: {1}
- Spark History Url: {0}
- .......................... Submit Spark Job End ............................
- Spark Job Submission Failed. {0}
- The Spark Job has been submitted.
- Upload file to cluster Failed. {0}
- Upload file to cluster Succeeded!
- YarnUI Url: {0}
- This sample code loads the file into a data frame and shows the first 10 results.
An error occurred converting the SQL document to a Notebook. Error : {0}
An error occurred converting the Notebook document to SQL. Error : {0}
- Could not find the controller endpoint for this instance
- Notebooks
- Only .ipynb Notebooks are supported
- Stream operation canceled by the user
- Cancel operation?
- Cancel
Search Server Names
- $(sync~spin) {0}...
- Some missing properties in connectionInfo.options: {0}
- ConnectionInfo.options is undefined.
- ConnectionInfo is undefined.
- NOTICE: This file has been truncated at {0} for preview.
- The file has been truncated at {0} for preview.
- All Files
- Error on copying path: {0}
- Error on deleting files: {0}
- Enter directory name
- Upload
- Creating directory
- An unexpected error occurred while opening the Manage Access dialog: {0}
- Error on making directory: {0}
- Operation was canceled
- Are you sure you want to delete this file?
- Are you sure you want to delete this folder and its contents?
- Error on previewing file: {0}
- Generating preview
- Save operation was canceled
- Error on saving file: {0}
- Saving HDFS Files
- Upload operation was canceled
- Error uploading files: {0}
- Uploading files to HDFS
- Cannot delete a connection. Only subfolders and files can be deleted.
- Error: {0}
- HDFS
- Error notifying of node change: {0}
- Please provide the password to connect to HDFS:
- Please provide the username to connect to HDFS:
- Root
- Session for node {0} does not exist
- No
- Yes
- The selected server does not belong to a SQL Server Big Data Cluster
- Select other SQL Server
- Error Get File Path: {0}
- No SQL Server is selected.
- Please select SQL Server with Big Data Cluster.
- ADVANCED
- Reference Files
- Files to be placed in executor working directory. The file path needs to be an HDFS Path. Multiple paths should be split by semicolon(;)
- Reference Jars
- Jars to be placed in executor working directory. The Jar path needs to be an HDFS Path. Multiple paths should be split by semicolon (;)
- Reference py Files
- Py Files to be placed in executor working directory. The file path needs to be an HDFS Path. Multiple paths should be split by semicolon(;)
- Configuration Values
- List of name value pairs containing Spark configuration values. Encoded as JSON dictionary. Example: '{"name":"value", "name2":"value2"}'.
- Driver Cores
- Amount of CPU cores to allocate to the driver.
- Driver Memory
- Amount of memory to allocate to the driver. Specify units as part of value. Example 512M or 2G.
- Executor Cores
- Amount of CPU cores to allocate to the executor.
- Executor Count
- Number of instances of the executor to run.
- Executor Memory
- Amount of memory to allocate to the executor. Specify units as part of value. Example 512M or 2G.
- Queue Name
- Name of the Spark queue to execute the session in.
- Arguments
- Command line arguments used in your main class, multiple arguments should be split by space.
- Path to a .jar or .py file
- GENERAL
- The specified HDFS file does not exist.
- {0} does not exist in Cluster or exception thrown.
- Job Name
- Enter a name ...
- The selected local file will be uploaded to HDFS: {0}
- Main Class
- JAR/py File
- Property JAR/py File is not specified.
- Property Job Name is not specified.
- Property Main Class is not specified.
- Error in locating the file due to Error: {0}
- Spark Cluster
- Select
- Cancel
- Submit
- New Job
- Parameters for SparkJobSubmissionDialog is illegal
- .......................... Submit Spark Job Start ..........................
- {0} Spark Job Submission:
- Get Application Id time out. {0}[Log] {1}
- livyBatchId is invalid.
- Property Path is not specified.
- Parameters for SparkJobSubmissionModel is illegal
- Property localFilePath or hdfsFolderPath is not specified.
- submissionArgs is invalid.
- No Spark job batch id is returned from response.{0}[Error] {1}
- No log is returned within response.{0}[Error] {1}
- Error: {0}.
- Please provide the password to connect to the BDC Controller
- {0}Please provide the username to connect to the BDC Controller:
- Username and password are required
@@ -1108,33 +544,6 @@
[Optional] Log level for backend services. Azure Data Studio generates a file name every time it starts and if the file already exists the logs entries are appended to that file. For cleanup of old log files see logRetentionMinutes and logFilesRemovalLimit settings. The default tracingLevel does not log much. Changing verbosity could lead to extensive logging and disk space requirements for the logs. Error includes Critical, Warning includes Error, Information includes Warning and Verbose includes Information
- Copy Path
- Delete
- Manage Access
- New directory
- Preview
- Save
- Upload files
- New Notebook
- Open Notebook
Name
@@ -1165,57 +574,20 @@
Version
- Tasks and information about your SQL Server Big Data Cluster
- SQL Server Big Data Cluster
- Notebooks
Search: Clear Search Server Results
- Configure Python for Notebooks
Design
- Service Endpoints
- Install Packages
- New Spark Job
New Table
- Cluster Dashboard
- View Spark History
- View Yarn History
Search: Servers
Show Log File
- Submit Spark Job
- Tasks
\ No newline at end of file
diff --git a/resources/xlf/en/notebook.xlf b/resources/xlf/en/notebook.xlf
index fa18660d26..5e959ee191 100644
--- a/resources/xlf/en/notebook.xlf
+++ b/resources/xlf/en/notebook.xlf
@@ -196,9 +196,6 @@
New Section (Preview)
- Spark kernels require a connection to a SQL Server Big Data Cluster master instance.
No Jupyter Books are currently selected in the viewlet.
@@ -232,9 +229,6 @@
Open untitled notebook {0} as untitled failed: {1}
- Non-MSSQL providers are not supported for spark kernels.
Failed to read Jupyter Book {0}: {1}
@@ -270,9 +264,6 @@
- This sample code loads the file into a data frame and shows the first 10 results.
No notebook editor is active
@@ -572,24 +563,9 @@
- Error: {0}.
- A connection to the cluster controller is required to run Spark jobs
Cannot start a session, the manager is not yet initialized
- Could not find Knox gateway endpoint
- Please provide the password to connect to the BDC Controller
- {0}Please provide the username to connect to the BDC Controller:
diff --git a/resources/xlf/en/resource-deployment.xlf b/resources/xlf/en/resource-deployment.xlf
index 7f60593495..a9e88f2228 100644
--- a/resources/xlf/en/resource-deployment.xlf
+++ b/resources/xlf/en/resource-deployment.xlf
@@ -260,44 +260,6 @@
updating your brew repository for azure-cli installation …
- adding the azdata repository information …
- getting packages needed for azdata installation …
- updating repository information …
- deleting previously downloaded Azdata.msi if one exists …
- displaying the installation log …
- downloading and installing the signing key for azdata …
- downloading Azdata.msi and installing azdata-cli …
- installing azdata …
- tapping into the brew repository for azdata-cli …
- updating the brew repository for azdata-cli installation …
- Azure Data command line interface
- Azure Data CLI
@@ -735,665 +697,6 @@
Select a different subscription containing at least one server
Select a valid virtual machine size.
- Deploy SQL Server 2019 Big Data Cluster on an existing AKS cluster
- Deploy SQL Server 2019 Big Data Cluster on an existing Azure Red Hat OpenShift cluster
- Deploy SQL Server 2019 Big Data Cluster on an existing kubeadm cluster
- Deploy SQL Server 2019 Big Data Cluster on an existing OpenShift cluster
- Deploy SQL Server 2019 Big Data Cluster on a new AKS cluster
- Config files saved to {0}
- Save config files
- Script to Notebook
- Save config files
- AKS cluster name
- View available Azure locations
- Configure the settings to create an Azure Kubernetes Service cluster
- Azure settings
- Location
- Please fill out the required fields marked with red asterisks.
- New resource group name
- The default subscription will be used if you leave this field blank.
- Subscription id
- View available Azure subscriptions
- Use my default Azure subscription
- VM count
- VM size
- View available VM sizes
- Account prefix
- A unique prefix for AD accounts SQL Server Big Data Cluster will generate. If not provided, the subdomain name will be used as the default value. If a subdomain is not provided, the cluster name will be used as the default value.
- Active Directory settings
- Password
- This password can be used to access the controller, SQL Server and gateway.
- Password
- Admin username
- This username will be used for controller and SQL Server. Username for the gateway will be root.
- App owners
- The Active Directory users or groups with app owners role. Use comma to separate multiple users/groups.
- Use comma to separate the values.
- App readers
- The Active Directory users or groups of app readers. Use comma as separator them if there are multiple users/groups.
- Use comma to separate the values.
- Authentication mode
- Active Directory
- Basic
- Cluster admin group
- The Active Directory group for cluster admin.
- Cluster name
- The cluster name must consist only of alphanumeric lowercase characters or '-' and must start and end with an alphanumeric character.
- Configure the SQL Server Big Data Cluster settings
- Cluster settings
- Cluster users
- The Active Directory users/groups with cluster users role. Use comma to separate multiple users/groups.
- Use comma to separate the values.
- Confirm password
- Image tag
- Password
- Registry
- Repository
- Docker settings
- Username
- Fully qualified domain names for the domain controller. For example: DC1.CONTOSO.COM. Use comma to separate multiple FQDNs.
- Domain controller FQDNs
- Use comma to separate the values.
- Domain DNS IP addresses
- Domain DNS servers' IP Addresses. Use comma to separate multiple IP addresses.
- Use comma to separate the values.
- Domain DNS name
- Service account password
- Service account username
- Domain service account for Big Data Cluster
- Organizational unit
- Distinguished name for the organizational unit. For example: OU=bdc,DC=contoso,DC=com.
- If not provided, the domain DNS name will be used as the default value.
- Subdomain
- A unique DNS subdomain to use for this SQL Server Big Data Cluster. If not provided, the cluster name will be used as the default value.
- Note: The settings of the deployment profile can be customized in later steps.
- Please select a deployment profile.
- Service
- Storage type
- Active Directory authentication
- Basic authentication
- Compute
- Data
- Data
- Features
- Feature
- High Availability
- HDFS + Spark
- Failed to load the deployment profiles: {0}
- Loading profiles
- Loading profiles completed
- Logs
- SQL Server Master
- No
- Deployment configuration profile
- Service scale settings (Instances)
- Service storage settings (GB per Instance)
- Select the target configuration profile
- Deployment configuration profile
- Yes
- By default Controller storage settings will be applied to other services as well, you can expand the advanced storage settings to configure storage for other services.
- Application proxy DNS name
- Application proxy port
- Application proxy
- Compute pool instances
- Controller DNS name
- Controller port
- Controller
- DNS name
- Claim size for data (GB)
- Data pool
- Data pool instances
- Storage class for data
- Endpoint settings
- Gateway DNS name
- Gateway port
- Gateway
- Include Spark in storage pool
- Storage class for logs
- Claim size for logs (GB)
- SQL Server Master DNS name
- SQL Server Master port
- SQL Server master instances
- SQL Server Master
- Port
- Readable secondary DNS name
- Readable secondary port
- Readable secondary
- Service name
- Management proxy DNS name
- Management proxy port
- Management proxy
- Service settings
- Invalid Spark configuration, you must check the 'Include Spark' checkbox or set the 'Spark pool instances' to at least 1.
- Spark pool instances
- Storage pool (HDFS)
- Storage pool (HDFS) instances
- Storage settings
- Storage settings
- Controller's data storage claim size (Gigabytes)
- Controller's data storage class
- Controller's logs storage claim size (Gigabytes)
- Controller's logs storage class
- Data pool's data storage claim size (Gigabytes)
- Data pool's data storage class
- Data pool's logs storage claim size (Gigabytes)
- Data pool's logs storage class
- Scale settings
- SQL Server master's data storage claim size (Gigabytes)
- SQL Server master's data storage class
- SQL Server master's logs storage claim size (Gigabytes)
- SQL Server master's logs storage class
- Use controller settings
- Storage pool's data storage claim size (Gigabytes)
- Storage pool's data storage class
- Storage pool's logs storage claim size (Gigabytes)
- Storage pool's logs storage class
- Account prefix
- AKS cluster name
- App owners
- App readers
- Application proxy
- Authentication mode
- Active Directory
- Basic
- Azure settings
- Cluster admin group
- Cluster context
- Cluster name
- Cluster settings
- Cluster users
- Compute pool instances
- Controller
- Controller username
- Claim size for data (GB)
- Data pool instances
- Storage class for data
- Data
- Default Azure Subscription
- Deployment profile
- Deployment target
- Domain controller FQDNs
- Domain DNS IP addresses
- Domain DNS name
- Service account username
- Endpoint settings
- Gateway
- Kube config
- Location
- Storage class for logs
- Claim size for logs (GB)
- SQL Server master instances
- SQL Server Master
- Organizational unit
- Readable secondary
- Resource group
- Scale settings
- Service
- Management proxy
- Spark pool instances
- SQL Server Master
- Storage pool (HDFS)
- Storage pool (HDFS) instances
- Storage settings
- Subdomain
- Subscription id
- VM count
- VM size
- (Spark included)
- Summary
- Please select a cluster context.
- Failed to load the config file
- Select the kube config file and then select a cluster context from the list
- Target cluster context
- Browse
- Cluster Contexts
- No cluster information is found in the config file or an error ocurred while loading the config file
- Kube config file path
- Select
diff --git a/resources/xlf/en/sql.xlf b/resources/xlf/en/sql.xlf
index 8d22debda4..ff4a52646e 100644
--- a/resources/xlf/en/sql.xlf
+++ b/resources/xlf/en/sql.xlf
@@ -5948,13 +5948,13 @@
Error: {1}
- Optional execution target this magic indicates, for example Spark vs SQL
+ Optional execution target this magic indicates, for example Python vs SQL
What file extensions should be registered to this notebook provider
- Optional set of kernels this is valid for, e.g. python3, pyspark, sql
+ Optional set of kernels this is valid for, e.g. python3, sql
The cell language to be used if this cell magic is included in the cell