Associate schemas to JSON files in the current project A URL to a schema or a relative path to a schema in the current directory An array of file patterns to match against when resolving JSON files to schemas. A file pattern that can contain '*' to match against when resolving JSON files to schemas. The schema definition for the given URL. The schema only needs to be provided to avoid fetching the schema from the URL. Enable/disable default JSON formatter (requires restart) Upload files New directory Delete Preview Save Copy Path Manage Access New Notebook Open Notebook Tasks and information about your SQL Server Big Data Cluster SQL Server Big Data Cluster Submit Spark Job New Spark Job View Spark History View Yarn History Tasks Install Packages Configure Python for Notebooks Cluster Dashboard Search: Servers Search: Clear Search Server Results Service Endpoints Notebooks Show Log File Disabled Enabled Export Notebook as SQL Export SQL as Notebook MSSQL configuration Should BIT columns be displayed as numbers (1 or 0)? If false, BIT columns will be displayed as 'true' or 'false' Number of XML characters to store after running a query Should column definitions be aligned? Should data types be formatted as UPPERCASE, lowercase, or none (not formatted) Should keywords be formatted as UPPERCASE, lowercase, or none (not formatted) Should commas be placed at the beginning of each item in a list, e.g. ', mycolumn2' instead of at the end, e.g. 'mycolumn1,' Should references to objects in a SELECT statement be split into separate lines? E.g. for 'SELECT C1, C2 FROM T1' both C1 and C2 will be on separate lines [Optional] Log debug output to the console (View -> Output), then select the appropriate output channel from the dropdown [Optional] Log level for backend services. Azure Data Studio generates a file name every time it starts, and if the file already exists the log entries are appended to that file. 
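The schema-association strings above describe the `json.schemas` user setting, which maps file patterns to schema URLs or relative paths. A minimal sketch of such an association (the file names and schema paths here are illustrative, not taken from the source):

```json
{
  "json.schemas": [
    {
      "fileMatch": ["/*.build.json"],
      "url": "./myschema.json"
    },
    {
      "fileMatch": ["/.babelrc"],
      "url": "http://json.schemastore.org/babelrc"
    }
  ]
}
```

The first entry resolves against a schema file in the current directory; the second fetches a schema from a URL unless an inline `schema` definition is provided to avoid that request.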
For cleanup of old log files see the logRetentionMinutes and logFilesRemovalLimit settings. The default tracingLevel does not log much. Increasing verbosity could lead to extensive logging and disk space requirements for the logs. Error includes Critical, Warning includes Error, Information includes Warning, and Verbose includes Information Number of minutes to retain log files for backend services. Default is 1 week. Maximum number of old files to remove upon startup that have exceeded mssql.logRetentionMinutes. Files that do not get cleaned up due to this limit get cleaned up the next time Azure Data Studio starts up. Should IntelliSense be enabled Should IntelliSense error checking be enabled Should IntelliSense suggestions be enabled Should IntelliSense quick info be enabled Should IntelliSense suggestions be lowercase Maximum number of rows to return before the server stops processing your query. Maximum size of text and ntext data returned from a SELECT statement An execution time-out of 0 indicates an unlimited wait (no time-out) Enable SET NOCOUNT option Enable SET NOEXEC option Enable SET PARSEONLY option Enable SET ARITHABORT option Enable SET STATISTICS TIME option Enable SET STATISTICS IO option Enable SET XACT_ABORT ON option Enable SET TRANSACTION ISOLATION LEVEL option Enable SET DEADLOCK_PRIORITY option Enable SET LOCK_TIMEOUT option (in milliseconds) Enable SET QUERY_GOVERNOR_COST_LIMIT Enable SET ANSI_DEFAULTS Enable SET QUOTED_IDENTIFIER Enable SET ANSI_NULL_DFLT_ON Enable SET IMPLICIT_TRANSACTIONS Enable SET CURSOR_CLOSE_ON_COMMIT Enable SET ANSI_PADDING Enable SET ANSI_WARNINGS Enable SET ANSI_NULLS Enable Parameterization for Always Encrypted [Optional] Do not show unsupported platform warnings Recovery Model Last Database Backup Last Log Backup Compatibility Level Owner Version Edition Computer Name OS Version Edition Pricing Tier Compatibility Level Owner Version Type Microsoft SQL Server Name (optional) Custom name of the connection Server Name of 
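The logging strings above correspond to user-configurable mssql settings. A sketch of what the relevant entries might look like in a settings file (the source names mssql.logRetentionMinutes, logFilesRemovalLimit, and tracingLevel; the exact keys for the other values are assumptions for illustration):

```json
{
  "mssql.tracingLevel": "Critical",
  "mssql.logRetentionMinutes": 10080,
  "mssql.logFilesRemovalLimit": 100,
  "mssql.logDebugInfo": false
}
```

Here 10080 minutes reflects the stated one-week default retention; each tracing level is cumulative, so "Verbose" would capture everything down through "Information", "Warning", "Error", and "Critical".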
the SQL Server instance Database The name of the initial catalog or database in the data source Authentication type Specifies the method of authenticating with SQL Server SQL Login Windows Authentication Azure Active Directory - Universal with MFA support User name Indicates the user ID to be used when connecting to the data source Password Indicates the password to be used when connecting to the data source Application intent Declares the application workload type when connecting to a server Asynchronous processing When true, enables usage of the asynchronous functionality in the .NET Framework Data Provider Connect timeout The length of time (in seconds) to wait for a connection to the server before terminating the attempt and generating an error Current language The SQL Server language record name Always Encrypted Enables or disables Always Encrypted for the connection Attestation Protocol Specifies a protocol for attesting a server-side enclave used with Always Encrypted with secure enclaves Azure Attestation Host Guardian Service Enclave Attestation URL Specifies an endpoint for attesting a server-side enclave used with Always Encrypted with secure enclaves Encrypt When true, SQL Server uses SSL encryption for all data sent between the client and server if the server has a certificate installed Persist security info When false, security-sensitive information, such as the password, is not returned as part of the connection Trust server certificate When true (and encrypt=true), SQL Server uses SSL encryption for all data sent between the client and server without validating the server certificate Attached DB file name The name of the primary file, including the full path name, of an attachable database Context connection When true, indicates the connection should be from the SQL Server context. 
Available only when running in the SQL Server process Port Connect retry count Number of attempts to restore connection Connect retry interval Delay between attempts to restore connection Application name The name of the application Workstation Id The name of the workstation connecting to SQL Server Pooling When true, the connection object is drawn from the appropriate pool, or if necessary, is created and added to the appropriate pool Max pool size The maximum number of connections allowed in the pool Min pool size The minimum number of connections allowed in the pool Load balance timeout The minimum amount of time (in seconds) for this connection to live in the pool before being destroyed Replication Used by SQL Server in Replication Attach DB filename Failover partner The name or network address of the instance of SQL Server that acts as a failover partner Multi subnet failover Multiple active result sets When true, multiple result sets can be returned and read from one connection Packet size Size in bytes of the network packets used to communicate with an instance of SQL Server Type system version Indicates which server type system the provider will expose through the DataReader Name Status Size (MB) Last backup Name Node Command called without any node passed Manage Access Location : Permissions - Owner Owner Group - Owning Group Everyone else User Group Access Default Delete Sticky Bit Inherit Defaults Read Write Execute Add User or Group Enter name Add Named Users and Groups Default User and Groups User or Group Icon Apply Apply Recursively Unexpected error occurred while applying changes : {0} Local file will be uploaded to HDFS. .......................... Submit Spark Job End ............................ Uploading file from local {0} to HDFS folder: {1} Upload file to cluster Succeeded! Upload file to cluster Failed. {0} Submitting job {0} ... The Spark Job has been submitted. Spark Job Submission Failed. 
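The connection-property strings above (server, database, authentication type, encrypt, pooling, and so on) describe the fields of a connection profile. A sketch of how such a profile might be expressed, assuming illustrative key names and values not taken from the source:

```json
{
  "server": "tcp:myserver,1433",
  "database": "AdventureWorks",
  "authenticationType": "SqlLogin",
  "user": "appuser",
  "applicationIntent": "ReadOnly",
  "connectTimeout": 15,
  "encrypt": true,
  "trustServerCertificate": false,
  "pooling": true,
  "maxPoolSize": 100,
  "minPoolSize": 0,
  "packetSize": 8192,
  "applicationName": "azdata-Connection"
}
```

Most of these map directly onto the equivalent SqlClient connection-string keywords; note that "Trust server certificate" only has an effect when encryption is enabled.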
{0} YarnUI Url: {0} Spark History Url: {0} Get Application Id Failed. {0} Local file {0} does not exist. No SQL Server Big Data Cluster found. Please connect to the Spark cluster before viewing {0} History. NOTICE: This file has been truncated at {0} for preview. The file has been truncated at {0} for preview. $(sync~spin) {0}... Cancel Cancel operation? Search Server Names No Spark job batch id is returned from the response.{0}[Error] {1} No log is returned in the response.{0}[Error] {1} {0}Please provide the username to connect to the BDC Controller: Please provide the password to connect to the BDC Controller Error: {0}. Username and password are required All Files Upload Uploading files to HDFS Upload operation was canceled Error uploading files: {0} Creating directory Operation was canceled Error creating directory: {0} Enter directory name Error deleting files: {0} Are you sure you want to delete this folder and its contents? Are you sure you want to delete this file? Saving HDFS Files Save operation was canceled Error saving file: {0} Generating preview Error previewing file: {0} Error copying path: {0} An unexpected error occurred while opening the Manage Access dialog: {0} Invalid Data Structure Unable to create WebHDFS client due to missing options: ${0} '${0}' is undefined. Bad Request Unauthorized Forbidden Not Found Internal Server Error Unknown Error Unexpected Redirect ConnectionInfo is undefined. ConnectionInfo.options is undefined. Some missing properties in connectionInfo.options: {0} View Known Issues {0} component exited unexpectedly. Please restart Azure Data Studio. This sample code loads the file into a data frame and shows the first 10 results. An error occurred converting the SQL document to a Notebook. Error : {0} An error occurred converting the Notebook document to SQL. 
Error : {0} Notebooks Only .ipynb Notebooks are supported Could not find the controller endpoint for this instance Applying permission changes recursively under '{0}' Permission changes applied successfully. Applying permission changes to '{0}'. Error applying permission changes: {0} Yes No Select other SQL Server Please select a SQL Server with Big Data Cluster. No SQL Server is selected. The selected server does not belong to a SQL Server Big Data Cluster Error Get File Path: {0} Parameters for SparkJobSubmissionDialog are invalid New Job Cancel Submit {0} Spark Job Submission: .......................... Submit Spark Job Start .......................... Parameters for SparkJobSubmissionModel are invalid submissionArgs is invalid. livyBatchId is invalid. Get Application Id timed out. {0}[Log] {1} Property localFilePath or hdfsFolderPath is not specified. Property Path is not specified. GENERAL Enter a name ... Job Name Spark Cluster Path to a .jar or .py file The selected local file will be uploaded to HDFS: {0} JAR/py File Main Class Arguments Command line arguments used in your main class; multiple arguments should be separated by spaces. Property Job Name is not specified. Property JAR/py File is not specified. Property Main Class is not specified. {0} does not exist in the cluster, or an exception was thrown. The specified HDFS file does not exist. Select Error locating the file: {0} ADVANCED Reference Jars Jars to be placed in the executor working directory. The Jar path needs to be an HDFS path. Multiple paths should be separated by semicolons (;) Reference py Files Py files to be placed in the executor working directory. The file path needs to be an HDFS path. Multiple paths should be separated by semicolons (;) Reference Files Files to be placed in the executor working directory. The file path needs to be an HDFS path. Multiple paths should be separated by semicolons (;) Driver Memory Amount of memory to allocate to the driver. Specify units as part of the value. Example: 512M or 2G. 
Driver Cores Number of CPU cores to allocate to the driver. Executor Memory Amount of memory to allocate to the executor. Specify units as part of the value. Example: 512M or 2G. Executor Cores Number of CPU cores to allocate to the executor. Executor Count Number of instances of the executor to run. Queue Name Name of the Spark queue to execute the session in. Configuration Values List of name-value pairs containing Spark configuration values, encoded as a JSON dictionary. Example: '{"name":"value", "name2":"value2"}'. Please provide the username to connect to HDFS: Please provide the password to connect to HDFS: Session for node {0} does not exist Error notifying of node change: {0} HDFS Root Error: {0} Cannot delete a connection. Only subfolders and files can be deleted. Stream operation canceled by the user Metrics Dashboard Log Search Dashboard Spark Jobs Management and Monitoring Dashboard Spark Diagnostics and Monitoring Dashboard Copy Application Proxy Cluster Management Service Gateway to access HDFS files, Spark Management Proxy Management Proxy SQL Server Master Instance Front-End Metrics Dashboard Log Search Dashboard Spark Diagnostics and Monitoring Dashboard Spark Jobs Management and Monitoring Dashboard HDFS File System Proxy Proxy for running Spark statements, jobs, applications {0} Started Starting {0} Failed to start {0} Installing {0} to {1} Installing {0} Installed {0} Downloading {0} ({1} KB) Downloading {0} Done installing {0} Extracted {0} ({1}/{2}) Azure Data Studio needs to contact Azure Key Vault to access a column master key for Always Encrypted, but no linked Azure account is available. Please add a linked Azure account and retry the query. Please select a linked Azure account: Azure Data Studio needs to contact Azure Key Vault to access a column master key for Always Encrypted, but no linked Azure account was selected. Please retry the query and select a linked Azure account when prompted. 
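The Spark job submission fields above (JAR/py file, main class, reference jars, driver/executor memory and cores, queue, configuration values) correspond closely to the body of an Apache Livy batch request. A sketch of what a submission might look like, with illustrative HDFS paths and values that are not taken from the source:

```json
{
  "file": "/user/hdfs/jobs/myjob.jar",
  "className": "com.example.MainClass",
  "args": ["arg1", "arg2"],
  "jars": ["/user/hdfs/libs/dep1.jar", "/user/hdfs/libs/dep2.jar"],
  "driverMemory": "2G",
  "driverCores": 1,
  "executorMemory": "512M",
  "executorCores": 2,
  "numExecutors": 2,
  "queue": "default",
  "conf": { "spark.logConf": "true" }
}
```

On success the response carries the Livy batch id (the livyBatchId referenced in the error strings), which is then used to poll for the YARN application id and the log output.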
The configured Azure account for {0} does not have sufficient permissions for Azure Key Vault to access a column master key for Always Encrypted.