Associate schemas to JSON files in the current project
A URL to a schema or a relative path to a schema in the current directory
An array of file patterns to match against when resolving JSON files to schemas.
A file pattern that can contain '*' to match against when resolving JSON files to schemas.
The schema definition for the given URL. The schema only needs to be provided to avoid accesses to the schema URL.
Enable/disable default JSON formatter (requires restart)
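The six strings above describe the json.schemas setting. A minimal sketch of an association in settings.json, assuming a hypothetical myconfig.json file and schema path:

```jsonc
{
  "json.schemas": [
    {
      // File patterns to match when resolving JSON files to schemas;
      // a pattern can contain '*'.
      "fileMatch": ["myconfig.json", "*.myconfig.json"],
      // A URL to a schema, or a relative path to a schema in the current directory.
      "url": "./schemas/myconfig.schema.json"
    },
    {
      "fileMatch": ["inline.myconfig.json"],
      // The schema definition for the given URL; providing it inline
      // avoids accesses to the schema URL.
      "schema": {
        "type": "object",
        "properties": { "name": { "type": "string" } }
      }
    }
  ]
}
```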
Upload files
New directory
Delete
Preview
Save
Copy Path
Manage Access
New Notebook
Open Notebook

Tasks and information about your SQL Server Big Data Cluster
SQL Server Big Data Cluster
Submit Spark Job
New Spark Job
View Spark History
View Yarn History
Tasks
Install Packages
Configure Python for Notebooks
Cluster Dashboard
Search: Servers
Search: Clear Search Server Results
Service Endpoints
Notebooks
Show Log File
Disabled
Enabled
Export Notebook as SQL
Export SQL as Notebook

MSSQL configuration
Should BIT columns be displayed as numbers (1 or 0)? If false, BIT columns will be displayed as 'true' or 'false'
Number of XML characters to store after running a query
Should column definitions be aligned?
Should data types be formatted as UPPERCASE, lowercase, or none (not formatted)
Should keywords be formatted as UPPERCASE, lowercase, or none (not formatted)
Should commas be placed at the beginning of each statement in a list, e.g. ', mycolumn2', instead of at the end, e.g. 'mycolumn1,'?
Should references to objects in a SELECT statement be split onto separate lines? E.g. for 'SELECT C1, C2 FROM T1', both C1 and C2 will be on separate lines
[Optional] Log debug output to the console (View -> Output), then select the appropriate output channel from the dropdown
[Optional] Log level for backend services. Azure Data Studio generates a file name every time it starts, and if the file already exists the log entries are appended to that file. For cleanup of old log files, see the logRetentionMinutes and logFilesRemovalLimit settings. The default tracingLevel does not log much; increasing the verbosity can lead to extensive logging and disk space requirements for the logs. Error includes Critical, Warning includes Error, Information includes Warning, and Verbose includes Information
Number of minutes to retain log files for backend services. Default is 1 week.
Maximum number of old log files to remove upon startup that have exceeded mssql.logRetentionMinutes. Files that are not cleaned up due to this limit are cleaned up the next time Azure Data Studio starts.
Should IntelliSense be enabled
Should IntelliSense error checking be enabled
Should IntelliSense suggestions be enabled
Should IntelliSense quick info be enabled
Should IntelliSense suggestions be lowercase
Maximum number of rows to return before the server stops processing your query.
Maximum size of text and ntext data returned from a SELECT statement
An execution time-out of 0 indicates an unlimited wait (no time-out)
Enable SET NOCOUNT option
Enable SET NOEXEC option
Enable SET PARSEONLY option
Enable SET ARITHABORT option
Enable SET STATISTICS TIME option
Enable SET STATISTICS IO option
Enable SET XACT_ABORT ON option
Enable SET TRANSACTION ISOLATION LEVEL option
Enable SET DEADLOCK_PRIORITY option
Enable SET LOCK TIMEOUT option (in milliseconds)
Enable SET QUERY_GOVERNOR_COST_LIMIT
Enable SET ANSI_DEFAULTS
Enable SET QUOTED_IDENTIFIER
Enable SET ANSI_NULL_DFLT_ON
Enable SET IMPLICIT_TRANSACTIONS
Enable SET CURSOR_CLOSE_ON_COMMIT
Enable SET ANSI_PADDING
Enable SET ANSI_WARNINGS
Enable SET ANSI_NULLS
Enable Parameterization for Always Encrypted
[Optional] Do not show unsupported platform warnings
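The descriptions above correspond to mssql.* configuration entries. A hedged settings.json sketch of a few of them; the key names below are recalled from the extension's package manifest and may differ between versions, so treat them as assumptions:

```jsonc
{
  // Formatter behavior
  "mssql.format.alignColumnDefinitionsInColumns": false,
  "mssql.format.datatypeCasing": "uppercase",    // "uppercase" | "lowercase" | "none"
  "mssql.format.keywordCasing": "uppercase",     // "uppercase" | "lowercase" | "none"
  "mssql.format.placeCommasBeforeNextStatement": false,
  "mssql.format.placeSelectStatementReferencesOnNewLine": false,

  // Backend service logging
  "mssql.logDebugInfo": false,
  "mssql.tracingLevel": "Critical",      // more verbose levels log much more
  "mssql.logRetentionMinutes": 10080,    // default: one week
  "mssql.logFilesRemovalLimit": 100,

  // IntelliSense
  "mssql.intelliSense.enableIntelliSense": true,
  "mssql.intelliSense.enableErrorChecking": true,
  "mssql.intelliSense.enableSuggestions": true,
  "mssql.intelliSense.enableQuickInfo": true,
  "mssql.intelliSense.lowerCaseSuggestions": false,

  // Query execution (SET options)
  "mssql.query.rowCount": 0,
  "mssql.query.textSize": 2147483647,
  "mssql.query.executionTimeout": 0,
  "mssql.query.noCount": false
}
```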
Recovery Model
Last Database Backup
Last Log Backup
Compatibility Level
Owner
Version
Edition
Computer Name
OS Version
Edition
Pricing Tier
Compatibility Level
Owner
Version
Type
Microsoft SQL Server

Name (optional): Custom name of the connection
Server: Name of the SQL Server instance
Database: The name of the initial catalog or database in the data source
Authentication type: Specifies the method of authenticating with SQL Server
SQL Login
Windows Authentication
Azure Active Directory - Universal with MFA support
User name: Indicates the user ID to be used when connecting to the data source
Password: Indicates the password to be used when connecting to the data source
Application intent: Declares the application workload type when connecting to a server
Asynchronous processing: When true, enables usage of the asynchronous functionality in the .Net Framework Data Provider
Connect timeout: The length of time (in seconds) to wait for a connection to the server before terminating the attempt and generating an error
Current language: The SQL Server language record name
Always Encrypted: Enables or disables Always Encrypted for the connection
Attestation Protocol: Specifies a protocol for attesting a server-side enclave used with Always Encrypted with secure enclaves
Azure Attestation
Host Guardian Service
Enclave Attestation URL: Specifies an endpoint for attesting a server-side enclave used with Always Encrypted with secure enclaves
Encrypt: When true, SQL Server uses SSL encryption for all data sent between the client and server if the server has a certificate installed
Persist security info: When false, security-sensitive information, such as the password, is not returned as part of the connection
Trust server certificate: When true (and Encrypt=true), SQL Server uses SSL encryption for all data sent between the client and server without validating the server certificate
Attached DB file name: The name of the primary file, including the full path name, of an attachable database
Context connection: When true, indicates the connection should be from the SQL Server context. Available only when running in the SQL Server process
Port
Connect retry count: Number of attempts to restore the connection
Connect retry interval: Delay between attempts to restore the connection
Application name: The name of the application
Workstation Id: The name of the workstation connecting to SQL Server
Pooling: When true, the connection object is drawn from the appropriate pool or, if necessary, is created and added to the appropriate pool
Max pool size: The maximum number of connections allowed in the pool
Min pool size: The minimum number of connections allowed in the pool
Load balance timeout: The minimum amount of time (in seconds) for this connection to live in the pool before being destroyed
Replication: Used by SQL Server in replication
Attach DB filename
Failover partner: The name or network address of the instance of SQL Server that acts as a failover partner
Multi subnet failover
Multiple active result sets: When true, multiple result sets can be returned and read from one connection
Packet size: Size in bytes of the network packets used to communicate with an instance of SQL Server
Type system version: Indicates which server type system the provider will expose through the DataReader
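The connection properties above map onto ADO.NET/SqlClient connection-string keywords. A hedged sketch combining several of them (server, database, credentials, and the attestation URL are placeholders; line breaks are only for readability, and keyword support varies by driver version):

```
Data Source=myserver,1433;Initial Catalog=mydb;
User ID=myuser;Password=...;
Encrypt=true;TrustServerCertificate=false;
Connect Timeout=30;ConnectRetryCount=1;ConnectRetryInterval=10;
Application Name=my-app;ApplicationIntent=ReadOnly;
Pooling=true;Min Pool Size=0;Max Pool Size=100;Load Balance Timeout=0;
MultipleActiveResultSets=true;Packet Size=8000;
Column Encryption Setting=Enabled;
Attestation Protocol=AAS;Enclave Attestation Url=https://contoso.attest.example/attest
```

The Attestation Protocol values correspond to the two strings above: AAS for Azure Attestation and HGS for Host Guardian Service.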
Name
Status
Size (MB)
Last backup
Name

Node Command called without any node passed
Manage Access
Location :
Permissions - Owner
Owner
Group - Owning Group
Everyone else
User
Group
Access
Default
Delete
Sticky Bit
Inherit Defaults
Read
Write
Execute
Add User or Group
Enter name
Add
Named Users and Groups
Default User and Groups
User or Group Icon
Apply
Apply Recursively
Unexpected error occurred while applying changes: {0}

Local file will be uploaded to HDFS.
.......................... Submit Spark Job End ............................
Uploading file from local {0} to HDFS folder: {1}
Upload file to cluster succeeded!
Upload file to cluster failed. {0}
Submitting job {0} ...
The Spark job has been submitted.
Spark job submission failed. {0}
YarnUI Url: {0}
Spark History Url: {0}
Get Application Id failed. {0}
Local file {0} does not exist.
No SQL Server Big Data Cluster found.
Please connect to the Spark cluster before viewing {0} history.
NOTICE: This file has been truncated at {0} for preview.
The file has been truncated at {0} for preview.
$(sync~spin) {0}...
Cancel
Cancel operation?
Search Server Names
No Spark job batch id is returned from the response.{0}[Error] {1}
No log is returned within the response.{0}[Error] {1}
{0}
Please provide the username to connect to the BDC Controller:
Please provide the password to connect to the BDC Controller
Error: {0}. Username and password are required
All Files
Upload
Uploading files to HDFS
Upload operation was canceled
Error uploading files: {0}
Creating directory
Operation was canceled
Error creating directory: {0}
Enter directory name
Error deleting files: {0}
Are you sure you want to delete this folder and its contents?
Are you sure you want to delete this file?
Saving HDFS files
Save operation was canceled
Error saving file: {0}
Generating preview
Error previewing file: {0}
Error copying path: {0}
An unexpected error occurred while opening the Manage Access dialog: {0}
Invalid Data Structure
Unable to create WebHDFS client due to missing options: ${0}
'${0}' is undefined.
Bad Request
Unauthorized
Forbidden
Not Found
Internal Server Error
Unknown Error
Unexpected Redirect
ConnectionInfo is undefined.
ConnectionInfo.options is undefined.
Missing properties in connectionInfo.options: {0}
View Known Issues
{0} component exited unexpectedly. Please restart Azure Data Studio.
This sample code loads the file into a data frame and shows the first 10 results.
An error occurred converting the SQL document to a Notebook. Error: {0}
An error occurred converting the Notebook document to SQL. Error: {0}
Notebooks
Only .ipynb notebooks are supported
Could not find the controller endpoint for this instance
Applying permission changes recursively under '{0}'
Permission changes applied successfully.
Applying permission changes to '{0}'.
Error applying permission changes: {0}
Yes
No
Select other SQL Server
Please select a SQL Server with a Big Data Cluster.
No SQL Server is selected.
The selected server does not belong to a SQL Server Big Data Cluster
Error getting file path: {0}
Parameters for SparkJobSubmissionDialog are invalid
New Job
Cancel
Submit
{0} Spark Job Submission:
.......................... Submit Spark Job Start ..........................
Parameters for SparkJobSubmissionModel are invalid
submissionArgs is invalid.
livyBatchId is invalid.
Get Application Id timed out. {0}[Log] {1}
Property localFilePath or hdfsFolderPath is not specified.
Property Path is not specified.

GENERAL
Enter a name ...
Job Name
Spark Cluster
Path to a .jar or .py file
The selected local file will be uploaded to HDFS: {0}
JAR/py File
Main Class
Arguments
Command-line arguments passed to your main class; multiple arguments should be separated by spaces.
Property Job Name is not specified.
Property JAR/py File is not specified.
Property Main Class is not specified.
{0} does not exist in the cluster, or an exception was thrown.
The specified HDFS file does not exist.
Select
Error locating the file: {0}

ADVANCED
Reference Jars: Jars to be placed in the executor working directory. Each Jar path needs to be an HDFS path; multiple paths should be separated by semicolons (;)
Reference py Files: Py files to be placed in the executor working directory. Each file path needs to be an HDFS path; multiple paths should be separated by semicolons (;)
Reference Files: Files to be placed in the executor working directory. Each file path needs to be an HDFS path; multiple paths should be separated by semicolons (;)
Driver Memory: Amount of memory to allocate to the driver. Specify units as part of the value, for example 512M or 2G.
Driver Cores: Number of CPU cores to allocate to the driver.
Executor Memory: Amount of memory to allocate to each executor. Specify units as part of the value, for example 512M or 2G.
Executor Cores: Number of CPU cores to allocate to each executor.
Executor Count: Number of executor instances to run.
Queue Name: Name of the Spark queue to execute the session in.
Configuration Values: List of name-value pairs containing Spark configuration values, encoded as a JSON dictionary. Example: '{"name":"value", "name2":"value2"}'.
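The dialog fields above, and the livyBatchId mentioned in the error strings, line up with the Apache Livy batch API that SQL Server Big Data Clusters expose for Spark job submission. A hedged sketch of a POST /batches request body assembled from those fields; all names and paths here are invented for the example, and the exact request the extension composes may differ:

```json
{
  "name": "my-spark-job",
  "file": "hdfs:///user/me/jobs/my-job.jar",
  "className": "com.example.Main",
  "args": ["--input", "/data/in", "--output", "/data/out"],
  "jars": ["hdfs:///user/me/libs/dep1.jar", "hdfs:///user/me/libs/dep2.jar"],
  "pyFiles": [],
  "files": [],
  "driverMemory": "2G",
  "driverCores": 1,
  "executorMemory": "512M",
  "executorCores": 2,
  "numExecutors": 2,
  "queue": "default",
  "conf": { "spark.logConf": "true" }
}
```

A successful submission returns a batch id, which is what the "No Spark job batch id is returned" and "livyBatchId is invalid" messages above refer to.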
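Separately, the string "This sample code loads the file into a data frame and shows the first 10 results" describes the notebook cell generated for an HDFS file. A hypothetical PySpark sketch of such a cell; the path is an example, and spark is the ambient SparkSession provided by the notebook kernel:

```python
# Hypothetical sketch: read an HDFS file into a Spark data frame
# and show the first 10 rows. 'spark' is the session provided by the kernel.
df = spark.read.csv('/tmp/sample.csv', header=True, inferSchema=True)
df.show(10)
```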
Please provide the username to connect to HDFS:
Please provide the password to connect to HDFS:
Session for node {0} does not exist
Error notifying of node change: {0}
HDFS
Root
Error: {0}
Cannot delete a connection. Only subfolders and files can be deleted.
Stream operation canceled by the user

Metrics Dashboard
Log Search Dashboard
Spark Jobs Management and Monitoring Dashboard
Spark Diagnostics and Monitoring Dashboard
Copy
Application Proxy
Cluster Management Service
Gateway to access HDFS files, Spark
Management Proxy
Management Proxy
SQL Server Master Instance Front-End
Metrics Dashboard
Log Search Dashboard
Spark Diagnostics and Monitoring Dashboard
Spark Jobs Management and Monitoring Dashboard
HDFS File System Proxy
Proxy for running Spark statements, jobs, and applications

{0} Started
Starting {0}
Failed to start {0}
Installing {0} to {1}
Installing {0}
Installed {0}
Downloading {0}
({0} KB)
Downloading {0}
Done installing {0}
Extracted {0} ({1}/{2})

Azure Data Studio needs to contact Azure Key Vault to access a column master key for Always Encrypted, but no linked Azure account is available. Please add a linked Azure account and retry the query.
Please select a linked Azure account:
Azure Data Studio needs to contact Azure Key Vault to access a column master key for Always Encrypted, but no linked Azure account was selected. Please retry the query and select a linked Azure account when prompted.
The configured Azure account for {0} does not have sufficient permissions for Azure Key Vault to access a column master key for Always Encrypted.