diff --git a/resources/xlf/en/mssql.xlf b/resources/xlf/en/mssql.xlf
index 0f9a976afb..80aed39300 100644
--- a/resources/xlf/en/mssql.xlf
+++ b/resources/xlf/en/mssql.xlf
@@ -1,58 +1,5 @@
-
-
- Copy
-
-
- Application Proxy
-
-
- Cluster Management Service
-
-
- Gateway to access HDFS files, Spark
-
-
- Metrics Dashboard
-
-
- Log Search Dashboard
-
-
- Proxy for running Spark statements, jobs, applications
-
-
- Management Proxy
-
-
- Management Proxy
-
-
- Spark Jobs Management and Monitoring Dashboard
-
-
- SQL Server Master Instance Front-End
-
-
- HDFS File System Proxy
-
-
- Spark Diagnostics and Monitoring Dashboard
-
-
- Metrics Dashboard
-
-
- Log Search Dashboard
-
-
- Spark Jobs Management and Monitoring Dashboard
-
-
- Spark Diagnostics and Monitoring Dashboard
-
-Azure Data Studio needs to contact Azure Key Vault to access a column master key for Always Encrypted, but no linked Azure account was selected. Please retry the query and select a linked Azure account when prompted.
@@ -66,542 +13,31 @@
Azure Data Studio needs to contact Azure Key Vault to access a column master key for Always Encrypted, but no linked Azure account is available. Please add a linked Azure account and retry the query.
-
-
-
- Error applying permission changes: {0}
-
-
- Applying permission changes to '{0}'.
-
-
- Applying permission changes recursively under '{0}'
-
-
- Permission changes applied successfully.
-
-
-
-
- Bad Request
-
-
- Unauthorized
-
-
- Forbidden
-
-
- Not Found
-
-
- Internal Server Error
-
-
- Invalid Data Structure
-
-
- Unable to create WebHDFS client due to missing options: ${0}
-
-
- '${0}' is undefined.
-
-
- Unexpected Redirect
-
-
- Unknown Error
-
-
- Node Command called without any node passed
-
-
- Access
-
-
- Add
-
-
- Add User or Group
-
-
- Apply
-
-
- Apply Recursively
-
-
- Default
-
-
- Default User and Groups
-
-
- Delete
-
-
- Enter name
-
-
- Unexpected error occurred while applying changes : {0}
-
-
- Everyone else
-
-
- Execute
- Failed to find azure account {0} when executing token refresh
-
-
- Failed to find tenant '{0}' in account '{1}' when refreshing security token
-
- Group
-
-
- Group
-
-
- Inherit Defaults
-
-
- Location :
-
-
- Manage Access
-
-
- Named Users and Groups
-
-
- Owner
-
-
- - Owner
-
-
- - Owning Group
-
-
- Permissions
-
-
- Read
-
-
- Sticky Bit
- {0} AAD token refresh failed, please reconnect to enable {0}
-
-
- Editor token refresh failed, autocompletion will be disabled until the editor is disconnected and reconnected
-
- User
-
-
- User or Group Icon
-
-
- Write
-
-
- Please connect to the Spark cluster before View {0} History.
-
-
- Get Application Id Failed. {0}
-
-
- Local file will be uploaded to HDFS.
-
-
- Local file {0} does not existed.
-
-
- No SQL Server Big Data Cluster found.
-
-
- Submitting job {0} ...
-
-
- Uploading file from local {0} to HDFS folder: {1}
-
-
- Spark History Url: {0}
-
-
- .......................... Submit Spark Job End ............................
-
-
- Spark Job Submission Failed. {0}
-
-
- The Spark Job has been submitted.
-
-
- Upload file to cluster Failed. {0}
-
-
- Upload file to cluster Succeeded!
-
-
- YarnUI Url: {0}
-
-
- This sample code loads the file into a data frame and shows the first 10 results.
- An error occurred converting the SQL document to a Notebook. Error : {0}
-
-
- An error occurred converting the Notebook document to SQL. Error : {0}
-
- Could not find the controller endpoint for this instance
-
-
- Notebooks
-
-
- Only .ipynb Notebooks are supported
-
-
-
-
- Stream operation canceled by the user
-
-
-
-
- Cancel operation?
-
-
 Cancel
-
-
- Search Server Names
-
- $(sync~spin) {0}...
-
-
-
-
- Some missing properties in connectionInfo.options: {0}
-
-
- ConnectionInfo.options is undefined.
-
-
- ConnectionInfo is undefined.
-
-
-
-
- NOTICE: This file has been truncated at {0} for preview.
-
-
- The file has been truncated at {0} for preview.
-
-
-
-
- All Files
-
-
- Error on copying path: {0}
-
-
- Error on deleting files: {0}
-
-
- Enter directory name
-
-
- Upload
-
-
- Creating directory
-
-
- An unexpected error occurred while opening the Manage Access dialog: {0}
-
-
- Error on making directory: {0}
-
-
- Operation was canceled
-
-
- Are you sure you want to delete this file?
-
-
- Are you sure you want to delete this folder and its contents?
-
-
- Error on previewing file: {0}
-
-
- Generating preview
-
-
- Save operation was canceled
-
-
- Error on saving file: {0}
-
-
- Saving HDFS Files
-
-
- Upload operation was canceled
-
-
- Error uploading files: {0}
-
-
- Uploading files to HDFS
-
-
-
-
- Cannot delete a connection. Only subfolders and files can be deleted.
-
-
- Error: {0}
-
-
-
-
- HDFS
-
-
- Error notifying of node change: {0}
-
-
- Please provide the password to connect to HDFS:
-
-
- Please provide the username to connect to HDFS:
-
-
- Root
-
-
- Session for node {0} does not exist
-
-
-
-
- No
-
-
- Yes
-
-
-
-
- The selected server does not belong to a SQL Server Big Data Cluster
-
-
- Select other SQL Server
-
-
- Error Get File Path: {0}
-
-
- No SQL Server is selected.
-
-
- Please select SQL Server with Big Data Cluster.
-
-
-
-
- ADVANCED
-
-
- Reference Files
-
-
- Files to be placed in executor working directory. The file path needs to be an HDFS Path. Multiple paths should be split by semicolon(;)
-
-
- Reference Jars
-
-
- Jars to be placed in executor working directory. The Jar path needs to be an HDFS Path. Multiple paths should be split by semicolon (;)
-
-
- Reference py Files
-
-
- Py Files to be placed in executor working directory. The file path needs to be an HDFS Path. Multiple paths should be split by semicolon(;)
-
-
- Configuration Values
-
-
- List of name value pairs containing Spark configuration values. Encoded as JSON dictionary. Example: '{"name":"value", "name2":"value2"}'.
-
-
- Driver Cores
-
-
- Amount of CPU cores to allocate to the driver.
-
-
- Driver Memory
-
-
- Amount of memory to allocate to the driver. Specify units as part of value. Example 512M or 2G.
-
-
- Executor Cores
-
-
- Amount of CPU cores to allocate to the executor.
-
-
- Executor Count
-
-
- Number of instances of the executor to run.
-
-
- Executor Memory
-
-
- Amount of memory to allocate to the executor. Specify units as part of value. Example 512M or 2G.
-
-
- Queue Name
-
-
- Name of the Spark queue to execute the session in.
-
-
-
-
- Arguments
-
-
- Command line arguments used in your main class, multiple arguments should be split by space.
-
-
- Path to a .jar or .py file
-
-
- GENERAL
-
-
- The specified HDFS file does not exist.
-
-
- {0} does not exist in Cluster or exception thrown.
-
-
- Job Name
-
-
- Enter a name ...
-
-
- The selected local file will be uploaded to HDFS: {0}
-
-
- Main Class
-
-
- JAR/py File
-
-
- Property JAR/py File is not specified.
-
-
- Property Job Name is not specified.
-
-
- Property Main Class is not specified.
-
-
- Error in locating the file due to Error: {0}
-
-
- Spark Cluster
-
-
- Select
-
-
-
-
- Cancel
-
-
- Submit
-
-
- New Job
-
-
- Parameters for SparkJobSubmissionDialog is illegal
-
-
- .......................... Submit Spark Job Start ..........................
-
-
- {0} Spark Job Submission:
-
-
-
-
- Get Application Id time out. {0}[Log] {1}
-
-
- livyBatchId is invalid.
-
-
- Property Path is not specified.
-
-
- Parameters for SparkJobSubmissionModel is illegal
-
-
- Property localFilePath or hdfsFolderPath is not specified.
-
-
- submissionArgs is invalid.
-
-
-
-
- No Spark job batch id is returned from response.{0}[Error] {1}
-
-
- No log is returned within response.{0}[Error] {1}
-
-
-
-
- Error: {0}.
-
-
- Please provide the password to connect to the BDC Controller
-
-
- {0}Please provide the username to connect to the BDC Controller:
-
-
- Username and password are required
-
@@ -1108,33 +544,6 @@
[Optional] Log level for backend services. Azure Data Studio generates a file name every time it starts and if the file already exists the logs entries are appended to that file. For cleanup of old log files see logRetentionMinutes and logFilesRemovalLimit settings. The default tracingLevel does not log much. Changing verbosity could lead to extensive logging and disk space requirements for the logs. Error includes Critical, Warning includes Error, Information includes Warning and Verbose includes Information
-
- Copy Path
-
-
- Delete
-
-
- Manage Access
-
-
- New directory
-
-
- Preview
-
-
- Save
-
-
- Upload files
-
-
- New Notebook
-
-
- Open Notebook
- Name
@@ -1165,57 +574,20 @@
Version
-
- Tasks and information about your SQL Server Big Data Cluster
-
-
- SQL Server Big Data Cluster
-
-
- Notebooks
- Search: Clear Search Server Results
-
- Configure Python for Notebooks
- Design
-
- Service Endpoints
-
-
- Install Packages
-
-
- New Spark Job
- New Table
-
 Cluster Dashboard
-
-
- View Spark History
-
-
- View Yarn History
- Search: Servers
-
-
- Show Log File
-
- Submit Spark Job
-
-
- Tasks
-
\ No newline at end of file
diff --git a/resources/xlf/en/notebook.xlf b/resources/xlf/en/notebook.xlf
index fa18660d26..5e959ee191 100644
--- a/resources/xlf/en/notebook.xlf
+++ b/resources/xlf/en/notebook.xlf
@@ -196,9 +196,6 @@
New Section (Preview)
-
- Spark kernels require a connection to a SQL Server Big Data Cluster master instance.
- No Jupyter Books are currently selected in the viewlet.
@@ -232,9 +229,6 @@
Open untitled notebook {0} as untitled failed: {1}
-
- Non-MSSQL providers are not supported for spark kernels.
- Failed to read Jupyter Book {0}: {1}
@@ -270,9 +264,6 @@