Updated the existing serialization code so it actually supports serialization. Still needs work to replace the saveAs function when a QueryProvider doesn't support Save As, but we want to handle that in a separate PR.
Removed the separate MainThread/ExtHostSerializationProvider code, as the DataProtocol code is the right place for it
Plumbed support through the gridOutputComponent to use the new serialize method in the serialization provider
Refactored the resultSerializer so the majority of the code can be shared between both implementations (for example, file save dialog -> save -> show file on completion)
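A minimal sketch of that shared flow (the interface and method names here are illustrative assumptions, not the exact azdata/DataProtocol API):

```typescript
// Hypothetical sketch of the shared serialization flow; names are illustrative,
// not the actual azdata/DataProtocol API surface.
interface SerializationProvider {
	// True when the provider can serialize results to a file itself.
	supportsSerialize: boolean;
	serialize(request: { filePath: string; format: 'csv' | 'json' | 'excel' | 'xml' }): Promise<{ succeeded: boolean; errorMessage?: string }>;
}

class ResultSerializer {
	constructor(private provider: SerializationProvider) { }

	// Shared flow for both implementations: prompt for a path, serialize, then reveal the file.
	async saveResults(format: 'csv' | 'json' | 'excel' | 'xml'): Promise<void> {
		const filePath = await this.promptForFilePath(format);     // file save dialog
		if (!filePath) {
			return;
		}
		const result = this.provider.supportsSerialize
			? await this.provider.serialize({ filePath, format })  // new serialize method
			: await this.legacySaveAs(filePath, format);           // fallback; replacing this is left for a later PR
		if (result.succeeded) {
			await this.showFileOnCompletion(filePath);             // show file on completion
		}
	}

	private async promptForFilePath(format: string): Promise<string | undefined> { return undefined; /* open save dialog */ }
	private async legacySaveAs(filePath: string, format: string): Promise<{ succeeded: boolean }> { return { succeeded: true }; }
	private async showFileOnCompletion(filePath: string): Promise<void> { /* reveal file in OS */ }
}
```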
* Update to latest SQLToolsService release
* remove sync methods
* added assertion to log error
* create random folder
* change variable name to be more descriptive
* Update book.test.ts
- Fix spark job failing when 1 or more servers are connected
and listed in getActiveConnections
- If nothing is connected, prompts to choose a server
- Added option to the dropdown when 1 or more servers are connected
to support picking a different connection
Root bug: the code was trying to access "host" (which no longer exists) instead of "server" in the options bag.
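A minimal sketch of the fix (the shape of the options bag is assumed for illustration):

```typescript
// Sketch of the root-cause fix; the exact shape of the options bag is assumed.
interface ConnectionOptions {
	[key: string]: string | undefined;
}

function getServerName(options: ConnectionOptions): string | undefined {
	// Before: return options['host'];  // "host" no longer exists, so this was always undefined
	return options['server'];
}
```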
* Allow column resizing in preview table
* added property to TableComponentProperties
* hooked up the new prop in the model view (see the sketch after this list)
* new setOptions
* adding enum to azdata interface
* bring in slickgrid 2.3.30
* PR feedback
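A rough sketch of how the new property might be consumed from a model view; the property name and enum value mentioned in the comment are assumptions based on this change, not confirmed azdata API:

```typescript
import * as azdata from 'azdata';

// Hypothetical usage of the new resizing option from a model view.
function buildPreviewTable(view: azdata.ModelView): azdata.TableComponent {
	return view.modelBuilder.table().withProperties<azdata.TableComponentProperties>({
		columns: ['Column Name', 'Data Type'],
		data: [['id', 'int'], ['name', 'nvarchar(100)']],
		// New property plumbed down to SlickGrid so users can resize columns,
		// e.g. forceFitColumns: azdata.ColumnSizingMode.DataFit (assumed name/enum).
	}).component();
}
```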
* Fix #6477 controller login + fix dashboard layout
- Service endpoints should be in their own column; they were cut off on smaller screens
- Controller login not working due to a 404 error
caused by a breaking API change.
We have requested fixes to help mitigate the need for a cluster name,
but for now we have a default value for it (see the sketch below).
Finally, modified the code so it's easier to update the swagger API
and also added instructions on how to update it in the future
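A hedged sketch of the cluster-name workaround (the constant, function name, and endpoint path below are assumptions for illustration, not the actual controller client code):

```typescript
// Illustrative only: the constant, function name, and endpoint path are assumptions.
const DEFAULT_CLUSTER_NAME = 'mssql-cluster';

function getEndpointsUrl(controllerUrl: string, clusterName?: string): string {
	// The breaking API change scopes endpoint requests to a cluster name,
	// so fall back to a default until the requirement is relaxed upstream.
	const cluster = clusterName || DEFAULT_CLUSTER_NAME;
	return `${controllerUrl}/api/v1/bdc/clusters/${cluster}/endpoints`;
}
```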
* add webpack for built-in extensions
* fix the casing issue
* Rename azCLITool.ts to azCliTool.ts
* Rename kubectlTool.ts to kubeCtlTool.ts
* fix the error
* fix the packaging issue
* Replace Big Data Cluster with big data cluster
Official docs guidance is to use "big data cluster" instead of "Big Data Cluster"
* Use double quotes and the full product name
* initial commit: new modelview widget that shows books
* moved the new command action to the notebook contribution and addressed review comments
* localize changes
* removed changes from src/vs file and string changes
* make directory for each contribution
* typo fix
This adds SQL Server Notebook as a built-in extension
by pulling it from blob storage.
It also adds support in the mssql extension for reading contribution points from other extensions.
This will contribute troubleshooting and other books as widgets.
In this commit:
- Bundle the extension in the build
- Bundle in sql.sh / sql.bat so it appears in local testing
- Avoid installing in Stable. Should only appear in Dev/Insiders builds
- Extensions with a `notebook.books` contribution point will be discovered and their books made available in MSSQL (see the sketch at the end of this section)
Coming later:
- Integrate this with Maddy's work to show a Notebooks widget in the SQL Server big data cluster UI
- The `when` clause isn't supported yet for filtering. This will be done as we refactor towards more books for different server types
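A rough sketch of how contributed books might be discovered from other extensions' package.json files (the `BookContribution` shape is an assumption for illustration):

```typescript
import * as vscode from 'vscode';

// Assumed shape of a `notebook.books` contribution; the exact fields are illustrative.
interface BookContribution {
	name: string;
	path: string;    // folder containing the book content
	when?: string;   // filtering by `when` clause isn't supported yet (see "Coming later")
}

// Scan every installed extension's package.json for the contribution point
// so contributed books can be surfaced as widgets in MSSQL.
function getContributedBooks(): BookContribution[] {
	const books: BookContribution[] = [];
	for (const extension of vscode.extensions.all) {
		const contributed = extension.packageJSON && extension.packageJSON.contributes
			? extension.packageJSON.contributes['notebook.books']
			: undefined;
		if (Array.isArray(contributed)) {
			books.push(...contributed);
		}
	}
	return books;
}
```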