* adding context
* apply extension changes
* shimming disconnect
* add data explorer context menu and add disconnect to it
* clean up shim code; better handle errors
* remove TPromise
* simplify code
* add node context on data explorer
* formatting
* add new Query action
* fix various errors with how the context menus work
* add manage and new query
* add refresh command
* formatting
* Fix #4505 Notebooks: New Notebook will not work if existing untitled notebooks are rehydrated
* Also fixes #4508
* Unify behavior across New Notebook entry points
- Use Notebook-{n} as the standard in both entry points
- Use SQL as default provider in both
- Ensure both check for existing names and only use a free number (see the sketch below)
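A minimal sketch of the free-number check described above; `existingNames` stands in for whatever titles the editor service reports:

```typescript
// Pick the first free Notebook-{n} name, per the convention above.
function nextNotebookName(existingNames: Set<string>): string {
    let n = 1;
    while (existingNames.has(`Notebook-${n}`)) {
        n++;
    }
    return `Notebook-${n}`;
}

// e.g. with 'Notebook-1' and 'Notebook-3' open, this returns 'Notebook-2'
```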
* Fix #4029 Ensure changeKernels always resolves, even in error states
This is necessary to unblock reverting the kernel on canceling the Python install
- startSession now correctly sets up kernel information, since a kernel is loaded there.
- Remove call to change kernel on session initialize. This isn't needed after the refactor
- Handle kernel change failure by attempting to fall back to the old kernel (see the sketch after this list)
- ExtensionHost $startNewSession now ensures errors are sent across the wire.
- Update AttachTo and Kernel dropdowns so they handle the kernel already being available. This is needed since other changes mean the session is likely ready before these dropdowns initialize
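A minimal sketch of the always-resolve/fallback behavior described in this list; `doChangeKernel` and `notifyError` are hypothetical stand-ins, not the actual NotebookModel API:

```typescript
// Hypothetical stand-ins for the real session/kernel plumbing.
async function doChangeKernel(kernelName: string): Promise<void> {
    // placeholder for the real session kernel-change call
}
function notifyError(message: string): void {
    console.error(message);
}

// changeKernel always resolves: on failure it reports the error and makes a
// best-effort revert to the old kernel instead of rejecting.
async function changeKernel(newKernel: string, oldKernel: string): Promise<string> {
    try {
        await doChangeKernel(newKernel);
        return newKernel;
    } catch (err) {
        notifyError(`Failed to change kernel: ${err instanceof Error ? err.message : String(err)}`);
        try {
            await doChangeKernel(oldKernel);
        } catch {
            // leave the notebook as-is rather than rejecting
        }
        return oldKernel;
    }
}
```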
* Fix to handle python cancel and load existing scenarios
- Made changes to handle the failure flow when the Python dialog is canceled
- Made changes to handle the initial load failing. The Kernel and Attach To dropdowns show No Kernel / None, and you can choose a kernel
- Added error wrapping in the ext host so that string errors make it across and aren't lost (see the sketch after this list)
- Added to both built-in themes. Long term it would be good to contribute this back to Seti so it works without a carbon edit tag.
- Issue #4418 tracks the remaining problem where it's not set initially for new notebooks. This is blocked by Raj's work, so will update once that lands.
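A hedged sketch of the error wrapping mentioned above: plain string rejections can get lost at the extension host RPC boundary, so coercing them to `Error` preserves the message (names here are illustrative):

```typescript
// Coerce non-Error rejections (e.g. plain strings) into Error objects so the
// message survives serialization across the extension host boundary.
function wrapError(err: unknown): Error {
    return err instanceof Error ? err : new Error(String(err));
}

async function callSafely<T>(fn: () => Promise<T>): Promise<T> {
    try {
        return await fn();
    } catch (err) {
        throw wrapError(err);
    }
}
```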
* Fix #4356 New Notebook from connection doesn't connect
Fix the new notebook error by passing the profile instead of the ID (see the sketch after this list).
- I could've just sent the ID over, but this fix sets the stage for disconnected connections to work (since we have enough info to properly connect).
- There's a bug in NotebookModel blocking the disconnected-connection part from working, but Yurong's in-progress fixes will unblock this. Hence checking in as-is and working to properly unblock once that's in.
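A rough sketch of the profile-vs-ID distinction; the interface and function below are illustrative, not the actual IConnectionProfile or notebook API:

```typescript
// Passing the full profile (not just its ID) means a disconnected profile
// still carries enough information to establish a new connection.
interface ProfileLike {
    id: string;
    serverName: string;
    databaseName?: string;
    authenticationType: string;
}

function openNotebookWith(profile: ProfileLike): void {
    // An ID lookup fails for disconnected connections; the profile itself has
    // the server/auth details needed to connect from scratch.
    console.log(`Connecting notebook to ${profile.serverName}`);
}
```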
* Support connection profile in commandline service
- Added a new context API for features that need to work from both the command line and Object Explorer
- Refactored CommandLineService slightly to be async and have a simpler execution flow with far fewer if/else statements (see the sketch below)
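A sketch of the flattened async flow, assuming a hypothetical args shape and helper functions; the real CommandLineService handles more cases:

```typescript
interface CommandLineArgsLike {
    server?: string;
    database?: string;
    command?: string;
}

// async/await flattens what was previously nested if/else branching.
async function processCommandLine(args: CommandLineArgsLike): Promise<void> {
    const profile = args.server
        ? await connect(args.server, args.database)
        : undefined;
    if (args.command) {
        await runCommand(args.command, profile);
    }
}

// placeholder implementations so the sketch compiles
async function connect(server: string, database?: string): Promise<object> {
    return { server, database };
}
async function runCommand(command: string, profile?: object): Promise<void> {
    console.log(`Running ${command}`, profile);
}
```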
* Fix unit tests
- Fixed 2 issues raised by tests (shouldn't open a new query if no profile is passed, shouldn't error when a new query fails)
- Updated unit tests to pass as expected given changes to the APIs.
* pulling max bytes of data through the WebHDFS API (see the sketch below)
* Added a warning message to inform users about the data truncation.
* Updated the warning message on the notification flyout.
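A sketch of capping the read via WebHDFS's OPEN `length` parameter and warning on truncation; the host URL, byte cap, and use of `fetch` are assumptions, since the actual extension goes through its HDFS client:

```typescript
const MAX_BYTES = 2 * 1024 * 1024; // assumed cap, not the actual limit

// WebHDFS OPEN accepts a `length` parameter, so we never pull more than
// MAX_BYTES; if we hit the cap, surface the truncation warning to the user.
async function readFilePreview(host: string, hdfsPath: string): Promise<string> {
    const url = `http://${host}/webhdfs/v1${hdfsPath}?op=OPEN&length=${MAX_BYTES}`;
    const response = await fetch(url);
    const text = await response.text();
    if (text.length >= MAX_BYTES) {
        console.warn(`Only the first ${MAX_BYTES} bytes were retrieved; the file is truncated.`);
    }
    return text;
}
```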
* Start a single client session based on the default kernel or the kernel saved in the notebook.
* Added kernel displayName to standardKernel.
Modified name to align with Jupyter's Kernel.name,
so we can show the displayName during startup and use the name to start the session (see the sketch below).
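A small sketch of the name/displayName split; the shape below is illustrative, not the exact standardKernel type:

```typescript
interface StandardKernelLike {
    name: string;        // aligned with Jupyter's Kernel.name, used to start the session
    displayName: string; // shown in the kernel dropdown during startup
}

// Prefer the kernel saved in the notebook; otherwise fall back to the default.
function pickKernel(saved: string | undefined, kernels: StandardKernelLike[]): StandardKernelLike {
    return kernels.find(k => k.name === saved || k.displayName === saved) ?? kernels[0];
}
```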
* Change session.OnSessionReady event in KernelDropDown
* Added model.KernelChanged for switching kernels within the same provider
* Fixed session.Ready sequence
* Fixed merge issues
* Resolved merge issue
* Fixed wrong kernel name in saved NB
* Added a new event in Model to notify on kernel change.
The toolbar depends on ModelReady to load
* Change attachTo to wait for ModelReady, like KernelDropDown
* Added sanitizeSavedKernelInfo to fix an invalid kernel name and display_name (for example: PySpark1111 vs. PySpark 1111); sketched below
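A guess at what the sanitization does, based only on the PySpark example above; the whitespace- and case-insensitive matching is an assumption:

```typescript
interface KernelInfoLike { name: string; display_name: string; }

// Match a saved (possibly mangled) kernel against the known kernels,
// ignoring whitespace and case, e.g. 'PySpark1111' vs 'PySpark 1111'.
function sanitizeSavedKernelInfo(saved: KernelInfoLike, known: KernelInfoLike[]): KernelInfoLike {
    const fold = (s: string) => s.replace(/\s+/g, '').toLowerCase();
    return known.find(k => fold(k.display_name) === fold(saved.display_name)) ?? saved;
}
```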
* Added _contextsChangingEmitter to change the loadContext message when changing kernels
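The emitter pattern in play here, sketched with vscode's EventEmitter; the class and member names are illustrative:

```typescript
import { EventEmitter } from 'vscode';

class NotebookModelSketch {
    private _kernelChangedEmitter = new EventEmitter<string>();
    // Consumers (toolbar, Attach To dropdown) subscribe to react to changes.
    readonly onKernelChanged = this._kernelChangedEmitter.event;

    changeKernel(newKernel: string): void {
        // ...switch the session's kernel, then notify listeners so they can
        // update state such as the loadContext message.
        this._kernelChangedEmitter.fire(newKernel);
    }

    dispose(): void {
        // emitters need disposing to avoid leaks
        this._kernelChangedEmitter.dispose();
    }
}
```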
* Resolve PR comments
The Attach To dropdown was showing only localhost for SQL, since we overrode the standard kernels from SQL with the ones from Jupyter.
The fix is to save all standard kernels.
Also, added dispose handling for some events I found during debugging and removed unused imports
* engine check when installing an extension (see the sketch below)
* gallery install/update and VSIX install
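A sketch of the engine compatibility check using the semver package; the real validation also has to cover wildcard and prerelease versions:

```typescript
import * as semver from 'semver';

// Reject an install when the extension's declared engine range does not
// include the current product version.
function isEngineValid(engineRange: string, productVersion: string): boolean {
    return engineRange === '*' || semver.satisfies(productVersion, engineRange);
}

// e.g. isEngineValid('^1.30.0', '1.32.0') === true
//      isEngineValid('^1.33.0', '1.32.0') === false
```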
* fix comments
* Fix the detail not loading issue when version is invalid
* add more comments and address PR comments
* add install telemetry for the install from VSIX scenario
* correct the name of the version property for telemetry
* Moved dacfx wizard into separate extension
* updating to use azdata
* one more azdata change
* bump import extension version
* renaming extension to dacpac
* Added tests to verify OE child nodes for standalone and BDC instances
* message changes
* Added standalone OE node test
* Resolved PR comments
* Change env name
* Added scripts to set env for integration test
* Add azdata.d.ts for new extensibility APIs
* Update azdata typing files for connection API proposal
* Add implementation for azdata module
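For reference, a minimal consumption of the new module from an extension; `getCurrentConnection` is part of the connection API surface being proposed:

```typescript
import * as azdata from 'azdata';

export async function logCurrentConnection(): Promise<void> {
    const connection = await azdata.connection.getCurrentConnection();
    if (connection) {
        console.log(`Connected to ${connection.serverName} (${connection.connectionId})`);
    }
}
```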
* Fix build break in agent
* WIP adding scripting support.
* Adding deploy command along with the additional env vars needed.
* Adding script generation that sets env vars, the kube context, and mssqlctl (see the sketch below)
* Adding a test email for the Docker email env var until we update the UI.
* Adding cluster platform detection and disabling Generate Script after the first click.
* Fix spacing and add a comment.
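A rough sketch of the script generation described above; the variable names, kube-context step, and mssqlctl invocation are all illustrative:

```typescript
import * as os from 'os';

// Emit a script that sets the env vars, selects the kube context, then runs
// mssqlctl to create the cluster. Names and values here are placeholders.
function generateDeployScript(isWindows: boolean, clusterName: string, kubeContext: string, platform: string): string {
    const set = isWindows ? 'SET' : 'export';
    return [
        `${set} CLUSTER_PLATFORM=${platform}`,
        `${set} DOCKER_EMAIL=test@example.com`, // placeholder until the UI collects this
        `kubectl config use-context ${kubeContext}`,
        `mssqlctl create cluster ${clusterName}`,
    ].join(os.EOL);
}
```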
- Update vscode-nls to 4.0.0 in the notebook extension. This should fix the Insiders build failure caused by the localization package failing
- Updated samples to remove vscode-nls where it isn't needed, or update it where it still is
- Updated samples using `npm audit fix`
- Fixed compile errors in all the samples