* Fix #4029: Ensure changeKernels always resolves, even in error states
This is necessary to unblock reverting the kernel when the Python install is canceled.
- startSession now correctly sets up kernel information, since a kernel is loaded there.
- Removed the call to change kernel on session initialize; it's no longer needed after the refactor.
- Handle kernel change failure by attempting to fall back to the old kernel (see the sketch after this list).
- ExtensionHost $startNewSession now ensures errors are sent across the wire.
- Updated the Attach To and Kernel dropdowns to handle the kernel already being available. This is needed since other changes mean the session is likely ready before these components initialize.
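A minimal sketch of the fallback pattern described above; `doChangeKernel` and `notifyKernelChangeFailed` are hypothetical helpers, not the real ClientSession API:
```ts
// Hypothetical helpers standing in for the real kernel-switch plumbing.
declare function doChangeKernel(kernelName: string): Promise<void>;
declare function notifyKernelChangeFailed(error: unknown): void;

// Always resolves: callers (e.g. the Python-install cancel path) can rely on completion.
async function changeKernel(newKernel: string, oldKernel: string): Promise<void> {
    try {
        await doChangeKernel(newKernel);
    } catch {
        try {
            // Fall back to the old kernel so the notebook stays usable
            await doChangeKernel(oldKernel);
        } catch (fallbackError) {
            // Surface the failure instead of rejecting
            notifyKernelChangeFailed(fallbackError);
        }
    }
}
```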
* Fix to handle the Python cancel and load-existing scenarios
- Handle the failure flow when the Python dialog is canceled.
- Handle initial load failure: the Kernel and Attach To dropdowns show "No Kernel" / "None" and you can still choose a kernel.
- Added error wrapping in the extension host so that string errors make it across the wire and aren't lost (sketched below).
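A minimal sketch of the error-wrapping idea; the `$startNewSession` signature here is assumed for illustration:
```ts
// Wrap non-Error values so they keep a message when marshalled over the RPC wire.
function wrapError(error: unknown): Error {
    return error instanceof Error ? error : new Error(String(error));
}

// Assumed signature for illustration; the real ext host method takes session options.
async function $startNewSession(startSession: () => Promise<void>): Promise<void> {
    try {
        await startSession();
    } catch (error) {
        // A thrown string would otherwise be lost in transit to the main thread
        throw wrapError(error);
    }
}
```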
* Start a single client session based on the default kernel or the kernel saved in the notebook.
* Added kernel displayName to standardKernel.
Modified name to align with Jupyter's Kernel.name,
so we can show the displayName during startup and use the name to start the session.
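A minimal sketch of the resulting shape; field names are assumed from the description above:
```ts
// Illustrative shape only; the actual standardKernel definition may differ.
interface IStandardKernel {
    name: string;        // matches Jupyter's Kernel.name; used to start the session
    displayName: string; // shown in the Kernel dropdown during startup
}
```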
* Changed KernelDropDown to use the session.OnSessionReady event
* Added model.KernelChanged for switching kernels within the same provider
* Fixed session.Ready sequence
* Fixed merge issues
* Solved merge issue
* Fixed wrong kernel name in saved notebooks
* Added a new event in Model to notify on kernel change.
The toolbar depends on ModelReady to load.
* Changed attachTo to wait for ModelReady, like KernelDropDown.
* Added sanitizeSavedKernelInfo to fix an invalid saved kernel name and display_name, for example "PySpark1111" vs. "PySpark 1111" (sketched below).
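A minimal sketch of the sanitizing idea, assuming normalization by whitespace and case; the shapes and helper are illustrative:
```ts
interface KernelInfo { name: string; display_name: string; }

// Match a saved kernel against the known kernels after normalizing the names.
function sanitizeSavedKernelInfo(saved: KernelInfo, known: KernelInfo[]): KernelInfo {
    const normalize = (s: string) => s.replace(/\s+/g, '').toLowerCase();
    // 'PySpark1111' and 'PySpark 1111' both normalize to 'pyspark1111'
    const match = known.find(k => normalize(k.display_name) === normalize(saved.display_name));
    return match ? { ...match } : saved; // fall back to the saved info if nothing matches
}
```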
* Added _contextsChangingEmitter to update the loadContext message when changing kernels (see the sketch below).
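A minimal sketch of the model events, using the workbench `Emitter` from vs/base/common/event; the class and payload types are illustrative:
```ts
import { Emitter, Event } from 'vs/base/common/event';

class NotebookModelEventsSketch {
    private _kernelChangedEmitter = new Emitter<string>();
    private _contextsChangingEmitter = new Emitter<void>();

    public get kernelChanged(): Event<string> { return this._kernelChangedEmitter.event; }
    public get contextsChanging(): Event<void> { return this._contextsChangingEmitter.event; }

    public doChangeKernel(kernelName: string): void {
        // Lets Attach To show a "changing context" message while the switch runs
        this._contextsChangingEmitter.fire();
        // ... perform the actual kernel switch ...
        this._kernelChangedEmitter.fire(kernelName);
    }
}
```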
* Resolve PR comments
* Fixed #4181 and #4167
The problems are:
- When changing kernels, setProviderIdForKernel switches the kernel first. So when we switch kernels across providers, for example from PySpark3 to SQL, the kernel is already set to SQL in ClientSession.changeKernel, and we lose the oldKernel info.
The fix is to cache the old session in the model before switching and use it to get the correct context.
- The SQL kernel could make multiple connections from "Add new connection". Since we didn't track them, those connections weren't closed properly when the notebook was closed.
The fix is to save the connections made from "Add new connection" in the model and close them, along with the activeConnection, when the notebook closes. (A sketch of both fixes follows below.)
Problem not solved yet in this PR:
- Jupyter is not shut down when switching from a Spark kernel to the SQL kernel.
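A minimal sketch of both fixes; the member names (`beginChangeKernel`, `addConnection`, `handleClosed`, `disconnect`) are illustrative, not the real model API:
```ts
declare class ConnectionProfile { }
declare class ClientSession { }
declare function disconnect(profile: ConnectionProfile): Promise<void>;

class NotebookModelSketch {
    private _oldSession: ClientSession | undefined;
    private _otherConnections: ConnectionProfile[] = [];
    private _activeConnection: ConnectionProfile | undefined;

    public beginChangeKernel(current: ClientSession): void {
        // Cache before setProviderIdForKernel swaps the kernel, so the old
        // context (e.g. PySpark3) is still available after switching to SQL
        this._oldSession = current;
    }

    public addConnection(profile: ConnectionProfile): void {
        // Remember connections created via "Add new connection"
        this._otherConnections.push(profile);
    }

    public async handleClosed(): Promise<void> {
        // Close tracked connections and the active one when the notebook closes
        for (const profile of this._otherConnections) {
            await disconnect(profile);
        }
        if (this._activeConnection) {
            await disconnect(this._activeConnection);
        }
    }
}
```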
- Added `runCell` API. Updated the run cell button to listen to events on the model so it reflects a run triggered from other sources.
- Plumbed kernelspec info through to the extension side so it is updated when the kernel changes.
- Fixed a bug in ConnectionProfile where it didn't copy from options but instead overrode them with empty wrapper functions.
Here's the rough test code (it lives in the sql-vnext extension and will go out in a separate PR):
```ts
// Note: helpers such as writeNotebookToFile, ensureJupyterInstalled,
// connectToSparkIntegrationServer and expectedNotebookContent are defined
// elsewhere in the test suite; the enclosing describe is shown for balance.
describe('notebook runCell', function() {
    it('Should connect to local notebook server with result 2', async function() {
        this.timeout(60000);
        let pythonNotebook = Object.assign({}, expectedNotebookContent, { metadata: { kernelspec: { name: "python3", display_name: "Python 3" } } });
        let uri = writeNotebookToFile(pythonNotebook);
        await ensureJupyterInstalled();
        let notebook = await sqlops.nb.showNotebookDocument(uri);
        should(notebook.document.cells).have.length(1);
        let ran = await notebook.runCell(notebook.document.cells[0]);
        should(ran).be.true('Notebook runCell failed');
        let cellOutputs = notebook.document.cells[0].contents.outputs;
        should(cellOutputs).have.length(1);
        let result = (<sqlops.nb.IExecuteResult>cellOutputs[0]).data['text/plain'];
        should(result).equal('2');
        try {
            // TODO support closing the editor. Right now this prompts and there's no override for this. Need to fix in core
            // Close the editor using the recommended vscode API
            // await vscode.commands.executeCommand('workbench.action.closeActiveEditor');
        } catch (e) { }
    });

    it('Should connect to remote spark server with result 2', async function() {
        this.timeout(240000);
        let uri = writeNotebookToFile(expectedNotebookContent);
        await ensureJupyterInstalled();
        // Given a connection to a server exists
        let connectionId = await connectToSparkIntegrationServer();
        // When I open a Spark notebook and run the cell
        let notebook = await sqlops.nb.showNotebookDocument(uri, {
            connectionId: connectionId
        });
        should(notebook.document.cells).have.length(1);
        let ran = await notebook.runCell(notebook.document.cells[0]);
        should(ran).be.true('Notebook runCell failed');
        // Then I expect to get the output result of 1+1, executed remotely against the Spark endpoint
        let cellOutputs = notebook.document.cells[0].contents.outputs;
        should(cellOutputs).have.length(4);
        let sparkResult = (<sqlops.nb.IStreamResult>cellOutputs[3]).text;
        should(sparkResult).equal('2');
        try {
            // TODO support closing the editor. Right now this prompts and there's no override for this. Need to fix in core
            // Close the editor using the recommended vscode API
            // await vscode.commands.executeCommand('workbench.action.closeActiveEditor');
        } catch (e) { }
    });
});
```
* Scenarios work except loading a saved kernel
* Fix compilation issue
* Save and load functional
* Fix kernel loading issue when the SQL kernel is not enabled
* Fix language mapping to not be hardcoded any longer
* Remove unnecessary comment
* PR Comments vol. 1
* Code cleanup, use ConnectionProfile instead of IConnectionProfile when accessing serverName
* PR changes vol. 2
* One final comment for PR
* Fix linting issue
* working on formatting
* fixed basic lint errors; started moving things to their appropriate location
* formatting
* update tslint to match the version of vscode we have
* remove unused code
* work in progress fixing layering
* formatting
* moved connection management service to platform
* formatting
* add missing file
* moving more services
* formatting
* moving more services
* formatting
* wip
* moving more services
* formatting
* revert back tslint rules
* move css file
* add missing svgs
* first attach to working
* Transfer changes from sqlopsstudioextensions PR 448
* Transfer changes from sqlopsstudioextensions PR 447
* Transfer changes from sqlopsstudioextensions PR 456
* Transfer changes from sqlopsstudioextensions PR 465
* Transfer changes from sqlopsstudioextensions PR 463
* Transfer changes from sqlopsstudioextensions PR 482
* Transfer changes from sqlopsstudioextensions PR 485
* Session and Kernel implementation except executeRequest
* Attach to port compiling
* Further tweaks to attach to dropdown, re-enable opening connection dialog
* Revert "Merge remote-tracking branch 'origin/Notebook/sessionExtension' into feature/workingAttachTo"
This reverts commit 94703db87c85416c4ae36762afc1094d6e71166a, reversing
changes made to e4dc25331036d259e9c762cfe8741f957bb5c590.
* Fix code formatting
* Fix for new "Add new connection" issue
Full plumb-through of Session support. Also fixed some test issues:
- Load the session and get the necessary information into the kernels list
- Run Cell button now works as expected
- Added a ToggleAction base class which can be used for anything that switches icons (sketched after this list). I'd still prefer this to be dynamic and as clean as the extension classes
- Fixed unhandled promise rejections in the account tests (caused by incorrect / invalid tests) that made it hard to see all the test run output.
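A minimal sketch of the ToggleAction idea, assuming the workbench `Action` base class from vs/base/common/actions; the on/off class fields are illustrative:
```ts
import { Action } from 'vs/base/common/actions';

// Base class for any toolbar action that flips its icon between two states.
abstract class ToggleActionSketch extends Action {
    constructor(id: string,
        private _onClass: string,
        private _offClass: string,
        isOn: boolean) {
        super(id, '', isOn ? _onClass : _offClass);
    }

    protected toggle(isOn: boolean): void {
        // Swapping the CSS class swaps the rendered toolbar icon
        this.class = isOn ? this._onClass : this._offClass;
    }
}
```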
* Initial toolbar work
- This is in progress and needs additional fixes
* Resolve PR comments
* Added an empty kernel and hooked it up to the Kernel dropdown
* Resolve PR comments
Implements provider contribution in MainThreadNotebook, with matching function calls in the ExtHostNotebook class. This will allow us to proxy notebook providers (specifically, creation of a notebook manager with the required content and server managers) from an extension up through to the main process. A rough sketch of the proxying follows the lists below.
Implemented in this PR:
- Callthroughs for content and server manager APIs
- Very basic unit tests covering provider & manager registration
Not implemented:
- Fuller unit tests on the specific callthrough methods for content & server manager.
- Contribution point needed to test this (so we can actually pass through the extension's existing Notebook implementation)
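A rough sketch of the proxying shape described above; the interface and method names are simplified stand-ins for the real protocol:
```ts
interface INotebookProvider {
    providerId: string;
    getNotebookManager(uri: string): Promise<object>;
}

class ExtHostNotebookSketch {
    private _providers = new Map<string, INotebookProvider>();

    registerProvider(provider: INotebookProvider): void {
        this._providers.set(provider.providerId, provider);
        // The real code also notifies MainThreadNotebook over RPC here
    }

    // Invoked from the main thread; content & server manager calls are proxied similarly
    $getNotebookManager(providerId: string, uri: string): Promise<object> {
        const provider = this._providers.get(providerId);
        if (!provider) {
            return Promise.reject(new Error(`No provider registered for ${providerId}`));
        }
        return provider.getNotebookManager(uri);
    }
}
```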
- Defines a new NotebookService in Azure Data Studio which will be used to interact with notebooks. Since notebooks can require per-file instantiation, the provider is just used to create & track managers for a given URI (see the sketch at the end).
- Inject this into notebook.component.ts and pass in the required parameters that will later be used to properly initialize a manager. Actual initialization is not done yet.
- Port over & recompile notebook model code
- Define most required APIs in sqlops.proposed.d.ts. In the future, these will be used by extensions to contribute their own providers.
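A minimal sketch of the per-URI manager tracking; names like `getOrCreateNotebookManager` are illustrative:
```ts
interface INotebookManager { }
interface INotebookProviderSketch {
    getNotebookManager(uri: string): Promise<INotebookManager>;
}

class NotebookServiceSketch {
    private _managers = new Map<string, INotebookManager>();

    constructor(private _provider: INotebookProviderSketch) { }

    // Notebooks can require per-file instantiation, so managers are keyed by URI
    async getOrCreateNotebookManager(uri: string): Promise<INotebookManager> {
        let manager = this._managers.get(uri);
        if (!manager) {
            manager = await this._provider.getNotebookManager(uri);
            this._managers.set(uri, manager);
        }
        return manager;
    }
}
```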