- Handles the empty file scenario, with fixes along the way for missing metadata (bonus win)
- A non-empty file still shows an error and the kernel gets stuck in a loading state. #3964 opened to track this issue and fix it later
- Fixed so it's now invisible instead of empty when not selected.
- This fixes both clickability and the issue where it stayed visible, in one change
- Also fixed the cell output action, which used the active cell instead of the context cell.
* Spark features are enabled
* Fixed as per PR comments
* minor change
* PR comments fixed
* minor fix
* Changed constant name to avoid conflicts with the sqlops extension
* Renamed sqlContext to context
* Changed tab name to SQL Server Big Data Cluster
* Added isCluster to ContextProvider to control displaying the big data cluster dashboard tab
Ported New/Open Notebook code to the mssql extension and enabled them in the dashboard
* Fixed tslint
This was reviewed / worked on with Smitha and will be signed off by the PM via mail.
One thing is left (making the run button look better when not selected); it will be done in a separate review.
Changes
- Add top/bottom padding to editor so it's not cramped
- Added an on-by-default setting `notebook.overrideEditorTheming`. This controls whether the new colors etc. are used for notebook editors, or whether users should see the vanilla UI as in the standard editor (a rough sketch of the flag check follows this list). Behaviors under this flag are:
- When unselected, the editor has the same color as the toolbar. On selection it goes back to the regular editor view so colors work "right"
- In standard light/dark themes we now use a filled-in background color instead of a border box.
* Added hover support: a box shadow and light outline on hover, with the "more actions" button showing on hover
* Added box shadow for dark themes (hooray!)
* Removed the border from everything but the code cell unless a cell is selected or hovered over, so the notebook looks like a document
* Fix high contrast theming issues.
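As a rough illustration of how the `notebook.overrideEditorTheming` flag could gate the cell coloring described above; the config reader, helper name, and color values here are assumptions, not the real workbench APIs:

```ts
// Minimal sketch of gating the notebook-specific colors behind the setting above.
// getConfigValue and the color constants are hypothetical placeholders.
type ConfigReader = (key: string) => boolean | undefined;

const STANDARD_EDITOR_BACKGROUND = '#ffffff';  // placeholder for the theme's editor background
const NOTEBOOK_TOOLBAR_BACKGROUND = '#f3f3f3'; // placeholder for the toolbar/cell background

export function resolveCellBackground(getConfigValue: ConfigReader, cellSelected: boolean): string {
    // The setting is on by default
    const overrideTheming = getConfigValue('notebook.overrideEditorTheming') ?? true;
    if (!overrideTheming || cellSelected) {
        // Vanilla editor look, or a selected cell which reverts to the regular editor view
        return STANDARD_EDITOR_BACKGROUND;
    }
    // Unselected cells share the toolbar color so the notebook reads like a document
    return NOTEBOOK_TOOLBAR_BACKGROUND;
}
```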
* Ported Analyze Notebook code from SqlOpsStudio and made it work.
If config.notebook.sqlKernelEnabled is true, use the SQL provider;
use the Jupyter provider if Python is installed, otherwise use the built-in kernel (sketched below).
* The Analyze in Notebook kernel can only be Python or "No Kernel", so the SQL kernel was removed.
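A sketch of the provider-selection order described above; the `NotebookConfig` shape and `isPythonInstalled` parameter are illustrative names, not the actual ones used in the code:

```ts
// Sketch of the provider-selection order: SQL kernel if enabled, then Jupyter
// if Python is installed, otherwise the built-in kernel.
interface NotebookConfig {
    sqlKernelEnabled: boolean;
}

type NotebookProviderId = 'sql' | 'jupyter' | 'builtin';

export function chooseNotebookProvider(config: NotebookConfig, isPythonInstalled: boolean): NotebookProviderId {
    if (config.sqlKernelEnabled) {
        // notebook.sqlKernelEnabled is true: use the SQL provider
        return 'sql';
    }
    if (isPythonInstalled) {
        // Python is installed: use the Jupyter provider
        return 'jupyter';
    }
    // Otherwise fall back to the built-in kernel
    return 'builtin';
}
```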
- SQLKernel is the only place to listen for batch and query complete messages now
- It routes to the one and only future (since there can only be one at a time); see the sketch after this list
- It handles query cancellation and not-connected issues correctly
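A rough sketch of that routing, assuming hypothetical `QueryMessage` and `NotebookFuture` shapes rather than the actual types:

```ts
// Route batch/query-complete messages to the single active future.
interface QueryMessage {
    type: 'batchComplete' | 'queryComplete' | 'resultSet';
    payload: unknown;
}

interface NotebookFuture {
    handleMessage(msg: QueryMessage): void;
    dispose(): void;
}

export class SqlKernelMessageRouter {
    private activeFuture: NotebookFuture | undefined;

    // Only one future can exist at a time, so starting a new one disposes the previous execution
    public setActiveFuture(future: NotebookFuture): void {
        this.activeFuture?.dispose();
        this.activeFuture = future;
    }

    public onQueryMessage(msg: QueryMessage): void {
        if (!this.activeFuture) {
            // Not connected or no cell running: drop the message instead of throwing
            return;
        }
        this.activeFuture.handleMessage(msg);
    }

    public onCancelOrDisconnect(): void {
        this.activeFuture?.dispose();
        this.activeFuture = undefined;
    }
}
```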
- Editor layout gets called sometimes when other events happen (and Notebook isn't visible)
- Added a layout call on re-setting input so the cells are updated. This fixes the problem by laying out once the UI is visible again (see the sketch below).
Note: long term, we should really be destroying the UI (while preserving the model), then restoring it, including scroll, selection, etc., and hooking back up to the model. That is... much more work, but something we'll need long term to avoid issues when many Notebooks are open at once. Not in scope for this PR.
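A minimal sketch of the layout-on-setInput fix, with the editor and view types simplified to hypothetical interfaces:

```ts
// Re-run layout when input is set so cells are measured once the UI is visible again.
interface NotebookView {
    layout(dimension: { width: number; height: number }): void;
    isVisible(): boolean;
}

export class NotebookEditorSketch {
    private lastDimension: { width: number; height: number } | undefined;

    constructor(private view: NotebookView) { }

    // Called by the workbench on arbitrary layout events; may fire while the notebook is hidden
    public layout(dimension: { width: number; height: number }): void {
        this.lastDimension = dimension;
        if (this.view.isVisible()) {
            this.view.layout(dimension);
        }
    }

    // Called when the editor input is (re)set, i.e. the notebook becomes visible again
    public setInput(): void {
        if (this.lastDimension) {
            // Lay out once the UI is visible so cells render with the correct size
            this.view.layout(this.lastDimension);
        }
    }
}
```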
- Toolbar background is now differentiated from the editor
- For unselected cells there's no longer a line selection in the cell. This makes it clearer what the active cell is (and cleans the UI up)
* Added a data service context menu: file-related operations. All new files are ported from SqlOpsStudio; this functionality will be removed from SqlOpsStudio.
* Used the existing constant hadoopKnoxEndpointName
* Renamed the nodeType from hdfs to bdc so we can have the file context menu in both mssql and SqlOpsStudio. Need to add "Create External Table from CSV" support for the bdc nodeType
* Renamed bdc to mssqlcluster
Fixing an issue where we got a 501 Not Implemented because kernel display name sanitization was not occurring with the _defaultKernel case.
In addition, changed a method name to make it clearer, and removed an erroneous error that would occur every time you opened a notebook without any existing connections. I'm just removing this, as it adds no value.
Fixes #3856. Matches the Jupyter behavior we have, where we don't show any message when a connection is required. We will no longer throw a bizarre exception about getOptionsKey being undefined.
Also sets max rows returned to 2000.
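For illustration, a sketch of applying the same display-name sanitization to the default-kernel path mentioned above; the mapping table and function names are assumptions, not the actual implementation:

```ts
// Illustrative: map a kernel display name to a kernelspec name the server accepts,
// and apply it on both the requested-kernel and default-kernel paths.
const KNOWN_DISPLAY_NAMES: { [displayName: string]: string } = {
    'Python 3': 'python3'
};

export function sanitizeKernelDisplayName(displayName: string): string {
    // An unrecognized name falls back to a lower-cased, whitespace-free form
    return KNOWN_DISPLAY_NAMES[displayName] ?? displayName.toLowerCase().replace(/\s+/g, '');
}

export function resolveStartupKernel(requestedName: string | undefined, defaultKernelDisplayName: string): string {
    // Previously the default-kernel path skipped sanitization, which led to the
    // 501 Not Implemented from the server; sanitize both paths consistently.
    return sanitizeKernelDisplayName(requestedName ?? defaultKernelDisplayName);
}
```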
* First grid support in notebooks
* Still trying to get the nteract ipynb to display the grid correctly
* Works when opening with an existing 'application/vnd.dataresource+json' table (see the output-shape sketch after this list)
* Fixed a merge issue due to the core folder structure changing a bit
* PR feedback, fix for XSS
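For reference, a sketch of what an 'application/vnd.dataresource+json' output roughly looks like; the exact field layout here follows the general data resource convention used by nteract and is an assumption, not taken from this change:

```ts
// Assumed shape of a data resource output: a schema describing the columns plus row data.
interface DataResourceField {
    name: string;
    type?: string;
}

interface DataResourceOutput {
    schema: { fields: DataResourceField[] };
    data: { [columnName: string]: string | number | null }[];
}

// Example cell output payload keyed by MIME type, as it appears in the .ipynb JSON
const exampleOutputData: { [mime: string]: DataResourceOutput } = {
    'application/vnd.dataresource+json': {
        schema: { fields: [{ name: 'id', type: 'integer' }, { name: 'name', type: 'string' }] },
        data: [
            { id: 1, name: 'alpha' },
            { id: 2, name: 'beta' }
        ]
    }
};
```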
- Added `runCell` API. Updated the runCell button to listen to events on the model so it reflects a running cell when called from other sources
- Plumbed kernelspec info through to the extension side so it's updated when changed
- Fixed a bug in ConnectionProfile where it didn't copy from options but instead overrode them with empty wrapper functions
Here's the rough test code (it's in the sql-vnext extension and will be out in a separate PR):
```ts
    it('Should connect to local notebook server with result 2', async function() {
        this.timeout(60000);
        let pythonNotebook = Object.assign({}, expectedNotebookContent, { metadata: { kernelspec: { name: "python3", display_name: "Python 3" } } });
        let uri = writeNotebookToFile(pythonNotebook);
        await ensureJupyterInstalled();
        let notebook = await sqlops.nb.showNotebookDocument(uri);
        should(notebook.document.cells).have.length(1);
        let ran = await notebook.runCell(notebook.document.cells[0]);
        should(ran).be.true('Notebook runCell failed');
        let cellOutputs = notebook.document.cells[0].contents.outputs;
        should(cellOutputs).have.length(1);
        let result = (<sqlops.nb.IExecuteResult>cellOutputs[0]).data['text/plain'];
        should(result).equal('2');
        try {
            // TODO support closing the editor. Right now this prompts and there's no override for this. Need to fix in core
            // Close the editor using the recommended vscode API
            // await vscode.commands.executeCommand('workbench.action.closeActiveEditor');
        } catch (e) { }
    });

    it('Should connect to remote spark server with result 2', async function() {
        this.timeout(240000);
        let uri = writeNotebookToFile(expectedNotebookContent);
        await ensureJupyterInstalled();
        // Given a connection to a server exists
        let connectionId = await connectToSparkIntegrationServer();
        // When I open a Spark notebook and run the cell
        let notebook = await sqlops.nb.showNotebookDocument(uri, {
            connectionId: connectionId
        });
        should(notebook.document.cells).have.length(1);
        let ran = await notebook.runCell(notebook.document.cells[0]);
        should(ran).be.true('Notebook runCell failed');
        // Then I expect to get the output result of 1+1, executed remotely against the Spark endpoint
        let cellOutputs = notebook.document.cells[0].contents.outputs;
        should(cellOutputs).have.length(4);
        let sparkResult = (<sqlops.nb.IStreamResult>cellOutputs[3]).text;
        should(sparkResult).equal('2');
        try {
            // TODO support closing the editor. Right now this prompts and there's no override for this. Need to fix in core
            // Close the editor using the recommended vscode API
            // await vscode.commands.executeCommand('workbench.action.closeActiveEditor');
        } catch (e) { }
    });
});
```
* Added Unified connection support
* Used a generic way to do expandNode.
Cleaned up the ported code and removed unreferenced code; it can be added back as needed later.
Resolved PR comments.
* Minor fixes; removed the timer for all expanders for now. If any provider can't respond, the tree node will spin and wait. We may improve this later.
* Changed handSessionClose to not be thenable.
Added a node to OE to show an error message instead of rejecting, so we can show partial expanded results if we get any (see the sketch at the end of this list).
Resolve PR comments
* Minor fixes of PR comments
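A sketch of the error-node approach to expansion described above, with a simplified, assumed TreeNode shape; the real OE types and provider interfaces differ:

```ts
// Surface an expansion failure as a child node instead of rejecting,
// so partially expanded children from other providers are still shown.
interface TreeNode {
    label: string;
    isError?: boolean;
    children?: TreeNode[];
}

type ProviderExpand = () => Promise<TreeNode[]>;

export async function expandNode(providers: ProviderExpand[]): Promise<TreeNode[]> {
    const results: TreeNode[] = [];
    for (const expand of providers) {
        try {
            results.push(...await expand());
        } catch (e) {
            // Show the error as a node rather than failing the whole expansion
            results.push({ label: `Error: ${e instanceof Error ? e.message : String(e)}`, isError: true });
        }
    }
    return results;
}
```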