This change is a reworking of the way messages are sent to clients from the service layer. It also reworks the protocol so that all query formulations send events back to the client in a deterministic order. To support the first change (a possible shape for the new message contract is sketched after this list):
* Added a new event that will be sent when a message is generated
* Messages now indicate which Batch (if any) generated them
* Messages now indicate if they were error level
* Removed message storage in Batch objects and BatchSummary objects
* Batch objects no longer have error state
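A minimal sketch of what the new message notification could look like, based on the bullets above. All type and member names here are assumptions for illustration, not the actual contract:

```csharp
// Hypothetical shape of the new message notification (names illustrative).
public class ResultMessage
{
    public int? BatchId { get; set; }   // Batch (if any) that generated the message
    public bool IsError { get; set; }   // Whether the message was error level
    public string Time { get; set; }    // Timestamp, serialized for the wire
    public string Message { get; set; } // Text of the message itself
}

public class MessageParams
{
    public string OwnerUri { get; set; }      // assumption: routes the event to the right editor
    public ResultMessage Message { get; set; }
}
```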
This change is an overhaul of the Save As mechanism to use the file reader/writer classes, better aligning with the patterns laid out by the rest of query execution. Why make this change? It makes the code base more uniform and adherent to the patterns/paradigms we've established, and it improves encapsulation by separating the concerns of each component of the Save As flow.
* Replumbing the save as execution to pass the call down the query stack as QueryExecutionService->Query->Batch->ResultSet
* Each layer performs its own parameter checking
* QueryExecutionService checks if the query exists
* Query checks if the batch exists
* Batch checks if the result set exists
* ResultSet checks if the row counts are valid and if the result set has been executed
* Success/Failure delegates are passed down the chain as well
* Determination of whether a save request is a "selection" moved to the SaveResultsRequest class to eliminate duplication of code and creation of utility classes
* Making the IFileStream* classes more generic
* Removing the max-characters-to-store requirement from the GetWriter method and moving it into the constructor for the temporary buffer writer; the values have been moved to the settings and given defaults
* Removing the individual type writers from IFileStreamWriter
* Removing the individual type readers from IFileStreamReader
* Adding a new overload for WriteRow to IFileStreamWriter that will write out data, given a row's worth of data and the list of columns
* Creating a new IFileStreamFactory that creates a reader/writer pair for reading from the temporary files and writing to CSV files (the factory abstraction is sketched after this list)
* Creating a new IFileStreamFactory that creates a reader/writer pair for reading from the temporary files and writing to JSON files
* Dramatically simplified the CSV encoding functionality
* Removed duplicated logic for saving in different types and condensed down to a single chain that only differs based on what type of factory is provided
* Removing the logic for managing the list of save as tasks; since the ResultSet now performs the actual saving work, there's no real need to expose the internals of the ResultSet
* Adding new strings to the sr.strings file for save as error messages
* Completely rewriting the unit tests for the save as mechanism. They are now very fine-grained and should cover the majority of cases (aside from race conditions)
* Refactoring maxchars params into settings and out of file stream factory
* Removing write*/read* methods from file stream readers/writers
* Migrating the CSV save as to the resultset
* Tweaks to unit testing to eliminate writing files to disk
* WIP, moving to a base class for save results writers
* Everything is wired up and compiles
* Adding unit tests for CSV encoding
* Adding unit tests for CSV and Json writers
* Adding tests to the result set for saving
* Refactor to throw exceptions on errors instead of calling failure handler
* Unit tests for batch/query argument in range
* Unit tests
* Adding service integration unit tests
* Final polish, copyright notices, etc
* Adding NULL logic
* Fixing Unicode-to-UTF-8 issue
* Fixing issues as per @kburtram code review comments
* Adding files that got broken?
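The factory abstraction referenced above might look roughly like the following minimal sketch. The member signatures are assumptions inferred from the bullets (WriteRow taking a row's data and columns; max-chars limits moving to settings), not the actual interfaces:

```csharp
using System;
using System.Collections.Generic;
using System.Data.Common;

// Reads rows back from the temporary buffer file
public interface IFileStreamReader : IDisposable
{
    IList<object> ReadRow(long offset, IEnumerable<DbColumn> columns); // assumed signature
}

// Writes rows out, either to the temporary buffer or to CSV/JSON
public interface IFileStreamWriter : IDisposable
{
    // The new overload: writes a whole row given the data and the column list
    int WriteRow(IList<object> row, IList<DbColumn> columns);
}

// One implementation per output format (buffer, CSV, JSON) pairs up
// a reader for the temp files with a writer for the target format
public interface IFileStreamFactory
{
    IFileStreamReader GetReader(string fileName);
    IFileStreamWriter GetWriter(string fileName); // max-chars limits now come from settings
}
```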
This is essentially a replacement for the fix "Adding Milliseconds to DateTime fields". I didn't take into consideration that `DATE` columns would report as the DateTime type. `DATE` columns have a numeric scale of 255, leading the date/time format string to include 255 fractional-second places, which is invalid.
This change also reverses the change to store DateTime precision in the buffer file. Instead, the column metadata is now used when deserializing rows from the db. `DATETIME` and `DATETIME2` columns are differentiated by their numeric scale, while `DATE` columns are differentiated by their data type name field (sketched after the commit list below).
More unit tests were added. Additionally, this fixes an unreported bug where `DATE` columns were being displayed with times, which is incorrect.
* Revert "Adding Milliseconds to DateTime fields (#173)"
This reverts commit 431dfa4156.
* Reworking the reader to use the column metadata for date types
* DbColumn -> DbColumnWrapper
* Final tweaks to support DATETIME2(0)
* Removing the unused arguments
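A minimal sketch of the metadata-driven formatting described above, assuming `DbColumn`-style properties; the helper class and method names are mine:

```csharp
using System;
using System.Data.Common;

public static class DateFormatHelper
{
    // Hypothetical helper: picks a display format from the column metadata
    public static string GetFormatString(DbColumn column)
    {
        // DATE columns report a CLR type of DateTime but must never show a time,
        // and their bogus scale of 255 must not leak into the format string
        if (string.Equals(column.DataTypeName, "date", StringComparison.OrdinalIgnoreCase))
        {
            return "yyyy-MM-dd";
        }

        // DATETIME reports scale 3; DATETIME2(n) reports scale n (0..7)
        int scale = column.NumericScale ?? 0;
        return scale > 0
            ? "yyyy-MM-dd HH:mm:ss." + new string('f', scale)
            : "yyyy-MM-dd HH:mm:ss"; // covers DATETIME2(0)
    }
}
```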
This change is a refactor of the DbColumnWrapper class. It fixes a huge oversight/gotcha where treating a DbColumnWrapper as a DbColumn (i.e., polymorphically) would leave almost all of the column's fields null, which was highly unexpected behavior. A sketch of the fix follows the commit list below.
* DbColumn and ReliableConnection tests
* More retry connection tests
* More tests
* Fix broken peek definition integration tests
* Fix test bug
* Add a couple batch tests
* Add some more tests
* More tests for code coverage.
* Validation and Diagnostic tests
* A few more tests
* A few more test changes.
* Update file path tests to run on Windows only
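The shape of the fix is presumably along these lines, assuming DbColumnWrapper derives from `System.Data.Common.DbColumn`, whose property setters are protected; a sketch, not the actual implementation:

```csharp
using System.Data.Common;

public class DbColumnWrapper : DbColumn
{
    public DbColumnWrapper(DbColumn original)
    {
        // Copy the wrapped column's values into the base class's protected setters
        // so that code holding a plain DbColumn reference sees real metadata
        ColumnName = original.ColumnName;
        ColumnOrdinal = original.ColumnOrdinal;
        ColumnSize = original.ColumnSize;
        AllowDBNull = original.AllowDBNull;
        DataType = original.DataType;
        DataTypeName = original.DataTypeName;
        NumericPrecision = original.NumericPrecision;
        NumericScale = original.NumericScale;
    }
}
```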
This is a slightly larger change than anticipated due to the difference between `DATETIME`, `DATETIME2`, and `DateTime`. The `DATETIME` type always uses 3 decimal places of fractional seconds, while `DATETIME2` has up to 7 (`DATETIME2(7)` is the default in SSMS, suggesting it is a variable-precision type). Regardless of the db type, the engine returns the C# `DateTime` type. The db types are only made visible via the column info, through the numeric precision and numeric scale. My findings were as follows:
`DATETIME`: Precision = 23, Scale = 3
`DATETIME2`: Precision = 255, Scale = 7
The scale corresponds neatly to the number of fractional-second digits to show. The buffer file writer was modified to store both the scale and the number of ticks. The buffer file reader was then modified to read back the scale and the ticks and to generate the DateTime's ToString format with as many "f" specifiers as the scale, which controls the fractional seconds displayed (sketched after the commit list below).
* Code for writing milliseconds of datetime/datetime2 columns
* Adding unit tests
* Fixing potential bug with datetime2(0)
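Roughly what the reader-side formatting amounts to, as a sketch; the class and method names are mine:

```csharp
using System;

public static class DateTimeDisplay
{
    // Rebuilds the display string from the stored ticks and scale
    public static string FromBuffer(long ticks, int scale)
    {
        // scale 3 (DATETIME) -> ".fff", scale 7 (DATETIME2) -> ".fffffff",
        // scale 0 (DATETIME2(0)) -> no fractional part at all
        string format = scale > 0
            ? "yyyy-MM-dd HH:mm:ss." + new string('f', scale)
            : "yyyy-MM-dd HH:mm:ss";
        return new DateTime(ticks).ToString(format);
    }
}
```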
This change is part of the progressive results code. It will submit a notification from the service layer to indicate when execution of a batch has started. This notification will contain the selection for the batch, its execution start time, and its ID. This will enable the extension to produce a header for the batch before the batch completes, making it clearer to the user that execution is in progress. A possible payload shape is sketched after the commit list below.
* Adding new event for batch start
* Unit tests
* Fixing comments as per @kevcunnane
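A plausible shape for the new notification's parameters, with names guessed from the description above (SelectionData is stubbed here for self-containment):

```csharp
// SelectionData is the service-layer type for a document range; stubbed here
public class SelectionData
{
    public int StartLine { get; set; }
    public int StartColumn { get; set; }
    public int EndLine { get; set; }
    public int EndColumn { get; set; }
}

public class BatchStartParams
{
    public int BatchId { get; set; }             // ID of the batch that started
    public SelectionData Selection { get; set; } // Selection the batch covers
    public string ExecutionStart { get; set; }   // Execution start time, for the header
}
```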
The main change in this pull request is to add a new event that is fired upon completion of a result set but before the completion of its batch. The event only fires if a result set was actually generated. A possible payload is sketched after the commit list below.
Changes:
* ConnectionService - Slight changes to enable mocking, cleanup
* Batch - Moving summary generation into the ResultSet class, adding generation of ordinals for result sets, and locking of the result set list (which needs further refinement, but that would be outside the scope of this change)
* Adding new event and associated parameters for completion of a resultset. Params return the resultset summary
* Adding logic for assigning the event a handler in the query execution service
* Adding unit tests for the new event and making sure the existing tests work
* Refactoring some private properties into member variables
* Adding new notification type for result set completion
* Plumbing event for result set completion
* Unit tests for result set events
This includes a fairly substantial change to create a mock of the ConnectionService and to create separate MemoryStream storage arrays. It preserves more correct behavior with an integration test, and fixes an issue where the test db reader would return n-1 rows because the Reliable Connection Helper steals a record.
* Adding locking to ResultSets for thread safety
* Adding/fixing unit tests
* Adding batch ID to result set summary
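The event payload presumably looks something like this sketch; the field names are assumptions, and the batch ID addition comes from the last commit above:

```csharp
public class ResultSetSummary
{
    public int Id { get; set; }        // Ordinal of the result set within the batch
    public int BatchId { get; set; }   // ID of the batch that produced it
    public long RowCount { get; set; } // Rows available for subset requests
}

public class ResultSetCompleteParams
{
    public string OwnerUri { get; set; }                   // assumption: routes to the right editor
    public ResultSetSummary ResultSetSummary { get; set; }
}
```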
This is a fairly large set of changes to the unit tests that improves their isolation and effectiveness.
* Unit tests for query execution have been split into separate files for different classes.
* Unit tests have been added for the ResultSet class which previously did not have tests
* The InMemoryStreamWrapper has been improved to share memory, creating a simulated filesystem (see the sketch after this list)
* Creating a mock ConnectionService to reduce noisy exceptions and prevent "row stealing". Unfortunately this lowers code coverage; however, since the tests that touched the connection service were not really testing it, this keeps us honest. It will require adding more unit tests for the connection service.
* Standardizing the await mechanism for query execution
* Cleaning up the mechanism for getting WorkspaceService mocks and mock FileStreamFactories
* Refactor the query execution tests into their own files
* Removing tests from ExecuteTests.cs that were moved to separate files
* Adding tests for ResultSet class
* Adding test for the FOR XML/JSON component of the resultset class
* Setting up shared storage between file stream readers/writers
* Standardizing on Workspace mocking, awaiting execution completion
* Adding comment for ResultSet class
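The shared-storage idea behind the simulated file system, as a sketch; the class and member names are mine, not the actual InMemoryStreamWrapper API:

```csharp
using System.Collections.Concurrent;
using System.IO;

// All readers and writers in a test resolve a "file name" to the same
// backing stream, so nothing ever has to touch the real disk
public class InMemoryFileSystem
{
    private readonly ConcurrentDictionary<string, MemoryStream> files =
        new ConcurrentDictionary<string, MemoryStream>();

    public Stream GetFileStream(string fileName)
    {
        // A reader opening a "file" a writer just produced gets the same
        // stream; callers reset Position before reading back
        return files.GetOrAdd(fileName, _ => new MemoryStream());
    }
}
```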
The main feature of this pull request is a new callback on the Query class that is invoked when a batch has completed execution and retrieval of its results. The callback sends an event to the extension with the batch summary information. After that, the extension can submit subset requests for the result sets of the batch. The callback wiring is sketched after the commit list below.
Other smaller changes in this pull request:
* Refactor to assign a batch an ID when it's created instead of when returning the list of batch summaries
* Passing the SelectionData around instead of extracting the values from it
* Moving creation of BatchSummary into the Batch class
* Retrieval of results is now permitted even if the entire query has not completed, as long as the requested batch has completed
Also note, this does not break the protocol. It adds a new event that a queryRunner can listen to, but does not require anyone to listen to it.
* Refactor to remove SectionData class in favor of BufferRange
* Adding callback for batch completion that will let the extension know that a batch has completed execution
* Refactoring to make progressive results work as per async query execution
* Allowing retrieval of batch results while query is in progress
* reverting global.json, whoops
* Adding a few missing comments, and fixing a couple code style bugs
* Using SelectionData everywhere again
* One more missing comment
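A sketch of the callback wiring on the Batch class; the delegate and member names are assumptions based on the description above:

```csharp
using System.Threading.Tasks;

public class Batch
{
    public delegate Task BatchAsyncEventHandler(Batch batch);

    // Fired after the batch finishes executing and its results have been read
    public event BatchAsyncEventHandler BatchCompletion;

    private async Task OnExecutionFinishedAsync()
    {
        var handler = BatchCompletion;
        if (handler != null)
        {
            // Lets the query runner push a batch-complete event, with the
            // summary, to the extension before the whole query finishes
            await handler(this);
        }
    }
}
```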
* Make save results asynchronous
* Prevent write share of file
* Lock objects in stages
* Create Save result objects
* refactor and write rows in batches
* Change batchSize from test value
* Remove await in handler
* Removing the file reader as a member of the resultset
* Change Dispose to wait for save
* Change concurrentBag
* PascalCase variables
* Modify function signature and tests
* Safe file methods
* refactor ResultSets to IList and remove ToList
* Change dictionary key and prevent add to saveTasks during dispose
* Simplify row concatenation
* Fix prevent add
* Fix prevent add
* Add methods to expose saveTasks and isBeingDisposed
After much thought, this change brings the message behavior into line with SSMS, including when NOCOUNT is set. The solution is somewhat non-obvious: the StatementCompleted event handler only fires when there is a record count to return. We add the row-count message when a StatementCompleted event fires; otherwise, we add no messages while processing the result sets of the batch. Any messages returned from the server are captured. Then, if at the end of the batch we haven't collected any messages from StatementCompleted events or from the server, we add the "completed successfully" message (sketched after the commit list below).
This matches the behavior of SSMS, which only emits a "completed successfully" message if there were no other messages for the batch.
* Solution to issue, some unit tests needed to be tweaked
* Comments for the event handler
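In code, the behavior described above amounts to something like this sketch. `SqlCommand.StatementCompleted` and `StatementCompletedEventArgs.RecordCount` are real APIs; the tracker class, flag, and AddMessage helper are assumptions:

```csharp
using System.Data;

public class BatchMessageTracker
{
    private bool hasMessages;

    // Wired to SqlCommand.StatementCompleted; SQL Server only raises this
    // when there is a row count to report, so NOCOUNT suppresses it
    public void OnStatementCompleted(object sender, StatementCompletedEventArgs args)
    {
        hasMessages = true;
        AddMessage($"({args.RecordCount} row(s) affected)");
    }

    // Wired to the connection's InfoMessage event for server-sent messages
    public void OnServerMessage(string message)
    {
        hasMessages = true;
        AddMessage(message);
    }

    // Called once the batch finishes: match SSMS and only report success
    // when nothing else was said during the batch
    public void OnBatchComplete()
    {
        if (!hasMessages)
        {
            AddMessage("Commands completed successfully.");
        }
    }

    private void AddMessage(string message) { /* hypothetical: store/emit the message */ }
}
```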
Moving some logic around so that when a query is cancelled, it isn't thrown away, allowing the partial results to be read, a la SSMS.
Adding a ConfigureAwait to fix a tenacious bug that caused query cancellations to hang for 20s or more.
Capturing SQL errors caused by user cancellation, so that all cancellation scenarios return the same messages (i.e., cancelling during ExecuteReaderAsync yields an error from the server, whereas cancelling during ReadAsync throws a TaskCanceledException). A sketch follows the commit list below.
No changes to protocol, just implementation changes.
* Test of try/finally
* Fixed issue where resultsets are unreadable after query cancellation
* Fix for await/async issue
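A sketch of the normalized cancellation path, using System.Data.SqlClient types; the loop structure and the unified cancel handler are assumptions:

```csharp
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

public static class QueryReaderLoop
{
    public static async Task ReadAllAsync(SqlCommand command, CancellationToken token)
    {
        try
        {
            // ConfigureAwait(false) is the fix for cancellations hanging ~20s
            using (var reader = await command.ExecuteReaderAsync(token).ConfigureAwait(false))
            {
                while (await reader.ReadAsync(token).ConfigureAwait(false))
                {
                    // buffer the row to the temp file ...
                }
            }
        }
        catch (TaskCanceledException)
        {
            // Cancelling during ReadAsync surfaces as a task cancellation
            HandleCancellation();
        }
        catch (SqlException) when (token.IsCancellationRequested)
        {
            // Cancelling during ExecuteReaderAsync surfaces as a server error;
            // funnel it into the same cancellation messages
            HandleCancellation();
        }
    }

    private static void HandleCancellation() { /* hypothetical unified cancel handler */ }
}
```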
The two main changes in this pull request (sketched after the commit list below):
* Launching query execution as an asynchronous task that performs a callback upon completion or failure of a query (which also sets us up for progressive-results callbacks)
* Moving away from using the Result of a query execution to return an error; instead we use an error event
Additionally, some nice refactoring and cleaning up of the unit tests to take advantage of the cool RequestContext mock tooling by @kevcunnane
* Initial commit of refactor to run execution truly asynchronously
* Moving the storage of the task into Query class
Callbacks for completion and failure of a query are set up as events in the Query class. This gives us a very nice framework for adding batch and result set completion callbacks. However, it also exposes a problem with cancelling queries and returning errors: we don't properly handle errors during execution of a query (aside from DB errors).
* Wrapping things up in order to submit for code review
* Adding fixes as per comments
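A sketch of the execute-and-callback model; the member names are assumptions based on the description:

```csharp
using System;
using System.Threading.Tasks;

public class Query
{
    // Completion and failure callbacks, set up as events on the query
    public event Func<Query, Task> QueryCompleted;
    public event Func<Query, Task> QueryFailed;

    // The execution task is stored on the query instead of being awaited
    // by the request handler, so the request returns immediately
    public Task ExecutionTask { get; private set; }

    public void Execute()
    {
        ExecutionTask = Task.Run(async () =>
        {
            try
            {
                await ExecuteInternalAsync();
                var completed = QueryCompleted;
                if (completed != null) { await completed(this); }
            }
            catch (Exception)
            {
                var failed = QueryFailed;
                if (failed != null) { await failed(this); }
            }
        });
    }

    private Task ExecuteInternalAsync() => Task.CompletedTask; // stand-in for the real work
}
```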
Fixing a bug where, in various situations, the files used for temporary storage of query results would be left behind. In particular, the following changes were made (a sketch follows the commit list below):
* When the dispose query request is submitted, the corresponding query is now disposed in addition to being removed from the list of active queries
* When a query is cancelled, it is disposed after it is cancelled
* If a query already exists for a given ownerURI, the existing query is disposed before creating a new query
* All queries are disposed when the query execution service is disposed (i.e., at shutdown of the service)
A unit test to verify the action of the dispose method for a ResultSet was added.
* Ensuring queries are disposed
Adding logic to dispose any queries when:
* A URI that already has a query executes another query
* A request to dispose a query is submitted
* A request to cancel a query is submitted
* Small tweaks for cleanup of query execution service
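The dispose-before-replace rule might look like this sketch; the collection name and Query stub are assumptions:

```csharp
using System.Collections.Concurrent;

public class Query : System.IDisposable
{
    public void Dispose() { /* deletes the query's temp buffer files */ }
}

public class QueryExecutionService
{
    private readonly ConcurrentDictionary<string, Query> activeQueries =
        new ConcurrentDictionary<string, Query>();

    public void StartQuery(string ownerUri, Query newQuery)
    {
        // If this URI already has a query, dispose it first so its
        // temporary buffer files are deleted rather than left behind
        if (activeQueries.TryRemove(ownerUri, out Query oldQuery))
        {
            oldQuery.Dispose();
        }
        activeQueries[ownerUri] = newQuery;
    }
}
```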
* added support for timestamps
* fixed tests
* Moved message class to own file; added 'z' to end of date strings
* added default time constructor
* removed unnecessary z
* added time string format info in comment
* changed from utc time to using local time
* Do not use ReliableCommand in the query execution service.
* Fixing the logic to remove InfoMessage handlers from ReliableSqlConnection
* Adding test to query UDT
* Enable IntelliSense settings
* Fix up some bugs in the IntelliSense settings.
* Code cleans for PR
* Fix a couple of exceptions that were breaking query execution and IntelliSense.
* Add useLowerCase flag and settings tests
* initial pipe of line numbers and getting text from workspace services
* tests compile
* Fixed bug regarding tests using connections on mac
* updated tests
* fixed workspace service and fixed tests
* integrated feedback
* WIP for ability to localize cell values
* Changing how DateTimeOffsets are stored, getting unit tests going
* Reworking BufferFileStreamWriter to use dictionary approach
* Plumbing the DbCellValue type the rest of the way through
* Removing unused components to simplify contract
* Cleanup and making sure byte[] appears in parity with SSMS
* CR comments, small tweaks for optimizing LINQ
Fixing a bug in the unit tests on OSX/Unix where attempting to create a file with a name that's all whitespace succeeds when it should fail. The test was passing on Windows because File.Open throws an ArgumentException when an all-whitespace name is provided; on Unix systems, File.Open does not throw, causing the test to fail.
The solution is to check for whitespace in the sanity check, as sketched below.
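The added check is presumably along these lines; the wrapper class and method names are mine:

```csharp
using System;
using System.IO;

public static class SafeFile
{
    public static FileStream OpenForWrite(string fileName)
    {
        // On Unix, File.Open happily creates a file whose name is all whitespace,
        // so reject it explicitly instead of relying on File.Open to throw
        if (string.IsNullOrWhiteSpace(fileName))
        {
            throw new ArgumentException("File name cannot be empty or whitespace", nameof(fileName));
        }
        return File.Open(fileName, FileMode.Create, FileAccess.Write);
    }
}
```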
* Strings sweep for connection service
* String sweep for credentials service
* String sweep for hosting
* String sweep for query execution service
* String sweep for Workspace service
* Renaming utility namespace to match standards
Renaming Microsoft.SqlTools.EditorServices.Utility to Microsoft.SqlTools.ServiceLayer.Utility to match the naming changes done a while back, and updating the namespace in the files that use it.
* Namespace change on reliable connection
* Adding the new resx and designer files
* Final bug fixes for srgen
Fixing flaky Moq package name
* Removing todo as per @kevcunnane
* Adding using statements as per @llali's comment
* Fixing issues from broken unit tests
Note: This feature contains changes that will break the contract for saving as CSV and JSON. On success, null is returned as the message instead of "Success". Changes will be made to the vscode component to handle this.
* Initial commit of reliable connection port
* Made ReliableSqlConnection inherit from DbConnection instead of IDbConnection
* Cleanup
* Fixed autocomplete service to use reliable connection
* Fix copyright headers
* Renamed ConnectResponse.Server to ServerInfo
* Removed unused using
* Addressing code review feedback
This was an issue where non-string types were being read back as strings (i.e., an int was read back as a string), resulting in the returned objects being completely unintelligible values. It was a very simple issue: the column data type wasn't being set correctly in our wrapper type.
* Code changes to save results as csv
* changes to save resultset as csv
* removed csvHelper
* Retrieve right resultSet after batch execution
* code clean up
* encode column names and use string.Join
* code review changes - property fix and rowBuilder removal
* changes to fix execution tests
* Code clean up
* Code clean up
* Test save as CSV
* Fix tests for Mac/Linux
* Add doc comment
* Add doc comments
* Delete file if exception occurs