* Revert NetCore target to 1.0.0 to fix Jenkins
- Changing to 1.* ends up requiring .NET Core 1.1 to be installed on the machine. We need a better solution that can work around this and let us stay on 1.0 for now. Checking in to unblock builds; will fix Travis CI later.
* Installing dotnet as part of Travis setup. There is a built-in `dotnet:` argument that uses the dotnet-install scripts and supports installing a specific version
* DbColumn and ReliableConnection tests
* More retry connection tests
* More tests
* Fix broken peek definition integration tests
* Fix test bug
* Add a couple batch tests
* Add some more tests
* More tests for code coverage.
* Validation and Diagnostic tests
* A few more tests
* A few more test changes.
* Update file path tests to run on Windows only
This is a slightly larger change than anticipated due to the difference between `DATETIME`, `DATETIME2`, and `DateTime`. The `DATETIME` type always uses 3 fractional-second digits, while `DATETIME2` defaults to 7 (`DATETIME2(7)` is the default in SSMS, since it is a variable-precision type). Regardless of the db type, the engine returns the C# `DateTime` type; the db types are only distinguishable via the column info, specifically the numeric precision and numeric scale. My findings were as follows:
`DATETIME`: Precision = 23, Scale = 3
`DATETIME2`: Precision = 255, Scale = 7
The scale corresponds neatly to the number of fractional-second digits to show. The buffer file writer was modified to store both the scale and the number of ticks. The buffer file reader was then modified to read the scale and the number of ticks and to build the string representation of the `DateTime` with as many "f" format specifiers as the scale, which controls how many fractional-second digits are displayed.
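Roughly, the reader-side formatting amounts to something like this sketch (illustrative names, not the actual buffer reader code):

```csharp
using System;
using System.Data.Common;

// A minimal sketch: use the column's numeric scale to decide how many
// fractional-second digits to render for a DateTime value.
public static class DateTimeDisplay
{
    public static string Format(DateTime value, DbColumn column)
    {
        // DATETIME reports Scale = 3; DATETIME2(n) reports Scale = n.
        int scale = column.NumericScale ?? 0;

        string format = "yyyy-MM-dd HH:mm:ss";
        if (scale > 0)
        {
            // Append one "f" per digit of scale, e.g. scale 7 -> ".fffffff"
            format += "." + new string('f', scale);
        }

        return value.ToString(format);
    }
}
```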
* Code for writing milliseconds of datetime/datetime2 columns
* Adding unit tests
* Fixing potential bug with datetime2(0)
This change is part of the progressive results work. It submits a notification from the service layer to indicate when execution of a batch has started. The notification contains the selection for the batch, the execution start time, and the batch's ID. This enables the extension to render a header for the batch before the batch completes, making it clearer to the user that execution is in progress.
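For context, a minimal sketch of what that notification payload might look like; the class and property names here are illustrative, not the service layer's real contracts:

```csharp
using System;

// Hypothetical shapes for illustration only.
public class SelectionData
{
    public int StartLine { get; set; }
    public int StartColumn { get; set; }
    public int EndLine { get; set; }
    public int EndColumn { get; set; }
}

public class BatchStartParams
{
    // ID of the batch within the query
    public int BatchId { get; set; }

    // Editor selection that produced this batch, so the extension can render a header for it
    public SelectionData Selection { get; set; }

    // Time the service layer began executing the batch
    public DateTime ExecutionStart { get; set; }
}
```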
* Adding new event for batch start
* Unit tests
* Fixing comments as per @kevcunnane
* Fix Integrated auth error and Uri for *nix/Mac
* Format code
* Add Logging and unit tests
* Modify tests for Windows:
* Workaround missing default schema on *nix and Mac
* Add unit tests
* Correct comments
* Change loop length
* Fix Log message
The main change in this pull request is a new event that fires upon completion of a result set, but before the completion of its batch. The event only fires if a result set was actually generated.
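A rough sketch of the pattern, with hypothetical names (the real Batch/ResultSet contracts differ in detail):

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical shapes for illustration only.
public class ResultSetSummary
{
    public int Id { get; set; }        // ordinal of the result set within its batch
    public int BatchId { get; set; }
    public long RowCount { get; set; }
}

public class ResultSet
{
    // Fired once this result set has been fully read and buffered,
    // which can happen well before the whole batch finishes.
    public event Func<ResultSetSummary, Task> ResultSetCompleted;

    public ResultSetSummary Summary { get; } = new ResultSetSummary();

    internal Task NotifyCompletedAsync() =>
        ResultSetCompleted?.Invoke(Summary) ?? Task.CompletedTask;
}

// The query execution service then forwards the summary to the client, e.g.:
//   resultSet.ResultSetCompleted += summary =>
//       requestContext.SendEvent(ResultSetCompleteEvent.Type, summary); // hypothetical names
```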
Changes:
* ConnectionService - Slight changes to enable mocking, cleanup
* Batch - Moving summary generation into the ResultSet class, adding generation of ordinals for result sets, and locking of the result set list (which needs further refinement, but that would be outside the scope of this change)
* Adding new event and associated parameters for completion of a resultset. Params return the resultset summary
* Adding logic for assigning the event a handler in the query execution service
* Adding unit tests for the new event and making sure the existing tests still work
* Refactoring some private properties into member variables
* Adding new notification type for result set completion
* Plumbing event for result set completion
* Unit tests for result set events
This includes a fairly substantial change to create a mock of the
ConnectionService and to create separate MemoryStream storage arrays. It
preserves more correct behavior with an integration test, and fixes an issue
where the test db reader would return n-1 rows because the Reliable
Connection Helper steals a record.
* Adding locking to ResultSets for thread safety
* Adding/fixing unit tests
* Adding batch ID to result set summary
This is another large code review. I want to make a few more changes, but since these changes will stand on their own, I'll hold back on making this change set any larger than it already is.
Changes in this request:
To address Microsoft/vscode-mssql#326, instead of running taskkill on the service layer when WaitForExit is executed, we now make an educated guess at which service layer process was spawned when the test starts and call Process.Kill on it when we shut down the test.
All the perf tests have been moved into a new project. This was done to keep them easily separated from code coverage test runs. At the same time, the perf tests were split into separate classes for logical categorization. This process will likely be repeated for the stress tests. The tests can still easily be run from the Visual Studio Test Explorer.
To address Microsoft/vscode-mssql#349, a new SelfCleaningFile class was created to allow for easy cleanup of temporary files generated for integration tests via using blocks.
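The idea is roughly the following sketch (not the actual class, whose API may differ):

```csharp
using System;
using System.IO;

// Sketch of a disposable wrapper around a temp file for integration tests.
public sealed class SelfCleaningTempFile : IDisposable
{
    public string FilePath { get; } = Path.GetTempFileName();

    public void Dispose()
    {
        // Best-effort cleanup when the using block exits
        try { File.Delete(FilePath); } catch (IOException) { /* ignore */ }
    }
}

// Usage in a test:
//   using (var tempFile = new SelfCleaningTempFile())
//   {
//       RunQueryAndSaveResults(tempFile.FilePath);  // hypothetical test helper
//   }
```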
Due to some of the refactoring done while moving the perf tests to a new project, the TestBase class had to be switched to more of a helper-class style. As such, all tests that previously inherited from TestBase now create a TestBase object at startup via a using block. This also simplifies the cleanup at the end of each test.
* Solution for hanging code coverage runs
Code coverage runs would hang in certain scenarios if a test failed before
the service process could be spawned. The taskkill command would fail to
find the service process. The test would then wait for opencover to exit,
but it would not since the service process it had spawned would still be
running, causing the test run to hang indefinitely.
The solution was to capture the service process after it launched and
explicitly kill it when shutting down the test driver.
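A minimal sketch of that workaround, assuming the service layer's process name contains "ServiceLayer"; the real test driver's tracking logic may differ:

```csharp
using System;
using System.Diagnostics;
using System.Linq;

public sealed class ServiceProcessTracker : IDisposable
{
    private readonly Process serviceProcess;

    public ServiceProcessTracker()
    {
        // Educated guess: the most recently started process whose name looks
        // like the service layer is the one this test run spawned.
        serviceProcess = Process.GetProcesses()
            .Where(p => p.ProcessName.Contains("ServiceLayer"))
            .OrderByDescending(p => p.StartTime)
            .FirstOrDefault();
    }

    public void Dispose()
    {
        // Kill it explicitly so OpenCover can exit even if the test failed
        // before a normal shutdown path ran.
        if (serviceProcess != null && !serviceProcess.HasExited)
        {
            serviceProcess.Kill();
        }
    }
}
```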
* Setting the test name in a property on the class and removing the parameter from each method
* New project for perf tests
* Reworking integration tests to cleanup temp files
* Changes as per @llali review comments
* Adding copyright notices
* Renaming TestBase => TestHelper
* Renaming SelfCleaningFile => SelfCleaningTempFile
* Removing code that sets TestName property
* Fixing compilation error due to removed code
This is a fairly large set of changes to the unit tests that improve how well-isolated and effective the unit tests are.
* Unit tests for query execution have been split into separate files for different classes.
* Unit tests have been added for the ResultSet class which previously did not have tests
* The InMemoryStreamWrapper has been improved to share memory, creating a simulated filesystem (see the sketch after this list)
* Creating a mock ConnectionService to reduce noisy exceptions and prevent "row stealing". Unfortunately this lowers code coverage, but since the tests that touched the connection service were not really testing it, this keeps us honest; it will, however, require adding more unit tests for the connection service.
* Standardizing the await mechanism for query execution
* Cleaning up the mechanism for getting WorkspaceService mocks and mock FileStreamFactories
* Refactor the query execution tests into their own files
* Removing tests from ExecuteTests.cs that were moved to separate files
* Adding tests for ResultSet class
* Adding test for the FOR XML/JSON component of the resultset class
* Setting up shared storage between file stream readers/writers
* Standardizing on Workspace mocking, awaiting execution completion
* Adding comment for ResultSet class
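The simulated filesystem referenced above boils down to something like this sketch (hypothetical names; the real InMemoryStreamWrapper differs in detail):

```csharp
using System.Collections.Generic;
using System.IO;

// Sketch of shared in-memory storage that stands in for the file system in tests.
public class InMemoryFileSystem
{
    // One buffer per "file name", shared by all writers and readers
    private readonly Dictionary<string, MemoryStream> files =
        new Dictionary<string, MemoryStream>();

    public Stream OpenWrite(string fileName)
    {
        if (!files.ContainsKey(fileName))
        {
            files[fileName] = new MemoryStream();
        }
        return files[fileName];
    }

    public Stream OpenRead(string fileName)
    {
        // Readers see exactly the bytes a writer produced, so a test's
        // "write results, then read them back" path never touches disk.
        return new MemoryStream(files[fileName].ToArray(), writable: false);
    }
}
```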
* Adding useful unit tests for this functionality
* Adding callback functionality for when a file is closed
* Fixing bad data issue w/closing/opening untitled doc
* Adding useful unit tests for this functionality
* Adding callback functionality for when a file is closed
* Moving from public to internal
The main feature of this pull request is a new callback on the Query class that is invoked when a batch has completed execution and retrieval of results. This callback sends an event to the extension with the batch summary information. After that, the extension can submit subset requests for the result sets of the batch.
Other smaller changes in this pull request:
Refactor to assign a batch an ID when it's created instead of when returning the list of batch summaries
Passing the SelectionData around instead of extracting the values for it
Moving creation of BatchSummary into the Batch class
Retrieval of results is now permitted even if the entire query has not completed, as long as the batch requested has completed.
Also note that this does not break the protocol: it adds a new event that a queryRunner can listen to, but does not require it to be handled.
* Refactor to remove SectionData class in favor of BufferRange
* Adding callback for batch completion that will let the extension know that a batch has completed execution
* Refactoring to make progressive results work as per async query execution
* Allowing retrieval of batch results while query is in progress
* reverting global.json, whoops
* Adding a few missing comments, and fixing a couple code style bugs
* Using SelectionData everywhere again
* One more missing comment
Auto-merging test-only changes. Please review the commit and I'll make changes in the next iteration.
* Add more tests to boost code coverage
* Add more reliable connection tests.
Next round of code coverage test cases. Please review the commit for next iteration.
* Add connection retry tests
* More test coverage
* Update diagnostics end-to-end test
Test-only changes for code coverage. Please review the commit and I'll include the changes in the next iteration.
* Add more tests
* Add some more additional test cases
These are test-only changes to improve code coverage so I'll merge directly. Please review the commit and I'll pickup those changes in the next iteration.
* Add integration test batch file
* Exclude Linux and MacOS from Windows code coverage builds
* Enable code coverage for test driver e2e tests
* Use the windows only build for code coverage runs
* Make save results asynchronous
* Prevent write share of file
* Lock objects in stages
* Create Save result objects
* refactor and write rows in batches
* Change batchSize from test value
* Remove await in handler
* Removing the file reader as a member of the resultset
* Change Dispose to wait for save
* Change concurrentBag
* PascalCase variables
* Modify function signature and tests
* Safe file methods
* refactor ResultSets to IList and remove ToList
* Change dictionary key and prevent add to saveTasks during dispose
* Simplify row concatenation
* Fix prevent add
* Fix prevent add
* Add methods to expose saveTasks and isBeingDisposed
After much thinking, this change brings the message behavior into line with SSMS, including when NOCOUNT is set. The solution is somewhat non-obvious: the StatementCompleted event handler only fires if there is a record count to return. We add the row-count message when the StatementCompleted event fires; otherwise we don't add any messages while processing the result sets of the batch. If any messages are returned from the server, we capture those. Then, if at the end of the batch we haven't collected any messages from StatementCompleted events or the server, we add the "completed successfully" message.
This matches the behavior of SSMS, which only emits a "completed successfully" message if there were no other messages for the batch.
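A condensed sketch of that logic, using the SqlCommand.StatementCompleted and SqlConnection.InfoMessage events, with a hypothetical collector class standing in for the batch's message handling:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public class BatchMessageCollector
{
    private readonly List<string> messages = new List<string>();

    public void Attach(SqlCommand command, SqlConnection connection)
    {
        // Fires only when there is a row count to report (i.e. NOCOUNT is OFF)
        command.StatementCompleted += (sender, args) =>
            messages.Add(string.Format("({0} row(s) affected)", args.RecordCount));

        // Server-generated messages (PRINT, low-severity RAISERROR, etc.)
        connection.InfoMessage += (sender, args) => messages.Add(args.Message);
    }

    public IReadOnlyList<string> CompleteBatch()
    {
        // Match SSMS: only emit "completed successfully" when the batch
        // produced no row-count or server messages of its own.
        if (messages.Count == 0)
        {
            messages.Add("Command(s) completed successfully.");
        }
        return messages;
    }
}
```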
* Solution to issue, some unit tests needed to be tweaked
* Comments for the event handler
* Ported ReliableConnectionTests from DacFx and added a few more tests
* Fix style
* Created integration tests configuration and fixed minor test issue
The two main changes in this pull request:
Launching query execution as an asynchronous task that performs a callback upon completion or failure of a query (which also sets us up for progressive results callbacks)
Moving away from using the Result of a query execution request to return an error; instead we'll use an error event
Additionally, some nice refactoring and cleaning up of the unit tests to take advantage of the cool RequestContext mock tooling by @kevcunnane
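In rough outline, the new execution flow looks like this sketch (hypothetical member names, simplified error handling):

```csharp
using System;
using System.Threading.Tasks;

public class Query
{
    // Callbacks the query execution service hooks up before starting execution
    public event Func<Query, Task> QueryCompleted;
    public event Func<Query, Exception, Task> QueryFailed;

    // Stored so tests (and progressive-results logic) can await completion
    public Task ExecutionTask { get; private set; }

    public void Execute()
    {
        // Run asynchronously; the request handler returns immediately and
        // errors are reported through the failure callback, not the Result.
        ExecutionTask = Task.Run(async () =>
        {
            try
            {
                await ExecuteInternalAsync();
                if (QueryCompleted != null) { await QueryCompleted(this); }
            }
            catch (Exception e)
            {
                if (QueryFailed != null) { await QueryFailed(this, e); }
            }
        });
    }

    // Placeholder for the batch-by-batch execution logic
    private Task ExecuteInternalAsync() => Task.CompletedTask;
}
```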
* Initial commit of refactor to run execution truly asynchronously
* Moving the storage of the task into Query class
Callbacks for completion of a query and failure of a query are set up as
events in the Query class. This actually sets us up for a very nice
framework for adding batch and resultset completion callbacks.
However, this also exposes a problem with cancelling queries and returning
errors -- we don't properly handle errors during execution of a query
(aside from DB errors).
* Wrapping things up in order to submit for code review
* Adding fixes as per comments