This change is a reworking of the way messages are sent to clients from the service layer. It also reworks the protocol to ensure that all formulations of a query send events back to the client in a deterministic order. To support the first change (a sketch of the resulting message shape follows the list):
* Added a new event that will be sent when a message is generated
* Messages now indicate which Batch (if any) generated them
* Messages now indicate whether they were error level
* Removed message storage in Batch objects and BatchSummary objects
* Batch objects no longer have error state
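As a rough illustration, the message notification's payload might now look like this (property names are assumptions based on the bullets above, not the exact wire contract):

```csharp
// Sketch of the new message event parameters (names assumed from the
// description above; not necessarily the exact contract).
public class ResultMessage
{
    public int? BatchId { get; set; }    // batch that generated the message, if any
    public bool IsError { get; set; }    // true when the message was error level
    public string Message { get; set; }  // the message text itself
}
```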
Add an IntegrationTests project. Move all tests ifdef'd with LIVE_CONNECTION_TESTS to the IntegrationTests project. Delete files that have no remaining code. Update codecoverage.bat to run the integration tests.
This is an overhaul of the Save As mechanism to use the file reader/writer classes, better aligning with the patterns laid out by the rest of query execution. Why make this change? It makes our code base more uniform and adherent to the patterns/paradigms we've set up, and it improves the encapsulation of the classes by separating the concerns of each component of the Save As function.
* Replumbing the Save As execution to pass the call down the query stack as QueryExecutionService->Query->Batch->ResultSet (a sketch of this chain follows the commit list below)
* Each layer performs its own parameter checking
* QueryExecutionService checks if the query exists
* Query checks if the batch exists
* Batch checks if the result set exists
* ResultSet checks if the row counts are valid and if the result set has been executed
* Success/Failure delegates are passed down the chain as well
* Determination of whether a save request is a "selection" moved to the SaveResultsRequest class to eliminate duplication of code and creation of utility classes
* Making the IFileStream* classes more generic
* Removing the max-characters-to-store requirement from the GetWriter method and moving it into the constructor for the temporary buffer writer; the values have been moved to the settings and given defaults
* Removing the individual type writers from IFileStreamWriter
* Removing the individual type writers from IFileStreamReader
* Adding a new WriteRow overload to IFileStreamWriter that writes out a row, given a row's worth of data and the list of columns
* Creating a new IFileStreamFactory that creates a reader/writer pair for reading from the temporary files and writing to CSV files
* Creating a new IFileStreamFactory that creates a reader/writer pair for reading from the temporary files and writing to JSON files
* Dramatically simplified the CSV encoding functionality
* Removed duplicated logic for saving to different formats, condensing it into a single chain that differs only in which type of factory is provided
* Removing the logic for managing the list of Save As tasks; since the ResultSet now performs the actual saving work, there's no real need to expose the internals of the ResultSet
* Adding new strings to the sr.strings file for save as error messages
* Completely rewriting the unit tests for the Save As mechanism. The unit tests are now very fine-grained and should cover the majority of cases (aside from race conditions)
* Refactoring maxchars params into settings and out of file stream factory
* Removing write*/read* methods from file stream readers/writers
* Migrating the CSV save as to the resultset
* Tweaks to unit testing to eliminate writing files to disk
* WIP, moving to a base class for save results writers
* Everything is wired up and compiles
* Adding unit tests for CSV encoding
* Adding unit tests for CSV and Json writers
* Adding tests to the result set for saving
* Refactor to throw exceptions on errors instead of calling failure handler
* Unit tests for batch/query argument in range
* Unit tests
* Adding service integration unit tests
* Final polish, copyright notices, etc
* Adding NULL logic
* Fixing issue of unicode to utf8
* Fixing issues as per @kburtram code review comments
* Adding files that got broken?
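Putting the bullets above together, the replumbed chain and the factory abstraction might look roughly like this (a hedged C# sketch; all names and signatures are assumptions, not the actual service code):

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of the replumbed Save As chain. Each layer validates
// only the parameters it owns, then delegates downward.
public interface IFileStreamReader : IDisposable { /* row reading methods */ }
public interface IFileStreamWriter : IDisposable { /* WriteRow(...) etc. */ }

// A factory pairs a buffer-file reader with a format-specific writer, so the
// save chain differs only in which factory (CSV, JSON, ...) is supplied.
public interface IFileStreamFactory
{
    IFileStreamReader GetReader(string fileName);
    IFileStreamWriter GetWriter(string fileName);
}

public class QueryExecutionService
{
    private readonly Dictionary<string, Query> queries = new Dictionary<string, Query>();

    public void SaveResults(string ownerUri, int batchIndex, int resultSetIndex, IFileStreamFactory factory)
    {
        Query query;
        if (!queries.TryGetValue(ownerUri, out query))               // does the query exist?
            throw new ArgumentOutOfRangeException(nameof(ownerUri));
        query.SaveAs(batchIndex, resultSetIndex, factory);
    }
}

public class Query
{
    private readonly List<Batch> batches = new List<Batch>();

    public void SaveAs(int batchIndex, int resultSetIndex, IFileStreamFactory factory)
    {
        if (batchIndex < 0 || batchIndex >= batches.Count)           // does the batch exist?
            throw new ArgumentOutOfRangeException(nameof(batchIndex));
        batches[batchIndex].SaveAs(resultSetIndex, factory);
    }
}

public class Batch
{
    private readonly List<ResultSet> resultSets = new List<ResultSet>();

    public void SaveAs(int resultSetIndex, IFileStreamFactory factory)
    {
        if (resultSetIndex < 0 || resultSetIndex >= resultSets.Count) // does the result set exist?
            throw new ArgumentOutOfRangeException(nameof(resultSetIndex));
        resultSets[resultSetIndex].SaveAs(factory);
    }
}

public class ResultSet
{
    public bool HasCompletedRead { get; private set; }

    public void SaveAs(IFileStreamFactory factory)
    {
        if (!HasCompletedRead)                                        // has execution finished?
            throw new InvalidOperationException("Result set has not finished executing");
        // Read rows back from the temp buffer and hand each row's worth of
        // data plus the column list to the format-specific writer:
        // using (var reader = factory.GetReader(bufferFile))
        // using (var writer = factory.GetWriter(outputFile))
        //     foreach row: writer.WriteRow(row, columns);
    }
}
```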
This is a documentation-only update, so it is being automerged. Please leave feedback on the commit if there are any follow-ups you'd like.
* Update .gitignore for docfx generated files
* Update documentation (part 1)
* Update docs and add some samples
* More doc updates
This is a reworking of the unit tests to permit us to better test events from the service host. The new Event Flow Validator class allows building a chain of expected events that can be validated after the request executes. Each event can have its own custom validation logic for verifying that the object sent via the service host is correct, and the validator also checks that events arrive in the correct order.
The big drawback is that (at this time) the validator cannot support asynchronous events or non-deterministic ordering of events. We don't need this for the query execution functionality, despite messages being sent asynchronously, because async messages aren't sent during unit tests (the db message event is only present on SqlDbConnection classes). If the need arises to validate async or out-of-order events, I have some ideas for how we can do that.
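To make the mechanism concrete, here is a minimal self-contained sketch of how such a validator can work (all names are illustrative, not the actual test types):

```csharp
using System;
using System.Collections.Generic;

// Expected events are registered as a chain of (event type, validation
// callback) pairs; Validate() then checks both the ordering and the
// content of what the service host sent.
public class EventFlowValidator
{
    private readonly List<(string EventType, Action<object> Validate)> expected =
        new List<(string, Action<object>)>();
    private readonly List<(string EventType, object Param)> received =
        new List<(string, object)>();

    public EventFlowValidator AddExpectedEvent<T>(string eventType, Action<T> validator)
    {
        expected.Add((eventType, p => validator((T)p)));
        return this;   // fluent chaining builds the expected sequence
    }

    // The test's mocked service host records each outgoing event here
    public void OnEventSent(string eventType, object param) =>
        received.Add((eventType, param));

    public void Validate()
    {
        if (received.Count != expected.Count)
            throw new Exception($"Expected {expected.Count} events, received {received.Count}");
        for (int i = 0; i < expected.Count; i++)
        {
            if (received[i].EventType != expected[i].EventType)
                throw new Exception(
                    $"Event {i}: expected {expected[i].EventType}, received {received[i].EventType}");
            expected[i].Validate(received[i].Param);   // per-event custom validation
        }
    }
}
```

A test would chain `AddExpectedEvent` calls to describe the expected flow, point the mocked service host at `OnEventSent`, and call `Validate()` once the request completes.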
* Applying the event flow validator to the query execution service integration tests
* Undoing changes to events that were included in cherry-picked commit
* Cleaning up event flow validation to query execution
* Add efv to cancel tests
* Adding efv to dispose tests
* Adding efv to subset tests
* Adding efv to SaveResults tests
* Copyright
This is essentially a replacement for the fix for Adding Milliseconds to DateTime fields. I didn't take into consideration that `DATE` columns would report as the DateTime type. `DATE` columns have their numeric scale set to 255, which caused the date-time format string to include 255 fractional-second places, which is invalid.
This change also reverses the change to store DateTime precision in the buffer file. Instead, the column metadata is now used when deserializing rows from the db: `DATETIME` and `DATETIME2` columns are differentiated by their numeric scale, while `DATE` columns are differentiated by their data type name field.
More unit tests were added. Additionally, this fixes an unreported bug where `DATE` columns were being displayed with times, which is incorrect.
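A hedged sketch of the metadata-driven approach (the helper name and exact logic are assumptions, not the actual reader code):

```csharp
using System;
using System.Data.Common;

// DATE columns are detected by their data type name; DATETIME/DATETIME2
// take their fractional-second digits from the column's NumericScale.
static class DateTimeFormatting
{
    public static string GetFormatString(DbColumn column)
    {
        if (string.Equals(column.DataTypeName, "date", StringComparison.OrdinalIgnoreCase))
            return "yyyy-MM-dd";                              // DATE: no time portion shown

        int scale = column.NumericScale ?? 0;
        if (scale > 7) scale = 0;   // guard against the bogus scale of 255
        return scale > 0
            ? "yyyy-MM-dd HH:mm:ss." + new string('f', scale) // e.g. scale 3 -> ".fff"
            : "yyyy-MM-dd HH:mm:ss";
    }
}
```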
* Revert "Adding Milliseconds to DateTime fields (#173)"
This reverts commit 431dfa4156.
* Reworking the reader to use the column metadata for date types
* DbColumn -> DbColumnWrapper
* Final tweaks to support DATETIME2(0)
* Removing the unused arguments
This change is a refactor of the DbColumnWrapper class. It fixes a huge oversight/gotcha where treating a DbColumnWrapper as a DbColumn (i.e., polymorphism) would result in almost all of the column's fields being null, which is highly unexpected behavior.
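Sketch of the fix (assuming the wrapper derives from `System.Data.Common.DbColumn`, whose properties expose protected setters; the property list shown is abbreviated):

```csharp
using System.Data.Common;

// The wrapper populates the base-class properties in its constructor, so
// code that treats a DbColumnWrapper as a plain DbColumn still sees real
// column metadata instead of nulls.
public class DbColumnWrapper : DbColumn
{
    public DbColumnWrapper(DbColumn column)
    {
        ColumnName = column.ColumnName;          // base setters are protected
        DataTypeName = column.DataTypeName;
        NumericPrecision = column.NumericPrecision;
        NumericScale = column.NumericScale;
        AllowDBNull = column.AllowDBNull;
        // ... the remaining DbColumn properties are copied the same way ...
    }
}
```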
These are documentation-only changes, so I'm just going to merge directly. The PR is for FYI purposes; please leave feedback on the commit if you see anything I should follow up on.
* Revert NetCore target to 1.0.0 to fix Jenkins
- Changing to 1.* ends up requiring .NET Core 1.1 to be installed on the machine. We need a better solution that can work around this and let us stay on 1.0 for now. Checking in to unblock builds; will fix Travis CI later.
* Installing dotnet as part of the Travis setup. There is a built-in `dotnet:` argument that uses the dotnet-install scripts and supports installing a specific version.
* DbColumn and ReliableConnection tests
* More retry connection tests
* More tests
* Fix broken peek definition integration tests
* Fix test bug
* Add a couple batch tests
* Add some more tests
* More tests for code coverage.
* Validation and Diagnostic tests
* A few more tests
* A few more test changes.
* Update file path tests to run on Windows only
This is a slightly larger change than anticipated, due to the difference between `DATETIME`, `DATETIME2`, and `DateTime`. The `DATETIME` type always uses 3 fractional-second digits, while `DATETIME2` has up to 7 (it is a variable-precision type, with `DATETIME2(7)` being the default in SSMS). Regardless of the db type, the engine returns the C# `DateTime` type; the db types are only made visible via the column info, through the numeric precision and numeric scale. My findings were as follows:
`DATETIME `: Precision = 23, Scale = 3
`DATETIME2`: Precision = 255, Scale = 7
The scale corresponds neatly to the number of fractional-second digits to show. The buffer file writer was modified to store both the scale and the number of ticks. The buffer file reader was then modified to read the scale and the ticks back and generate the ToString version of the DateTime, appending one "f" to the format string per unit of scale (each "f" is one fractional-second digit).
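As a rough illustration of the round trip (class and method names here are assumptions, not the actual buffer code):

```csharp
using System;
using System.IO;

// The writer persists the column's scale next to the raw ticks; the reader
// rebuilds the value and appends one 'f' per unit of scale.
static class DateTimeBuffer
{
    public static void Write(BinaryWriter writer, DateTime value, byte scale)
    {
        writer.Write(scale);
        writer.Write(value.Ticks);
    }

    public static string Read(BinaryReader reader)
    {
        byte scale = reader.ReadByte();
        var value = new DateTime(reader.ReadInt64());
        string format = scale > 0
            ? "yyyy-MM-dd HH:mm:ss." + new string('f', scale)  // scale 3 -> ".fff"
            : "yyyy-MM-dd HH:mm:ss";
        return value.ToString(format);
    }
}
```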
* Code for writing milliseconds of datetime/datetime2 columns
* Adding unit tests
* Fixing potential bug with datetime2(0)
This change is part of the progressive results code. It will submit a notification from the service layer to indicate when execution of a batch has begun. This notification will contain the batch's selection, its execution start time, and its ID. This will enable the extension to produce a header for the batch before the batch completes, making it clearer to the user that execution is in progress.
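The notification's payload might look roughly like this (a hedged sketch; type and property names are assumptions based on the description, not the actual contract):

```csharp
// Hypothetical shape of the batch-start notification parameters.
public class SelectionData          // zero-based text range of the batch (assumed shape)
{
    public int StartLine, StartColumn, EndLine, EndColumn;
}

public class BatchStartParams
{
    public string OwnerUri { get; set; }          // editor/query the batch belongs to
    public int BatchId { get; set; }              // ID of the batch within the query
    public SelectionData Selection { get; set; }  // where the batch sits in the editor
    public string ExecutionStart { get; set; }    // serialized execution start time
}
```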
* Adding new event for batch start
* Unit tests
* Fixing comments as per @kevcunnane
* Fix Integrated auth error and Uri for *nix/Mac
* Format code
* Add Logging and unit tests
* Modify tests for Windows
* Work around missing default schema on *nix and Mac
* Add unit tests
* Correct comments
* Change loop length
* Fix Log message