* locale set through CLI and tests updated
* adding cli locale support
* adding tests and test constants
* cleaning up new tests
* fixing test which only fails remotely
* adding support for MacOS/Linux
* triggering new build due to appveyor timeout
* updating usage printout
* Get db name from query connection
* Add comments
* Correct typos
* revert changes to .sln
* Add unit tests
* Fix typo
* Fix error due to a mistyped comment
Adding new methods for executing queries from other services (such as the upcoming edit data service). The code avoids duplicating logic by using lambdas to supply the service-specific behavior.
Additionally, the service host protocol has been slightly modified to split IMessageSender into IEventSender and IRequestSender. This allows us to use either a ServiceHost or any RequestContext<T> to send events, which makes it very convenient to use another service's request context to send the events for query execution. A sketch of the split interfaces follows the list below.
**Breaking Change**: This removes the `messages` property from query dispose results; any errors encountered during query disposal are now reported via `error`, and a result is only returned on success.
* Splitting IMessageSender into IEventSender and IRequestSender
* Adding inter-service method for executing queries
* Adding inter-service method for disposing of a query
* Adding null checking for the success/error handlers
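To make the split concrete, here is a minimal sketch of how the two interfaces might relate. The generic event/request type placeholders and the exact member signatures are illustrative assumptions, not the actual service host contracts.

```csharp
using System.Threading.Tasks;

// Placeholder event/request descriptors; the real hosting layer defines its own.
public class EventType<TParams> { public string MethodName { get; set; } }
public class RequestType<TParams, TResult> { public string MethodName { get; set; } }

// Event sending can be satisfied by either the ServiceHost or a RequestContext<T>,
// since both are able to push events back to the client.
public interface IEventSender
{
    Task SendEvent<TParams>(EventType<TParams> eventType, TParams eventParams);
}

// Request sending remains a ServiceHost-level concern.
public interface IRequestSender
{
    Task SendRequest<TParams, TResult>(RequestType<TParams, TResult> requestType, TParams requestParams);
}

// The original interface becomes the union of the two.
public interface IMessageSender : IEventSender, IRequestSender
{
}
```

With this shape, query execution only needs an IEventSender, so another service's request context can stand in for the ServiceHost when forwarding execution events.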
This is a small API addition that allows us to execute queries directly as strings. This will make it easier to execute queries outside the confines of a workspace like VS Code.
* Refactor: execution requests and events are now named less redundantly and moved into a separate namespace for organization. This is the bulk of the change.
* QueryExecuteBatchNotification -> ExecuteRequests/BatchEvents
* QueryExecuteMessageNotification -> ExecuteRequests/MessageEvent
* QueryExecuteCompleteNotification -> ExecuteRequests/QueryCompleteEvent
* QueryExecuteResultSetCompleteNotification -> ExecuteRequests/ResultSetEvents
* QueryExecuteSubsetRequest -> SubsetRequest.cs
* Creating an inheritance pattern (sketched after this list) where
* `ExecuteRequestParamsBase` has execution options and ID for a query execute request
* `ExecuteDocumentSelectionParams` inherits from `ExecuteRequestParamsBase` and provides a document selection
* `ExecuteStringParams` inherits from `ExecuteRequestParamsBase` and provides the query text
* Adding a helper method to get SQL text based on request type
* Through the AWESOME POWER OF POLYMORPHISM, we are able to create a request for executing straight SQL basically for free.
* **Breaking change:** query/execute => query/executeDocumentSelection to make it more obvious what is expected.
* Adding unit tests for the code that gets SQL text
* Refactoring of execute contracts into their own namespace
* Refactoring application
* Adding new request for executing queries as strings
* Adding forgotten string request
* Changing the logic for checking the request param types
* Removing redundant declarations
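A rough sketch of the parameter hierarchy and the polymorphic lookup of the SQL text. The property names (`OwnerUri`, `Query`, `QuerySelection`) and the `QueryTextHelper` class are assumptions for illustration; only the class names in the list above come from the change itself.

```csharp
using System;

// Base parameters shared by every execute request: the owner URI that identifies
// the query plus any execution options.
public class ExecuteRequestParamsBase
{
    public string OwnerUri { get; set; }
}

// Executes whatever text falls inside a selection of an open document.
public class ExecuteDocumentSelectionParams : ExecuteRequestParamsBase
{
    public SelectionData QuerySelection { get; set; }
}

// Executes a raw SQL string, with no workspace document required.
public class ExecuteStringParams : ExecuteRequestParamsBase
{
    public string Query { get; set; }
}

public class SelectionData
{
    public int StartLine { get; set; }
    public int StartColumn { get; set; }
    public int EndLine { get; set; }
    public int EndColumn { get; set; }
}

public static class QueryTextHelper
{
    // Resolves the SQL to run based on the concrete request type; a document
    // lookup callback stands in for the workspace service here.
    public static string GetSqlText(ExecuteRequestParamsBase request,
        Func<SelectionData, string> getDocumentText)
    {
        var stringParams = request as ExecuteStringParams;
        if (stringParams != null)
        {
            return stringParams.Query;
        }

        var docParams = request as ExecuteDocumentSelectionParams;
        if (docParams != null)
        {
            return getDocumentText(docParams.QuerySelection);
        }

        throw new InvalidOperationException("Unrecognized execute request type");
    }
}
```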
* added a new tool to store SQL connections locally. Modified the peek definition tests to create a test database before running the tests
* fixed failing test QueryExecutionPlanInvalidParamsTest
* Fixes based on code review comments
* fixed failing test GetSignatureHelpReturnsNotNullIfParseInfoInitialized
* Add codeGen for existing types
* Modify code gen logic to match current code
* Extend logic for new smo objects
* Add logic to retrieve token type from QuickInfo
* Remove duplicate types
* Add tests for new types
* Modify GetScript logic to use suggestions first
* Add more tests
* Modify codeGen to include quickInfo logic
* Cake build changes to run CodeGen
* CodeGen replace indentation
* Refactor GetScript and add more tests
* Refactor Resolver calls
* Fix TestDriver test for Definition
* Change quickInfo string case
* Revert changes to .sln file
* Fix typos in comments
* change String to string
* Rename test sql objects
* Add CancelTokenKey for uniquely identifying cancellations of Connections associated with an OwnerUri and ConnectionType string.
* Update ConnectionInfo to use ConcurrentDictionary of DbConnection instances. Add wrapper functions for the ConcurrentDictionary.
* Refactor Connect and Disconnect in ConnectionService.
* Update ConnectionService: Handle multiple connections per ConnectionInfo. Handle cancellation tokens uniquely identified with CancelTokenKey. Add GetOrOpenConnection() for other services to request an existing or create a new DbConnection (see the sketch after this list).
* Add ConnectionType.cs for ConnectionType strings.
* Add ConnectionType string to ConnectParams, ConnectionCompleteNotification, DisconnectParams.
* Update Query ExecuteInternal to use the dedicated query connection and GetOrOpenConnection().
* Update test library to account for multiple connections in ConnectionInfo.
* Write tests ensuring multiple connections don’t create redundant data.
* Write tests ensuring database changes affect all connections of a given ConnectionInfo.
* Write tests for TRANSACTION statements and temp tables.
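The sketch below shows the general shape of the per-connection-type storage. ConnectionType, CancelTokenKey, and the use of ConcurrentDictionary come from the list above; the wrapper method names are hypothetical.

```csharp
using System.Collections.Concurrent;
using System.Data.Common;

// Well-known connection type strings; each OwnerUri may hold one connection per type.
public static class ConnectionType
{
    public const string Default = "Default";
    public const string Query = "Query";
}

public class ConnectionInfo
{
    // One DbConnection per ConnectionType string, all sharing this ConnectionInfo's details.
    private readonly ConcurrentDictionary<string, DbConnection> connections =
        new ConcurrentDictionary<string, DbConnection>();

    public bool TryGetConnection(string connectionType, out DbConnection connection)
        => connections.TryGetValue(connectionType, out connection);

    public void AddConnection(string connectionType, DbConnection connection)
        => connections.TryAdd(connectionType, connection);

    public bool RemoveConnection(string connectionType)
    {
        DbConnection removed;
        return connections.TryRemove(connectionType, out removed);
    }
}

// Uniquely identifies a cancellation for a given OwnerUri + ConnectionType pair.
public struct CancelTokenKey
{
    public string OwnerUri;
    public string ConnectionType;
}
```

GetOrOpenConnection() would then consult this map for the requested connection type and only open a new DbConnection when one does not already exist.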
* Fix exception due to repeat disposal of stream
* Fix off-by-one index for saving selection
* Fix to overwrite file if file already exists
* Fix test for Json save selection
* Add Json Formatting
* experimental showplan implementation (tools side only)
* fix for redundant messages from showplan executions
* moved showplan batches to new variables to make it less confusing
* returns showplan as part of batch summary within each result summary
* cleaned up showplan resultsets
* cleaning up code and making showplan var optional
* changes all var names to showplan
* adding estimated support
* small fixes
* updating var names and adding EPOptions struct
* adding ssms execution plan logic based on server types
* adding special actions logic
* removing redundant name changes
* execution plan query handler added
* cleaning up functions and data structures
* separated special actions into its own class
* cleaning up special actions
* cleaning up code
* small new line fixes
* commenting out pre-Yukon code
* removing all pre-Yukon code
* last Yukon code commented out
* fixes broken tests
* adding related unit tests; integration tests incoming
* finishing tests and cleaning up code
* semantic changes
* cleaning up semantics
* changes and test fixes, also adding new exceptions into RS
* fixing special actions and cleaning up request logic
* fixing comment to trigger new build
* triggering another build
* fixed up SpecialAction and added tests for it
This change is a reworking of the way that messages are sent to clients from the service layer. It is also a reworking of the protocol to ensure that all forms of query execution send events back to the client in a deterministic order. To support the first change (the resulting message shape is sketched after this list):
* Added a new event that will be sent when a message is generated
* Messages now indicate which Batch (if any) generated them
* Messages now indicate whether they are error-level
* Removed message storage in Batch objects and BatchSummary objects
* Batch objects no longer have error state
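For illustration, the message event payload might look roughly like the following; only the batch association and error flag come from the list above, and the class and property names are assumptions.

```csharp
// A single message produced during query execution. Messages are no longer stored
// on Batch/BatchSummary objects; they are pushed to the client as they occur.
public class ResultMessage
{
    // The batch that generated this message, if any.
    public int? BatchId { get; set; }

    // True when the message represents an error rather than informational output.
    public bool IsError { get; set; }

    public string Time { get; set; }

    public string Message { get; set; }
}

// Parameters for the new message event sent from the service layer.
public class MessageParams
{
    public string OwnerUri { get; set; }

    public ResultMessage Message { get; set; }
}
```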
Add IntegrationTests project. Move all tests ifdef'd with LIVE_CONNECTION_TESTS to IntegrationTests project. Delete files that have no remaining code. Update codecoverage.bat to run integration tests
This is an overhaul of the Save As mechanism to use the file reader/writer classes, better aligning with the patterns laid out by the rest of query execution. Why make this change? It makes our code base more uniform and adherent to the patterns/paradigms we've set up, and it improves encapsulation by separating the concerns of each component of the Save As functionality.
* Replumbing the save as execution to pass the call down the query stack as QueryExecutionService->Query->Batch->ResultSet
* Each layer performs its own parameter checking
* QueryExecutionService checks if the query exists
* Query checks if the batch exists
* Batch checks if the result set exists
* ResultSet checks if the row counts are valid and if the result set has been executed
* Success/Failure delegates are passed down the chain as well
* Determination of whether a save request is a "selection" moved to the SaveResultsRequest class to eliminate duplication of code and creation of utility classes
* Making the IFileStream* classes more generic
* Removing the max-characters-to-store requirement from the GetWriter method and moving it into the constructor for the temporary buffer writer; the values have been moved to the settings and given defaults
* Removing the individual type writers from IFileStreamWriter
* Removing the individual type readers from IFileStreamReader
* Adding a new overload for WriteRow to IFileStreamWriter that will write out data, given a row's worth of data and the list of columns
* Creating a new IFileStreamFactory that creates a reader/writer pair for reading from the temporary files and writing to CSV files
* Creating a new IFileStreamFactory that creates a reader/writer pair for reading from the temporary files and writing to JSON files (both factories are sketched after this list)
* Dramatically simplified the CSV encoding functionality
* Removed duplicated logic for saving in different types and condensed down to a single chain that only differs based on what type of factory is provided
* Removing the logic for managing the list of Save As tasks; since the ResultSet now performs the actual saving work, there's no real need to expose its internals
* Adding new strings to the sr.strings file for save as error messages
* Completely rewriting the unit tests for the Save As mechanism. The unit tests are now very fine-grained and should cover the majority of cases (aside from race conditions)
* Refactoring maxchars params into settings and out of file stream factory
* Removing write*/read* methods from file stream readers/writers
* Migrating the CSV save as to the resultset
* Tweaks to unit testing to eliminate writing files to disk
* WIP, moving to a base class for save results writers
* Everything is wired up and compiles
* Adding unit tests for CSV encoding
* Adding unit tests for CSV and Json writers
* Adding tests to the result set for saving
* Refactor to throw exceptions on errors instead of calling failure handler
* Unit tests for batch/query argument in range
* Unit tests
* Adding service integration unit tests
* Final polish, copyright notices, etc
* Adding NULL logic
* Fixing issue with Unicode to UTF-8 encoding
* Fixing issues as per @kburtram code review comments
* Adding files that got broken?
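A sketch of how the factory and the slimmed-down reader/writer interfaces could fit together. The method signatures and the placeholder cell/column types are assumptions; only the interface names and the single WriteRow overload are taken from the list above.

```csharp
using System;
using System.Collections.Generic;

// Creates matched reader/writer pairs so the same save chain can target any output
// format. One factory implementation writes CSV files, another writes JSON files;
// only the writer they hand back differs.
public interface IFileStreamFactory
{
    IFileStreamReader GetReader(string fileName);
    IFileStreamWriter GetWriter(string fileName);
    void DisposeFile(string fileName);
}

public interface IFileStreamReader : IDisposable
{
    // Reads one row's worth of values from the temporary buffer file.
    IList<DbCellValue> ReadRow(long fileOffset, IEnumerable<DbColumnWrapper> columns);
}

public interface IFileStreamWriter : IDisposable
{
    // Writes an entire row given the cell values and column metadata, replacing
    // the per-type Write* methods that were removed.
    int WriteRow(IList<DbCellValue> row, IList<DbColumnWrapper> columns);
}

// Minimal placeholders for the cell and column types referenced above.
public class DbCellValue
{
    public object RawObject { get; set; }
    public string DisplayValue { get; set; }
}

public class DbColumnWrapper
{
    public string ColumnName { get; set; }
}
```

With this shape the ResultSet only needs a factory; saving to CSV or JSON becomes the same loop over WriteRow, which is what lets the duplicated save logic collapse into a single chain.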
This is a reworking of the unit tests to let us better test events from the service host. The new Event Flow Validator class allows creating a chain of expected events that can be validated after execution of the request. Each event can have its own custom validation logic for verifying that the object sent via the service host is correct, and it also lets us validate that the order of events is correct. A sketch of the validator follows the list below.
The big drawback is that (at this time) the validator cannot support asynchronous events or non-deterministic ordering of events. We don't need this for the query execution functionality, despite messages being sent asynchronously, because async messages aren't sent during unit tests (the db message event is only present on SqlDbConnection classes). If the need arises to do async or out-of-order event validation, I have some ideas for how we can do that.
* Applying the event flow validator to the query execution service integration tests
* Undoing changes to events that were included in cherry-picked commit
* Cleaning up event flow validation to query execution
* Add efv to cancel tests
* Adding efv to dispose tests
* Adding efv to subset tests
* Adding efv to SaveResults tests
* Copyright
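A simplified sketch of the idea behind the validator, with hypothetical member names (AddEventValidation, RecordEvent, Validate): expected events are registered in order, recorded as the service emits them, and then checked one by one.

```csharp
using System;
using System.Collections.Generic;

public class EventFlowValidator<TRequestResult>
{
    private readonly List<Action<object>> expectedEvents = new List<Action<object>>();
    private readonly List<object> receivedEvents = new List<object>();

    // Registers the next expected event along with its custom validation logic.
    public EventFlowValidator<TRequestResult> AddEventValidation<TParams>(Action<TParams> validator)
    {
        expectedEvents.Add(param => validator((TParams)param));
        return this;
    }

    // Called by the mocked event sender whenever the service emits an event.
    public void RecordEvent(object eventParams) => receivedEvents.Add(eventParams);

    // Asserts that exactly the expected events arrived, in the expected order.
    public void Validate()
    {
        if (receivedEvents.Count != expectedEvents.Count)
        {
            throw new Exception($"Expected {expectedEvents.Count} events, received {receivedEvents.Count}");
        }
        for (int i = 0; i < expectedEvents.Count; i++)
        {
            expectedEvents[i](receivedEvents[i]);
        }
    }
}
```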
This is essentially a replacement for the "Adding Milliseconds to DateTime fields" fix. I didn't take into consideration that `DATE` columns would report as the DateTime type. `DATE` columns have a numeric scale of 255, which led the format string for the date time to include 255 millisecond places, which is invalid.
This change also reverses the change to store DateTime precision in the buffer file. Instead, the column metadata is now used when deserializing rows from the db: `DATETIME` and `DATETIME2` columns are differentiated by their numeric scale, while `DATE` columns are differentiated by their data type name field. A sketch of this approach follows the list below.
More unit tests were added. Additionally, this fixes an unreported bug where `DATE` columns were being displayed with times, which is incorrect.
* Revert "Adding Milliseconds to DateTime fields (#173)"
This reverts commit 431dfa4156.
* Reworking the reader to use the column metadata for date types
* DbColumn -> DbColumnWrapper
* Final tweaks to support DATETIME2(0)
* Removing the unused arguments
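A sketch of the metadata-driven formatting described above. The helper and its parameters are hypothetical; the decision points (data type name for DATE, numeric scale for DATETIME/DATETIME2) follow the description.

```csharp
using System;

public static class DateTimeFormatter
{
    // Decides how a DateTime cell is rendered from the column metadata rather than
    // from anything stored in the buffer file.
    public static string Format(DateTime value, string dataTypeName, int? numericScale)
    {
        // DATE columns report the DateTime CLR type but carry no time portion.
        if (string.Equals(dataTypeName, "date", StringComparison.OrdinalIgnoreCase))
        {
            return value.ToString("yyyy-MM-dd");
        }

        // DATETIME (scale 3) and DATETIME2(n) (scale 0-7) differ only by scale.
        // Scales above 7 (such as the 255 reported for DATE) are not meaningful
        // fractional-second counts, so fall back to showing none.
        int scale = numericScale.HasValue && numericScale.Value <= 7 ? numericScale.Value : 0;
        string format = "yyyy-MM-dd HH:mm:ss" +
            (scale > 0 ? "." + new string('f', scale) : string.Empty);
        return value.ToString(format);
    }
}
```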
* Revert NetCore target to 1.0.0 to fix Jenkins
- Changing to 1.* ends up requiring .NET Core 1.1 to be installed on the machine. We need a better solution that can work around this and let us stay on 1.0 for now. Checking in to unblock builds; will fix Travis CI later.
* Installing dotnet as part of the Travis setup. There is a built-in dotnet: argument that uses the dotnet-install scripts and supports installing a specific version
* DbColumn and ReliableConnection tests
* More retry connection tests
* More tests
* Fix broken peek definition integration tests
* Fix test bug
* Add a couple batch tests
* Add some more tests
* More tests for code coverage.
* Validation and Diagnostic tests
* A few more tests
* A few more test changes.
* Update file path tests to run on Windows only
This is a slightly larger change than anticipated due to the difference between `DATETIME`, `DATETIME2`, and `DateTime`. The `DATETIME` type always uses 3 decimal places of a second, while the `DATETIME2` type has up to 7 (`DATETIME2(7)` is the default in SSMS, suggesting that it is a variable-precision type). Regardless of the db type, the engine returns the `DateTime` C# type. The db types are only made visible via the column info, namely the numeric precision and numeric scale. My findings were as follows:
`DATETIME`: Precision = 23, Scale = 3
`DATETIME2`: Precision = 255, Scale = 7
The scale corresponds neatly with the number of second decimal places to show. The buffer file writer was modified to store both the scale and the number of ticks. The buffer file reader was then modified to read the scale and the ticks back and generate the ToString version of the DateTime, adding "f" once per digit of scale to render the fractional seconds. A sketch of this round trip follows the list below.
* Code for writing milliseconds of datetime/datetime2 columns
* Adding unit tests
* Fixing potential bug with datetime2(0)
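A sketch of the buffer-file round trip described above, using a hypothetical codec class in place of the real reader and writer: the writer stores the scale next to the ticks, and the reader rebuilds the format string with one "f" per digit of scale.

```csharp
using System;
using System.IO;

public static class DateTimeBufferCodec
{
    public static void Write(BinaryWriter writer, DateTime value, int scale)
    {
        writer.Write((byte)scale);   // fractional-second digits to show
        writer.Write(value.Ticks);   // the value itself
    }

    public static string Read(BinaryReader reader)
    {
        int scale = reader.ReadByte();
        var value = new DateTime(reader.ReadInt64());

        // One "f" per digit of scale, e.g. scale 3 => "yyyy-MM-dd HH:mm:ss.fff".
        string format = "yyyy-MM-dd HH:mm:ss" +
            (scale > 0 ? "." + new string('f', scale) : string.Empty);
        return value.ToString(format);
    }
}
```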
This change is part of the progressive results code. It submits a notification from the service layer to indicate when execution of a batch has started. This notification contains the selection for the batch, its execution start time, and its ID. This enables the extension to produce a header for the batch before the batch completes, making it clearer to the user that execution is in progress. A sketch of the notification parameters follows the list below.
* Adding new event for batch start
* Unit tests
* Fixing comments as per @kevcunnane
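For illustration, the notification parameters might look roughly like this; beyond the selection, start time, and ID called out above, the names are assumptions.

```csharp
// Parameters for the batch start notification. The extension can render a header
// for the batch as soon as this arrives, before the batch completes.
public class BatchEventParams
{
    public string OwnerUri { get; set; }

    public BatchSummary BatchSummary { get; set; }
}

public class BatchSummary
{
    // Identifies the batch within its query.
    public int Id { get; set; }

    // The document selection the batch was parsed from.
    public SelectionData Selection { get; set; }

    // Set when the batch starts; completion-only fields stay unset until then.
    public string ExecutionStart { get; set; }
}

// Placeholder for the selection type (start/end line and column).
public class SelectionData
{
    public int StartLine { get; set; }
    public int StartColumn { get; set; }
    public int EndLine { get; set; }
    public int EndColumn { get; set; }
}
```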
* Fix Integrated auth error and Uri for *nix/Mac
* Format code
* Add Logging and unit tests
* Modify tests for Windows
* Workaround missing default schema on *nix and Mac
* Add unit tests
* Correct comments
* Change loop length
* Fix Log message
The main change in this pull request is to add a new event that will be fired upon completion of a resultset but before the completion of a batch. This event will only fire if a resultset is available and generated.
Changes:
* ConnectionService - Slight changes to enable mocking, cleanup
* Batch - Moving summary generation into the ResultSet class, adding generation of ordinals for result sets, and locking the result set list (which needs further refinement, but that would be outside the scope of this change)
* Adding new event and associated parameters for completion of a result set. The params return the result set summary (sketched after this list)
* Adding logic for assigning the event a handler in the query execution service
* Adding unit tests for the new event and making sure the existing tests still work
* Refactoring some private properties into member variables
* Refactor to remove SectionData class in favor of BufferRange
* Adding callback for batch completion that will let the extension know that a batch has completed execution
* Refactoring to make progressive results work as per async query execution
* Allowing retrieval of batch results while query is in progress
* reverting global.json, whoops
* Adding a few missing comments, and fixing a couple code style bugs
* Using SelectionData everywhere again
* One more missing comment
* Adding new notification type for result set completion
* Plumbing event for result set completion
* Unit tests for result set events
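A sketch of what the result set completion payload might carry; apart from the result set summary and the batch ID mentioned above, the names are assumptions.

```csharp
// Parameters for the result set completion event, fired when a result set finishes
// but before its batch completes.
public class ResultSetEventParams
{
    public string OwnerUri { get; set; }

    public ResultSetSummary ResultSetSummary { get; set; }
}

public class ResultSetSummary
{
    // Ordinal of the result set within its batch.
    public int Id { get; set; }

    // The batch that produced this result set, so the client can correlate the
    // summary without waiting for the batch to finish.
    public int BatchId { get; set; }

    public long RowCount { get; set; }

    public ColumnInfo[] ColumnInfo { get; set; }
}

// Minimal placeholder for column metadata.
public class ColumnInfo
{
    public string ColumnName { get; set; }
    public string DataTypeName { get; set; }
}
```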
This includes a fairly substantial change to create a mock of the ConnectionService and to create separate MemoryStream storage arrays. It preserves more correct behavior with an integration test, and fixes an issue where the test db reader would return n-1 rows because the Reliable Connection Helper steals a record.
* Adding locking to ResultSets for thread safety
* Adding/fixing unit tests
* Adding batch ID to result set summary
This is another large change up for code review. I want to make a few more changes, but since these changes stand on their own, I'll hold back on making this change set any larger than it already is.
Changes in this request:
To address Microsoft/vscode-mssql#326, instead of running taskkill on the service layer when WaitForExit is executed, we now make an educated guess at which service layer process was spawned when the test starts and call Process.Kill on it when the test shuts down.
All the perf tests have been moved into a new project. This was done to keep them easily separated from code coverage test runs. At the same time, the perf tests were separated into separate classes for logical categorization. This process will likely be repeated on the stress tests. The tests can still easily be run from the Visual Studio Test Explorer.
To address Microsoft/vscode-mssql#349, a new SelfCleaningFile class was created to allow for easy cleanup of temporary files generated for integration tests via using blocks (a sketch of the idea follows below).
Due to some of the refactoring done while moving the perf tests to a new project, the TestBase class had to be switched to more of a helper class style. As such, all tests that used to inherit from TestBase now create a TestBase object on start via a using block. This also simplifies the cleanup at the end of the test.
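A minimal sketch of the idea, using the later name SelfCleaningTempFile; the members are assumptions, and the point is simply that a using block guarantees the temporary file is deleted even when a test fails partway through.

```csharp
using System;
using System.IO;

public sealed class SelfCleaningTempFile : IDisposable
{
    public string FilePath { get; }

    public SelfCleaningTempFile()
    {
        FilePath = Path.GetTempFileName();
    }

    public void Dispose()
    {
        try
        {
            if (File.Exists(FilePath))
            {
                File.Delete(FilePath);
            }
        }
        catch (IOException)
        {
            // Best effort: never let cleanup failures fail the test run.
        }
    }
}

// Typical usage in an integration test:
// using (var tempFile = new SelfCleaningTempFile())
// {
//     SaveResultsAsCsv(tempFile.FilePath);   // hypothetical helper
// }
```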
* Solution for hanging code coverage runs
Code coverage runs would hang in certain scenarios if a test failed before the service process could be spawned: the taskkill command would fail to find the service process, and the test would then wait for OpenCover to exit, but it would not, since the service process it had spawned would still be running, causing the test run to hang indefinitely.
The solution was to capture the service process after it launched and explicitly kill it when shutting down the test driver (sketched below).
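A rough sketch of the capture-and-kill approach with a hypothetical tracker type; the real test driver presumably wires this into its own startup and shutdown rather than using a standalone class.

```csharp
using System;
using System.Diagnostics;
using System.Linq;

public class ServiceProcessTracker : IDisposable
{
    private readonly Process serviceProcess;

    public ServiceProcessTracker(string serviceExecutableName)
    {
        // Educated guess: the most recently started process with the service
        // executable's name is the one this test run just spawned.
        serviceProcess = Process.GetProcessesByName(serviceExecutableName)
            .OrderByDescending(p => p.StartTime)
            .FirstOrDefault();
    }

    public void Dispose()
    {
        // Explicitly kill the captured process instead of relying on taskkill.
        if (serviceProcess != null && !serviceProcess.HasExited)
        {
            serviceProcess.Kill();
        }
    }
}
```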
* Setting the test name in the property in the class and removing the parameter from each method
* New project for perf tests
* Reworking integration tests to cleanup temp files
* Changes as per @llali review comments
* Adding copyright notices
* Renaming TestBase => TestHelper
* Renaming SelfCleaningFile => SelfCleaningTempFile
* Removing code that sets TestName property
* Fixing compilation error due to removed code