The main goal of this feature is to enable a command that will
1) Generate a parameterized command for each edit that is in the session
2) Execute that command against the server
3) Update the cached results of the table/view that's being edited with the committed changes (including computed/identity columns)
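For step 1, a delete edit might generate its parameterized command along these lines. This is a minimal sketch; the class shape and member names are hypothetical, not the exact service API:

```csharp
using System.Collections.Generic;
using System.Data.Common;

// Hypothetical sketch of a row-delete edit emitting a parameterized
// DELETE; objectName/keyColumns/keyValues are assumed members.
public class RowDeleteSketch
{
    private readonly string objectName;        // e.g. "[dbo].[MyTable]"
    private readonly IList<string> keyColumns; // key column names
    private readonly IList<object> keyValues;  // cached key values for this row

    public RowDeleteSketch(string objectName, IList<string> keyColumns, IList<object> keyValues)
    {
        this.objectName = objectName;
        this.keyColumns = keyColumns;
        this.keyValues = keyValues;
    }

    public DbCommand GetCommand(DbConnection connection)
    {
        DbCommand command = connection.CreateCommand();
        var clauses = new List<string>();
        for (int i = 0; i < keyColumns.Count; i++)
        {
            // One parameter per key column keeps the values out of the SQL text
            string param = "@Value" + i;
            clauses.Add($"{keyColumns[i]} = {param}");

            DbParameter parameter = command.CreateParameter();
            parameter.ParameterName = param;
            parameter.Value = keyValues[i];
            command.Parameters.Add(parameter);
        }
        command.CommandText = $"DELETE FROM {objectName} WHERE {string.Join(" AND ", clauses)}";
        return command;
    }
}
```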
There's some secret sauce in here that sidesteps the problem of gaps appearing in the updated results. This was accomplished by implementing an IComparable for the row edit objects that ensures deletes are the *last* actions to occur, and that they occur from the bottom of the list up (highest row ID to lowest). Thus, all other actions that depend on a row ID are performed first; then the row with the largest ID is deleted, then the next largest, and so on. Regardless, by the end of a commit the associated ResultSet is still the source of truth, and it is expected that the results grid will need updating once changes are committed.
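A minimal sketch of that ordering, assuming a shared row-edit base class (names here are illustrative):

```csharp
using System;

// Deletes always sort after every other edit, and among deletes the
// highest row ID comes first, so deletions run bottom-up and never
// invalidate the row IDs that earlier edits depend on.
public abstract class RowEditSketch : IComparable<RowEditSketch>
{
    public long RowId { get; protected set; }

    // Assumption: each concrete edit type knows whether it is a delete
    protected abstract bool IsDelete { get; }

    public int CompareTo(RowEditSketch other)
    {
        if (IsDelete != other.IsDelete)
        {
            return IsDelete ? 1 : -1;          // non-deletes first
        }
        return IsDelete
            ? other.RowId.CompareTo(RowId)     // deletes: descending row ID
            : RowId.CompareTo(other.RowId);    // others: ascending (assumption)
    }
}
```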
Also worth noting, although this pull request supports a "many edits, one commit" approach, it will work just fine for a "one edit, one commit" approach.
* WIP
* Adding basic commit support. Deletions work!
* Nailing down the commit logic, insert commits work!
* Updates work!
* Fixing bug in DbColumnWrapper IsReadOnly setting
* Comments
* ResultSet unit tests, fixing issue with seeking in mock writers
* Unit tests for RowCreate commands
* Unit tests for RowDelete
* RowUpdate unit tests
* Session and edit base tests
* Fixing broken unit tests
* Moving constants to constants file
* Addressing code review feedback
* Fixes from merge issues, string consts
* Removing ad-hoc code
* Fixing as per @abist's requests
* Fixing a couple more issues
This is an overhaul of the Save As mechanism to utilize the file reader/writer classes and better align with the patterns laid out by the rest of the query execution code. Why make this change? It makes our code base more uniform and adherent to the patterns/paradigms we've set up, and it improves the encapsulation of the classes by separating the concerns of each component of the Save As function.
* Replumbing the save as execution to pass the call down the query stack as QueryExecutionService->Query->Batch->ResultSet
* Each layer performs its own parameter checking (sketched just below)
* QueryExecutionService checks if the query exists
* Query checks if the batch exists
* Batch checks if the result set exists
* ResultSet checks if the row counts are valid and if the result set has been executed
* Success/Failure delegates are passed down the chain as well
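A hedged sketch of how those layered checks might look; class shapes and member names are illustrative assumptions, not the exact service API:

```csharp
using System;
using System.Collections.Generic;

public class QueryExecutionServiceSketch
{
    private readonly Dictionary<string, QuerySketch> activeQueries =
        new Dictionary<string, QuerySketch>();

    public void SaveAs(string ownerUri, int batchIndex, int resultSetIndex)
    {
        // Layer 1: the service checks that the query exists
        QuerySketch query;
        if (!activeQueries.TryGetValue(ownerUri, out query))
        {
            throw new ArgumentException("No query exists for the given owner URI");
        }
        query.SaveAs(batchIndex, resultSetIndex);
    }
}

public class QuerySketch
{
    private readonly List<BatchSketch> batches = new List<BatchSketch>();

    public void SaveAs(int batchIndex, int resultSetIndex)
    {
        // Layer 2: the query checks that the batch exists
        if (batchIndex < 0 || batchIndex >= batches.Count)
        {
            throw new ArgumentOutOfRangeException(nameof(batchIndex));
        }
        // Layer 3 (batch -> result set) and layer 4 (the result set's
        // row-count and has-executed checks) repeat the same pattern.
        batches[batchIndex].SaveAs(resultSetIndex);
    }
}

public class BatchSketch
{
    public void SaveAs(int resultSetIndex) { /* validate + delegate to the ResultSet */ }
}
```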
* Determination of whether a save request is a "selection" moved to the SaveResultsRequest class to eliminate duplication of code and creation of utility classes
* Making the IFileStream* classes more generic (see the sketch after this group of changes)
* Removing the max-characters-to-store requirement from the GetWriter method and moving it into the constructor for the temporary buffer writer - the values have been moved to the settings and given defaults
* Removing the individual type writers from IFileStreamWriter
* Removing the individual type readers from IFileStreamReader
* Adding a new overload for WriteRow to IFileStreamWriter that will write out data, given a row's worth of data and the list of columns
* Creating a new IFileStreamFactory that creates a reader/writer pair for reading from the temporary files and writing to CSV files
* Creating a new IFileStreamFactory that creates a reader/writer pair for reading from the temporary files and writing to JSON files
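A rough sketch of the resulting contract; the interface and member names below are assumptions for illustration:

```csharp
using System;
using System.Collections.Generic;

// Placeholder cell/column types so the sketch stands alone
public class DbCellValueSketch { public object RawObject; public string DisplayValue; }
public class DbColumnWrapperSketch { public string ColumnName; }

public interface IFileStreamWriterSketch : IDisposable
{
    // The new overload: write a whole row given the cell values and columns,
    // instead of exposing one Write* method per data type
    int WriteRow(IList<DbCellValueSketch> row, IList<DbColumnWrapperSketch> columns);
}

public interface IFileStreamReaderSketch : IDisposable
{
    IList<DbCellValueSketch> ReadRow(long offset, IList<DbColumnWrapperSketch> columns);
}

public interface IFileStreamFactorySketch
{
    // One factory per output format (temp buffer, CSV, JSON): the save
    // chain stays identical and only the factory it receives differs.
    IFileStreamReaderSketch GetReader(string fileName);
    IFileStreamWriterSketch GetWriter(string fileName);
}
```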
* Dramatically simplified the CSV encoding functionality (see the sketch at the end of this list)
* Removed the duplicated logic for saving to different formats, condensing it down to a single chain that differs only in which type of factory is provided
* Removing the logic for managing the list of Save As tasks; since the ResultSet now performs the actual saving work, there's no real need to expose its internals
* Adding new strings to the sr.strings file for save as error messages
* Completely rewriting the unit tests for the Save As mechanism. The tests are now very fine-grained and should cover the majority of cases (aside from race conditions)
* Refactoring maxchars params into settings and out of file stream factory
* Removing write*/read* methods from file stream readers/writers
* Migrating the CSV Save As to the ResultSet
* Tweaks to unit testing to eliminate writing files to disk
* WIP, moving to a base class for save results writers
* Everything is wired up and compiles
* Adding unit tests for CSV encoding
* Adding unit tests for CSV and Json writers
* Adding tests to the result set for saving
* Refactor to throw exceptions on errors instead of calling failure handler
* Unit tests for batch/query argument in range
* Unit tests
* Adding service integration unit tests
* Final polish, copyright notices, etc
* Adding NULL logic
* Fixing issue of unicode to utf8
* Fixing issues as per @kburtram code review comments
* Adding files that got broken?
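As an illustration of the simplified CSV encoding (a sketch under the assumption that it follows the standard quoting rules):

```csharp
public static class CsvEncodingSketch
{
    // Quote a field if it contains the delimiter, a quote, a newline, or
    // leading/trailing whitespace; double any embedded quotes.
    public static string EncodeField(string field)
    {
        if (field == null)
        {
            return string.Empty; // NULL rendering is handled separately
        }
        bool needsQuotes = field.IndexOfAny(new[] { ',', '"', '\r', '\n' }) >= 0
            || field.Trim() != field;
        return needsQuotes
            ? "\"" + field.Replace("\"", "\"\"") + "\""
            : field;
    }
}
```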
This is a slightly larger change than anticipated due to the difference between `DATETIME`, `DATETIME2`, and `DateTime`. The `DATETIME` type always uses 3 decimal places of fractional seconds, while `DATETIME2` uses up to 7 (it is a variable-precision type; `DATETIME2(7)` is the default in SSMS). Regardless of the db type, the engine returns the C# `DateTime` type; the db types are only distinguishable via the column info, specifically the numeric precision and numeric scale. My findings were as follows:
`DATETIME `: Precision = 23, Scale = 3
`DATETIME2`: Precision = 255, Scale = 7
The scale corresponds neatly to the number of fractional-second digits to show. The buffer file writer was modified to store both the scale and the number of ticks; the buffer file reader was modified to read back the scale and the ticks and generate the ToString version of the DateTime with as many "f" format specifiers as the scale, which produces the fractional seconds.
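For instance, the read-side reconstruction might look roughly like this (a sketch assuming the scale is stored alongside the ticks):

```csharp
using System;

public static class DateTimeFormatSketch
{
    // One "f" per digit of scale: scale 3 (DATETIME) gives
    // "yyyy-MM-dd HH:mm:ss.fff", scale 7 (DATETIME2) gives seven f's,
    // and scale 0 (datetime2(0)) omits the fractional part entirely.
    public static string Format(long ticks, int scale)
    {
        DateTime value = new DateTime(ticks);
        string format = scale > 0
            ? "yyyy-MM-dd HH:mm:ss." + new string('f', scale)
            : "yyyy-MM-dd HH:mm:ss";
        return value.ToString(format);
    }
}
```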
* Code for writing milliseconds of datetime/datetime2 columns
* Adding unit tests
* Fixing potential bug with datetime2(0)
* WIP for ability to localize cell values
* Changing how DateTimeOffsets are stored, getting unit tests going
* Reworking BufferFileStreamWriter to use dictionary approach
* Plumbing the DbCellValue type the rest of the way through
* Removing unused components to simplify contract
* Cleanup and making sure byte[] appears in parity with SSMS
* CR comments, small tweaks for optimizing LINQ
* WIP for buffering in temporary file
* Adding support for writing to disk for buffering
* WIP - Adding file reader, factory for reader/writer
* Making long list use generics and implement IEnumerable
* Reading/Writing from file is working
* Removing unused 'skipValue' logic
* More tweaks to file buffer
Adding logic for cleaning up the temp files
Adding fix for empty/null column names
* Adding comments and cleanup
* Unit tests for FileStreamWrapper
* WIP adding more unit tests, and finishing up wiring up the output writers
* Finishing up initial unit tests
* Fixing bugs with long fields
* Squashed commit of the following:
commit df0ffc12a46cb286d801d08689964eac08ad71dd
Author: Benjamin Russell <beruss@microsoft.com>
Date: Wed Sep 7 14:45:39 2016 -0700
Removing last bit of async for file writing.
We're seeing an 8x improvement in file write speeds!
commit 08a4b9f32e825512ca24d5dc03ef5acbf7cc6d94
Author: Benjamin Russell <beruss@microsoft.com>
Date: Wed Sep 7 11:23:06 2016 -0700
Removing async wrappers
* Rolling back test code for Program.cs
* Changes as per code review
* Fixing broken unit tests
* More fixes for code review