Summary:
- Change the revision type from `qint64` to `size_t` for LMDB in a couple of places (LMDB supports `unsigned int` or `size_t`, the latter being `long unsigned int` on my machine)
- Better support for database flags (duplicates, integer keys and integer values for now, but the scheme is extensible)
- Main databases' keys are now revisions
- Some databases switched to integer-key databases:
  - the main databases
  - the revision to uid mapping database
  - the revision to entity type mapping database
- Refactor the entity type's `typeDatabases` method (in case we need to change the main databases' flags again in the future)
- New uid to revision mapping database (`uidsToRevisions`):
  - Stores all revisions (not just the latest revision per uid), because we need them for cleaning up old revisions
  - Flags are: duplicates + integer values, so `findLatest` finds the latest revision for a given uid (see the sketch below)
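A minimal sketch of the `uidsToRevisions` layout using the plain LMDB C API (illustrative only; Sink accesses LMDB through its own storage layer, and the function names here are assumptions):

```cpp
#include <lmdb.h>
#include <cstring>
#include <string>

// Open the uid -> revisions database: MDB_DUPSORT allows several
// revisions per uid, and MDB_INTEGERDUP sorts those duplicate values
// numerically (MDB_DUPFIXED because all values have the same size).
MDB_dbi openUidsToRevisions(MDB_txn *txn)
{
    MDB_dbi dbi;
    mdb_dbi_open(txn, "uidsToRevisions",
                 MDB_CREATE | MDB_DUPSORT | MDB_DUPFIXED | MDB_INTEGERDUP, &dbi);
    return dbi;
}

// Because duplicates are sorted as integers, the latest revision for a
// uid is simply the last duplicate stored under that key.
bool findLatest(MDB_txn *txn, MDB_dbi dbi, const std::string &uid, size_t &revision)
{
    MDB_cursor *cursor;
    if (mdb_cursor_open(txn, dbi, &cursor) != 0) {
        return false;
    }
    MDB_val key{uid.size(), const_cast<char *>(uid.data())};
    MDB_val value;
    const bool found = mdb_cursor_get(cursor, &key, &value, MDB_SET) == 0
                    && mdb_cursor_get(cursor, &key, &value, MDB_LAST_DUP) == 0;
    if (found) {
        std::memcpy(&revision, value.mv_data, sizeof(size_t));
    }
    mdb_cursor_close(cursor);
    return found;
}
```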
~~Problems to fix before merging:~~
All fixed!
- ~~Sometimes Sink can't read what has just been written to the database (maybe because of transaction race conditions)~~
- ~~Most of the time, this results in Sink being unable to find the uid for a given revision by reading the `revisions` database~~
- ~~`pipelinetest`'s `testModifyWithConflict` fails because the local changes are overridden~~
~~The first problem prevents me from running benchmarks~~
Reviewers: cmollekopf
Tags: #sink
Differential Revision: https://phabricator.kde.org/D14974
Summary:
- Use an object-oriented paradigm for Keys / Identifiers / Revisions
- "Compress" keys by using the byte representation of UUIDs (see the sketch below)
- Still some cleaning left to do
- Also run some benchmarks
- I'm questioning whether files other than entitystore (tests excluded) should be allowed to access this API
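As an illustration of the key compression, here is a minimal sketch of an identifier class that stores the 16-byte RFC 4122 form of a UUID instead of its 38-character display form; the class and method names are assumptions for illustration, not necessarily Sink's actual API:

```cpp
#include <QByteArray>
#include <QUuid>

class Identifier
{
public:
    static Identifier createRandom() { return Identifier{QUuid::createUuid()}; }

    // Compact 16-byte form, suitable as a database key.
    QByteArray toInternalByteArray() const { return mUuid.toRfc4122(); }

    // Rebuild the identifier from the compact form read from the database.
    static Identifier fromInternalByteArray(const QByteArray &bytes)
    {
        return Identifier{QUuid::fromRfc4122(bytes)};
    }

    // Human-readable form ("{xxxxxxxx-...}"), e.g. for debug output.
    QByteArray toDisplayByteArray() const { return mUuid.toByteArray(); }

private:
    explicit Identifier(const QUuid &uuid) : mUuid(uuid) {}
    QUuid mUuid;
};
```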
Reviewers: cmollekopf
Reviewed By: cmollekopf
Tags: #sink
Differential Revision: https://phabricator.kde.org/D13735
handling and are appropriately dealt with.
If we didn't actually do anything, we just carry on.
Failing to commit is harmless in that case, and committing for every
revision is rather expensive.
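A minimal sketch of that guard, with hypothetical names (the actual change lives in Sink's changereplay code):

```cpp
// Illustrative stand-in for the storage transaction type.
struct Transaction {
    void commit() { /* flush the write transaction to disk */ }
};

void replayNextRevision(Transaction &transaction, bool changedSomething)
{
    // If the replay didn't actually write anything, just carry on:
    // an empty commit buys nothing, and committing for every revision
    // is rather expensive.
    if (!changedSomething) {
        return;
    }
    transaction.commit();
}
```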
We already see the resource exiting.
...and improved debug output slightly.
We really need to guard against this in kasync...
* Use a log context
* Clearer and simpler control flow
* No infinite recursive calls
This is the initial refactoring to improve how we deal with the storage.
It does a couple of things:
* Rename Sink::Storage to Sink::Storage::DataStore to free up the
  Sink::Storage namespace.
* Introduce a Sink::ResourceContext to have a single object that can be
  passed around containing everything that is necessary to operate on a
  resource (see the sketch after this list). This is a lot better than
  the multiple separate parameters that we used to pass around all over
  the place, while still allowing for dependency injection for tests.
* Tie storage access together using the new EntityStore that directly
  works with ApplicationDomainTypes. This gives us a central place where
  main storage, indexes and buffer adaptors are tied together, and it
  will also give us a place to implement external indexes, such as a
  full-text index using Xapian.
* Use ApplicationDomainTypes as the default way to pass around entities.
  Instead of using various ways to pass around entities (buffers,
  buffer adaptors, ApplicationDomainTypes), we now use a single way.
  The old approach was confusing, and was only done as:
  * an optimization, which really shouldn't be necessary; I'm sure we
    can find better ways to optimize ApplicationDomainType itself.
  * a way to account for entities that have multiple buffers, a concept
    that I no longer deem relevant.
While this commit does the bulk of the work to get there, the following
commits will refactor more stuff to get things back to normal.
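A minimal sketch of the ResourceContext idea, with hypothetical member names (the real class in Sink differs in detail): one value type bundles everything needed to operate on a resource, so call sites take a single parameter and tests can inject fakes.

```cpp
#include <QByteArray>
#include <QString>
#include <functional>

// One object carrying everything needed to operate on a resource,
// instead of many loose parameters passed around separately.
struct ResourceContext {
    QByteArray instanceId;   // e.g. "org.kde.maildir.instance1"
    QByteArray resourceType; // the resource plugin type

    // Injection point for tests: override where the data store lives,
    // e.g. point it at a temporary directory.
    std::function<QString()> storageLocation;
};

void synchronize(const ResourceContext &context)
{
    // Open the store at context.storageLocation(), sync this resource
    // instance, update indexes via the EntityStore, and so on.
}
```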
This had a significant performance impact when e.g. syncing a folder
with 10k messages.
The old implementation would result in endlessly nested calls.
Instead of a single #define as the debug area, the new system allows for
an identifier for each debug message with the structure `component.area`.
The component is a dot-separated identifier of the runtime component,
such as the process or the plugin.
The area is the code component, and can thus be defined at compile time.
The idea of this system is that it becomes possible to e.g. look at the
output of all messages in the query subsystem of a specific resource
(something that happens in the client process, but in the
resource-specific subcomponent).
The new macros are supposed to be less likely to clash with other names,
hence the new names.
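A minimal sketch of the `component.area` scheme (an illustrative stand-in, not Sink's actual logging implementation; the macro names are modeled on the new ones): the area is a compile-time constant per code component, while the component is set at runtime per process or plugin.

```cpp
#include <iostream>
#include <string>

// Runtime part of the identifier, e.g. the process or plugin name.
static std::string sComponent;

void setDebugComponent(const std::string &component) { sComponent = component; }

// Compile-time part of the identifier: the code component (area).
#define SINK_DEBUG_AREA "query"

// Each message is prefixed with "<component>.<area>", so one can filter
// e.g. all query-subsystem messages of a specific resource.
#define SinkTrace() (std::cerr << sComponent << "." << SINK_DEBUG_AREA << ": ")

int main()
{
    setDebugComponent("org.kde.maildir.instance1");
    SinkTrace() << "Starting query" << std::endl;
    // Prints: org.kde.maildir.instance1.query: Starting query
}
```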
Otherwise we would never replay changes that failed because we were
offline, for instance.
changereplay and synchronization.
This cleans up the API and avoids the excessive passing around of
transactions. It also provides more flexibility to eventually use
different synchronization strategies for different resources.