| Commit message | Author | Age |
| |
If a fetchMore arrived right between the revision being updated and the
incrementalQuery actually running, we ended up losing the update,
because the result provider ended up with a revision that was too recent
after the additional initial query.
|
| |
Summary:
Notes:
- Introduces the concept of queries on multiple properties (which meant changing the query internals a bit)
- Both the date and the "reference" are stored in the index, allowing quick filtering without fetching the whole entity
- Buckets are weeks starting on Monday (guaranteed by the use of the Julian calendar)
- Some size improvements are definitely possible (dates are again stored as padded numbers instead of using integer databases, the Julian calendar starts at a very old date, etc.)
Test Plan: Tested in querytest
Reviewers: cmollekopf
Reviewed By: cmollekopf
Tags: #sink
Differential Revision: https://phabricator.kde.org/D13477
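(Editorial note, not part of the change above: a minimal sketch of how a Monday-based week bucket can be derived from a date via its Julian day, using Qt's QDate; the function name is illustrative, not Sink's actual code.)

```cpp
#include <QDate>
#include <QDebug>

// Julian day of the Monday of the week containing 'date'; all dates within
// the same Mon-Sun week therefore map to the same bucket key.
qint64 weekBucket(const QDate &date)
{
    // QDate::dayOfWeek() returns 1 for Monday .. 7 for Sunday.
    return date.toJulianDay() - (date.dayOfWeek() - 1);
}

int main()
{
    qDebug() << weekBucket(QDate(2018, 6, 13))   // a Wednesday
             << weekBucket(QDate(2018, 6, 11));  // the Monday of that week
    return 0;
}
```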
|
| |
Summary:
Notes:
- For now, only for QDateTime indexes
- Invalid QDateTimes are stored in the index (subject to change)
- Should be a drop-in replacement for ValueIndexes (except for `In` and `Contains` queries)
Reviewers: cmollekopf
Tags: #sink
Differential Revision: https://phabricator.kde.org/D13105
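(Editorial note: a sketch of the general idea behind a sorted date index, assuming a key-value store scanned in key order; the encoding is illustrative, not Sink's, and assumes timestamps at or after the Unix epoch.)

```cpp
#include <QDateTime>
#include <QByteArray>

// Fixed-width, zero-padded keys compare lexicographically in the same order
// as the timestamps themselves, so a range scan over the index yields
// entries sorted by date.
QByteArray sortableDateKey(const QDateTime &dateTime)
{
    // Invalid QDateTimes also get an entry (prefix '0'), sorting before all
    // valid ones (prefix '1'), mirroring the note above that invalid values
    // are stored in the index.
    if (!dateTime.isValid()) {
        return QByteArray("0");
    }
    return "1" + QByteArray::number(dateTime.toSecsSinceEpoch()).rightJustified(19, '0');
}
```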
|
| |
Filtered entities are still passed through as a removal, but if
there is no other value left for the reduction, the reduction result is
empty.
|
| |
Instead, we have to remember that something has changed and rerun an
incremental query.
|
| |
Previously, filtered entities would still end up in the entities list.
|
| |
This cuts sync performance by about 40%,
but gives us fast fulltext searching for all local content.
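(Editorial note: for illustration only, a sketch of the kind of per-entity work that accounts for such a cost, using the Xapian C++ API; the helper and its arguments are hypothetical, not Sink's code.)

```cpp
#include <xapian.h>
#include <string>

// Every synced entity is run through a term generator and written to the
// fulltext database; doing this once per entity is where the extra sync
// time goes.
void indexEntity(Xapian::WritableDatabase &db,
                 const std::string &uid, const std::string &text)
{
    Xapian::Document doc;
    Xapian::TermGenerator generator;
    generator.set_document(doc);
    generator.index_text(text);
    doc.set_data(uid);
    // Replacing by a unique id term makes re-syncing the same entity an
    // update instead of a duplicate.
    db.replace_document("Q" + uid, doc);
}
```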
|
| |
There are only a few cases where we have to access the list of dbis or
environments, so we can normally get away with just read-locking.
This seems to fix a segfault that was possibly caused by an environment
being reused after it had already been freed in another thread. The
read-only lock taken when initially retrieving the environment appears
to prevent that.
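(Editorial note: a minimal sketch of the locking pattern described above, using Qt's QReadWriteLock; the map and names are illustrative, not Sink's actual members.)

```cpp
#include <QReadWriteLock>
#include <QHash>
#include <QString>

struct Environment {};

QReadWriteLock sEnvironmentsLock;
QHash<QString, Environment *> sEnvironments;

// Readers of the shared environment map only take the read lock; the write
// lock is taken in the rare case where a new environment has to be created.
Environment *environmentFor(const QString &path)
{
    {
        QReadLocker locker(&sEnvironmentsLock);
        if (auto *env = sEnvironments.value(path)) {
            return env;
        }
    }
    QWriteLocker locker(&sEnvironmentsLock);
    auto *&env = sEnvironments[path];
    if (!env) {
        env = new Environment;
    }
    return env;
}
```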
|
| |
Incremental additions of children in the tree were filtered out due to
the parent filter. This broke when we started to maintain state, which
caused the filter in DataStoreQuery, including the parent filter, to be
carried over. Given that the incremental querying of children currently
doesn't add much value (we don't have trees that are large or deep
enough), perhaps we're better off using a different approach.
|
| |
The incremental querying broke as soon as a revision update came in,
since it would nuke the base set. This fixes it, but it's definitely not
pretty.
|
| |
initial fetch.
|
| |
We often let removal updates through and expect the model to deal with
superfluous updates; this now actually implements that.
|
| |
Some filters need to maintain state between runs in order to be able to
emit only what has changed. This now also makes reduction work for live
queries.
|
| |
The uid does not exist for the mail, and the threading requires a
messageId.
|
| |
After the initial bloom, it should turn into a regular filter.
|
| |
This allows us to make sure that references are not taken out of
context (the resource).
Because we need to use the type-specific accessors more, we also ran
into the problem that we could not "downcast" a reference while keeping
change recording working; for that we now have the cast<T>() operator.
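(Editorial note: a generic sketch of the idea with hypothetical types, not Sink's implementation; the point is that the typed copy must keep sharing the change-recording state, otherwise changes made through it would be lost.)

```cpp
#include <memory>
#include <set>
#include <string>
#include <type_traits>

class Entity
{
public:
    Entity() : mChangedProperties(std::make_shared<std::set<std::string>>()) {}

    void setProperty(const std::string &name) { mChangedProperties->insert(name); }

    // "Downcast" to a more specific type while sharing the change set, so
    // change recording keeps working through the typed copy.
    template <typename T>
    T cast() const
    {
        static_assert(std::is_base_of<Entity, T>::value, "target must derive from Entity");
        T typed;
        typed.mChangedProperties = mChangedProperties;
        return typed;
    }

    std::shared_ptr<std::set<std::string>> mChangedProperties;
};

class Mail : public Entity {};

int main()
{
    Entity generic;
    auto mail = generic.cast<Mail>();
    mail.setProperty("subject");
    // The original sees the change made through the typed copy.
    return generic.mChangedProperties->count("subject") == 1 ? 0 : 1;
}
```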
|
| |
This is the initial refactoring to improve how we deal with the storage.
It does a couple of things:
* Rename Sink::Storage to Sink::Storage::DataStore to free up the
Sink::Storage namespace
* Introduce a Sink::ResourceContext to have a single object that can be
passed around containing everything that is necessary to operate on a
resource. This is a lot better than the multiple separate parameters
that we used to pass around all over the place, while still allowing
for dependency injection for tests.
* Tie storage access together using the new EntityStore that directly
works with ApplicationDomainTypes. This gives us a central place where
main storage, indexes and buffer adaptors are tied together, which
will also give us a place to implement external indexes, such as a
fulltextindex using xapian.
* Use ApplicationDomainTypes as the default way to pass around entities.
Instead of using various ways to pass around entities (buffers,
buffer adaptors, ApplicationDomainTypes), only use a single way.
The old approach was confusing, and was only done as:
* an optimization, which really shouldn't be necessary; otherwise I'm sure
we can find better ways to optimize ApplicationDomainType itself.
* a way to account for entities that have multiple buffers, a concept
that I no longer deem relevant.
While this commit does the bulk of the work to get there, the following
commits will refactor more stuff to get things back to normal.
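(Editorial note: a rough sketch of the "single context object" idea with hypothetical names, not Sink's actual API; everything needed to operate on a resource travels in one value that tests can substitute.)

```cpp
#include <QByteArray>
#include <functional>

class EntityStore;

struct ResourceContext {
    QByteArray resourceInstanceIdentifier;
    QByteArray resourceType;
    // A factory allows tests to inject an in-memory store instead of the
    // real storage-backed one.
    std::function<EntityStore *()> entityStoreFactory;
};

// Components take the one context object instead of a pile of loose
// parameters (identifier, type, storage paths, adaptor factories, ...).
void startPipeline(const ResourceContext &context)
{
    EntityStore *store = context.entityStoreFactory ? context.entityStoreFactory() : nullptr;
    (void)store; // ... operate on the resource through the store ...
}
```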
|
| |
This allows us to match properties from a subquery.
Unfortunately this also means that DataStoreQuery needs access to all
type implementations to issue the subquery (for potentially another type).
|
| |
The org.kde prefix is useless and possibly misleading.
Simply prefixing with sink is shorter and more distinctive.
|