Filtered entities are still passed through as a removal, but if
there is no other value for the reduction, the reduction result is
empty.
|
Previously, filtered entities would still end up in the entities list.
|
A single QueryRunner should never have multiple workers running at the
same time. We did not properly enforce this in the case of incremental
updates coming in.
The only way I managed to reproduce the crash:
* Open a large folder with lots of unread mail in kube
* Select a mail in the maillist and hold the down button
* This will:
    * Repeatedly call fetch more
    * Trigger lots of mark-as-read modifications that result in
      notifications.
* Eventually it crashes somewhere in EntityStore, likely because
  of concurrent access to the filter structure, which is shared through
  the state.
We now ensure in the single-threaded portion of the code that we only
ever run one worker at a time. If we received an update in the meantime,
we remember that change and fetch more once we're done.
To be able to call fetch again, that portion was also factored out into
a separate function.
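
A minimal sketch of the guard described above, assuming a hypothetical
QueryRunner shape (member and function names are illustrative, not
Sink's actual code):

    // All of this runs on the main thread; only the worker's payload is
    // asynchronous, so plain bools suffice as guards.
    #include <functional>

    class QueryRunner {
    public:
        // Called both for "fetch more" requests and incoming updates.
        void requestFetch()
        {
            if (mWorkerRunning) {
                // A worker is already busy: remember that more work
                // arrived and pick it up once the worker completes.
                mFetchPending = true;
                return;
            }
            runWorker();
        }

    private:
        void runWorker()
        {
            mWorkerRunning = true;
            mFetchPending = false;
            startAsyncWork([this] { // completion handler, main thread
                mWorkerRunning = false;
                if (mFetchPending) {
                    runWorker(); // process the update we deferred
                }
            });
        }

        void startAsyncWork(std::function<void()> onDone);

        bool mWorkerRunning = false;
        bool mFetchPending = false;
    };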
|
This cuts into the sync performance by about 40%,
but gives us fast fulltext searching for all local content.
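
Assuming this refers to the Xapian-based fulltext index mentioned later
in this log, indexing could look roughly like the following sketch (the
database path, prefix, and function shape are invented for
illustration):

    // Minimal Xapian indexing sketch; names and paths are made up.
    #include <xapian.h>
    #include <string>

    void indexMail(const std::string &uid, const std::string &subject,
                   const std::string &body)
    {
        Xapian::WritableDatabase db("/tmp/fulltext.db",
                                    Xapian::DB_CREATE_OR_OPEN);

        Xapian::Document doc;
        Xapian::TermGenerator termGenerator;
        termGenerator.set_document(doc);
        termGenerator.index_text(subject, 1, "S"); // "S" prefix: subject
        termGenerator.index_text(body);

        // A unique id term allows reindexing the same entity later.
        const std::string idTerm = "Q" + uid;
        doc.add_boolean_term(idTerm);
        db.replace_document(idTerm, doc);
    }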
|
This brings the incremental query closer to a regular query (about 1.5
times slower instead of 3.5 times).
For a comparison look at MailQueryBenchmark::testIncremental()
The optimization is built on the assumption that we e.g. get an update
with 100 revisions, and thus applies to the case where multiple
revisions within that batch are part of the same reduction. In such a
case we can avoid redoing the reduction lookup over and over (see the
sketch below).
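
A sketch of that batch-level caching, with invented names (not Sink's
actual implementation):

    // Cache reduction lookups for the duration of one batch, so a batch
    // touching the same reduction 100 times does one lookup, not 100.
    #include <QByteArray>
    #include <QHash>
    #include <QVector>

    struct Revision { QByteArray reductionKey; /* e.g. the thread id */ };
    struct ReductionResult { void apply(const Revision &) { /* ... */ } };

    // Hypothetical expensive lookup against the index/storage.
    ReductionResult lookupReduction(const QByteArray &key);

    void processBatch(const QVector<Revision> &batch)
    {
        QHash<QByteArray, ReductionResult> cache;
        for (const auto &revision : batch) {
            auto it = cache.find(revision.reductionKey);
            if (it == cache.end()) {
                // First revision for this reduction within the batch.
                it = cache.insert(revision.reductionKey,
                                  lookupReduction(revision.reductionKey));
            }
            it->apply(revision); // later revisions reuse the cached result
        }
    }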
|
The incremental querying broke as soon as a revision update came in,
since it would nuke the base-set. This fixes it, but it's definitely not
pretty.
|
Using the offset to skip over old results required recalculating
them, and in some cases resulted in results being added to the model
multiple times.
By just maintaining the state instead, we can apply the offset directly
to the base-set, and keep the state in the reduction etc. that is
necessary to continue streaming results while making sure we don't
report anything twice.
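
A sketch of the idea, with invented names: the query state stays alive
between fetches, so the next page continues from the maintained
position instead of re-running everything and skipping old results.

    #include <QByteArray>
    #include <QSet>
    #include <QVector>

    class QueryState {
    public:
        // Fetch the next page, continuing from the maintained position.
        QVector<QByteArray> fetchMore(int pageSize)
        {
            QVector<QByteArray> page;
            while (page.size() < pageSize && mPosition < mBaseSet.size()) {
                const auto id = mBaseSet.at(mPosition++);
                if (!mEmitted.contains(id)) { // never report anything twice
                    mEmitted.insert(id);
                    page.append(id);
                }
            }
            return page;
        }

    private:
        QVector<QByteArray> mBaseSet; // ids matching the query, in order
        QSet<QByteArray> mEmitted;    // everything already reported
        int mPosition = 0;            // offset applied to the base-set
    };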
|
Some filters need to maintain state between runs in order to be able to
emit only what has changed. This now also makes reduction work for live
queries.
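
A sketch of such a stateful filter (invented names): by remembering what
matched in the previous run, it can translate a new pass into
added/removed/modified deltas.

    #include <QByteArray>
    #include <QSet>

    enum class Change { Added, Modified, Removed, None };

    class StatefulFilter {
    public:
        // Compare the entity against what we reported last run.
        Change process(const QByteArray &id, bool matchesNow)
        {
            const bool matchedBefore = mMatched.contains(id);
            if (matchesNow && !matchedBefore) {
                mMatched.insert(id);
                return Change::Added;
            }
            if (!matchesNow && matchedBefore) {
                mMatched.remove(id);
                return Change::Removed; // entity left the result set
            }
            return matchesNow ? Change::Modified : Change::None;
        }

    private:
        QSet<QByteArray> mMatched; // ids in the result set after last run
    };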
|
After the initial bloom, it should turn into a regular filter.
|
To have hierarchical debug output we have to pass around something at
run-time; there is no reasonable alternative. Log::Context provides the
identifier to do just that and largely replaces the debug component
idea.
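
A sketch of the concept, modeled on the description above (not
necessarily Sink's actual Log::Context API):

    #include <QByteArray>
    #include <QDebug>

    class Context {
    public:
        explicit Context(const QByteArray &name) : mName(name) {}

        // Derive a child context, yielding identifiers such as
        // "resource.query.worker" for hierarchical output.
        Context subContext(const QByteArray &name) const
        {
            return Context(mName + "." + name);
        }

        void log(const char *message) const
        {
            qDebug().noquote() << mName << ":" << message;
        }

    private:
        QByteArray mName;
    };

    // Every layer receives the context and derives its own:
    void runQuery(const Context &ctx)
    {
        const auto queryCtx = ctx.subContext("query");
        queryCtx.log("fetching initial result set");
    }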
|
Otherwise e.g. the counter will only ever count up.
|
We have to access properties, so we need the mapper anyway, and the
ApplicationDomainType shouldn't be a large overhead either.
|
This is the initial refactoring to improve how we deal with the storage.
It does a couple of things:
* Rename Sink::Storage to Sink::Storage::DataStore to free up the
  Sink::Storage namespace
* Introduce a Sink::ResourceContext to have a single object that can be
  passed around containing everything that is necessary to operate on a
  resource (see the sketch after this list). This is a lot better than
  the multiple separate parameters that we used to pass around all over
  the place, while still allowing for dependency injection for tests.
* Tie storage access together using the new EntityStore that directly
  works with ApplicationDomainTypes. This gives us a central place where
  main storage, indexes and buffer adaptors are tied together, which
  will also give us a place to implement external indexes, such as a
  fulltext index using Xapian.
* Use ApplicationDomainTypes as the default way to pass around entities.
  Instead of using various ways to pass around entities (buffers,
  buffer adaptors, ApplicationDomainTypes), only use a single way.
  The old approach was confusing, and was only done as:
    * an optimization; it really shouldn't be necessary, and otherwise
      I'm sure we can find better ways to optimize
      ApplicationDomainType itself.
    * a way to account for entities that have multiple buffers, a
      concept that I no longer deem relevant.
While this commit does the bulk of the work to get there, the following
commits will refactor more stuff to get things back to normal.
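
A sketch of what such a context object might look like; the fields shown
are assumptions based on the description, not Sink's actual definition:

    #include <QByteArray>

    // One object carrying everything needed to operate on a resource,
    // instead of threading several separate parameters through every
    // layer.
    struct ResourceContext {
        QByteArray instanceId;   // e.g. "org.kde.maildir.instance1"
        QByteArray resourceType; // e.g. "org.kde.maildir"
        // In practice this would also carry factories for buffer
        // adaptors and indexes, so tests can inject fakes.
    };

    // Components then take the single context in their constructor:
    class EntityStore {
    public:
        explicit EntityStore(const ResourceContext &context)
            : mContext(context) {}
    private:
        ResourceContext mContext;
    };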
|
This allows us to match properties from a subquery.
Unfortunately this also means that DataStoreQuery needs access to all
type implementations to issue the subquery (potentially for another
type).
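
A sketch of how such a filter could be evaluated (invented names): the
subquery against the other type is resolved first, then the property
value is tested for membership in its result set.

    #include <QByteArray>
    #include <QHash>
    #include <QSet>

    struct Subquery {
        QByteArray type; // the other type, e.g. "folder"
        // ... filter criteria for that type
    };

    class DataStoreQuery {
    public:
        // Evaluate "property matches subquery": run the subquery once,
        // then test membership. This is why DataStoreQuery needs access
        // to all type implementations.
        bool matches(const QByteArray &propertyValue, const Subquery &subquery)
        {
            if (!mSubqueryResults.contains(subquery.type)) {
                mSubqueryResults.insert(subquery.type, runSubquery(subquery));
            }
            return mSubqueryResults.value(subquery.type).contains(propertyValue);
        }

    private:
        // Issues the second query via the other type's implementation.
        QSet<QByteArray> runSubquery(const Subquery &subquery);
        QHash<QByteArray, QSet<QByteArray>> mSubqueryResults;
    };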
|
DataStoreQuery now encapsulates the low-level query that operates
directly on the storage. It no longer has access to the resource
buffers, and is instantiated by the type implementation, so we can
specialize the query algorithm per type, but not per resource.
This will allow us to implement the threading queries for the mail type.
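
For illustration, the per-type specialization could look like this
(invented names; the threading logic is only hinted at):

    #include <QByteArray>
    #include <QVector>
    #include <memory>

    class DataStoreQuery {
    public:
        virtual ~DataStoreQuery() = default;
        // Default: a flat scan over the stored entities of the type.
        virtual QVector<QByteArray> initialResultSet();
    };

    class MailThreadQuery : public DataStoreQuery {
    public:
        // The mail type specializes the algorithm, e.g. collapsing
        // mails into threads by resolving each mail to its thread id.
        QVector<QByteArray> initialResultSet() override;
    };

    // The type implementation, not the resource, picks the query class:
    std::unique_ptr<DataStoreQuery> createMailQuery()
    {
        return std::make_unique<MailThreadQuery>();
    }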