Commit message
A single QueryRunner should never have multiple workers running at the
same time. We did not properly enforce this when incremental updates
came in.
The only way I managed to reproduce the crash:
* Open a large folder with lots of unread mail in Kube.
* Select a mail in the mail list and hold the down key. This will:
  * repeatedly call fetch more, and
  * trigger lots of mark-as-read modifications that result in
    notifications.
* Eventually it crashes somewhere in EntityStore, likely because of
  concurrent access to the filter structure, which is shared through
  the state.
We now ensure in the single-threaded portion of the code that we only
ever run one worker at a time. If an update comes in while a worker is
running, we remember that change and fetch again once the worker is
done. To be able to trigger the fetch again, that portion was factored
out into a separate function.
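Below is a minimal sketch of that guard. The names (Runner, fetchMore,
runWorker and the two flags) are illustrative assumptions, not the actual
QueryRunner code; it only shows the shape of the single-worker logic.

```cpp
#include <functional>

// Hypothetical stand-in for the query runner.
class Runner
{
public:
    // Called from the single-threaded part whenever an incremental
    // update (revision change) comes in.
    void revisionChanged()
    {
        if (mQueryInProgress) {
            // A worker is already running: remember the change instead
            // of starting a second worker that would share state.
            mRevisionChangedMeanwhile = true;
            return;
        }
        fetchMore();
    }

private:
    // The factored-out fetch, so it can be invoked again on completion.
    void fetchMore()
    {
        mQueryInProgress = true;
        runWorker([this] {
            // Completion is delivered back on the single-threaded side.
            mQueryInProgress = false;
            if (mRevisionChangedMeanwhile) {
                mRevisionChangedMeanwhile = false;
                fetchMore();
            }
        });
    }

    // Stand-in for dispatching the actual worker; calls the completion
    // callback when the worker is done.
    void runWorker(const std::function<void()> &onDone) { onDone(); }

    bool mQueryInProgress = false;
    bool mRevisionChangedMeanwhile = false;
};

int main()
{
    Runner runner;
    runner.revisionChanged(); // starts a worker
    runner.revisionChanged(); // would be deferred if a worker were still running
    return 0;
}
```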
This cuts sync performance by about 40%, but gives us fast full-text
searching for all local content.
BLOB properties had a couple of intended purposes:
* Allow large payloads to be streamed directly to disk, and then be
  handled by reference.
* Allow zero-copy handling.
* Keep the database values compact so we can avoid traversing large
  BLOBs.
However, they came at the cost of code complexity, and for BLOBs we lost
all the benefits of our storage layer, such as transactions.
Measurements showed that for email (the intended primary use case) the
overhead is hardly measurable, with most parts performing better, or at
least not worse. We also gain file-system independence, which may help
on other platforms.
The biggest drawback is probably that large payloads now need to be
written to disk twice because of the synchronizer queue (once for the
queue, once for the actual data).
There are only a few cases where we have to modify the list of dbis or
environments, so we can normally get away with just read-locking.
This seems to fix a segfault that was possibly caused by an environment
being reused after it had already been freed in another thread; taking
the read lock when initially retrieving the environment appears to
prevent that.
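A minimal sketch of that locking scheme, assuming a hypothetical registry
type; std::shared_mutex stands in for whatever read/write lock the storage
layer actually uses:

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <shared_mutex>
#include <string>

struct Environment { /* wraps an LMDB environment in the real code */ };

// Hypothetical registry of open environments shared between threads.
class EnvironmentRegistry
{
public:
    // The common path only reads the shared map, so a read (shared) lock
    // is enough and many threads can look up environments concurrently.
    std::shared_ptr<Environment> find(const std::string &path)
    {
        std::shared_lock<std::shared_mutex> lock(mMutex);
        auto it = mEnvironments.find(path);
        return it != mEnvironments.end() ? it->second : nullptr;
    }

    // Only the rare create path takes the exclusive write lock.
    std::shared_ptr<Environment> findOrCreate(const std::string &path)
    {
        if (auto env = find(path)) {
            return env;
        }
        std::unique_lock<std::shared_mutex> lock(mMutex);
        auto &slot = mEnvironments[path];
        if (!slot) {
            slot = std::make_shared<Environment>();
        }
        return slot;
    }

private:
    std::shared_mutex mMutex;
    std::map<std::string, std::shared_ptr<Environment>> mEnvironments;
};

int main()
{
    EnvironmentRegistry registry;
    auto env = registry.findOrCreate("/tmp/example.lmdb");
    return registry.find("/tmp/example.lmdb") == env ? 0 : 1;
}
```

In this sketch the returned shared_ptr also keeps the environment alive for
the caller, which avoids the reuse-after-free suspected above.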
enough.
When creating new messages, the default should be that the full payload
is available. Not having the payload available is a special case used by
the imap resource.
Incremental additions of children in the tree were filtered out due to
the parent filter. This broke once we started to maintain state, because
the filter in datastorequery, which contains the parent filter, is now
carried over. Given that incremental querying of children currently
doesn't add much value (we don't have trees that are large or deep
enough), perhaps we're better off using a different approach.
corruption.
It looks like the memory corruption (malloc started to crash) was coming
from QLocalSocket-related signals. According to the docs it's not safe
(whatever that means) to delete a QObject with pending signals, so we
use deleteLater to schedule its deletion. This resolved the crashes.
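A minimal Qt sketch of the change; the teardown function is hypothetical,
the relevant part is using QObject::deleteLater() instead of delete:

```cpp
#include <QCoreApplication>
#include <QLocalSocket>
#include <QTimer>

// Tear down a socket without deleting it immediately: signal emissions may
// still be pending for it, so destruction is deferred to the event loop.
void teardown(QLocalSocket *socket)
{
    socket->disconnect();  // drop everything connected to its signals
    socket->abort();
    socket->deleteLater(); // instead of `delete socket;`
}

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);

    auto *socket = new QLocalSocket;
    teardown(socket);

    // Enter the event loop briefly so the deferred deletion actually runs.
    QTimer::singleShot(0, &app, &QCoreApplication::quit);
    return app.exec();
}
```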
Doesn't work with CATCH_ERRORS=ON
This reverts commit 2bb2a10f5c4010d168b3d26e9937cf26365a0d0c.
Fixing this introduces some crashes. I'll have to revisit this.
This reverts commit 679f2d5d7d46b2f098e939883520b707f01b2a36.
Just truncating the file is not a good idea. If the new headers end up
being larger than the old ones (I just ran into that), we fail to parse
the headers and miss important fields like the subject. So let's not.
This brings the incremental query closer to a regular query (about 1.5
times the cost of a regular query instead of 3.5 times).
For a comparison, see MailQueryBenchmark::testIncremental().
The optimization is built on the assumption that we get an update with
e.g. 100 revisions, and it applies when multiple revisions within that
batch are part of the same reduction. In that case we can avoid redoing
the reduction lookup over and over.
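A minimal sketch of that batching idea; the types and the key/lookup
functions are hypothetical stand-ins for the revision and reduction
handling in datastorequery:

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

struct Revision { std::string entityId; };
struct Reduction { std::string key; };

// Stand-in for the expensive lookup we want to avoid repeating.
Reduction lookupReduction(const std::string &key) { return Reduction{key}; }

// Stand-in for determining which reduction a revision belongs to.
std::string reductionKeyFor(const Revision &r) { return r.entityId.substr(0, 1); }

// Process one incremental batch; returns how many reduction lookups were
// actually performed (instead of one lookup per revision).
std::size_t processIncrementalBatch(const std::vector<Revision> &batch)
{
    // The cache only lives for one batch: within a batch many revisions
    // typically hit the same reduction, so it is looked up only once.
    std::unordered_map<std::string, Reduction> cache;
    std::size_t lookups = 0;
    for (const auto &revision : batch) {
        const auto key = reductionKeyFor(revision);
        auto it = cache.find(key);
        if (it == cache.end()) {
            it = cache.emplace(key, lookupReduction(key)).first;
            ++lookups;
        }
        // ... apply the revision against the cached reduction it->second ...
    }
    return lookups;
}

int main()
{
    // Three revisions, two of which share the same reduction.
    const std::vector<Revision> batch{{"a1"}, {"a2"}, {"b1"}};
    return processIncrementalBatch(batch) == 2 ? 0 : 1;
}
```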