BLOB properties had a couple of intended purposes:
* Allow large payloads to be streamed directly to disk and then handled
by reference.
* Allow zero-copy handling.
* Keep the database values compact so we avoid traversing large BLOBs.
However, they came at the cost of code complexity, and we lost all the
benefits of our storage layer, such as transactions.
Measurements showed that for email (the intended primary use case)
the overhead is hardly measurable, with most parts performing
better, or at least not worse. We additionally gain file-system
independence, which may help on other platforms.
The biggest drawback is probably that large payloads need to be written
to disk twice, because of the synchronizer queue (once for the queue,
once for the actual data).
There are only a few cases where we have to access the list of dbis or
environments, so we can normally get away with just read-locking.
This seems to fix a segfault that was possibly caused by an environment
being reused after it had already been freed in another thread; taking
the read lock when initially retrieving the environment appears to
prevent that.
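
A minimal sketch of the pattern, using Qt's QReadWriteLock and entirely
hypothetical names for the shared registry (the actual storage-layer
members differ): lookups take the cheap shared read lock, and only the
rare creation path takes the exclusive write lock.

    #include <QHash>
    #include <QReadLocker>
    #include <QReadWriteLock>
    #include <QString>
    #include <QWriteLocker>

    struct Environment {}; // stand-in for an lmdb environment handle

    static QHash<QString, Environment *> sEnvironments; // hypothetical registry
    static QReadWriteLock sEnvironmentsLock;

    Environment *environment(const QString &path)
    {
        {
            // Common case: the environment already exists; a shared read
            // lock suffices and concurrent readers don't serialize.
            QReadLocker locker(&sEnvironmentsLock);
            if (Environment *env = sEnvironments.value(path)) {
                return env;
            }
        }
        // Rare case: create it under the exclusive write lock, re-checking
        // because another thread may have created it in the meantime.
        QWriteLocker locker(&sEnvironmentsLock);
        if (!sEnvironments.contains(path)) {
            sEnvironments.insert(path, new Environment);
        }
        return sEnvironments.value(path);
    }

Holding the read lock while retrieving an existing environment also
means a concurrent writer cannot free it out from under the reader,
which matches the segfault hypothesis above.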
When creating new messages the default should be that the full payload
is available. Not having the payload available is a special case used
by the IMAP resource.
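
A minimal sketch of the intended default, with a hypothetical factory
function (not the actual Sink API): callers get the full payload by
default, and only the IMAP resource opts out.

    // Hypothetical factory, not the actual Sink API.
    struct Mail {
        bool fullPayloadAvailable = true; // default for newly created messages
    };

    Mail createMail(bool fullPayloadAvailable = true)
    {
        Mail mail;
        mail.fullPayloadAvailable = fullPayloadAvailable;
        return mail;
    }

    // Only the IMAP resource creates header-only placeholders:
    // auto placeholder = createMail(/*fullPayloadAvailable=*/false);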
Incremental additions of children in the tree were filtered out by the
parent filter. This broke when we started to maintain state, because
the filter in datastorequery, which contains the parent filter, was
carried over. Given that incremental querying of children currently
doesn't add much value (we don't have trees that are large or deep
enough), perhaps we're better off using a different approach.
It looks like the memory corruption (malloc started to crash) was
coming from QLocalSocket-related signals. According to the docs it's
not safe (whatever that means) to delete a QObject with pending
signals, so we use deleteLater to schedule its deletion. This resolved
the crashes.
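
A minimal sketch of the fix, with a hypothetical connection handler
(the actual class differs): instead of deleting the socket directly
from a slot that the socket's own signal may have invoked,
deleteLater() defers destruction until control returns to the event
loop.

    #include <QLocalSocket>
    #include <QObject>

    void setupConnection(QLocalSocket *socket)
    {
        QObject::connect(socket, &QLocalSocket::disconnected, socket, [socket]() {
            // Unsafe: "delete socket;" here can crash, because we are inside
            // a handler invoked by the socket and further signals may still
            // be pending for it.
            // Safe: schedule deletion for when the event loop regains control.
            socket->deleteLater();
        });
    }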
Doesn't work with CATCH_ERRORS=ON
This reverts commit 2bb2a10f5c4010d168b3d26e9937cf26365a0d0c.
Fixing this introduces some crashes. I'll have to revisit this.
This reverts commit 679f2d5d7d46b2f098e939883520b707f01b2a36.
Just truncating the file is not a good idea. If the headers end up
being larger (I just ran into that), we fail to parse them and miss
important things like subjects. So let's not.
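
A minimal sketch of a safer replacement write, assuming the goal is to
rewrite the file's contents wholesale: QSaveFile writes the complete
new data to a temporary file and atomically swaps it in on commit(),
so the result is never a file cut off at a previously assumed size.

    #include <QByteArray>
    #include <QIODevice>
    #include <QSaveFile>
    #include <QString>

    bool writeFileContents(const QString &path, const QByteArray &data)
    {
        QSaveFile file(path);
        if (!file.open(QIODevice::WriteOnly)) {
            return false;
        }
        // Write however many bytes the new headers need, rather than
        // truncating to a size assumed in advance.
        if (file.write(data) != data.size()) {
            file.cancelWriting();
            return false;
        }
        // Atomically replace the target only once everything is written.
        return file.commit();
    }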
This brings the incremental query closer to a regular query (about 1.5
times as slow instead of 3.5 times).
For a comparison see MailQueryBenchmark::testIncremental().
The optimization is built on the assumption that we e.g. get an update
with 100 revisions, so it applies when multiple revisions within that
batch are part of the same reduction. In that case we can avoid
redoing the reduction lookup over and over.
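
A minimal sketch of the batching idea, with hypothetical types (the
real datastorequery code differs): reduction results are memoized per
reduction key for the duration of one batch, so only the first
revision touching a given reduction pays for the lookup.

    #include <QByteArray>
    #include <QHash>
    #include <QVector>

    struct Revision {
        QByteArray reductionKey; // which reduction this revision affects
    };

    struct ReductionResult { /* aggregated values */ };

    // Stand-in for the expensive lookup that recomputes a reduction.
    ReductionResult lookupReduction(const QByteArray &key) { return {}; }

    void processBatch(const QVector<Revision> &batch)
    {
        // Memoized per batch: an update with e.g. 100 revisions hitting
        // the same reduction performs the lookup once instead of 100 times.
        QHash<QByteArray, ReductionResult> cache;
        for (const Revision &revision : batch) {
            auto it = cache.find(revision.reductionKey);
            if (it == cache.end()) {
                it = cache.insert(revision.reductionKey,
                                  lookupReduction(revision.reductionKey));
            }
            // ... apply the revision against *it ...
        }
    }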
Otherwise we run into a crash when creating the first account.
The password (or any other secret) is now cached in the client process
(in-memory only) and delivered to the resource via a command.
The resource avoids doing any operations against the source until the
secret is available.
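
A minimal sketch of the shape of such a cache, with entirely
hypothetical names (not the actual Sink classes): the secret lives
only in process memory and is handed to the resource explicitly, which
defers source access until it arrives.

    #include <QHash>
    #include <QString>

    // Hypothetical client-side cache; secrets are held in memory only
    // and never persisted.
    class SecretStore
    {
    public:
        void setSecret(const QString &resourceId, const QString &secret)
        {
            m_secrets.insert(resourceId, secret);
            // Here the real implementation would send a command carrying
            // the secret to the resource process.
        }

        bool hasSecret(const QString &resourceId) const
        {
            return m_secrets.contains(resourceId);
        }

        QString secret(const QString &resourceId) const
        {
            return m_secrets.value(resourceId);
        }

    private:
        QHash<QString, QString> m_secrets; // in-memory only
    };

The resource side mirrors this: any operation against the source is
queued until the secret command has been received.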
We don't need an update for every mail if we download 50k mails. We just
need enough to animate a progress bar.
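
A minimal sketch of one way to throttle such notifications (a
hypothetical helper, not the actual Sink code): forward progress only
when the integer percentage changes, so downloading 50k mails emits at
most about 100 updates.

    #include <QtGlobal>
    #include <functional>
    #include <utility>

    // Hypothetical throttle: forwards progress only when the integer
    // percentage changes.
    class ProgressThrottle
    {
    public:
        explicit ProgressThrottle(std::function<void(int)> emitProgress)
            : m_emit(std::move(emitProgress)) {}

        void update(qint64 done, qint64 total)
        {
            if (total <= 0) {
                return;
            }
            const int percent = static_cast<int>(done * 100 / total);
            if (percent != m_lastPercent) {
                m_lastPercent = percent;
                m_emit(percent); // enough granularity for a progress bar
            }
        }

    private:
        std::function<void(int)> m_emit;
        int m_lastPercent = -1;
    };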
We used to simply return all uids.
Requires "sinksh upgrade"
|