This brings the incremental query closer to a regular query (about 1.5 times as slow instead of 3.5 times). For a comparison, see MailQueryBenchmark::testIncremental().
The optimization is built on the assumption that we get, e.g., an update with 100 revisions, so it applies to the case where multiple revisions within that batch are part of the same reduction. In such a case we can avoid redoing the reduction lookup over and over.
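A simplified sketch of the idea, using hypothetical names rather than Sink's actual API: within one batch of revisions the reduction lookup is done at most once per reduction, and the cache is scoped to that batch so it never needs cross-update invalidation.

    #include <QByteArray>
    #include <QHash>
    #include <QVector>

    struct Revision {
        QByteArray entityId;
    };

    // Stand-ins for the real index operations (hypothetical).
    QByteArray reductionKeyFor(const Revision &rev) { return rev.entityId.left(4); }
    QVector<QByteArray> lookupReduction(const QByteArray &key) { return {key}; }

    void processBatch(const QVector<Revision> &batch)
    {
        QHash<QByteArray, QVector<QByteArray>> reductionCache; // valid for this batch only
        for (const Revision &rev : batch) {
            const QByteArray key = reductionKeyFor(rev);
            if (!reductionCache.contains(key)) {
                reductionCache.insert(key, lookupReduction(key)); // looked up once per reduction
            }
            const QVector<QByteArray> reduction = reductionCache.value(key);
            // ... apply the incremental update against 'reduction' ...
            Q_UNUSED(reduction);
        }
    }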
We used to simply return all uids.
Requires "sinksh upgrade"
after the query.
This fixes status monitoring when creating a new account.
This allows the aggregation to ignore resources where we don't have any
status information yet, so the account doesn't always end up being
offline.
It can happen that thread messages are not delivered in order, which
means we will have to merge threads once all messages are available.
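A minimal sketch of such a merge, with hypothetical names rather than Sink's actual threading code: each message carries a provisional thread id, and once a link between two messages becomes known their provisional threads are folded into one.

    #include <QByteArray>
    #include <QHash>

    // messageId -> threadId; a thread id is simply the id of its first seen member.
    static QHash<QByteArray, QByteArray> threadOf;

    // Record that messages 'a' and 'b' belong to the same conversation, merging
    // the two provisional threads if they were created independently because the
    // messages arrived out of order.
    void link(const QByteArray &a, const QByteArray &b)
    {
        const QByteArray threadA = threadOf.value(a, a);
        const QByteArray threadB = threadOf.value(b, b);
        threadOf.insert(a, threadA);
        threadOf.insert(b, threadA);
        if (threadA != threadB) {
            // Re-point every remaining member of thread B to thread A.
            for (auto it = threadOf.begin(); it != threadOf.end(); ++it) {
                if (it.value() == threadB) {
                    it.value() = threadA;
                }
            }
        }
    }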
From Qt's documentation: "This macro is obsolete. Use
target_link_libraries with IMPORTED targets instead." It is only
recommended with CMake >= 2.8.9 and < 2.8.12, and Sink already requires
CMake 3.0. One advantage of using the imported targets is that
CMake complains if a target isn't found before it's used, like
Qt5Concurrent missing from the find_package call here.
Reviewers: #sink, cmollekopf
Reviewed By: #sink, cmollekopf
Subscribers: #sink
Tags: #sink
Differential Revision: https://phabricator.kde.org/D6361
Necessary to get notifications for newly created resources.
Otherwise the test is not aborted, because the job doesn't get any error set.
IMAP always requires CRLF, and so does the MIME standard; KMIME, however, expects LF-only.
We now try to always use CRLF on disk, but convert LF-only messages should we have to (e.g. because they were copied over from maildir).
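A minimal sketch of such a normalization using plain Qt string handling (not necessarily how Sink implements it): collapsing existing CRLF to LF first avoids turning it into CR CR LF.

    #include <QByteArray>

    // Normalize a message to CRLF line endings without corrupting lines that
    // already end in CRLF.
    QByteArray toCrlf(QByteArray message)
    {
        message.replace("\r\n", "\n"); // collapse any existing CRLF to LF first
        message.replace("\n", "\r\n"); // then expand every LF to CRLF
        return message;
    }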
The incremental querying broke as soon as a revision update came in, since it would nuke the base set. This fixes it, but it's definitely not pretty.
Previously we would hit the maxreaders limit.
Only ever enter the error state on non-recoverable errors. Otherwise:
* Busy state while busy, then go back to online/offline/error.
* If we failed to connect during replay/sync we assume we're offline.
* If we failed to login but could connect we have a known error condition.
* If we succeeded in replaying/syncing something we are apparently online.
At the core we have the problem that we have no way of telling whether we can connect to the server until we actually try (network availability is not enough: VPNs, firewalls, ...). Further, the status always reflects the latest state, so even if we were in an error state, once we retry we leave the error state and either end up back in it or not.
When aggregating states we similarly have to adjust the state to the most relevant among the resources. The states are ordered like this:
* Error
* Busy
* Connected
* Offline
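A minimal sketch of such an aggregation, using hypothetical names rather than Sink's actual API: per-resource statuses are folded into the single most relevant one according to the ordering above, and resources without any known status are skipped so they don't drag the account offline.

    #include <QVector>
    #include <algorithm>

    enum Status { NoStatus = -1, Offline = 0, Connected = 1, Busy = 2, Error = 3 };

    // Pick the most relevant status: Error > Busy > Connected > Offline.
    Status aggregate(const QVector<Status> &resourceStatuses)
    {
        Status result = NoStatus;
        for (Status s : resourceStatuses) {
            if (s == NoStatus) {
                continue; // no information about this resource yet: ignore it
            }
            result = std::max(result, s);
        }
        return result; // may still be NoStatus if nothing is known yet
    }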
They don't get through to the resource consistently, so we have to
ignore them for now to make the test reliable.
This will allow us to fold things like progress and sync status directly into the model. Use cases are mail download progress and folder sync progress.
Ideally we would also solve the resource/account state through this.
The library asserts otherwise
When trying to reply to a mail from kube we ran into a deadlock. The initial result callback is called from the main thread, and can thus directly lead to the destruction of the emitter.
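A hypothetical illustration of the hazard (not Sink's actual code): if the callback is invoked synchronously, it may delete the emitter while the emitter's own call is still on the stack; dispatching it through the event loop, here via the functor overload of QMetaObject::invokeMethod available since Qt 5.10, defers it until that call has unwound.

    #include <QCoreApplication>
    #include <QMetaObject>
    #include <functional>

    // Deliver a result to the consumer without risking that the consumer
    // destroys the emitter while it is still emitting.
    void deliverResult(const std::function<void()> &callback)
    {
        // Unsafe: callback();  // the consumer may delete the emitter right here
        QMetaObject::invokeMethod(QCoreApplication::instance(), callback,
                                  Qt::QueuedConnection); // runs later from the event loop
    }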