Commit messages

kimap should really have better error codes...

This allows the aggregation to ignore resources where we don't have any
status information yet, so the account doesn't always end up being
offline.
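
A minimal sketch of that rule (the type and function names are illustrative, not Sink's actual API): only resources that have already reported a status contribute to the aggregate.

```cpp
#include <optional>
#include <vector>

enum class Status { Offline, Connected, Busy, Error };

// Collect only the statuses of resources that have reported one; resources
// with no status information yet (std::nullopt) are ignored, so the account
// does not default to offline just because a resource hasn't spoken up yet.
std::vector<Status> knownStatuses(const std::vector<std::optional<Status>> &resources)
{
    std::vector<Status> known;
    for (const auto &status : resources) {
        if (status) {
            known.push_back(*status);
        }
    }
    return known;
}
```

How the remaining, known statuses are ranked is sketched under the state-ordering commit further down.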

handling and are appropriately dealt with.

host not found is pretty much the same as offline for our purpose.

Such as progress 0 out of 0 (happens on sync of already synced folder)
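
A small sketch of that filter, with an illustrative callback standing in for the real notification mechanism:

```cpp
#include <functional>

// Drop progress updates that carry no information, e.g. "0 out of 0"
// when the folder was already fully synced.
void forwardProgress(int progress, int total,
                     const std::function<void(int, int)> &emitNotification)
{
    if (total <= 0) {
        return; // nothing meaningful to report
    }
    emitNotification(progress, total);
}
```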

Only ever enter error state on non-recoverable errors.
Otherwise:
* Busy state while busy, then go back to online/offline/error.
* If we failed to connect during replay/sync we assume we're offline.
* If we failed to log in but could connect we have a known error
condition.
* If we succeeded in replaying/syncing something we are apparently online.
At the core we have the problem that we have no way of telling whether
we can connect to the server until we actually try (network availability
is not enough: VPNs, firewalls, ...). Further, the status always reflects
the latest attempt, so even if we were in an error state, once we retry we
leave the error state and either end up back in it or not.
When aggregating states we similarly have to pick the most relevant state
among the resources. The states are ordered like this:
* Error
* Busy
* Connected
* Offline
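
A sketch of that aggregation (hypothetical names, not the actual implementation): each resource reports its latest status, and the account takes the most relevant one according to the ordering above.

```cpp
#include <vector>

enum class Status { Offline, Connected, Busy, Error };

// Relevance when aggregating: Error > Busy > Connected > Offline.
int relevance(Status status)
{
    switch (status) {
    case Status::Error:     return 3;
    case Status::Busy:      return 2;
    case Status::Connected: return 1;
    case Status::Offline:   return 0;
    }
    return 0;
}

// The account state is the most relevant state among its resources.
Status aggregate(const std::vector<Status> &resourceStatuses)
{
    Status result = Status::Offline;
    for (const auto status : resourceStatuses) {
        if (relevance(status) > relevance(result)) {
            result = status;
        }
    }
    return result;
}
```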

...and improved debug output slightly.

against the source.
We used to replay no changes and then claim the resource was online.

syncrequest
That way we can do the notification emitting in the synchronizer and it
keeps working even if the login already fails (so the synchronizing code
would never be executed).

This will allow us to fold things like progress and sync status directly
into the model. Use cases are mail download progress and folder sync
progress.
Ideally we would also solve the resource/account state through this.
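
A sketch of what folding that state into the model could look like; the struct and field names are purely illustrative, not the actual model roles:

```cpp
#include <string>

// Illustrative only: a folder row that carries live sync state alongside the
// entity data, so a view can render sync status and progress directly.
struct FolderRow {
    enum class SyncStatus { Idle, Syncing, Synced, Failed };

    std::string name;
    SyncStatus syncStatus = SyncStatus::Idle;
    int progress = 0; // e.g. mails downloaded so far
    int total = 0;    // e.g. mails to download in this sync run
};

// Applied whenever a progress notification for this folder arrives.
void applyProgress(FolderRow &row, int progress, int total)
{
    row.progress = progress;
    row.total = total;
    row.syncStatus = (total > 0 && progress >= total) ? FolderRow::SyncStatus::Synced
                                                      : FolderRow::SyncStatus::Syncing;
}
```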

We use this frequently when loading conversations, so this results in a
significant performance improvement.

If one sync task depends on the previous sync task we want to flush in
between, so we can query for the results of the previous sync request
locally.
If we detect such a dependency we temporarily halt all processing of
synchronization requests until the flush completes, so we can continue
processing.
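
A sketch of that scheduling rule with hypothetical types: when a queued sync request depends on the previous one, a flush is inserted between them, and processing halts until the flush has completed.

```cpp
#include <deque>

struct SyncRequest {
    int id = 0;
    int dependsOn = -1;   // id of a previous request whose results we need locally
    bool isFlush = false; // marker inserted between dependent requests
};

class SyncRequestQueue
{
public:
    void enqueue(SyncRequest request)
    {
        // If the new request depends on a previous one, make sure a flush runs
        // in between so its results are queryable locally before we continue.
        if (request.dependsOn >= 0 && (m_queue.empty() || !m_queue.back().isFlush)) {
            m_queue.push_back(SyncRequest{-1, -1, true});
        }
        m_queue.push_back(request);
    }

    // Returns false while we are waiting for an outstanding flush; processing
    // of further synchronization requests is halted until flushCompleted().
    bool processNext()
    {
        if (m_waitingForFlush || m_queue.empty()) {
            return false;
        }
        const auto request = m_queue.front();
        m_queue.pop_front();
        if (request.isFlush) {
            m_waitingForFlush = true;
            return false;
        }
        // ... execute the actual synchronization for `request` here ...
        return true;
    }

    void flushCompleted() { m_waitingForFlush = false; }

private:
    std::deque<SyncRequest> m_queue;
    bool m_waitingForFlush = false;
};
```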

A single request will replay until the latest revision.
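
A sketch of that behaviour, with illustrative names: one replay request loops from the last replayed revision up to the latest revision known at the time.

```cpp
#include <cstdint>
#include <functional>

// Replay everything between the last replayed revision and the latest
// revision in a single request, instead of one request per revision.
void replayToLatest(int64_t &lastReplayedRevision, int64_t latestRevision,
                    const std::function<void(int64_t)> &replayRevision)
{
    while (lastReplayedRevision < latestRevision) {
        ++lastReplayedRevision;
        replayRevision(lastReplayedRevision);
    }
}
```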

To have hierarchical debug output we have to pass around something at
run-time; there is no reasonable alternative. Log::Context provides the
identifier to do just that and largely replaces the debug component
idea.
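
The sketch below only illustrates the idea and is not the actual Log::Context API: a small context object is passed along at run-time, and sub-contexts extend the identifier so debug output can be grouped hierarchically.

```cpp
#include <iostream>
#include <string>

// Illustrative run-time logging context: sub-contexts extend the identifier
// ("sync.imap.folder"), which is prefixed to every debug line.
class Context
{
public:
    explicit Context(std::string id) : m_id(std::move(id)) {}

    Context subContext(const std::string &name) const
    {
        return Context{m_id + "." + name};
    }

    void debug(const std::string &message) const
    {
        std::cout << "[" << m_id << "] " << message << "\n";
    }

private:
    std::string m_id;
};

int main()
{
    Context sync{"sync"};
    auto imap = sync.subContext("imap");
    imap.debug("Connecting..."); // prints: [sync.imap] Connecting...
}
```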

... because we really just enqueue the request and then wait for the
notification.

Instead of trying to actually flush queues, we send a special command
through the same queues as the other commands and can thus guarantee
that the respective commands have been processed without blocking
anything.
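
A sketch of the mechanism with hypothetical types: the flush is just another command travelling through the same queue, so once it comes out the other end we know everything enqueued before it has been processed, without blocking.

```cpp
#include <deque>
#include <functional>
#include <map>

// Commands flow through a queue; a Flush command is just another command.
struct Command {
    enum Type { Modify, Flush } type = Modify;
    int flushId = -1; // identifies which flush request completed
};

class CommandQueue
{
public:
    void enqueue(Command command) { m_queue.push_back(command); }

    // Enqueue a flush marker; `onComplete` fires once every command that was
    // enqueued before it has been processed.
    void flush(int flushId, std::function<void()> onComplete)
    {
        m_flushCallbacks[flushId] = std::move(onComplete);
        enqueue(Command{Command::Flush, flushId});
    }

    void processNext()
    {
        if (m_queue.empty()) {
            return;
        }
        const auto command = m_queue.front();
        m_queue.pop_front();
        if (command.type == Command::Flush) {
            if (auto it = m_flushCallbacks.find(command.flushId); it != m_flushCallbacks.end()) {
                it->second();
                m_flushCallbacks.erase(it);
            }
            return;
        }
        // ... process a regular command here ...
    }

private:
    std::deque<Command> m_queue;
    std::map<int, std::function<void()>> m_flushCallbacks;
};
```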

By concentrating all communication to the source in one place we get rid
of several oddities.
* Quite a bit of duplication since both need access to the
synchronizationStore and the source.
* We currently have an awkward locking in place because both classes
access the sync store. This is not easy to resolve cleanly.
* The life of resource implementers becomes easier.
* An implementation could elect to not use changereplay and always do a
full sync... (maybe?)