This is required to be able to resolve change-replay failures by
removing the entity.
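A rough sketch of what such a resolution could look like; Store and resolveReplayFailure are illustrative names, not Sink's actual API:

    #include <QByteArray>
    #include <QDebug>
    #include <QSet>

    // Stand-in for the local store; only the removal entry point matters here.
    struct Store {
        QSet<QByteArray> entities;
        void removeEntity(const QByteArray &id) { entities.remove(id); }
    };

    // If a change can never be replayed to the server, drop the offending
    // entity locally instead of retrying forever.
    void resolveReplayFailure(Store &store, const QByteArray &entityId)
    {
        qWarning() << "change-replay failed, removing entity" << entityId;
        store.removeEntity(entityId);
    }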
|
Summary:
Some notes:
- Needed to specialize some flatbuffers-related functions for serializing QStringList and int (see the sketch below)
- Removed useless qWarnings in the caldav test
- Renamed EventSynchronizer -> CalDAVSynchronizer since it also synchronizes Calendars and Todos (and more to come!)
Reviewers: cmollekopf
Tags: #sink
Differential Revision: https://phabricator.kde.org/D12695
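A minimal sketch of what such a specialization for QStringList could look like; the function name and signature are assumptions, while the flatbuffers calls (CreateString, CreateVector) are the library's real API:

    #include <flatbuffers/flatbuffers.h>
    #include <QStringList>
    #include <vector>

    // Serialize a QStringList as a flatbuffers vector of strings.
    // The int case is a plain scalar and is omitted from this sketch.
    flatbuffers::Offset<flatbuffers::Vector<flatbuffers::Offset<flatbuffers::String>>>
    serialize(flatbuffers::FlatBufferBuilder &fbb, const QStringList &list)
    {
        std::vector<flatbuffers::Offset<flatbuffers::String>> offsets;
        offsets.reserve(list.size());
        for (const QString &value : list) {
            offsets.push_back(fbb.CreateString(value.toStdString()));
        }
        return fbb.CreateVector(offsets);
    }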
|
Summary:
Notes:
- For calendars, only removal is implemented (see the sketch below) because:
  - There is no DavCollectionCreateJob, possibly because there can't be an empty DAV collection
  - DavCollectionModifyJob only allows modifying "properties", which we don't use (except for the name, if the name is considered a property)
- Currently, modifying an item with Sink overwrites the one on the server, even if the store is not up-to-date
Reviewers: cmollekopf
Tags: #sink
Differential Revision: https://phabricator.kde.org/D12611
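A rough sketch of the resulting replay dispatch for calendars; Operation, Calendar, and the helpers are illustrative names, not Sink's actual code:

    #include <QDebug>
    #include <QString>

    enum class Operation { Creation, Modification, Removal };

    struct Calendar { QString remoteId; };

    void removeCalendar(const Calendar &calendar)
    {
        // Would issue the DAV DELETE on calendar.remoteId here.
        qDebug() << "removing calendar" << calendar.remoteId;
    }

    void replayCalendarChange(const Calendar &calendar, Operation op)
    {
        switch (op) {
        case Operation::Removal:
            removeCalendar(calendar);
            break;
        case Operation::Creation:
        case Operation::Modification:
            // No DavCollectionCreateJob exists, and DavCollectionModifyJob
            // only covers properties we don't use, so these stay unimplemented.
            qWarning() << "calendar creation/modification not supported";
            break;
        }
    }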
|
Doesn't work with CATCH_ERRORS=ON
This reverts commit 2bb2a10f5c4010d168b3d26e9937cf26365a0d0c.
|
The password (or any other secret) is now cached in the client process
(in-memory only) and delivered to the resource via a command.
The resource avoids doing any operations against the source until the
secret is available.
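A minimal sketch of the client-side cache; SecretStore and its methods are illustrative names, not Sink's actual API:

    #include <QByteArray>
    #include <QHash>
    #include <QString>

    class SecretStore
    {
    public:
        void setSecret(const QString &resourceId, const QByteArray &secret)
        {
            // In-memory only; the secret is never written to disk. The real
            // code would additionally send it to the resource via a command.
            mCache.insert(resourceId, secret);
        }

        bool isAvailable(const QString &resourceId) const
        {
            // The resource defers all operations against the source until
            // this returns true.
            return mCache.contains(resourceId);
        }

    private:
        QHash<QString, QByteArray> mCache;
    };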
|
We don't need an update for every mail if we download 50k mails. We just
need enough to animate a progress bar.
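As a sketch of the idea (names hypothetical), emitting on a fixed interval keeps the notification count roughly constant regardless of the total:

    #include <algorithm>
    #include <cstdio>

    // Illustrative stand-ins for the real download and notification code.
    void downloadMail(int) {}
    void emitProgress(int done, int total) { std::printf("%d/%d\n", done, total); }

    // Emit ~100 progress notifications over the whole run instead of one per
    // mail; with 50k mails that's 100 updates rather than 50'000.
    void downloadAllMails(int total)
    {
        const int interval = std::max(1, total / 100);
        for (int i = 1; i <= total; ++i) {
            downloadMail(i);
            if (i % interval == 0 || i == total) {
                emitProgress(i, total);
            }
        }
    }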
|
kimap should really have better error codes...
|
This allows the aggregation to ignore resources where we don't have any
status information yet, so the account doesn't always end up being
offline.
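A minimal sketch of such an aggregation, with illustrative enum values (the relevance ordering matches the one described in another commit below):

    #include <QVector>

    // Status values ordered by relevance; NoStatus means "no information yet".
    enum Status { NoStatus, Offline, Connected, Busy, Error };

    Status aggregatedStatus(const QVector<Status> &resources)
    {
        Status result = NoStatus;
        for (Status status : resources) {
            if (status == NoStatus) {
                continue; // no information yet: don't drag the account offline
            }
            if (status > result) {
                result = status;
            }
        }
        return result;
    }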
|
handling and are appropriately dealt with.
|
Host not found is pretty much the same as offline for our purposes.
|
Such as progress 0 out of 0 (happens on sync of an already-synced folder).
|
Only ever enter the error state on non-recoverable errors.
Otherwise:
* Busy state while busy, then go back to online/offline/error.
* If we failed to connect during replay/sync, we assume we're offline.
* If we could connect but failed to login, we have a known error
condition.
* If we succeeded in replaying/syncing something, we are apparently online.
At the core we have the problem that we have no way of telling whether
we can connect to the server until we actually try (network availability
is not enough: VPNs, firewalls, ...). Furthermore, the status always
reflects the latest attempt, so even if we were in an error state, once
we retry we leave the error state and either end up back in it or not.
When aggregating states we similarly have to adjust the state to the
most relevant among the resources. The states are ordered like this:
* Error
* Busy
* Connected
* Offline
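A compact sketch of the per-attempt status derivation described above; enum and function names are illustrative, not Sink's actual code:

    enum Status { Offline, Connected, Busy, Error };

    // Possible outcomes of a replay/sync attempt (illustrative).
    enum class Outcome { ConnectFailed, LoginFailed, ReplayedOrSynced };

    // The status always reflects the latest attempt: retrying leaves any
    // previous error state and re-derives the status from the new outcome.
    Status statusAfterAttempt(Outcome outcome)
    {
        switch (outcome) {
        case Outcome::ConnectFailed:
            return Offline;   // can't reach the server: assume we're offline
        case Outcome::LoginFailed:
            return Error;     // we could connect, so this is a known error
        case Outcome::ReplayedOrSynced:
            return Connected; // something went through: apparently online
        }
        return Offline;
    }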
|
...and improved debug output slightly.
|
against the source.
We used to replay no changes and then claim the resource was online.
|
syncrequest
That way we can emit the notifications in the synchronizer, and it
keeps working even if the login already fails (in which case the
synchronizing code would never be executed).
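A rough sketch of that arrangement, with hypothetical names: wrapping the whole request in the synchronizer means notifications fire even when login fails:

    struct SyncRequest {};

    enum NotificationType { SyncStarted, SyncDone, SyncFailed };

    // Illustrative stand-ins for the real notification/login machinery.
    void emitNotification(NotificationType) {}
    bool login() { return false; } // pretend the login fails
    void synchronize(const SyncRequest &) {}

    void processSyncRequest(const SyncRequest &request)
    {
        emitNotification(SyncStarted);
        if (!login()) {
            // The synchronizing code below never runs, but the notification
            // flow still completes with a failure.
            emitNotification(SyncFailed);
            return;
        }
        synchronize(request);
        emitNotification(SyncDone);
    }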
|
This will allow us to fold things like progress and sync status directly
into the model. Use cases are mail download progress and folder sync
progress.
Ideally we would also solve the resource/account state through this.
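A rough sketch of what folding progress into a model could look like; the class, roles, and members are illustrative, not Sink's actual model:

    #include <QAbstractListModel>
    #include <QVariant>
    #include <QVector>

    // Expose per-folder sync progress as an extra role so a view can render
    // a progress bar directly from the model.
    class FolderListModel : public QAbstractListModel
    {
    public:
        enum Roles { NameRole = Qt::UserRole, ProgressRole, SyncStatusRole };

        int rowCount(const QModelIndex &parent = QModelIndex()) const override
        {
            return parent.isValid() ? 0 : mProgress.size();
        }

        QVariant data(const QModelIndex &index, int role) const override
        {
            if (role == ProgressRole) {
                return mProgress.value(index.row()); // 0.0 .. 1.0
            }
            return {};
        }

    private:
        QVector<qreal> mProgress;
    };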
|
We use this frequently when loading conversations, so this results in a
significant performance improvement.
|
If one sync task depends on the previous sync task, we want to flush in
between, so we can query the results of the previous sync request
locally.
If we detect such a dependency, we temporarily halt all processing of
synchronization requests until the flush completes, and then continue
processing.
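A sketch of that queueing behaviour, with hypothetical names (SyncQueue, requestFlush):

    #include <QList>
    #include <functional>

    struct SyncRequest {
        bool dependsOnPreviousRequest = false;
    };

    // A dependent request stalls the queue until a flush has completed.
    class SyncQueue
    {
    public:
        void processNext()
        {
            if (mRequests.isEmpty() || mWaitingForFlush) {
                return;
            }
            const SyncRequest request = mRequests.takeFirst();
            if (request.dependsOnPreviousRequest) {
                mWaitingForFlush = true;
                requestFlush([this, request] {
                    // The previous results are now queryable locally.
                    mWaitingForFlush = false;
                    execute(request);
                    processNext();
                });
                return;
            }
            execute(request);
            processNext();
        }

    private:
        void execute(const SyncRequest &) { /* run the synchronization */ }
        void requestFlush(std::function<void()> onComplete)
        {
            // Real code would flush the processing pipeline before this.
            onComplete();
        }
        QList<SyncRequest> mRequests;
        bool mWaitingForFlush = false;
    };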
|
A single request will replay until the latest revision.