Commit messages

Once the transaction is done or some modification is executed,
that memory is no longer valid. So we always copy.
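
A minimal sketch of the pattern, assuming the lmdb C API (readCopy is a
hypothetical helper): the pointer returned by mdb_get points into the
memory map and is only valid until the transaction ends or the database
is modified, so the value is copied into an owned buffer.

    #include <lmdb.h>
    #include <string>

    // Copy a value out of the transaction-owned memory. The MDB_val
    // filled by mdb_get points into lmdb's memory map and dangles once
    // the transaction is done or the db is modified.
    std::string readCopy(MDB_txn *txn, MDB_dbi dbi, const std::string &key)
    {
        MDB_val k{key.size(), const_cast<char *>(key.data())};
        MDB_val v{};
        if (mdb_get(txn, dbi, &k, &v) != MDB_SUCCESS) {
            return {};
        }
        // The copy keeps the data valid beyond the transaction.
        return std::string(static_cast<char *>(v.mv_data), v.mv_size);
    }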
lmdb and sink deal badly with e.g. a string containing a null byte in the
middle as a db name, so we now protect better against it.
This is an actual problem we triggered: https://phabricator.kde.org/T5880
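
A minimal sketch of such a guard, with a hypothetical isValidDbName
helper; names with embedded null bytes (and empty names) are rejected
before they ever reach lmdb.

    #include <string>

    // Hypothetical guard: reject db names that lmdb and sink cannot
    // handle, e.g. a name with a null byte in the middle.
    bool isValidDbName(const std::string &name)
    {
        return !name.empty() && name.find('\0') == std::string::npos;
    }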
Fix build: add a default handler in the switch.
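
A minimal illustration of the kind of fix described, with hypothetical
names; with -Werror, a switch over an enum that neither covers all
values nor has a default can break the build.

    enum class Status { Ok, Busy, Error };

    const char *statusName(Status s)
    {
        switch (s) {
        case Status::Ok:
            return "ok";
        case Status::Busy:
            return "busy";
        case Status::Error:
            return "error";
        default:
            // The default handler keeps the build working when new
            // enum values appear before every switch is updated.
            return "unknown";
        }
    }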
This is where this really belongs; only the indexing is part of storage.
This is necessary so preprocessors can move entities as well.
...message.
Otherwise the processor might think it's done before it actually is.
Only ever enter the error state on non-recoverable errors.
Otherwise:
* Busy state while busy, then go back to online/offline/error.
* If we failed to connect during replay/sync we assume we're offline.
* If we failed to login but could connect we have a known error
condition.
* If we succeeded in replaying/syncing something we are apparently online.

At the core we have the problem that we have no way of telling whether
we can connect to the server until we actually try (network availability
is not enough: VPNs, firewalls, ...). Further, the status always reflects
the latest attempt, so even if we were in an error state, once we retry we
leave the error state and either end up back in the error state or not.
When aggregating states we have to similarly adjust the state to the most
relevant among the resources (see the sketch below). The states are
ordered like this:
* Error
* Busy
* Connected
* Offline
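
A minimal sketch of such an aggregation, with hypothetical names (the
actual status enum may differ); because the states are ordered by
relevance, aggregating across resources reduces to taking the maximum.

    #include <algorithm>
    #include <vector>

    // Ordered by relevance: an error anywhere outranks busy, busy
    // outranks connected, and offline is only reported if nothing
    // better is known.
    enum class Status { Offline = 0, Connected = 1, Busy = 2, Error = 3 };

    Status aggregate(const std::vector<Status> &resourceStates)
    {
        auto result = Status::Offline;
        for (const auto s : resourceStates) {
            result = std::max(result, s);
        }
        return result;
    }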
We already see the resource exiting.
...and improved debug output slightly.
We really need to guard against this in kasync...
...we selectively apply the filter in kube instead.
...against the source.
We used to replay no changes and then claim the resource was online.
...syncrequest.
That way we can do the notification emitting in the synchronizer, and it
keeps working even if the login already fails (so the synchronizing code
would never be executed).
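
A minimal sketch of the idea, with entirely hypothetical types: the
synchronizer itself wraps the sync request in notifications, so a failed
login still produces an error notification even though the synchronizing
code never runs.

    #include <functional>
    #include <string>

    struct Notification {
        std::string type;
        std::string message;
    };

    struct Synchronizer {
        std::function<void(const Notification &)> emitNotification;

        void processSyncRequest(const std::function<bool()> &login,
                                const std::function<void()> &synchronize)
        {
            emitNotification({"status", "sync started"});
            if (!login()) {
                // Emitted here, not in the sync code, so it also
                // fires when the login already fails.
                emitNotification({"error", "login failed"});
                return;
            }
            synchronize();
            emitNotification({"status", "sync done"});
        }
    };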
With this it becomes difficult to test notifications, and notifications
may contain more interesting information, so we don't want to drop them
too early.
Otherwise we end up trying to start the resource from multiple places
in notifiertest.
This will allow us to fold things like progress and sync status directly
into the model. Use cases are mail download progress and folder sync
progress.
Ideally we would also solve the resource/account state through this.
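
A minimal sketch of folding progress into a model, assuming Qt and
hypothetical role and class names; each folder row exposes its sync
progress through a custom role, so views update via dataChanged instead
of a separate side channel.

    #include <QAbstractListModel>
    #include <QVector>

    class FolderModel : public QAbstractListModel
    {
    public:
        enum Roles { NameRole = Qt::UserRole + 1, ProgressRole };

        int rowCount(const QModelIndex &parent = {}) const override
        {
            return parent.isValid() ? 0 : mProgress.size();
        }

        QVariant data(const QModelIndex &index, int role) const override
        {
            if (index.isValid() && role == ProgressRole) {
                return mProgress.at(index.row());
            }
            return {};
        }

        // Called by the sync machinery; the view just sees a plain
        // model update.
        void setProgress(int row, double progress)
        {
            mProgress[row] = progress;
            const auto idx = index(row);
            emit dataChanged(idx, idx, {ProgressRole});
        }

    private:
        QVector<double> mProgress{0.0, 0.0};
    };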
Using the offset to skip over old results required recalculating them,
and in some cases resulted in results being added multiple times to the
model.
By just maintaining the state we can apply the offset directly to the
base-set, and maintain the state in reduction etc., which is necessary to
continue streaming results while making sure we don't report anything
twice.
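
A minimal sketch of the approach, with hypothetical names: instead of
re-applying an offset to recomputed results, the query state remembers
which keys were already reported, so results can keep streaming without
being reported twice.

    #include <set>
    #include <string>
    #include <vector>

    struct QueryState {
        std::set<std::string> reported;

        // Returns only results that have not been reported yet.
        std::vector<std::string> stream(const std::vector<std::string> &results)
        {
            std::vector<std::string> fresh;
            for (const auto &r : results) {
                if (reported.insert(r).second) {
                    fresh.push_back(r);
                }
            }
            return fresh;
        }
    };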
Having them separated is rather pointless (since we need one for every
type, and all the types together form the interface of sink), and it
caused quite a bit of friction when adding new types. This will also make
it easier to change things for all types.
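
A minimal sketch of the consolidation, with hypothetical names: a single
registry covering all types replaces one hand-written unit per type, so
adding a type becomes one registration line and cross-type changes happen
in one place.

    #include <functional>
    #include <map>
    #include <memory>
    #include <string>

    struct TypeInterface {
        virtual ~TypeInterface() = default;
    };

    using Factory = std::function<std::unique_ptr<TypeInterface>()>;

    std::map<std::string, Factory> &typeRegistry()
    {
        static std::map<std::string, Factory> registry;
        return registry;
    }

    // One line per type instead of one boilerplate file per type.
    template <typename T>
    void registerType(const std::string &name)
    {
        typeRegistry()[name] = [] { return std::make_unique<T>(); };
    }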