| Commit message | Author | Age |
|
fix build, add a default handler in the switch
|
This is where this really belongs; only the indexing is part of storage.
This is necessary so that preprocessors can move entities as well.
|
message.
Otherwise the processor might think it's done before it actually is.
|
Only ever enter the error state on non-recoverable errors.
Otherwise:
* Busy state while busy, then go back to online/offline/error.
* If we failed to connect during replay/sync, we assume we're offline.
* If we could connect but failed to log in, we have a known error
condition.
* If we managed to replay/sync something, we are apparently online.
At the core we have the problem that we have no way of telling whether
we can connect to the server until we actually try (the network being up
is not enough: VPNs, firewalls, ...). Further, the status always reflects
the latest attempt, so even if we were in an error state, once we retry
we leave the error state and either end up back in it or not.
When aggregating states we similarly have to pick the most relevant
state among the resources. The states are ordered like this (see the
sketch below):
* Error
* Busy
* Connected
* Offline
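A minimal sketch of the ordering and aggregation described above, assuming a plain enum ordered from least to most relevant; the names are illustrative and not Sink's actual API.

    #include <algorithm>
    #include <vector>

    // Statuses ordered from least to most relevant so that the most
    // relevant one wins when aggregating (hypothetical names).
    enum class Status {
        Offline = 0,
        Connected,
        Busy,
        Error  // only entered on non-recoverable errors
    };

    // Aggregate per-resource statuses into one account status by
    // picking the most relevant one according to the ordering above.
    Status aggregate(const std::vector<Status> &resourceStatuses)
    {
        Status result = Status::Offline;
        for (const auto status : resourceStatuses) {
            result = std::max(result, status);
        }
        return result;
    }

With this ordering a single resource in a non-recoverable error state dominates the aggregate, while a busy resource outranks ones that are merely connected or offline.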
|
We already see the resource exiting.
|
...and improved debug output slightly.
|
We really need to guard against this in kasync...
|
...we selectively apply the filter in kube instead.
|
against the source.
We used to replay no changes and then claim the resource was online.
|
syncrequest
That way we can emit the notification in the synchronizer, and it keeps
working even if the login already fails (in which case the synchronizing
code would never be executed).
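A rough illustration of the point, with hypothetical names and plain callbacks rather than Sink's or KAsync's actual API: the completion notification is tied to the sync request and emitted by the synchronizer itself, so it still fires when the login fails and the synchronizing code never runs.

    #include <functional>
    #include <string>

    struct SyncRequest {
        std::string requestId;  // hypothetical identifier for the request
    };

    void processSyncRequest(const SyncRequest &request,
                            const std::function<bool()> &login,
                            const std::function<bool()> &synchronize,
                            const std::function<void(const std::string &, bool)> &notify)
    {
        bool success = false;
        if (login()) {
            success = synchronize();
        }
        // Emitted here, per sync request, rather than inside synchronize(),
        // so a failed login still produces a completion notification.
        notify(request.requestId, success);
    }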
|
They don't get through to the resource consistently, so we have to
ignore them for now to make the test reliable.
|
With this it becomes difficult to test notifications, and notifications
may contain more interesting information, so we don't want to drop them
too early.
|
We currently use the folder as the main db folder, meaning we remove
the folder when removing the db. This results in the lockfile vanishing
with the db, which then confuses QLockFile (resulting in a lot of
warnings). We may want to start moving everything into a
resource-instance folder, but then we have to do it properly across the
board.
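A small sketch of a layout that would avoid the problem, assuming hypothetical paths: the lock file lives in the resource-instance folder while only the db subfolder is removed, so the file backing QLockFile never vanishes underneath it.

    #include <QDir>
    #include <QLockFile>
    #include <QString>

    bool removeDatabase(const QString &resourceInstanceDir)
    {
        // The lock file sits next to the db folder, not inside it.
        QLockFile lock(resourceInstanceDir + QStringLiteral("/db.lock"));
        if (!lock.tryLock(1000)) {
            return false;  // someone else is still using the db
        }
        // Only the db subfolder is removed; the lock file survives, so
        // QLockFile doesn't warn about its file disappearing.
        QDir dbDir(resourceInstanceDir + QStringLiteral("/db"));
        return dbDir.removeRecursively();
    }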
|
Otherwise we end up trying to start the resource from multiple places
in notifiertest.
|
This will allow us to fold things like progress and sync status directly
into the model. Use cases are mail download progress and folder sync
progress.
Ideally we would also solve the resource/account state through this.
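A minimal sketch of what folding status and progress into a model could look like, using hypothetical role and type names rather than Sink's actual model API.

    #include <QAbstractListModel>
    #include <QString>
    #include <QVariant>
    #include <QVector>

    class FolderModel : public QAbstractListModel
    {
    public:
        enum Roles {
            NameRole = Qt::UserRole + 1,
            SyncStatusRole,  // e.g. busy/connected/offline/error per folder
            ProgressRole     // e.g. folder sync or mail download progress
        };

        struct Folder {
            QString name;
            int syncStatus = 0;
            int progress = 0;
        };

        int rowCount(const QModelIndex &parent = QModelIndex()) const override
        {
            return parent.isValid() ? 0 : mFolders.size();
        }

        QVariant data(const QModelIndex &index, int role) const override
        {
            if (!index.isValid() || index.row() >= mFolders.size()) {
                return {};
            }
            const auto &folder = mFolders.at(index.row());
            switch (role) {
            case NameRole:
                return folder.name;
            case SyncStatusRole:
                return folder.syncStatus;
            case ProgressRole:
                return folder.progress;
            default:
                return {};
            }
        }

        // A progress notification updates the row in place instead of
        // being tracked in a separate side channel.
        void setProgress(int row, int progress)
        {
            if (row < 0 || row >= mFolders.size()) {
                return;
            }
            mFolders[row].progress = progress;
            const auto idx = index(row);
            emit dataChanged(idx, idx, {ProgressRole});
        }

    private:
        QVector<Folder> mFolders;
    };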
|
Using the offset to skip over old results required recalculating them,
and in some cases resulted in results being added to the model multiple
times.
By just maintaining the state we can apply the offset directly to the
base set, and maintain the state in the reduction etc., which is
necessary to continue streaming results while making sure we don't
report anything twice.
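A rough sketch of the idea with hypothetical types, not Sink's actual query code: the offset is consumed once against the base set, and already-reported keys are tracked so that continued streaming never adds the same result twice.

    #include <QByteArray>
    #include <QList>
    #include <QSet>

    class IncrementalResults
    {
    public:
        explicit IncrementalResults(int offset) : mToSkip(offset) {}

        // Feed a batch of keys from the base set and get back only the
        // results that should newly appear in the model.
        QList<QByteArray> process(const QList<QByteArray> &batch)
        {
            QList<QByteArray> newResults;
            for (const auto &key : batch) {
                if (mSeen.contains(key)) {
                    continue;  // maintained state: never report a key twice
                }
                mSeen.insert(key);
                if (mToSkip > 0) {
                    --mToSkip;  // offset applied directly to the base set
                    continue;
                }
                newResults.append(key);
            }
            return newResults;
        }

    private:
        int mToSkip;
        QSet<QByteArray> mSeen;
    };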
|
Having them separated is rather pointless (we need one for every type,
and all the types together form the interface of Sink), and it caused
quite a bit of friction when adding new types. This will also make it
easier to change things for all types.
|