Commit messages

Only ever enter the error state on non-recoverable errors.
Otherwise:
* Busy state while busy, then go back to online/offline/error.
* If connecting fails during replay/sync, we assume we are offline.
* If we could connect but the login failed, we have a known error
condition.
* If we managed to replay/sync something, we are apparently online.
At the core the problem is that we have no way of telling whether we can
connect to the server until we actually try (network availability is not
enough: VPNs, firewalls, ...). Furthermore, the status always reflects the
latest attempt, so even if we were in an error state, once we retry we leave
the error state and either end up back in it or not.
When aggregating states we similarly have to pick the most relevant state
among the resources. The states are ordered like this:
* Error
* Busy
* Connected
* Offline
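
A minimal sketch of the aggregation described above, with an invented enum and
function (not taken from the actual code):

```cpp
#include <algorithm>
#include <vector>

// Invented state enum, ordered by relevance: a higher value wins when
// aggregating (Error > Busy > Connected > Offline).
enum class State { Offline = 0, Connected = 1, Busy = 2, Error = 3 };

// Aggregate the per-resource states into a single account state by
// picking the most relevant one.
State aggregate(const std::vector<State> &resourceStates)
{
    State result = State::Offline;
    for (const auto state : resourceStates) {
        result = std::max(result, state);
    }
    return result;
}
```

With this ordering a single resource in the error state dominates the
aggregate, while a busy resource only overrides connected/offline.
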
We already see the resource exiting.

...and improved debug output slightly.

We really need to guard against this in kasync...

...we selectively apply the filter in kube instead.

against the source.
We used to replay no changes and then claim the resource was online.

syncrequest
That way we can emit the notifications in the synchronizer, and it keeps
working even if the login already fails (so the synchronizing code would
never be executed).
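
Read loosely, the point is that the notifications are emitted around the whole
sync request rather than from within the sync code itself, so a failed login is
still reported. A rough sketch of that shape, with invented names (not the
actual synchronizer API):

```cpp
#include <functional>
#include <string>

// Illustrative types only; not the real notification interface.
struct Notification { std::string type; std::string message; };
using Notifier = std::function<void(const Notification &)>;

// Wrapping the whole request means we still emit an error notification
// if the login fails and the sync body is never executed.
void executeSyncRequest(const Notifier &notify,
                        const std::function<bool()> &login,
                        const std::function<void()> &sync)
{
    notify({"status", "busy"});
    if (!login()) {
        notify({"error", "login failed"});
        return;
    }
    sync();
    notify({"status", "done"});
}
```
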
With this it becomes difficult to test notifications, and notifications
may contain more interesting information, so we don't want to drop them
too early.

Otherwise we end up trying to start the resource from multiple places
in notifiertest.

This will allow us to fold things like progress and sync status directly
into the model. Use cases are mail download progress and folder sync
progress.
Ideally we would also solve the resource/account state through this.
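
For illustration, folding a progress value into an item model could look
roughly like this Qt sketch (the model, roles, and fields are invented, not
taken from the actual code):

```cpp
#include <QAbstractListModel>
#include <QString>
#include <QVariant>
#include <QVector>

// Hypothetical folder list model exposing sync progress as a role, so a
// view can render a progress bar next to each folder.
class FolderModel : public QAbstractListModel
{
    Q_OBJECT
public:
    enum Roles { NameRole = Qt::UserRole + 1, ProgressRole };
    struct Folder { QString name; int progress = 0; };  // progress in percent

    int rowCount(const QModelIndex &parent = QModelIndex()) const override
    {
        return parent.isValid() ? 0 : mFolders.size();
    }

    QVariant data(const QModelIndex &index, int role) const override
    {
        if (!index.isValid() || index.row() >= mFolders.size()) {
            return {};
        }
        const auto &folder = mFolders.at(index.row());
        switch (role) {
        case NameRole: return folder.name;
        case ProgressRole: return folder.progress;
        default: return {};
        }
    }

    // Called by whatever tracks the sync; the view updates via dataChanged.
    void setProgress(int row, int progress)
    {
        mFolders[row].progress = progress;
        const auto idx = index(row);
        emit dataChanged(idx, idx, {ProgressRole});
    }

private:
    QVector<Folder> mFolders;
};
```
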
Using the offset to skip over old results required recalculating them,
and in some cases resulted in results being added to the model multiple
times.
By maintaining the state instead, we can apply the offset directly to the
base set and keep that state across reduction etc., which is necessary to
continue streaming results while making sure we don't report anything
twice.
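
A simplified sketch of the idea: keep track of what has already been reported
and apply the offset once to the base set, instead of re-running the query with
an offset (types and names invented for the example):

```cpp
#include <algorithm>
#include <cstddef>
#include <set>
#include <string>
#include <vector>

// Invented result type for the example.
struct Result { std::string id; };

// Maintains the streaming state so follow-up batches can be emitted
// without recomputing earlier results or reporting duplicates.
class QueryState
{
public:
    explicit QueryState(std::size_t offset) : mOffset(offset) {}

    std::vector<Result> process(std::vector<Result> batch)
    {
        // The offset is applied once, directly to the base set.
        if (!mOffsetApplied) {
            const auto skip = std::min(mOffset, batch.size());
            batch.erase(batch.begin(),
                        batch.begin() + static_cast<std::ptrdiff_t>(skip));
            mOffsetApplied = true;
        }
        std::vector<Result> newResults;
        for (auto &result : batch) {
            // Only report results we haven't reported before.
            if (mReported.insert(result.id).second) {
                newResults.push_back(std::move(result));
            }
        }
        return newResults;
    }

private:
    std::size_t mOffset;
    bool mOffsetApplied = false;
    std::set<std::string> mReported;
};
```
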
Having them separated is rather pointless (since we need one for every
type, and all types together form the interface of sink), and it caused
quite a bit of friction when adding new types. This will also make it
easier to change things for all types.

Set the flags on new mail as well.

We already set the resource id for the resource process,
so adding it again really adds nothing at all.

...by setting dummy values for properties we do not actually have set in
the config.
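
As an illustration, filling in placeholder defaults for configuration keys that
were never set could look like this (the keys are invented):

```cpp
#include <map>
#include <string>

// Ensure every expected key has at least a dummy value, so code reading
// the configuration does not have to special-case missing entries.
void fillMissingConfig(std::map<std::string, std::string> &config)
{
    // Invented keys, purely for illustration.
    const std::map<std::string, std::string> defaults = {
        {"server", ""},
        {"username", ""},
        {"port", "0"},
    };
    for (const auto &[key, value] : defaults) {
        config.try_emplace(key, value);  // only inserts if the key is absent
    }
}
```
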
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| | |
We use this frequently when loading conversations, so this results in a
significant preformance improvement.
|
| |
| |
| |
| |
| |
| | |
When trying to reply to a mail from kube we ran into a deadlock.
The initial result callback is called from the main thread, and that can
thus directly lead to destruction of the emitter.
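
A common Qt pattern to avoid re-entering an object that a callback may destroy
is to deliver the result through the event loop instead of invoking it
directly; a generic sketch of that pattern (not the actual fix):

```cpp
#include <QMetaObject>
#include <QObject>
#include <functional>

// Deliver a result callback via the context object's event loop. The
// emitter is no longer on the call stack when the callback runs, so the
// callback may safely tear it down.
void deliverResult(QObject *context, std::function<void()> callback)
{
    QMetaObject::invokeMethod(context, std::move(callback),
                              Qt::QueuedConnection);
}
```

The callback then runs on the next iteration of the event loop of the thread
the context object lives in.
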
Otherwise, if the source resource manages to clean up the revision before
the target resource gets to process the new entity, the blob file is
already gone.
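
One way to avoid such a race, sketched here purely as an illustration (the
actual handling may differ): copy the blob into storage owned by the target
before the entity is handed over, so revision cleanup on the source side cannot
remove the only copy.

```cpp
#include <filesystem>

namespace fs = std::filesystem;

// Copy the blob into a directory owned by the target resource so that
// cleaning up the source revision cannot remove the only copy. The paths
// are invented for the example.
fs::path preserveBlob(const fs::path &sourceBlob, const fs::path &targetDir)
{
    fs::create_directories(targetDir);
    const auto targetBlob = targetDir / sourceBlob.filename();
    fs::copy_file(sourceBlob, targetBlob,
                  fs::copy_options::overwrite_existing);
    return targetBlob;
}
```
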
|
| | |
|