| Commit message | Author | Age |
|
This fixes the performance regressions: we are now roughly back at
pre-Identifier performance (but not any better either).
|
Summary:
Depends on D14289
- Fixes the `sinksh inspect …` command
- Introduces `isValid`, `isValidInternal` and `isValidDisplay` static functions in Key, Identifier and Revision
I still have to do a more extensive search for bugs induced in other commands.
Reviewers: cmollekopf
Reviewed By: cmollekopf
Tags: #sink
Differential Revision: https://phabricator.kde.org/D14404
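The split into `isValid`, `isValidInternal` and `isValidDisplay` can be pictured with a minimal sketch (all names, sizes and formats below are assumptions for illustration, not Sink's actual implementation): an identifier that exists both as raw bytes internally and as a human-readable string in `sinksh inspect` output needs a separate validity check per representation, plus a combined one.

```cpp
#include <cctype>
#include <cstddef>
#include <string>

// Hypothetical sketch: an Identifier with a compact internal form (raw
// UUID bytes) and a display form (hex string with dashes). Sizes and
// layout are illustrative assumptions.
struct Identifier {
    static constexpr std::size_t internalSize = 16; // raw UUID bytes
    static constexpr std::size_t displaySize = 36;  // "xxxxxxxx-xxxx-..."

    // Valid if the string has the length of the raw byte form.
    static bool isValidInternal(const std::string &bytes) {
        return bytes.size() == internalSize;
    }

    // Valid if it looks like a textual UUID: right length, dashes in
    // the right places, hex digits everywhere else.
    static bool isValidDisplay(const std::string &text) {
        if (text.size() != displaySize) {
            return false;
        }
        for (std::size_t i = 0; i < text.size(); ++i) {
            const bool dashPos = (i == 8 || i == 13 || i == 18 || i == 23);
            if (dashPos) {
                if (text[i] != '-') return false;
            } else if (!std::isxdigit(static_cast<unsigned char>(text[i]))) {
                return false;
            }
        }
        return true;
    }

    // Either representation is acceptable.
    static bool isValid(const std::string &s) {
        return isValidInternal(s) || isValidDisplay(s);
    }
};
```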
|
Summary:
Depends on D14099
Notes:
- Tests pass without many modifications outside of resultset.cpp/.h
- `mGenerator` doesn't seem to be used?
Benchmarks
=========
Run benchmarks:
| Develop | D14099 | This patch |
| ---------------------------------- | ---------------------------------- | ---------------------------------- |
| Current Rss usage [kb]: 40700 | Current Rss usage [kb]: 38564 | Current Rss usage [kb]: 39112 |
| Peak Rss usage [kb]: 40700 | Peak Rss usage [kb]: 38564 | Peak Rss usage [kb]: 39112 |
| Rss growth [kb]: 15920 | Rss growth [kb]: 13352 | Rss growth [kb]: 13432 |
| Rss growth per entity [byte]: 3260 | Rss growth per entity [byte]: 2734 | Rss growth per entity [byte]: 2750 |
| Rss without db [kb]: 29736 | Rss without db [kb]: 29248 | Rss without db [kb]: 30100 |
| Percentage peak rss error: 0 | Percentage peak rss error: 0 | Percentage peak rss error: 0 |
| On disk [kb]: 10788 | On disk [kb]: 9140 | On disk [kb]: 8836 |
| Buffer size total [kb]: 898 | Buffer size total [kb]: 898 | Buffer size total [kb]: 898 |
| Write amplification: 12.0075 | Write amplification: 10.1732 | Write amplification: 9.83485 |
Test Disk Usage:
| Develop | D14099 | This patch |
| ----------------------------------- | ----------------------------------- | ----------------------------------- |
| Free pages: 412 | Free pages: 309 | Free pages: 312 |
| Total pages: 760 | Total pages: 599 | Total pages: 603 |
| Used size: 1425408 | Used size: 1187840 | Used size: 1191936 |
| Calculated key + value size: 856932 | Calculated key + value size: 702866 | Calculated key + value size: 702866 |
| Calculated total db sizes: 970752 | Calculated total db sizes: 954368 | Calculated total db sizes: 933888 |
| Main store on disk: 3112960 | Main store on disk: 2453504 | Main store on disk: 2469888 |
| Total on disk: 3293184 | Total on disk: 2633728 | Total on disk: 2650112 |
| Used size amplification: 1.66339 | Used size amplification: 1.68999 | Used size amplification: 1.69582 |
| Write amplification: 3.63268 | Write amplification: 3.49071 | Write amplification: 3.51402 |
Reviewers: cmollekopf
Reviewed By: cmollekopf
Tags: #sink
Differential Revision: https://phabricator.kde.org/D14289
|
Reviewers: cmollekopf
Reviewed By: cmollekopf
Tags: #sink
Differential Revision: https://phabricator.kde.org/D14099
|
Summary:
- Only in TypeIndex, not in Index (since we might want to store something other than identifiers as values)
- We might want to do the same in the `SynchronizerStore` for localId ↔ remoteId indexes
Depends on D13735
Some quick benchmarks (against develop and D13735): {F6022279}
Reviewers: cmollekopf
Reviewed By: cmollekopf
Tags: #sink
Differential Revision: https://phabricator.kde.org/D13902
|
Summary:
- Use an object-oriented paradigm for Keys / Identifiers / Revisions
- "Compress" keys by using byte representation of Uuids
- Still some cleaning left to do
- Also run some benchmarks
- I'm questioning whether files other than entitystore (tests excluded) are allowed to access this API
Reviewers: cmollekopf
Reviewed By: cmollekopf
Tags: #sink
Differential Revision: https://phabricator.kde.org/D13735
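The key-"compression" idea can be sketched as follows (illustrative only, not Sink's actual code; with Qt one would typically obtain the byte form via `QUuid::toRfc4122()`): packing the 32 hex digits of a textual UUID into 16 raw bytes cuts the stored key size by more than half.

```cpp
#include <cctype>
#include <cstdint>
#include <string>
#include <vector>

// Illustrative sketch: strip the dashes from a textual UUID and pack
// each hex digit pair into one byte, shrinking a 36-character key to
// 16 bytes.
std::vector<std::uint8_t> uuidToBytes(const std::string &text) {
    std::string hex;
    for (char c : text) {
        if (std::isxdigit(static_cast<unsigned char>(c))) {
            hex.push_back(c);
        }
    }
    std::vector<std::uint8_t> bytes;
    for (std::size_t i = 0; i + 1 < hex.size(); i += 2) {
        bytes.push_back(static_cast<std::uint8_t>(
            std::stoi(hex.substr(i, 2), nullptr, 16)));
    }
    return bytes;
}
```

Shorter keys shrink every index entry that embeds an identifier, which is where the on-disk and write-amplification improvements in the benchmarks above would come from.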
|
We always run into that when starting a resource.
|
resourceaccess.
The problem was (as exercised by the last test in resourcecontroltest)
that in this scenario we would:
* trigger a synchronization that starts the resource, and then goes into
a loop trying to connect (KAsync::wait -> singleshot timer)
* trigger a shutdown that would probe for the socket, not find it, and
thus do nothing.
* exit the test function, which somehow stops qtimer processing, meaning
we are stuck in KAsync::wait.
For now this is fixed by simply not probing for the socket.
|
Could be triggered by running the composerviewtest in kube.
|
Some services don't have the inbox as part of the subscribed folders,
at least not by default, so we just always enable it.
|
The tests simply seem to be too slow right now, so let's bump this to
avoid flaky tests.
|
shouldn't be visible yet.
Was reproducible in the initial sync of the caldav resource.
|
This seems to happen sometimes (showed up in tests), and causes
operations to fail.
|
On Windows we seem to lack SSL support.
|
The case we ran into is the following:
* Fetching the full payload and marking all messages of a thread as read
happen simultaneously.
* The local modification to mark as read gets immediately overwritten
when the full payload arrives.
* Eventually the modification gets replayed to the server anyway (while
the reversal isn't, because it comes from the source), so on the next
sync the situation fixes itself.
To improve this we try to protect local modifications: properties that
have been modified since baseRevision (which currently isn't, but should
be, equal to the revision last replayed to the server) are not
overwritten. This conflict resolution strategy thus always prefers local
modifications. baseRevision is currently set to the maximum revision of
the store at the time the resource creates the modification.
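The strategy can be sketched roughly like this (all names here are hypothetical illustrations, not Sink's API): when merging an incoming remote payload, any property locally modified after baseRevision keeps its local value.

```cpp
#include <map>
#include <string>

// Hypothetical sketch of the conflict-resolution strategy: start from
// the incoming remote payload, then re-apply every property whose local
// modification is newer than baseRevision, so pending local changes
// (e.g. "mark as read") survive until they are replayed to the server.
using Properties = std::map<std::string, std::string>;

Properties resolveConflict(const Properties &remote,
                           const Properties &local,
                           const std::map<std::string, int> &changedAtRevision,
                           int baseRevision) {
    Properties result = remote; // remote payload is the baseline
    for (const auto &[name, value] : local) {
        auto it = changedAtRevision.find(name);
        // Modified after baseRevision => pending local change: keep it.
        if (it != changedAtRevision.end() && it->second > baseRevision) {
            result[name] = value;
        }
    }
    return result;
}
```

Always preferring the local value is deliberately simple: since pending modifications are eventually replayed to the server anyway, both sides converge on the next sync.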
|
If we got a fetchMore right between the revision being updated and the
incrementalQuery actually running, we ended up losing the update,
because the result provider was left with a too-recent revision after
the additional initial query.
|