Using the offset to skip over old results requires recalculating them,
and in some cases resulted in results being added to the model multiple
times.
By just maintaining the state we can apply the offset directly to the
base-set, and maintain the state in reduction etc., which is necessary to
continue streaming results while making sure we don't report anything
twice.
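
As an illustration of the idea only (hypothetical names, not the actual
DataStoreQuery code): the query remembers what it already reported and
applies the offset to the base-set directly, so nothing is recomputed or
re-emitted.

    #include <cstddef>
    #include <set>
    #include <string>
    #include <vector>

    // Hypothetical incremental query state: remembers the keys already
    // handed to the model, so streaming further results can never add
    // the same entity twice.
    struct QueryRun {
        std::set<std::string> reported;

        std::vector<std::string> next(const std::vector<std::string> &baseSet,
                                      std::size_t offset, std::size_t limit)
        {
            std::vector<std::string> batch;
            for (std::size_t i = offset; i < baseSet.size() && batch.size() < limit; ++i) {
                if (reported.insert(baseSet[i]).second) // skip already-reported keys
                    batch.push_back(baseSet[i]);
            }
            return batch;
        }
    };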
|
If notifications come in faster than we can process them, we'd run into
the assert.
|
Some filters need to maintain state between runs in order to be able to
emit only what has changed. This now also makes reduction work for live
queries.
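
For illustration (made-up names, not the real filter interface): a
reduction that keeps its aggregates between runs and emits only the
aggregates that actually changed.

    #include <functional>
    #include <map>
    #include <string>

    // Hypothetical stateful reduction filter: aggregates per key; state
    // survives between runs, which is what makes live queries work.
    class Reduction {
    public:
        void process(const std::string &key, int value,
                     const std::function<void(const std::string &, int)> &emit)
        {
            int &sum = mState[key];
            sum += value;
            if (value != 0)
                emit(key, sum); // emit only aggregates that changed
        }

    private:
        std::map<std::string, int> mState;
    };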
|
To have hierarchical debug output we have to pass around something at
run-time; there is no reasonable alternative. Log::Context provides the
identifier to do just that and largely replaces the debug component
idea.
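
Not the actual Log::Context implementation, just a sketch of the
concept: an identifier object that is passed down and extended at
run-time, yielding hierarchical output.

    #include <iostream>
    #include <string>

    // Conceptual stand-in for a run-time log context: the identifier
    // grows as it is handed down the call chain.
    struct Context {
        std::string id;
        Context subContext(const std::string &name) const
        {
            return Context{id.empty() ? name : id + "." + name};
        }
    };

    void log(const Context &ctx, const std::string &message)
    {
        std::cout << ctx.id << ": " << message << "\n";
    }

    int main()
    {
        Context resource{"maildirresource"};
        log(resource.subContext("query"), "fetched initial set");
        // prints "maildirresource.query: fetched initial set"
    }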
|
... so we can use that information in fetchMore.
|
This allows us to make sure that references are not taken out of
context (the resource).
Because we need to use the type-specific accessors more, we also ran into
the problem that we cannot "downcast" a reference while keeping change
recording working; for that we now have the cast<T>() operator.
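
A simplified sketch of the problem and the operator (the real
ApplicationDomainType is more involved than this):

    #include <memory>
    #include <set>
    #include <string>

    // Simplified: a generic domain object records which properties changed.
    struct DomainType {
        std::shared_ptr<std::set<std::string>> changedProperties =
            std::make_shared<std::set<std::string>>();

        void setProperty(const std::string &name) { changedProperties->insert(name); }

        // "Downcast" to a type-specific subtype without losing the change
        // record: the new object shares it instead of starting fresh.
        template <typename T>
        T cast() const
        {
            T specific;
            specific.changedProperties = changedProperties;
            return specific;
        }
    };

    struct Mail : DomainType {
        void setSubject() { setProperty("subject"); }
    };

    // Usage: auto mail = generic.cast<Mail>();
    // mail shares generic's change record, so recording keeps working.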
|
We have to access properties, so we need the mapper anyway, and the
ApplicationDomainType shouldn't be a large overhead either.
|
This is the initial refactoring to improve how we deal with the storage.
It does a couple of things:
* Rename Sink::Storage to Sink::Storage::DataStore to free up the
  Sink::Storage namespace.
* Introduce a Sink::ResourceContext to have a single object that can be
  passed around containing everything that is necessary to operate on a
  resource (see the sketch below). This is a lot better than the
  multiple separate parameters that we used to pass around all over the
  place, while still allowing for dependency injection for tests.
* Tie storage access together using the new EntityStore that directly
  works with ApplicationDomainTypes. This gives us a central place where
  main storage, indexes and buffer adaptors are tied together, which
  will also give us a place to implement external indexes, such as a
  fulltext index using xapian.
* Use ApplicationDomainTypes as the default way to pass around entities.
  Instead of using various ways to pass around entities (buffers,
  buffer adaptors, ApplicationDomainTypes), only use a single way.
  The old approach was confusing, and was only done as:
  * an optimization; it really shouldn't be necessary, and otherwise I'm
    sure we can find better ways to optimize ApplicationDomainType itself.
  * a way to account for entities that have multiple buffers, a concept
    that I no longer deem relevant.
While this commit does the bulk of the work to get there, the following
commits will refactor more stuff to get things back to normal.
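
Roughly the shape this aims for (member names are illustrative, not the
actual ones):

    #include <functional>
    #include <memory>
    #include <string>

    struct DataStore; // stand-ins for the real storage and index classes
    struct Index;

    // One object carrying everything needed to operate on a resource;
    // tests can inject fake factories instead of real storage.
    struct ResourceContext {
        std::string instanceIdentifier;
        std::string resourceType;
        std::function<std::shared_ptr<DataStore>()> storeFactory;
        std::function<std::shared_ptr<Index>()> indexFactory;
    };

    // Central place tying main storage, indexes and buffer adaptors
    // together, working directly with domain types.
    class EntityStore {
    public:
        explicit EntityStore(const ResourceContext &context)
            : mContext(context)
        {
        }

    private:
        ResourceContext mContext;
    };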
|
This allows us to match properties from a subquery.
Unfortunately this also means that DataStoreQuery needs access to all
type implementations to issue the subquery (potentially for another type).
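
Conceptually (hypothetical types; the real code goes through the type
implementations):

    #include <set>
    #include <string>
    #include <vector>

    struct Folder { std::string id; std::string name; };
    struct Mail { std::string id; std::string folderId; };

    // Match a property via a subquery: first resolve the subquery over
    // the other type (Folder), then filter the main set (Mail) by the
    // ids it produced. This is why the query needs access to all type
    // implementations.
    std::vector<Mail> mailsInFolderNamed(const std::vector<Mail> &mails,
                                         const std::vector<Folder> &folders,
                                         const std::string &name)
    {
        std::set<std::string> folderIds;
        for (const auto &folder : folders) // the subquery
            if (folder.name == name)
                folderIds.insert(folder.id);

        std::vector<Mail> result;
        for (const auto &mail : mails) // the main query
            if (folderIds.count(mail.folderId))
                result.push_back(mail);
        return result;
    }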
|
... the continuation.
This happens if Kube is used to look at a folder that is currently being
freshly synchronized, so we continuously get new results.
|
Instead of a single #define as the debug area, the new system allows for
an identifier for each debug message with the structure component.area.
The component is a dot-separated identifier of the runtime component,
such as the process or the plugin.
The area is the code component, and as such can be defined at
compile time.
The idea of this system is that it becomes possible to e.g. look at the
output of all messages in the query subsystem of a specific resource
(something that happens in the client process, but in the
resource-specific subcomponent).
The new macros are supposed to be less likely to clash with other names,
hence the new names.
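
Illustrative only (the names here are made up, not the actual macros): a
message is tagged component.area, where the component is supplied at
run-time and the area is fixed at compile time.

    #include <iostream>
    #include <string>

    static const char *DEBUG_AREA = "query"; // code component, compile time

    void debugMessage(const std::string &runtimeComponent, const std::string &msg)
    {
        // e.g. "imapresource.query: ..." - filterable per resource and
        // subsystem even though this runs in the client process.
        std::cout << runtimeComponent << "." << DEBUG_AREA << ": " << msg << "\n";
    }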
|
Otherwise we reload the same entities over and over.
|
Sometimes the wrong database is returned for a given name, probably
related to threading or incorrect usage of lmdb.
For the time being we recover from that by detecting it and retrying.
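
A sketch of the workaround (the declarations stand in for the real
storage layer):

    #include <string>

    struct Database { std::string name; };

    Database *openDatabase(const std::string &name); // the real lmdb lookup

    // Detect a mismatched handle and retry instead of using it.
    Database *openDatabaseChecked(const std::string &name)
    {
        for (int attempt = 0; attempt < 2; ++attempt) {
            Database *db = openDatabase(name);
            if (db && db->name == name)
                return db;
            // Wrong database returned for the name; drop it and try again.
        }
        return nullptr;
    }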
|
We used to count one too many.
|
clang-format -i */**{.cpp,.h}
|
We skip values we've already seen and only retrieve the new ones.
This currently only works properly for a non-live query, and we don't
give the model any feedback when no more data can be fetched.
However, it generally works and we get the desired effect.
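
In outline (hypothetical model code, not the actual implementation):

    #include <cstddef>
    #include <string>
    #include <vector>

    // Each fetchMore() call skips everything already seen and returns
    // only the next batch. Note there is no "at end" signal for the
    // model yet.
    class IncrementalFetcher {
    public:
        std::vector<std::string> fetchMore(const std::vector<std::string> &all,
                                           std::size_t batchSize)
        {
            std::vector<std::string> batch;
            for (std::size_t i = mSeen; i < all.size() && batch.size() < batchSize; ++i)
                batch.push_back(all[i]);
            mSeen += batch.size();
            return batch;
        }

    private:
        std::size_t mSeen = 0; // values already handed to the model
    };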
|
We still read all values, but only report the ones before the limit.
With this we query 1000 out of 50k values in 63ms.
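
Schematically (illustrative names, mirroring the description above): the
key scan still walks everything, but the expensive loading and reporting
is capped at the limit, which is where the speedup comes from.

    #include <cstddef>
    #include <functional>
    #include <string>
    #include <vector>

    // The key scan still reads every key, but entities are only loaded
    // and reported up to the limit.
    void queryWithLimit(const std::vector<std::string> &keys, std::size_t limit,
                        const std::function<void(const std::string &)> &loadAndReport)
    {
        std::size_t reported = 0;
        for (const auto &key : keys) {      // still reads all keys
            if (reported < limit) {
                loadAndReport(key);         // the expensive part is capped
                ++reported;
            }
        }
    }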