We have to access properties, so we need the mapper anyway, and the
ApplicationDomainType shouldn't be a large overhead either.

This is the initial refactoring to improve how we deal with the storage.
It does a couple of things:
* Rename Sink::Storage to Sink::Storage::DataStore to free up the
  Sink::Storage namespace.
* Introduce a Sink::ResourceContext to have a single object that can be
  passed around containing everything that is necessary to operate on a
  resource. This is a lot better than the multiple separate parameters
  that we used to pass around all over the place, while still allowing
  for dependency injection for tests (a minimal sketch of such a context
  follows below).
* Tie storage access together using the new EntityStore that directly
  works with ApplicationDomainTypes. This gives us a central place where
  main storage, indexes and buffer adaptors are tied together, which
  will also give us a place to implement external indexes, such as a
  full-text index using Xapian.
* Use ApplicationDomainTypes as the default way to pass around entities.
  Instead of using various ways to pass around entities (buffers,
  buffer adaptors, ApplicationDomainTypes), only use a single way.
  The old approach was confusing, and was only done as:
  * an optimization, which really shouldn't be necessary; I'm sure we
    can find better ways to optimize ApplicationDomainType itself.
  * a way to account for entities that have multiple buffers, a concept
    that I no longer deem relevant.
While this commit does the bulk of the work to get there, the following
commits will refactor more of the codebase to get things back to normal.
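
To illustrate the ResourceContext idea, here is a minimal, hypothetical
sketch; the type and member names are made up for this example and are
not the actual Sink API:

```cpp
#include <map>
#include <memory>
#include <string>

// Hypothetical stand-in for the per-type buffer adaptor factories.
struct AdaptorFactory {};

// One object bundles everything needed to operate on a resource, so it
// can be passed around (and populated with fakes in tests) instead of a
// pile of loose parameters.
struct ResourceContext {
    std::string instanceId;    // e.g. "org.kde.maildir.instance1"
    std::string resourceType;  // e.g. "org.kde.maildir"
    std::map<std::string, std::shared_ptr<AdaptorFactory>> adaptorFactories;
};

// Components take the context instead of a growing list of arguments.
class EntityStore {
public:
    explicit EntityStore(const ResourceContext &context) : mContext(context) {}

private:
    ResourceContext mContext;
};

int main()
{
    ResourceContext context{"org.kde.maildir.instance1", "org.kde.maildir", {}};
    EntityStore store(context);
}
```

Passing one context object keeps call sites stable when a new dependency
is added, and a test can simply construct a context filled with fakes.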

This allows us to match properties from a subquery.
Unfortunately this also means that DataStoreQuery needs access to all
type implementations, since the subquery may be issued for a different
type.
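
A self-contained sketch of the idea, with an illustrative data model
rather than the actual Sink types: the subquery resolves entities of
another type first, and the outer query matches against its result.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical data model: mails reference folders by id.
struct Folder { std::string id; std::string name; };
struct Mail   { std::string id; std::string folderId; };

// The subquery resolves folders of a given name first; the outer query
// then matches mails whose folder property points at one of those
// results. This is why the query needs the other type's implementation.
std::vector<Mail> mailsInFolderNamed(const std::vector<Mail> &mails,
                                     const std::vector<Folder> &folders,
                                     const std::string &name)
{
    std::vector<std::string> folderIds; // result of the subquery
    for (const auto &folder : folders) {
        if (folder.name == name) {
            folderIds.push_back(folder.id);
        }
    }
    std::vector<Mail> result; // outer query filtered by the subquery result
    for (const auto &mail : mails) {
        if (std::find(folderIds.begin(), folderIds.end(), mail.folderId)
                != folderIds.end()) {
            result.push_back(mail);
        }
    }
    return result;
}

int main()
{
    const std::vector<Folder> folders{{"f1", "inbox"}, {"f2", "drafts"}};
    const std::vector<Mail> mails{{"m1", "f1"}, {"m2", "f2"}};
    auto matches = mailsInFolderNamed(mails, folders, "inbox"); // {m1}
    return matches.size() == 1 ? 0 : 1;
}
```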

the continuation.
This happens if Kube is used to look at a folder that is currently being
freshly synchronized, so we continuously get new results.

Instead of a single #define as debug area, the new system allows for an
identifier for each debug message with the structure component.area.
The component is a dot-separated identifier of the runtime component,
such as the process or the plugin.
The area is the code component and can, as such, be defined at
compile time.
The idea of this system is that it becomes possible to e.g. look at the
output of all messages in the query subsystem of a specific resource
(something that happens in the client process, but in the
resource-specific subcomponent).
The new macros are supposed to be less likely to clash with other names,
hence the renaming.
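
A minimal sketch of the scheme with made-up macro names (the actual Sink
macros differ): the component is set at runtime, while the area is fixed
per file at compile time.

```cpp
#include <iostream>
#include <string>

// Runtime component identifier, e.g. the process or the plugin.
static std::string sComponent = "unknown";

// Compile-time area identifier; defined per code module.
#define DEBUG_AREA "query"

// Every message is tagged "component.area", so the output can later be
// filtered down to e.g. the query subsystem of one specific resource.
#define LOG(msg) \
    (std::cout << sComponent << "." << DEBUG_AREA << ": " << (msg) << std::endl)

int main()
{
    sComponent = "org.kde.maildir.instance1";
    LOG("executing query"); // org.kde.maildir.instance1.query: executing query
}
```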

Otherwise we reload the same entities over and over.

Sometimes the wrong database is returned for a given name, probably
related to threading or incorrect usage of lmdb.
For the time being we recover from that by detecting the problem and
retrying.
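
A hedged sketch of such a detect-and-retry guard; the assumption that a
handle can report which database it actually refers to is made up for
illustration.

```cpp
#include <functional>
#include <optional>
#include <stdexcept>
#include <string>

// Hypothetical handle standing in for an lmdb database handle; we
// assume we can ask it which named database it actually refers to.
struct Database {
    std::string name;
};

// Retry the open whenever the returned handle does not match the
// requested name, which is the failure mode occasionally observed.
Database openWithRetry(const std::function<std::optional<Database>(const std::string &)> &open,
                       const std::string &name, int maxAttempts = 3)
{
    for (int attempt = 0; attempt < maxAttempts; ++attempt) {
        if (auto db = open(name); db && db->name == name) {
            return *db;
        }
    }
    throw std::runtime_error("Failed to open database: " + name);
}

int main()
{
    // A well-behaved stub opener; the real one would talk to lmdb.
    auto open = [](const std::string &name) -> std::optional<Database> {
        return Database{name};
    };
    openWithRetry(open, "mail.main");
}
```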

We used to count one too many.

clang-format -i */**{.cpp,.h}

We skip values we've already seen and only retrieve the new ones.
This currently only works properly in a non-live query, and we don't
give the model any feedback when no more data can be fetched.
However, it generally works and we get the desired effect.
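
A minimal sketch of the incremental-fetch idea, assuming that
remembering how many values were already reported is sufficient (all
names are illustrative):

```cpp
#include <algorithm>
#include <vector>

// Remembers how many values were already reported and only returns the
// next batch, so already-seen values are skipped on each fetch.
class IncrementalFetcher {
public:
    explicit IncrementalFetcher(std::vector<int> values) : mValues(std::move(values)) {}

    std::vector<int> fetchMore(std::size_t batchSize)
    {
        const auto end = std::min(mReported + batchSize, mValues.size());
        std::vector<int> batch(mValues.begin() + mReported, mValues.begin() + end);
        mReported = end;
        return batch;
    }

    bool canFetchMore() const { return mReported < mValues.size(); }

private:
    std::vector<int> mValues;
    std::size_t mReported = 0;
};

int main()
{
    IncrementalFetcher fetcher({1, 2, 3, 4, 5});
    fetcher.fetchMore(2); // {1, 2}
    fetcher.fetchMore(2); // {3, 4}; the already-seen values are skipped
}
```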

Previously we still read all values, but only reported the ones before
the limit. With this change we query 1000 out of 50k values in 63ms.
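
A sketch of the difference on a plain sequential scan: applying the
limit while reading avoids touching the remaining values at all, which
is where the speedup comes from.

```cpp
#include <vector>

// Before (conceptually): scan everything, truncate afterwards.
std::vector<int> readAllThenLimit(const std::vector<int> &store, std::size_t limit)
{
    std::vector<int> all(store.begin(), store.end()); // reads every value
    if (all.size() > limit) {
        all.resize(limit);
    }
    return all;
}

// After: stop scanning as soon as the limit is reached.
std::vector<int> readUpToLimit(const std::vector<int> &store, std::size_t limit)
{
    std::vector<int> result;
    result.reserve(limit);
    for (int value : store) {
        if (result.size() >= limit) {
            break; // the remaining values are never read
        }
        result.push_back(value);
    }
    return result;
}

int main()
{
    const std::vector<int> store(50000, 42);
    const auto limited = readUpToLimit(store, 1000); // 1000 values, rest untouched
    return limited.size() == 1000 ? 0 : 1;
}
```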

sorting

This can be used to modify each result before reporting it to the
client. Alternatively this could also be done in the DomainTypeAdaptor,
which would perhaps be the cleaner solution...
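
A minimal sketch of such a modification hook, with illustrative names
rather than the actual Sink interface:

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

// A result provider with an optional hook that can rewrite each result
// before it is reported to the client.
template <typename T>
class ResultProvider {
public:
    void setResultModifier(std::function<void(T &)> modifier)
    {
        mModifier = std::move(modifier);
    }

    void add(T value)
    {
        if (mModifier) {
            mModifier(value); // adjust the result before reporting it
        }
        mResults.push_back(std::move(value));
    }

    const std::vector<T> &results() const { return mResults; }

private:
    std::function<void(T &)> mModifier;
    std::vector<T> mResults;
};

int main()
{
    ResultProvider<std::string> provider;
    provider.setResultModifier([](std::string &s) { s += " (post-processed)"; });
    provider.add("mail1");
}
```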

(except for documentation).

All database access is now implemented in threads, to avoid blocking
the main thread. The resource communication still resides in the main
thread to keep the coordination simple.
With this comes a test that ensures we don't block the main thread for
too long.
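
A minimal sketch of the pattern, using std::async as a stand-in for
whatever threading infrastructure the codebase actually uses:

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <thread>
#include <vector>

// Simulated blocking database read; in the real code this would hit
// the storage layer.
std::vector<int> readFromDatabase()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    return {1, 2, 3};
}

int main()
{
    // The blocking read runs on a worker thread...
    auto future = std::async(std::launch::async, readFromDatabase);

    // ...while the main thread stays free for resource communication
    // and coordination. A watchdog-style test could verify that no
    // main-loop iteration stalls longer than some threshold.
    const auto results = future.get(); // collect once the worker is done
    std::cout << results.size() << " results" << std::endl;
}
```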