| Commit message | Author | Age |
|
|
|
|
A central location for all types to specify what properties are
indexed, and how to query them.
|
|
|
|
|
|
|
* Smarter caching: ResourceAccess instances close after a timeout if not reused.
* Introduced a start command to avoid a race condition when sending
commands to a resource that is currently shutting down.
* We resend pending commands after we unexpectedly lose access to the
resource.
|
|
|
|
|
|
|
By turning the columns into an array instead of an object,
we can print the values in the same order as in the definition file.
Previously the order was random, and headers and values were even
sometimes mixed up.
|
|
|
|
|
|
Remote ids need to be resolved while syncing any references.
This is done by the synchronizer by consulting the rid-to-entity-id
mapping. If the referenced entity doesn't exist yet, we create a local
id anyway, which we then need to pick up once the actual entity arrives.
|
|
|
|
dummyresourcetest
|
|
|
|
|
|
|
The QueryRunner object lives for the duration of the query (so just
for the initial query for non-live queries, and for the lifetime of the
result model for live queries).
It's supposed to handle all the threading internally and decouple the
lifetime of the facade.
|
|
|
This way it's possible to e.g. repeatedly run only the reading part.
|
|
|
|
This just gave a ~7x boost to query performance, from ~2k
to ~14k reads per second...
|
|
|
|
That way we don't have to hardcode the parent property,
and we can use the property to express non-tree queries as well.
|
|
|
|
We're not doing any lifetime management anyway.
|
|
|
|
Sync has been removed from the query code and is now a separate step.
|
|
|
|
|
The result model drives the data retrieval and provides the interface
for consumers.
|
|
|
|
... even if there are intermediate revisions.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Instead of having the asynchronous preprocessor concept with different
pipelines for new/modify/delete, we have a single pipeline with
synchronous preprocessors that act upon new/modify/delete.
This keeps the code simpler due to the lack of asynchronicity, and keeps
the new/modify/delete operations together (which, at least for the
indexing, makes a lot of sense).
Not supporting asynchronicity is ok because the tasks done in
preprocessing are not cpu intensive (if they were we'd have a problem,
since they are directly involved in the round-trip time), and the main
cost comes from i/o, meaning we don't gain much from multithreading.
Costly tasks (such as full-text indexing) should rather be implemented
as post-processing, since that doesn't directly increase the round-trip
time, and eventual consistency is typically good enough for that.
|
|
|
|
to a resource
|
|
|
|
|
To avoid unnecessary abstraction layers that don't solve a problem,
and to allow facades to customize how entities are loaded.
|
|
|
|
|
Now we just have to avoid removing the revision from the resource
too early.
|
|
|
So we can replay the change.
|
|
|
|
Now we just need to ensure that equality is tested using the
ApplicationDomainType::identifier.
|
|
|
We can just as well read the latest available revision from storage.
|
|
|
|
So far this only includes modifications and additions;
removals are not yet stored as separate revisions.
|
|
|
|
Cleanup of revisions, and the revision for a removed entity, is still missing.
|
|
|
|
Adding new types definitely needs to become easier.
|
|
|
genericfacadebenchmark