| Commit message | Author | Age |
| |
Summary:
- Change the revision type from `qint64` to `size_t` for LMDB in a couple of places (LMDB supports `unsigned int` or `size_t`; the latter is `long unsigned int` on my machine)
- Better support for database flags (duplicates, integer keys and integer values for now, but extensible)
- Main databases' keys are now revisions
- Some databases switched to integer-key databases:
  - Main databases
  - the revision to uid mapping database
  - the revision to entity type mapping database
- Refactor the entity type's `typeDatabases` method (in case we need to change the main databases' flags again in the future)
- New uid to revision mapping database (`uidsToRevisions`):
  - Stores all revisions (not just the latest revision per uid), because we need them for cleaning up old revisions
  - Flags are: duplicates + integer values, so `findLatest` finds the latest revision for the given uid (see the sketch below)
~~Problems to fix before merging:~~
All fixed!
- ~~Sometimes Sink can't read what has just been written to the database (maybe because of transaction race conditions)~~
  - ~~Most of the time, this results in Sink not being able to find the uid for a given revision by reading the `revisions` database~~
- ~~`pipelinetest`'s `testModifyWithConflict` fails because the local changes are overridden~~
~~The first problem prevents me from running benchmarks~~
Reviewers: cmollekopf
Tags: #sink
Differential Revision: https://phabricator.kde.org/D14974
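A minimal sketch of the `uidsToRevisions` layout (duplicate-sorted, integer-valued) and a `findLatest`-style lookup using the raw LMDB C API; names and error handling are simplified and this is not Sink's actual implementation:
```cpp
// Sketch only: illustrates a duplicate-sorted database with integer
// values and a "latest revision for a uid" lookup; not Sink's actual
// implementation.
#include <lmdb.h>
#include <cstring>
#include <string>

// Open a uid -> revisions database where each uid can carry many
// revisions, stored as fixed-size integers sorted in ascending order.
MDB_dbi openUidsToRevisions(MDB_txn *txn)
{
    MDB_dbi dbi;
    mdb_dbi_open(txn, "uidsToRevisions",
                 MDB_CREATE | MDB_DUPSORT | MDB_DUPFIXED | MDB_INTEGERDUP,
                 &dbi);
    return dbi;
}

// Return the latest (highest) revision stored for the given uid,
// or false if the uid is unknown.
bool findLatest(MDB_txn *txn, MDB_dbi dbi, const std::string &uid, size_t &revision)
{
    MDB_cursor *cursor = nullptr;
    if (mdb_cursor_open(txn, dbi, &cursor) != 0) {
        return false;
    }
    MDB_val key{uid.size(), const_cast<char *>(uid.data())};
    MDB_val value{};
    // Position on the uid, then jump to the last duplicate: because the
    // values are integer-sorted, that is the highest revision.
    const bool found = mdb_cursor_get(cursor, &key, &value, MDB_SET) == 0
                    && mdb_cursor_get(cursor, &key, &value, MDB_LAST_DUP) == 0;
    if (found) {
        std::memcpy(&revision, value.mv_data, sizeof(size_t));
    }
    mdb_cursor_close(cursor);
    return found;
}
```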
|
| |
The reason why we didn't notice was probably:
* We only use this table nowadays when we have no db layout.
* The only flag we ever set is the dupsort flag, and the 0 returned by a
failed int conversion is otherwise correct.
|
| |
This patch addresses two problems:
* A potential deadlock.
We had the following code inside a separately protected section:
    dbiLocker.unlock();
    // Here we could lose the read lock
    QWriteLocker dbiWriteLocker(&sDbisLock);
If we lost the read lock between the two lines, a second thread that was
now holding a read lock on sDbisLock could not enter the protected
section, which it had to do in order to release its read lock, and we'd
thus end up in a deadlock. This is solved using tryLock with intermediate
releases of the read lock, allowing the original thread to finish.
* When failing to validate a dbi for the current transaction we
simply returned an invalid db (which in this particular case broke
reading of revision uids and types), leading to queries not executing
as they should.
Both problems are unfortunately hard to reproduce; the adjusted test
at least allowed me to reproduce the deadlock situation sometimes.
To fix this cleanly we should probably just get rid of dynamic dbi
allocation for good.
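A condensed sketch of the lock-upgrade problem and the tryLock workaround described above, assuming Qt's QReadWriteLock; it is illustrative only and simplifies the real code:
```cpp
// Sketch of the lock upgrade problem and the tryLock workaround;
// simplified, not Sink's actual code. Both functions assume the caller
// currently holds a read lock on sDbisLock.
#include <QReadWriteLock>
#include <QThread>

QReadWriteLock sDbisLock;

// Problematic pattern: giving up the read lock and then blocking on
// lockForWrite(). If another thread grabs a read lock in the gap and
// then waits on something we hold, neither side can make progress.
void createDbiBlocking()
{
    sDbisLock.unlock();        // give up the read lock
    sDbisLock.lockForWrite();  // may block forever -> deadlock
    // ... create and publish the dbi ...
    sDbisLock.unlock();
}

// Workaround: retry with a timeout so a competing reader gets the
// chance to finish its work and release its read lock in the meantime.
void createDbiWithTryLock()
{
    sDbisLock.unlock();
    while (!sDbisLock.tryLockForWrite(10 /* ms */)) {
        QThread::yieldCurrentThread();
    }
    // ... create and publish the dbi ...
    sDbisLock.unlock();
}
```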
|
| |
shouldn't be visible yet.
Was reproducible in the initial sync of the caldav resource.
|
| |
issues.
https://phabricator.kde.org/T8723
With 200MB we can both deal with the 200MB files on disk, and we could
even load all of them (the 5 databases the resource uses) into memory.
Once the open problems are resolved we should be able to bump it back to
at least 20GB.
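For reference, the map size is the hard cap on the database file and is reserved as virtual address space per open environment; a minimal sketch of where it is configured (illustrative values and names):
```cpp
// Illustrative only: cap the environment at 200MB before opening it.
#include <lmdb.h>
#include <cstddef>

MDB_env *openEnvironment(const char *path)
{
    MDB_env *env = nullptr;
    mdb_env_create(&env);
    // The map size limits the database file and is reserved as virtual
    // address space for every open environment.
    mdb_env_set_mapsize(env, size_t(200) * 1024 * 1024);  // 200MB
    mdb_env_set_maxdbs(env, 10);                          // arbitrary example value
    mdb_env_open(env, path, 0, 0644);
    return env;
}
```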
|
| |
We'll only end up with defunct processes that may or may not do
anything useful.
|
| |
setting it.
|
| |
Summary:
In preparation for supporting ranged queries.
Notes:
Since they are pretty similar, it could be nice to refactor `scan` and `findAllInRange` to use a common third function.
Test Plan: This is tested in storagetest.cpp
Reviewers: cmollekopf
Tags: #sink
Differential Revision: https://phabricator.kde.org/D13066
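A rough sketch of the cursor pattern behind such a ranged lookup with the raw LMDB API; the `findAllInRange` helper below is hypothetical and differs from Sink's actual implementation:
```cpp
// Sketch of a ranged scan with an LMDB cursor: position at the lower
// bound with MDB_SET_RANGE, then iterate with MDB_NEXT until the key
// exceeds the upper bound. Not Sink's actual implementation.
#include <lmdb.h>
#include <functional>

int findAllInRange(MDB_txn *txn, MDB_dbi dbi,
                   MDB_val lowerBound, MDB_val upperBound,
                   const std::function<void(MDB_val key, MDB_val value)> &callback)
{
    MDB_cursor *cursor = nullptr;
    if (mdb_cursor_open(txn, dbi, &cursor) != 0) {
        return 0;  // error handling omitted for brevity
    }
    MDB_val key = lowerBound;
    MDB_val value{};
    int count = 0;
    // MDB_SET_RANGE positions the cursor at the first key >= lowerBound.
    for (int rc = mdb_cursor_get(cursor, &key, &value, MDB_SET_RANGE);
         rc == 0;
         rc = mdb_cursor_get(cursor, &key, &value, MDB_NEXT)) {
        // Stop once the key is past the upper bound; mdb_cmp uses the
        // database's key ordering.
        if (mdb_cmp(txn, dbi, &key, &upperBound) > 0) {
            break;
        }
        callback(key, value);
        ++count;
    }
    mdb_cursor_close(cursor);
    return count;
}
```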
|
| |
It's possible that we therefore went over the virtual address space
limit on Windows, which is 128GB.
|
| |
Or so says the compiler.
|
| |
There can only ever be one transaction using mdb_dbi_open running,
and that transaction must commit or abort before any other transaction
attempts to use mdb_dbi_open.
Use delayed dbi merging with write transactions and a temporary
transaction for read transactions.
We now protect dbi initialization with a mutex and immediately update
the sDbis hash. This assumes that the created dbis are indeed valid in
all other transactions of the same environment.
We can still violate the "only one transaction may use mdb_dbi_open"
rule if we start a read-only transaction after the write transaction,
before the write transaction commits.
It does not seem to be something we actually do though.
Opening dbis on environment init is further separated out, so we don't
end up in the regular openDatabase codepath at all.
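A simplified sketch of the "open the dbi in its own short-lived transaction, commit, then publish the handle" idea; names like `openDbiShared` and `sDbis` are placeholders, and the real code additionally handles the write-transaction case via delayed merging:
```cpp
// Sketch: protect mdb_dbi_open with a mutex and only publish the handle
// once the transaction that opened it has committed. Hypothetical names.
#include <lmdb.h>
#include <QHash>
#include <QByteArray>
#include <QMutex>

static QMutex sDbiMutex;
static QHash<QByteArray, MDB_dbi> sDbis;

bool openDbiShared(MDB_env *env, const QByteArray &name, unsigned int flags, MDB_dbi &dbi)
{
    QMutexLocker locker(&sDbiMutex);
    if (sDbis.contains(name)) {       // already opened by some thread
        dbi = sDbis.value(name);
        return true;
    }
    // Only one transaction may be using mdb_dbi_open at a time, and it
    // must commit (or abort) before other transactions use the handle.
    MDB_txn *txn = nullptr;
    if (mdb_txn_begin(env, nullptr, 0, &txn) != 0) {
        return false;
    }
    if (mdb_dbi_open(txn, name.constData(), flags | MDB_CREATE, &dbi) != 0) {
        mdb_txn_abort(txn);
        return false;
    }
    if (mdb_txn_commit(txn) != 0) {
        return false;
    }
    sDbis.insert(name, dbi);          // publish only after the commit
    return true;
}
```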
|
| |
Otherwise removal doesn't work on Windows due to open file handles.
|
| |
according to the docs.
|
| |
version when creating it
|
| |
There are only a few cases where we have to write to the list of dbis or
environments, so we can normally get away with just read-locking.
This seems to fix a segfault that was possibly caused by an environment
being reused after it had already been freed in another thread. The
read lock taken when initially retrieving the environment seems to fix
that.
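The access pattern described above, sketched with Qt's lock types; names are illustrative and this is not the actual Sink code:
```cpp
// Sketch: the shared environment list is mostly read, so lookups only
// take a read lock; creating a new environment takes the write lock.
// Illustrative names only.
#include <lmdb.h>
#include <QHash>
#include <QString>
#include <QReadWriteLock>

static QReadWriteLock sEnvironmentsLock;
static QHash<QString, MDB_env *> sEnvironments;

MDB_env *environmentFor(const QString &path)
{
    {
        QReadLocker locker(&sEnvironmentsLock);   // cheap, concurrent lookups
        if (MDB_env *env = sEnvironments.value(path)) {
            return env;
        }
    }
    QWriteLocker locker(&sEnvironmentsLock);      // rare: create and register
    if (MDB_env *env = sEnvironments.value(path)) {
        return env;                               // raced with another writer
    }
    MDB_env *env = nullptr;
    mdb_env_create(&env);
    mdb_env_open(env, path.toLocal8Bit().constData(), 0, 0644);
    sEnvironments.insert(path, env);
    return env;
}
```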
|
| |
The while loop is executed at least once, so advanced is always true.
|
| |
Previously we would hit the maxreaders limit.
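The commit presumably reduces the number of concurrent read transactions rather than raising the limit; for reference, this is how LMDB's reader limit (126 slots by default) is configured and queried, with illustrative values:
```cpp
// For reference: LMDB's reader slots are a fixed-size table configured
// before the environment is opened.
#include <lmdb.h>
#include <cstdio>

void configureReaders(MDB_env *env, const char *path)
{
    mdb_env_set_maxreaders(env, 256);      // must be called before mdb_env_open
    mdb_env_open(env, path, 0, 0644);

    unsigned int readers = 0;
    mdb_env_get_maxreaders(env, &readers); // query the configured limit
    std::printf("max readers: %u\n", readers);
}
```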
|
| |
lmdb and sink deal badly with, e.g., a string containing a null byte in
the middle being used as a db name. Thus we try to protect better
against it.
This is an actual problem we triggered: https://phabricator.kde.org/T5880
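A minimal sketch of the kind of defensive check this implies; the helper below is hypothetical:
```cpp
// Sketch of a defensive check: reject database names that would confuse
// LMDB or Sink, e.g. names with embedded null bytes. Hypothetical helper.
#include <QByteArray>

bool isValidDatabaseName(const QByteArray &name)
{
    // An embedded '\0' truncates the name at the C API boundary and can
    // silently create or open the wrong database.
    return !name.isEmpty() && !name.contains('\0');
}
```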
|
| |
transaction.
|
| |
Dbis can only be opened by one thread and should then
be shared across all threads after committing the transaction
that created the dbi.
This requires us to initially open all dbs, which in turn requires us
to know the correct flags.
This patch stores the flags used to open each db in a separate db,
and then opens up all databases on initial start.
If a new database is created, that dbi is shared as well as soon as
the transaction is committed (before that, the dbi is private to the
transaction).
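A simplified sketch of the scheme: a dedicated database maps each db name to the flags it must be opened with, and environment initialization walks that table to open every dbi up front. The `__flagtable` name and the rest of the code are illustrative, not Sink's actual implementation:
```cpp
// Sketch: a "__flagtable" database maps db name -> open flags, so that
// all dbis can be opened with the right flags when the environment is
// initialized. Names are hypothetical.
#include <lmdb.h>
#include <QHash>
#include <QByteArray>

QHash<QByteArray, MDB_dbi> openAllDatabases(MDB_env *env)
{
    QHash<QByteArray, MDB_dbi> dbis;
    MDB_txn *txn = nullptr;
    mdb_txn_begin(env, nullptr, 0, &txn);

    MDB_dbi flagtable;
    mdb_dbi_open(txn, "__flagtable", MDB_CREATE, &flagtable);

    // Iterate over the stored (name, flags) pairs and open each database.
    MDB_cursor *cursor = nullptr;
    mdb_cursor_open(txn, flagtable, &cursor);
    MDB_val key{}, value{};
    while (mdb_cursor_get(cursor, &key, &value, MDB_NEXT) == 0) {
        const QByteArray name(static_cast<const char *>(key.mv_data), key.mv_size);
        const unsigned int flags = QByteArray(static_cast<const char *>(value.mv_data),
                                              value.mv_size).toUInt();
        MDB_dbi dbi;
        if (mdb_dbi_open(txn, name.constData(), flags, &dbi) == 0) {
            dbis.insert(name, dbi);
        }
    }
    mdb_cursor_close(cursor);
    mdb_txn_commit(txn);   // after this the dbi handles can be shared
    return dbis;
}
```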
|
| |
critical error.
|
| |
This is the initial refactoring to improve how we deal with the storage.
It does a couple of things:
* Rename Sink::Storage to Sink::Storage::DataStore to free up the
Sink::Storage namespace
* Introduce a Sink::ResourceContext to have a single object that can be
passed around containing everything that is necessary to operate on a
resource. This is a lot better than the multiple separate parameters
that we used to pass around all over the place, while still allowing
for dependency injection for tests.
* Tie storage access together using the new EntityStore that directly
works with ApplicationDomainTypes. This gives us a central place where
main storage, indexes and buffer adaptors are tied together, which
will also give us a place to implement external indexes, such as a
fulltext index using Xapian.
* Use ApplicationDomainTypes as the default way to pass around entities.
Instead of using various ways to pass around entities (buffers,
buffer adaptors, ApplicationDomainTypes), only use a single way.
The old approach was confusing, and was only done as:
* an optimization; it really shouldn't be necessary, and otherwise I'm sure
we can find better ways to optimize ApplicationDomainType itself.
* a way to account for entities that have multiple buffers, a concept
that I no longer deem relevant.
While this commit does the bulk of the work to get there, the following
commits will refactor more stuff to get things back to normal.
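Purely illustrative sketch of the shape this refactoring aims for, a single context object plus a central EntityStore; the member names are assumptions and the real classes differ:
```cpp
// Purely illustrative: the rough shape of a context object bundling
// everything needed to operate on a resource, so it can be passed around
// as a single parameter and swapped out in tests. Not the actual Sink
// classes; member names are assumptions.
#include <QByteArray>

class EntityStore;  // ties main storage, indexes and buffer adaptors together

struct ResourceContext {
    QByteArray instanceId;     // which resource instance we operate on
    QByteArray resourceType;   // e.g. a maildir or caldav resource
};

// EntityStore works directly with ApplicationDomainTypes, giving one
// central place for storage access (and later external indexes).
class EntityStore {
public:
    explicit EntityStore(const ResourceContext &context) : mContext(context) {}
private:
    ResourceContext mContext;
};
```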
|