path: root/tests

* A more stable flagChangeTest (Christian Mollekopf, 2018-03-02)

* Fixed and tested the upgrade from a database without version. (Christian Mollekopf, 2018-02-28)

* Properly deal with filtered entities in reduced queries. (Christian Mollekopf, 2018-02-22)
  Previously, filtered entities would still end up in the entities list.
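
  A minimal sketch of the idea behind the fix (hypothetical types, not
  Sink's actual query code): the filter has to be applied while the
  reduction's entity list is built, not only on the final result set.

      #include <QByteArray>
      #include <QVector>
      #include <functional>

      // Stand-ins for the entity and filter types (illustrative only).
      struct Entity {
          QByteArray identifier;
      };
      using Filter = std::function<bool(const Entity &)>;

      // Collect the entities belonging to one reduction result, excluding
      // filtered entities up front so they can't slip into the list.
      QVector<Entity> aggregateEntities(const QVector<Entity> &candidates,
                                        const Filter &filter)
      {
          QVector<Entity> entities;
          for (const auto &entity : candidates) {
              if (filter(entity)) {
                  entities.append(entity);
              }
          }
          return entities;
      }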

* Deal with removals in reduced queries (Christian Mollekopf, 2018-02-22)

* Apply modifications to aggregate values (Christian Mollekopf, 2018-02-21)

* Xapian based fulltext indexing (Christian Mollekopf, 2018-02-11)
  This cuts into the sync performance by about 40%, but gives us fast
  fulltext searching for all local content.
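
  A minimal sketch of Xapian-based fulltext indexing (the path and field
  handling are illustrative assumptions, not Sink's actual indexer):

      #include <xapian.h>
      #include <string>

      void indexMail(const std::string &uid, const std::string &subject,
                     const std::string &body)
      {
          Xapian::WritableDatabase db("/tmp/fulltext.xapian",
                                      Xapian::DB_CREATE_OR_OPEN);

          Xapian::TermGenerator indexer;
          indexer.set_stemmer(Xapian::Stem("en"));

          Xapian::Document doc;
          indexer.set_document(doc);
          indexer.index_text(subject);
          indexer.increase_termpos(); // keep subject and body phrases apart
          indexer.index_text(body);

          // A unique ID term allows replacing the document on re-sync.
          const std::string idTerm = "Q" + uid;
          doc.add_boolean_term(idTerm);
          db.replace_document(idTerm, doc);
      }

  Running such an indexing step for every synced item is what costs the
  extra ~40% during sync, in exchange for fast local fulltext search.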

* Store all BLOB properties inline. (Christian Mollekopf, 2018-02-06)
  BLOB properties had a couple of intended purposes:
  * Allow large payloads to be streamed directly to disk, and then be
    handled by reference.
  * Allow zero-copy handling.
  * Keep the database values compact so we can avoid traversing large
    BLOBs.
  However, they came at the cost of code complexity, and we lost all the
  benefits of our storage layer, such as transactions. Measurements
  showed that for email (the intended primary use case), the overhead is
  hardly measurable, with most parts performing better, or at least not
  worse. We additionally gain file-system independence, which may help
  on other platforms. The biggest drawback is probably that large
  payloads need to be written to disk twice, because of the synchronizer
  queue (once for the queue, once for the actual data).
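
  An illustrative contrast of what "inline" means here (hypothetical
  entity API, not Sink's actual code): the payload becomes part of the
  value written in the same storage transaction as the rest of the
  entity.

      #include <QByteArray>
      #include <QString>
      #include <QVariant>

      // Before: only a reference is stored; the payload lives in an
      // external file, outside the database and its transactions.
      void setBlobByReference(QVariantMap &entity, const QString &path)
      {
          entity.insert(QStringLiteral("mimeMessage"), path);
      }

      // After: the bytes themselves are stored inline in the database
      // value, so the storage layer's transactions cover them too.
      void setBlobInline(QVariantMap &entity, const QByteArray &payload)
      {
          entity.insert(QStringLiteral("mimeMessage"), payload);
      }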

* Cleanup (Christian Mollekopf, 2018-01-24)

* Track uidvalidity to detect changes behind our back. (Christian Mollekopf, 2018-01-23)

* Fixed imap tests (Christian Mollekopf, 2018-01-23)
  Adding the mail to Cyrus IMAP broke with Cyrus 3.0. We now create the
  mail first, before trying to sync it.

* We need all parents available, not only one (Christian Mollekopf, 2018-01-03)

* Use read-write locks for finer-grained control of sDbi and sEnvironments (Christian Mollekopf, 2018-01-03)
  There are only a few cases where we have to access the list of dbis or
  environments, so we can normally get away with just read-locking. This
  seems to fix a segfault that was possibly caused by an environment
  being reused after it had already been freed in another thread. The
  read-only lock when initially retrieving the environment seems to fix
  that.
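
  A minimal sketch of the read-mostly locking pattern (the names are
  illustrative, not Sink's actual storage code):

      #include <QHash>
      #include <QReadWriteLock>
      #include <QString>

      struct Environment { /* wraps an LMDB environment handle */ };

      static QHash<QString, Environment *> sEnvironments;
      static QReadWriteLock sEnvironmentsLock;

      Environment *environmentFor(const QString &path)
      {
          {
              // The common case is looking up an existing environment, so a
              // shared read lock suffices and readers don't block each other.
              QReadLocker locker(&sEnvironmentsLock);
              if (auto *env = sEnvironments.value(path)) {
                  return env;
              }
          }
          // Only the rare creation path takes the exclusive write lock.
          QWriteLocker locker(&sEnvironmentsLock);
          if (auto *env = sEnvironments.value(path)) { // re-check after reacquiring
              return env;
          }
          auto *env = new Environment;
          sEnvironments.insert(path, env);
          return env;
      }

  Pairing removal and cleanup with the same write lock is what prevents
  an environment from being freed while another thread is still
  retrieving it.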

* Demonstrate the problem with child indexes entering before parent indexes (Christian Mollekopf, 2018-01-03)

* Add a working model signal test (Christian Mollekopf, 2018-01-03)

* Avoid messageId related warnings (Christian Mollekopf, 2018-01-03)

* Removed broken tests (Christian Mollekopf, 2018-01-03)

* Removed unused synclistresult (Christian Mollekopf, 2018-01-02)

* No parent query (Christian Mollekopf, 2018-01-02)

* Fixed removal of entity (Christian Mollekopf, 2017-12-29)

* Cleanup (Christian Mollekopf, 2017-12-28)

* Fixed incremental updates in folder queries (Christian Mollekopf, 2017-12-28)
  Incremental additions of children in the tree were filtered out due to
  the parent filter. This broke when we started to maintain state,
  because the filter in datastorequery, which contains the parent
  filter, was carried over. Given that the incremental querying of
  children currently doesn't really add much value (we don't have trees
  that are large/deep enough), perhaps we're better off using a
  different approach.

* Check for errors (Christian Mollekopf, 2017-11-23)

* Storage debugging code (Christian Mollekopf, 2017-11-21)

* Added timeouts to sinkloadtest (Christian Mollekopf, 2017-11-15)

* Ensure the flatbuffer file is built before the tests (Christian Mollekopf, 2017-11-14)

* Fixed use after free (Christian Mollekopf, 2017-11-14)

* Fixed warnings (Christian Mollekopf, 2017-11-11)

* Require valgrind when enabling memcheck (Christian Mollekopf, 2017-11-07)

* Benchmarks in tests are too fragile (Christian Mollekopf, 2017-11-03)

* sinkloadtest.py (Christian Mollekopf, 2017-11-03)

* No benchmarking in tests (Christian Mollekopf, 2017-10-20)

* Ensure the test passes reliably. (Christian Mollekopf, 2017-10-17)

* Initial query test (Christian Mollekopf, 2017-10-17)

* Use QUICK_TRY_VERIFY (Christian Mollekopf, 2017-10-17)

* pipelinebenchmark (Christian Mollekopf, 2017-10-17)

* storagebenchmark (Christian Mollekopf, 2017-10-17)

* dummyresourcebenchmark values (Christian Mollekopf, 2017-10-17)

* Split up dummyresourcewritebenchmark into datasets that we want to display. (Christian Mollekopf, 2017-10-17)

* QUICK_TRY_VERIFY for quick polling in benchmarks (Christian Mollekopf, 2017-10-16)

* Updated the information we collect for dummyresourcewritebenchmark (Christian Mollekopf, 2017-10-16)

* Don't do too much benchmarking in the tests (Christian Mollekopf, 2017-10-16)

* Share variance/maxDifference implementation (Christian Mollekopf, 2017-10-16)

* hawd def for incremental vs nonincremental comparison (Christian Mollekopf, 2017-10-12)

* Changed how we record and print the mail query benchmark data. (Christian Mollekopf, 2017-10-12)
  Each column can represent an individual value, which we can use to
  record related data. Each row thus represents a new run of the
  benchmark.
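
  An illustrative sketch of the recording scheme (hypothetical types, not
  the actual hawd API): named columns hold the individual values of a
  run, and each new run appends a row.

      #include <QMap>
      #include <QString>
      #include <QTextStream>
      #include <QVector>

      struct Run {
          QMap<QString, double> columns; // e.g. "query", "total", "instantiation"
      };

      void printRuns(const QVector<Run> &runs)
      {
          QTextStream out(stdout);
          for (int i = 0; i < runs.size(); ++i) {
              out << "run " << i << ":";
              const auto &columns = runs.at(i).columns;
              for (auto it = columns.constBegin(); it != columns.constEnd(); ++it) {
                  out << ' ' << it.key() << '=' << it.value();
              }
              out << '\n';
          }
      }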

* We are not reliably staying under 500 (Christian Mollekopf, 2017-10-12)

* Don't use QTRY_* in a benchmark (Christian Mollekopf, 2017-10-11)
  It has a backoff timer inside, which skews the time measurements.
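
  A sketch of the tight-polling alternative (an assumed shape for
  QUICK_TRY_VERIFY, not necessarily its actual definition): the condition
  is re-checked as fast as the event loop allows, with no growing wait
  interval, so the elapsed time reflects the measured work rather than
  the polling.

      #include <QCoreApplication>
      #include <QElapsedTimer>
      #include <QtTest>

      // Process pending events and re-check immediately, instead of
      // sleeping in a backoff loop the way the QTRY_* macros do.
      #define QUICK_TRY_VERIFY(statement)                                 \
          do {                                                            \
              QElapsedTimer timer;                                        \
              timer.start();                                              \
              while (!(statement) && timer.elapsed() < 10000) {           \
                  QCoreApplication::processEvents(QEventLoop::AllEvents); \
              }                                                           \
              QVERIFY(statement);                                         \
          } while (false)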

* hawd json output module (Christian Mollekopf, 2017-10-10)

* No need to make this overly complicated. (Christian Mollekopf, 2017-10-10)

* Ensure we copy all blobs when copying to another resource (Christian Mollekopf, 2017-10-09)

* Optimized the incremental update case. (Christian Mollekopf, 2017-10-08)
  This brings the incremental case closer to a regular query (about 1.5
  times as slow instead of 3.5 times). For a comparison look at
  MailQueryBenchmark::testIncremental(). The optimization is built on
  the assumption that we get e.g. an update with 100 revisions, and that
  multiple revisions within that batch are part of the same reduction.
  In such a case we can avoid redoing the reduction lookup over and
  over.
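
  A sketch of the batching idea (hypothetical names, not the actual
  datastorequery code): within one incremental batch, all revisions that
  belong to the same reduction share a single lookup through a cache.

      #include <QByteArray>
      #include <QHash>
      #include <QList>

      struct ReductionResult { int aggregateCount = 0; };

      // Stand-ins for the expensive index lookup and the per-revision work.
      static QByteArray reductionKeyFor(const QByteArray &revision) { return revision.left(8); }
      static ReductionResult lookupReduction(const QByteArray &) { return {}; }
      static void applyUpdate(const QByteArray &, const ReductionResult &) {}

      void processBatch(const QList<QByteArray> &revisions)
      {
          QHash<QByteArray, ReductionResult> cache; // lives for this batch only
          for (const auto &revision : revisions) {
              const QByteArray key = reductionKeyFor(revision);
              auto it = cache.find(key);
              if (it == cache.end()) {
                  it = cache.insert(key, lookupReduction(key)); // once per reduction
              }
              applyUpdate(revision, it.value());
          }
      }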