* Revert "Fixed warnings"Christian Mollekopf2017-11-12
| | | | | | Doesn't work with CATCH_ERRORS=ON This reverts commit 2bb2a10f5c4010d168b3d26e9937cf26365a0d0c.
* Got rid of the AVOID_BINDING_REBUILD hack. (Christian Mollekopf, 2017-11-12)
    This new solution should provide decent rebuild times without special treatment.
* Fixed warnings (Christian Mollekopf, 2017-11-11)
* Fixed warnings (Christian Mollekopf, 2017-11-10)
* Revert "Fixed memoryleak"Christian Mollekopf2017-11-10
| | | | | | Fixing this introduces some crashes. I'll have to revisit this. This reverts commit 679f2d5d7d46b2f098e939883520b707f01b2a36.
* TSAN (Christian Mollekopf, 2017-11-10)
* Fixed use after free (Christian Mollekopf, 2017-11-09)
* Fixed memoryleak (Christian Mollekopf, 2017-11-09)
* Fixed memoryleak (Christian Mollekopf, 2017-11-09)
* ASAN support (Christian Mollekopf, 2017-11-09)
* Require valgrind when enabling memcheck (Christian Mollekopf, 2017-11-07)
* We rely on not defining template functions (Christian Mollekopf, 2017-11-03)
    ...because we manually instantiate them in the cpp file.
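    A minimal sketch of this idiom, with illustrative file and function names rather than Sink's actual code: the header only declares the template, and the .cpp file supplies the definition together with explicit instantiations for the types that are used.

    // store.h: declaration only; the definition deliberately stays out of the header.
    template <typename T>
    T load(const char *identifier);

    // store.cpp: definition plus explicit instantiations.
    template <typename T>
    T load(const char *identifier)
    {
        // ... real implementation ...
        return T{};
    }

    // Without these explicit instantiations, code that only sees the header
    // would fail at link time, because no translation unit emits the definition.
    template int load<int>(const char *);
    template double load<double>(const char *);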
* Fixed warning (Christian Mollekopf, 2017-11-03)
* Benchmarks in tests are too fragile (Christian Mollekopf, 2017-11-03)
* sinkloadtest.py (Christian Mollekopf, 2017-11-03)
* Livequery (Christian Mollekopf, 2017-11-03)
* Ensure we get a return code (Christian Mollekopf, 2017-11-01)
* Ensure we get an appropriate exit code when a resource crashes. (Christian Mollekopf, 2017-10-31)
* Sink clear error message (Christian Mollekopf, 2017-10-26)
* Fixed parsing of larger headers. (Christian Mollekopf, 2017-10-26)
    Just truncating the file is not a good idea: if the headers turn out to be larger (I just ran into that), we fail to parse them and miss important fields such as the subject. So let's not truncate.
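    As an illustration of the idea (assumed function name and QFile usage, not Sink's actual code): read up to the blank line that terminates the header block instead of a fixed-size prefix, so a large header block is never cut off.

    #include <QByteArray>
    #include <QFile>
    #include <QString>

    QByteArray readFullHeaders(const QString &path)
    {
        QFile file(path);
        if (!file.open(QIODevice::ReadOnly)) {
            return {};
        }
        QByteArray headers;
        while (!file.atEnd()) {
            const QByteArray line = file.readLine();
            if (line == "\r\n" || line == "\n") {
                break; // empty line marks the end of the headers
            }
            headers += line;
        }
        return headers;
    }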
* No benchmarking in tests (Christian Mollekopf, 2017-10-20)
* Use LMDB_LIBRARIES (Christian Mollekopf, 2017-10-20)
* Ensure the test passes reliably. (Christian Mollekopf, 2017-10-17)
* Initial query test (Christian Mollekopf, 2017-10-17)
* Use QUICK_TRY_VERIFY (Christian Mollekopf, 2017-10-17)
* pipelinebenchmark (Christian Mollekopf, 2017-10-17)
* storagebenchmark (Christian Mollekopf, 2017-10-17)
* dummyresourcebenchmark values (Christian Mollekopf, 2017-10-17)
* Split up dummyresourcewritebenchmark into datasets that we want to display. (Christian Mollekopf, 2017-10-17)
* QUICK_TRY_VERIFY for quick polling in benchmarks (Christian Mollekopf, 2017-10-16)
* Updated the information we collect for dummyresourcewritebenchmark (Christian Mollekopf, 2017-10-16)
* Don't do too much benchmarking in the tests (Christian Mollekopf, 2017-10-16)
* Share variance/maxDifference implementation (Christian Mollekopf, 2017-10-16)
* Fixed mail_query_incremental definitions (Christian Mollekopf, 2017-10-13)
* hawd def for incremental vs nonincremental comparison (Christian Mollekopf, 2017-10-12)
* Removed no longer used hawd definition (Christian Mollekopf, 2017-10-12)
* Changed how we record and print the mail query benchmark data. (Christian Mollekopf, 2017-10-12)
    Each column can represent an individual value, which we can use to record related data. Each row thus represents a new run of the benchmark.
* We are not reliably staying under 500 (Christian Mollekopf, 2017-10-12)
* Don't use QTRY_* in a benchmark (Christian Mollekopf, 2017-10-11)
    It has a backoff timer inside which skews the time measurements.
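    A minimal sketch of what a tight polling check could look like (the name QUICK_TRY_VERIFY appears elsewhere in this log; the body below is an assumption, not the project's actual macro): poll the condition in a loop that only pumps the event loop, so the wait adds as little extra time as possible to the measurement.

    #include <QCoreApplication>
    #include <QElapsedTimer>
    #include <QtTest>

    // Hypothetical tight-polling replacement for QTRY_VERIFY in benchmarks.
    #define QUICK_TRY_VERIFY(condition)                       \
        do {                                                  \
            QElapsedTimer timer;                              \
            timer.start();                                    \
            while (!(condition) && timer.elapsed() < 10000) { \
                QCoreApplication::processEvents();            \
            }                                                 \
            QVERIFY(condition);                               \
        } while (false)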
* hawd json output module (Christian Mollekopf, 2017-10-10)
* Debug output (Christian Mollekopf, 2017-10-10)
* Fixed hawd definition file (Christian Mollekopf, 2017-10-10)
* No need to make this overly complicated. (Christian Mollekopf, 2017-10-10)
* Avoid relying on timeouts in tests (Christian Mollekopf, 2017-10-09)
* Ensure we copy all blobs when copying to another resource (Christian Mollekopf, 2017-10-09)
* Error checking and debug output (Christian Mollekopf, 2017-10-09)
* Optimized the incremental update case. (Christian Mollekopf, 2017-10-08)
    This brings the incremental update closer to a regular query (about 1.5 times as slow instead of 3.5 times). For a comparison, see MailQueryBenchmark::testIncremental().
    The optimization is built on the assumption that we get an update with e.g. 100 revisions, so it applies when multiple revisions within that batch belong to the same reduction. In that case we can avoid redoing the reduction lookup over and over.
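    A sketch of the batching idea described above, using hypothetical names (Revision, ReductionResult, lookupReduction) rather than Sink's actual API: within one batch, the expensive reduction lookup is done once per reduction key and reused for every further revision that hits the same reduction.

    #include <QByteArray>
    #include <QHash>
    #include <QVector>

    struct Revision { QByteArray reductionKey; };
    struct ReductionResult { /* aggregated values for one reduction */ };

    // Stand-in for the expensive lookup against the store.
    ReductionResult lookupReduction(const QByteArray &key) { return {}; }

    void processBatch(const QVector<Revision> &batch)
    {
        QHash<QByteArray, ReductionResult> cache;
        for (const auto &revision : batch) {
            auto it = cache.find(revision.reductionKey);
            if (it == cache.end()) {
                // First revision for this reduction: do the lookup once.
                it = cache.insert(revision.reductionKey, lookupReduction(revision.reductionKey));
            }
            // Later revisions in the same batch reuse the cached result (*it).
        }
    }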
* Benchmark cleanup (Christian Mollekopf, 2017-10-08)
* The variance of a single value is 0 (Christian Mollekopf, 2017-10-08)
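    A generic illustration of the guard this refers to (not the project's shared implementation): with a single sample there is no spread, so the variance is reported as 0.

    #include <QVector>

    double variance(const QVector<double> &values)
    {
        if (values.size() <= 1) {
            return 0.0; // a single value (or none) has no spread
        }
        double mean = 0.0;
        for (double v : values) {
            mean += v;
        }
        mean /= values.size();
        double sum = 0.0;
        for (double v : values) {
            sum += (v - mean) * (v - mean);
        }
        return sum / (values.size() - 1); // sample variance
    }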
* Fixed dummyresource write benchmark (Christian Mollekopf, 2017-10-06)