-rw-r--r--  tests/hawd/README  66
1 file changed, 66 insertions(+), 0 deletions(-)
diff --git a/tests/hawd/README b/tests/hawd/README
new file mode 100644
index 0000000..28ad2ee
--- /dev/null
+++ b/tests/hawd/README
@@ -0,0 +1,66 @@
How Are We Doing? This is a tool to track numbers over time for later
comparison and charting, so that the progress of things can be followed.
Think: tracking performance regressions using benchmark numbers over time.

There are two parts to HAWD: the library and the command line tool. Both
use a hawd.conf file and HAWD dataset definition files.

The path to a hawd.conf file can either be supplied explicitly to the
HAWD::State class, or HAWD::State will search the directory tree (from
the current directory up) to find it. hawd.conf is a JSON file which
currently recognizes the following two entries:

    results: path to where results should be stored
    project: path to the project's dataset definition files

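For reference, a minimal hawd.conf might look like the following (the
paths shown here are examples only, not defaults):

```json
{
    "results": "~/hawd/results",
    "project": "~/myproject/tests/hawd/definitions"
}
```
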
Tilde expansion is supported. It is recommended to include a copy of
hawd.conf in the source dir's root and have the build system "install"
a copy of it in the build dir with the proper paths filled in. This makes
it easy to run from the build dir and avoids hardcoding too many
paths into hawd.conf.

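With CMake, one way to sketch this "install" step is configure_file,
assuming a hawd.conf.in template in the source root (the template name
and substituted variables here are illustrative, not part of HAWD):

```cmake
# Copy hawd.conf.in into the build dir, replacing @VAR@ placeholders
# (e.g. @CMAKE_BINARY_DIR@) with their configured values.
configure_file(${CMAKE_SOURCE_DIR}/hawd.conf.in
               ${CMAKE_BINARY_DIR}/hawd.conf
               @ONLY)
```

The @ONLY flag restricts substitution to @VAR@ placeholders so that any
${...} strings in the template are left untouched.
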
A dataset definition file is also a JSON file and must appear in the path
pointed to by the project entry in hawd.conf. The name of the file is
also the name used to store the dataset on disk. Recognized values in the
JSON file include:

    name: the user-visible name of the dataset
    description: a description of the dataset
    columns: a JSON object containing value definitions

A value definition is a JSON object which allows one to define the type,
unit and min/max values. An example of a dataset definition file follows:

{
    "name": "Buffer Creation",
    "description": "Tests how fast buffer creation is",
    "columns": {
        "numBuffers": { "type": "int" },
        "time": { "type": "int", "unit": "ms", "min": 0, "max": 100 },
        "ops": { "type": "float", "unit": "ops/ms" }
    }
}

The hawd library is used wherever data needs to be stored in or fetched from
a dataset. Most often this involves using the Dataset and Dataset::Row classes
something like this, where the dataset definition file is at the path
$project/buffer_creation:

    HAWD::State state;
    HAWD::Dataset dataset("buffer_creation", state);
    HAWD::Dataset::Row row = dataset.row();
    row.setValue("numBuffers", count);
    row.setValue("time", bufferDuration);
    row.setValue("ops", opsPerMs);
    dataset.insertRow(row);

That's it! insertRow will return the qint64 key the row was stored under,
so that the row can easily be fetched again with Dataset::row(qint64 key).
Note that Row objects must always be created by a Dataset object to be used
with that Dataset, due to internal sanity checking.

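Putting the two together, the returned key can be used to fetch the same
row again later. A sketch using only the calls shown above (error
handling omitted; the value 42 is an arbitrary example):

```cpp
HAWD::State state;
HAWD::Dataset dataset("buffer_creation", state);

HAWD::Dataset::Row row = dataset.row();
row.setValue("time", 42);
qint64 key = dataset.insertRow(row); // the key the row was stored under

// Later: fetch the same row back by its key.
HAWD::Dataset::Row fetched = dataset.row(key);
```
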
The hawd command line tool allows one to list datasets, check definitions for
errors, print tables of data, annotate rows and more. Run hawd on its own to
see a list of available commands.

//TODO: better documentation of the hawd command line