-rw-r--r--   tests/hawd/README   66
1 file changed, 66 insertions, 0 deletions

diff --git a/tests/hawd/README b/tests/hawd/README
new file mode 100644
index 0000000..28ad2ee
--- /dev/null
+++ b/tests/hawd/README
@@ -0,0 +1,66 @@
How Are We Doing? HAWD is a tool to record numbers over time for later
comparison and charting, so that the progress of things can be tracked.
Think: tracking performance regressions via benchmark numbers over time.

There are two parts to HAWD: the library and the command line tool. Both
use a hawd.conf file and HAWD dataset definition files.

The path to a hawd.conf file can either be supplied explicitly to the
HAWD::State class, or HAWD::State will search the directory tree (from
the current directory up) to find it. hawd.conf is a json file which
currently recognizes the following two entries:

results: path to where results should be stored
project: path to the project's dataset definition files

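As an illustration, a hawd.conf might look something like this (the paths
shown here are just placeholders for wherever these live in a given setup):

{
    "results": "~/hawd/results",
    "project": "~/src/myproject/tests/hawd"
}
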
Tilde expansion is supported. It is recommended to include a copy of
hawd.conf in the source dir's root and have the build system "install"
a copy of it in the build dir with the proper paths filled in. This makes
it easy to run from the build dir and avoids hardcoding too many paths
into hawd.conf.

A dataset definition file is also a json file and must appear in the path
pointed to by the project entry in hawd.conf. The name of the file is
also the name used to store the dataset on disk. Recognized values in the
json file include:

name: the user-visible name of the dataset
description: a description of the dataset
columns: a json object containing value definitions

A value definition is a json object which allows one to define the type,
unit and min/max values. An example of a dataset definition file follows:

{
    "name": "Buffer Creation",
    "description": "Tests how fast buffer creation is",
    "columns": {
        "numBuffers": { "type": "int" },
        "time": { "type": "int", "unit": "ms", "min": 0, "max": 100 },
        "ops": { "type": "float", "unit": "ops/ms" }
    }
}

The hawd library is used wherever data needs to be stored in or fetched from
a dataset. Most often this involves using the Dataset and Dataset::Row classes,
something like this, where the dataset definition file is at the path
$project/buffer_creation:

HAWD::State state;
HAWD::Dataset dataset("buffer_creation", state);
HAWD::Dataset::Row row = dataset.row();
row.setValue("numBuffers", count);
row.setValue("time", bufferDuration);
row.setValue("ops", opsPerMs);
dataset.insertRow(row);

That's it! insertRow will return the qint64 key the row was stored under,
so that it can easily be fetched again later with Dataset::row(qint64 key).
Note that Row objects must always be created by a Dataset object to be used
with that Dataset, due to internal sanity checking.
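
For example, a minimal sketch built from the calls described above (the
variable names here are just for illustration):

qint64 key = dataset.insertRow(row);           // key the row was stored under
HAWD::Dataset::Row fetched = dataset.row(key); // fetch the same row back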

The hawd command line tool allows one to list datasets, check dataset definitions
for errors, print tables of data, annotate rows and more. Run hawd on its own to
see a list of available commands.

//TODO: better documentation of the hawd command line