This cuts sync performance by about 40%,
but gives us fast fulltext searching across all local content.
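The refactoring commit further down mentions a fulltext index using Xapian. As a rough, hypothetical sketch of what indexing an item during sync looks like with the plain Xapian C++ API (the function and field layout here are invented for illustration, not Sink's actual code):

    #include <xapian.h>
    #include <string>

    // Hypothetical example: index one synced item's text content so it can
    // be found via fulltext search later. Paths and naming are made up.
    void indexItem(const std::string &databasePath,
                   const std::string &itemId,
                   const std::string &textContent)
    {
        Xapian::WritableDatabase db(databasePath, Xapian::DB_CREATE_OR_OPEN);

        Xapian::TermGenerator generator;
        generator.set_stemmer(Xapian::Stem("en"));

        Xapian::Document doc;
        generator.set_document(doc);
        generator.index_text(textContent);

        // Store the item id so search results can be mapped back to the entity.
        doc.set_data(itemId);

        // A unique "Q"-prefixed term allows replacing the document on re-sync.
        const std::string idTerm = "Q" + itemId;
        doc.add_boolean_term(idTerm);
        db.replace_document(idTerm, doc);
    }

Doing this extra work for every item during sync is the kind of cost behind the roughly 40% slowdown mentioned above.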
BLOB properties had a couple of intended purposes:
* Allow large payloads to be streamed directly to disk, and then be
  handled by reference.
* Allow zero-copy handling.
* Keep the database values compact so we can avoid traversing large
  BLOBs.
However, they came at the cost of code complexity, and we lost all the
benefits of our storage layer, such as transactions.
Measurements showed that for email (the intended primary use case) the
overhead is hardly measurable, with most parts performing better, or at
least not worse. We additionally gain file-system independence, which
may help on other platforms.
The biggest drawback is probably that large payloads need to be written
to disk twice, because of the synchronizer queue (once for the queue,
once for the actual data).
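For contrast, here is a minimal sketch of what storing the payload as an ordinary database value buys. It uses the LMDB C API (which Sink's storage layer is built on) from C++; the function name and key layout are invented for illustration. The payload is written inside a transaction, so it commits or rolls back together with the rest of the entity, which is exactly the guarantee that was lost with BLOB files:

    #include <lmdb.h>
    #include <cstring>

    // Hypothetical sketch: store an entity's payload as a regular database
    // value inside a transaction, instead of writing a separate BLOB file
    // and keeping only a reference to it.
    int storePayload(MDB_env *env, const char *key,
                     const void *payload, size_t payloadSize)
    {
        MDB_txn *txn = nullptr;
        MDB_dbi dbi;
        int rc = mdb_txn_begin(env, nullptr, 0, &txn);
        if (rc != 0) {
            return rc;
        }
        rc = mdb_dbi_open(txn, nullptr, 0, &dbi);
        if (rc == 0) {
            MDB_val k;
            k.mv_size = std::strlen(key);
            k.mv_data = const_cast<char *>(key);
            MDB_val v;
            v.mv_size = payloadSize;
            v.mv_data = const_cast<void *>(payload);
            rc = mdb_put(txn, dbi, &k, &v, 0);
        }
        // Either everything (metadata and payload) becomes visible, or
        // nothing does.
        if (rc == 0) {
            return mdb_txn_commit(txn);
        }
        mdb_txn_abort(txn);
        return rc;
    }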
Just truncating the file is not a good idea. If the headers end up being
larger than the truncated size (I just ran into that), we simply fail to
parse the headers and miss important fields such as the subject. So
let's not.
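An alternative to truncating at a fixed size, hinted at above, is to cut at the header/body boundary itself. The helper below is a hypothetical sketch (not Sink's code) that keeps the complete header block however large it is:

    #include <QByteArray>

    // Hypothetical helper: return the complete header section of a message
    // instead of truncating at a fixed byte count.
    QByteArray extractHeaders(const QByteArray &message)
    {
        // Headers end at the first empty line (CRLF CRLF or LF LF).
        int end = message.indexOf("\r\n\r\n");
        int separatorLength = 4;
        if (end < 0) {
            end = message.indexOf("\n\n");
            separatorLength = 2;
        }
        if (end < 0) {
            // No body separator found; treat the whole content as headers.
            return message;
        }
        return message.left(end + separatorLength);
    }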
Ensure we always have a messageId to work with,
and avoid grouping all non-threaded messages together.
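One hypothetical way to guarantee this (invented for illustration, not necessarily Sink's approach) is to derive a stable fallback identifier from other headers whenever the Message-ID is missing, so that such messages don't all share the same empty key and collapse into one thread:

    #include <QByteArray>
    #include <QCryptographicHash>

    // Hypothetical fallback: if a mail has no Message-ID header, derive a
    // stable pseudo-id from other standard RFC 5322 headers so threading
    // code always has something to key on. The derivation scheme is made up.
    QByteArray effectiveMessageId(const QByteArray &messageId,
                                  const QByteArray &subject,
                                  const QByteArray &from,
                                  const QByteArray &date)
    {
        if (!messageId.isEmpty()) {
            return messageId;
        }
        QCryptographicHash hash(QCryptographicHash::Sha1);
        hash.addData(subject);
        hash.addData(from);
        hash.addData(date);
        // Without a fallback, every message lacking a Message-ID would share
        // the same (empty) key and end up grouped together.
        return "<" + hash.result().toHex() + ".generated@localhost>";
    }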
This is really part of the storage, and will also help us cleanly
implement features such as moving properties into a temporary place
while reading.
This is the initial refactoring to improve how we deal with the storage.
It does a couple of things:
* Rename Sink::Storage to Sink::Storage::DataStore to free up the
  Sink::Storage namespace.
* Introduce a Sink::ResourceContext: a single object that can be passed
  around and contains everything necessary to operate on a resource.
  This is a lot better than the multiple separate parameters that we
  used to pass around all over the place, while still allowing for
  dependency injection in tests.
* Tie storage access together using the new EntityStore that directly
  works with ApplicationDomainTypes. This gives us a central place where
  the main storage, indexes and buffer adaptors are tied together, and
  it will also give us a place to implement external indexes, such as a
  fulltext index using Xapian.
* Use ApplicationDomainTypes as the default way to pass around entities.
  Instead of using various ways to pass around entities (buffers,
  buffer adaptors, ApplicationDomainTypes), only use a single one.
  The old approach was confusing, and was only done as:
  * an optimization, which really shouldn't be necessary; otherwise I'm
    sure we can find better ways to optimize ApplicationDomainType itself.
  * a way to account for entities that have multiple buffers, a concept
    that I no longer deem relevant.
While this commit does the bulk of the work to get there, the following
commits will refactor more stuff to get things back to normal.
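As a rough sketch of the ResourceContext idea (the members and collaborator types below are invented for illustration, not Sink's actual definition), the point is to bundle everything needed to operate on a resource into one object that production code and tests construct differently:

    #include <QByteArray>
    #include <functional>
    #include <memory>

    class AdaptorFactory; // hypothetical collaborator types
    class IndexAccess;

    // Hypothetical context object in the spirit of Sink::ResourceContext:
    // one bundle instead of many loose parameters.
    class ResourceContext
    {
    public:
        QByteArray resourceInstanceIdentifier; // e.g. "org.kde.dummy.instance1"
        QByteArray resourceType;               // e.g. "org.kde.dummy"

        // Factories that tests can swap out for fakes (dependency injection).
        std::function<std::shared_ptr<AdaptorFactory>(const QByteArray &type)> adaptorFactory;
        std::function<std::shared_ptr<IndexAccess>(const QByteArray &name)> indexAccess;
    };

    // Components then take the context instead of a long parameter list,
    // e.g. a store constructed as Store store(context); rather than
    // Store store(instanceId, type, adaptorFactory, indexFactory, ...).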
Instead of a single #define as debug area, the new system allows for an
identifier for each debug message with the structure component.area.
The component is a dot-separated identifier of the runtime component,
such as the process or the plugin.
The area is the code component, and can as such be defined at compile
time.
The idea of this system is that it becomes possible to e.g. look at the
output of all messages in the query subsystem of a specific resource
(something that happens in the client process, but in the
resource-specific subcomponent).
The new macros also have names that are less likely to clash with other
names, hence the renaming.
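A hypothetical sketch of that filtering scheme (the macro and function names are invented, not Sink's actual debug macros): every message carries a component.area identifier, and a filter matches on its prefix, so a filter like "exampleresource.query" selects only the query subsystem of that component:

    #include <QByteArray>
    #include <QDebug>

    // The code area is fixed at compile time, per source file or module.
    #define DEBUG_AREA "query"

    // The runtime component (process or plugin name) is set once at startup.
    static QByteArray s_component = "exampleresource";

    // Only messages whose "component.area" identifier starts with the filter
    // are printed; an empty filter lets everything through.
    static QByteArray s_filter;

    static void debugMessage(const char *area, const QByteArray &message)
    {
        const QByteArray identifier = s_component + "." + area;
        if (s_filter.isEmpty() || identifier.startsWith(s_filter)) {
            qDebug().noquote() << identifier << ":" << message;
        }
    }

    #define LOG(msg) debugMessage(DEBUG_AREA, msg)

    // Usage: setting s_filter to "exampleresource.query" shows only the
    // query subsystem's output for that component.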