The resource consists of:

* the synchronizer process
* a plugin providing the client-api facade
* a configuration setting of the filters

## Synchronizer
The synchronizer process is responsible for processing all commands, executing synchronizations with the source, and replaying changes to the source.
Processing of commands happens in the pipeline, which executes all preprocessors before the entity is persisted.

The synchronizer process has the following primary components:

* Command Queues: Queues that hold all incoming commands. Persisted over reboots.
* Command Processor: A processor that empties the command queues by pushing commands through the pipeline.
* Listener: Opens a socket and listens for incoming connections. On connection all incoming commands are read and entered into the command queues. Control commands (e.g. a sync) don't require persistence and are therefore processed directly.
* Synchronization: Handles synchronization with the source, as well as change replay to the source. The modification commands generated by the synchronization enter the command queue as well.

A resource can:

* provide a full mirror of the source.
* provide metadata for efficient access to the source.

In the former case the local mirror is fully functional locally, and changes can be replayed to the source once a connection is established again.
In the latter case the resource is only functional if a connection to the source is available (which is, for instance, not a problem if the source is a local maildir on disk).

## Preprocessors
Preprocessors are small processors that are guaranteed to be run before a new/modified/deleted entity reaches storage. They can therefore be used for various tasks that need to be executed on every entity.

Use cases:

* Update indexes
* Detect spam/scam mail and set appropriate flags
* Filter email into different folders or resources

The following kinds of preprocessors exist:

* filtering preprocessors that can potentially move an entity to another resource
* passive preprocessors that extract data which is stored externally (e.g. indexers)
* flag extractors that produce data stored with the entity (e.g. spam detection)

Preprocessors are typically read-only, e.g. so as not to break email signatures. Extra flags that are accessible through the sink domain model can therefore be stored in the local buffer of each resource.

### Requirements
* A preprocessor must work with batch processing. Because batch processing is vital for efficient writing to the database, all preprocessors have to be included in the batch processing.
* Preprocessors need to be fast, since they directly affect how fast a message is processed by the system.

### Design
Commands are processed in batches. Each preprocessor thus has the following workflow (a minimal interface sketch follows the list):

* startBatch is called: The preprocessor can do the necessary preparation steps for the batch (like starting a transaction on an external database).
* add/modify/remove is called for every command in the batch: The preprocessor executes the desired actions.
* endBatch is called: If the preprocessor wrote to an external database it can now commit the transaction.
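To make the batch workflow concrete, here is a minimal sketch of what such a preprocessor interface could look like. The names (`Preprocessor`, `Entity`, the method signatures) are assumptions made for this illustration and do not claim to match the actual Sink API.

```cpp
// Illustrative sketch only: type and method names are assumptions, not the real API.
#include <string>

// Minimal stand-in for an entity as seen by a preprocessor; a real preprocessor
// would access properties through the domain type adaptors.
struct Entity {
    std::string uid;
};

// A batch-aware preprocessor following the workflow described above.
class Preprocessor
{
public:
    virtual ~Preprocessor() = default;

    // Called once before a batch, e.g. to open a transaction on an external database.
    virtual void startBatch() {}

    // Called for every command in the batch.
    virtual void add(const Entity &newEntity) = 0;
    virtual void modify(const Entity &oldEntity, const Entity &newEntity) = 0;
    virtual void remove(const Entity &oldEntity) = 0;

    // Called once after the batch, e.g. to commit the external transaction.
    virtual void endBatch() {}
};
```

An indexing preprocessor could, for example, write its index entries in add/modify/remove and commit them in endBatch, so the index is updated together with the rest of the batch.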
### Generic Preprocessors
Most preprocessors will likely be used by several resources, and are either completely generic or domain specific (such as only for mail). It is therefore desirable to have default implementations for common preprocessors that are ready to be plugged in.

The domain type adaptors provide a generic interface to access most properties of the entities, on top of which generic preprocessors can be implemented. This makes it trivial to, for instance, implement a preprocessor that populates a hierarchy index of collections.

### Preprocessors generating additional entities
A preprocessor, such as an email threading preprocessor, might generate additional entities (a thread entity is a regular entity, just like the mail that spawned the thread).

In such a case the preprocessor must invoke the complete pipeline for the new entity.

## Indexes
Most indexes are implemented as preprocessors to guarantee that they are always updated together with the data.

Index types:

* fixed value indexes (e.g. uid)
    * Input: key-value pair where the key is the indexed property and the value is the uid of the entity
    * Lookup: by key, value is always zero or more uids
* fixed value indexes where we want to do smaller/greater-than comparisons (like start date)
    * Input:
    * Lookup: by key with comparator (greater, equal range)
    * Result: zero or more uids
* range indexes (like the date range an event affects)
    * Input: start and end of range and uid of entity
    * Lookup: by key with comparator. The value denotes start or end of range.
    * Result: zero or more uids
* group indexes (like tree hierarchies as nested sets)
    * could be the same as fixed value indexes, which would then just require a recursive query.
    * Input:
* sort indexes (e.g. sorted by date)
    * Could also be a lookup in the range index (increase the date range until sufficient matches are available)

### Default implementations
Since only properties of the domain types can be queried, default implementations for commonly used indexes can be provided. These indexes are populated by generic preprocessors that use the domain-type interface to extract properties from individual entities.

### Example index implementations
* uid lookup (see the sketch after this list)
    * add:
        * add uid + entity id to index
    * update:
        * remove old uid + entity id from index
        * add uid + entity id to index
    * remove:
        * remove uid + entity id from index
    * lookup:
        * query for entity id by uid
* mail folder hierarchy
    * parent folder uid is a property of the folder
    * store parent-folder-uid + entity id
    * lookup:
        * query for entity id by parent-folder uid
* mails of a mail folder
    * parent folder uid is a property of the email
    * store parent-folder-uid + entity id
    * lookup:
        * query for entity id by parent-folder uid
* email threads
    * Thread objects should be created as dedicated entities
    * the thread uid
* email date sort index
    * the date of each email is indexed as a timestamp
* event date range index
    * the start and end date of each event is indexed as a timestamp (floating date-times would change sorting based on the current timezone, so the index would have to be refreshed)
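As a concrete illustration of the uid lookup operations above, here is a simplified, self-contained sketch. A real index would be persisted in the resource's database and populated by a preprocessor; the in-memory `std::multimap` and the class name used here are purely illustrative.

```cpp
// Simplified in-memory stand-in for the uid lookup index described above.
#include <iostream>
#include <map>
#include <string>
#include <vector>

class UidIndex
{
    // indexed property value (uid) -> entity ids; zero or more entries per key
    std::multimap<std::string, std::string> index;

public:
    // add: record uid + entity id
    void add(const std::string &uid, const std::string &entityId)
    {
        index.emplace(uid, entityId);
    }

    // update: remove the old uid + entity id pair, then add the new one
    void update(const std::string &oldUid, const std::string &newUid, const std::string &entityId)
    {
        remove(oldUid, entityId);
        add(newUid, entityId);
    }

    // remove: drop the uid + entity id pair
    void remove(const std::string &uid, const std::string &entityId)
    {
        auto range = index.equal_range(uid);
        for (auto it = range.first; it != range.second; ++it) {
            if (it->second == entityId) {
                index.erase(it);
                break;
            }
        }
    }

    // lookup: return zero or more entity ids for a uid
    std::vector<std::string> lookup(const std::string &uid) const
    {
        std::vector<std::string> result;
        auto range = index.equal_range(uid);
        for (auto it = range.first; it != range.second; ++it) {
            result.push_back(it->second);
        }
        return result;
    }
};

int main()
{
    UidIndex index;
    index.add("uid-1", "entity-a");
    index.add("uid-1", "entity-b");
    for (const auto &id : index.lookup("uid-1")) {
        std::cout << id << "\n"; // entity-a, entity-b
    }
}
```

The folder hierarchy and "mails of a mail folder" indexes follow the same pattern, keyed by the parent-folder uid instead of the entity's own uid.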
### On-demand indexes
To avoid building all indexes initially, and assuming not all indexes are necessarily regularly used for the complete data set, it should be possible to omit updating an index and instead mark it as outdated. The index can then be built on demand when the first query requires it.

Building the index on demand is a matter of replaying the relevant dataset and using the usual indexing methods. This should typically be a process that doesn't take too long, and it should provide status information, since it will block the query.

The index's status can be recorded using the latest revision the index has been updated with.

# Pipeline
A pipeline is an assembly of a set of preprocessors with a defined order. A modification is always persisted at the end of the pipeline, once all preprocessors have been processed.

# Synchronization
The synchronization can either:

* Generate a full diff directly on top of the db. The diffing process can work against a single revision/snapshot (using transactions). It then generates the necessary changeset for the store.
* If the source supports incremental changes, generate the changeset directly from that information.

The changeset is then simply inserted into the regular modification queue and processed like all other modifications. The synchronizer has to ensure that only changes which didn't already come from the source are replayed to the source. This is done by marking changes that don't require change replay to the source.

## Synchronization Store
To track the progress of the synchronization, the synchronizer needs to maintain a separate store. It needs to be separate from the main store to properly separate the synchronization from the Command Processor, which enables the two parts to run concurrently (we can't have two threads writing to the same store).

While the synchronization store can contain any information that is useful for a resource to synchronize, a typical example looks like this:

* changereplay: Contains the last replayed revision. Used by the change replay to know what has already been replayed to the source.
* remoteid.mapping.$BUFFERTYPE: Contains the mapping of a remote identifier to a local identifier. Necessary to track what has already been synchronized, and to replay changes to the remote entity.
* localid.mapping.$BUFFERTYPE: Reverse mapping of the remoteid.mapping.

The remoteid mapping has to be updated in two places:

* New entities that are synchronized immediately get a localid assigned, which is then recorded together with the remoteid. This is required to be able to reference other entities directly in the command queue (e.g. for parent folders).
* Entities created by clients get a remoteid assigned during change replay, so the entity can be recognized during the next sync.

## Change Replay
To replay local changes to the source, the synchronizer replays all revisions of the store and maintains the current replay state in the synchronization store. Changes that already came from the source via the synchronizer are not replayed to the source again.

# Testing / Inspection
Resources have to be tested, which often requires inspections into the current state of the resource. This is difficult in an asynchronous system where the whole backend logic is encapsulated in a separate process, at least without running the tests in a vastly different setup from how the system runs in production.

To alleviate this, inspection commands are introduced. Inspection commands are special commands that the resource processes just like all other commands, and that have the sole purpose of inspecting the current resource state. Because an inspection command is processed with the same mechanism as other commands, we can rely on command ordering: a prior command is guaranteed to have been executed by the time the inspection command is processed.

A typical inspection command could, for example, verify that a file has been created in the expected path after a create command.
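The ordering guarantee is what makes inspection commands reliable. The following self-contained sketch models that guarantee with a plain in-process queue; it is not the actual resource implementation or API, and the maildir path used is hypothetical.

```cpp
// Illustration only: models the ordering guarantee of inspection commands with a
// plain in-process queue; not the actual resource implementation or API.
#include <filesystem>
#include <fstream>
#include <functional>
#include <iostream>
#include <queue>
#include <string>

struct Command {
    std::string type;              // e.g. "create" or "inspect"
    std::function<void()> execute; // what processing this command does
};

int main()
{
    std::queue<Command> commandQueue;

    // Hypothetical path the create command is expected to produce.
    const std::filesystem::path expectedPath = "/tmp/sink-example-maildir/cur/mail1";

    // 1. A create command that writes a file, standing in for a resource
    //    creating a mail in a maildir.
    commandQueue.push({"create", [&] {
        std::filesystem::create_directories(expectedPath.parent_path());
        std::ofstream(expectedPath) << "Subject: test\n";
    }});

    // 2. An inspection command enqueued afterwards. Because the queue is drained
    //    in order, the create command has already been executed when this runs.
    commandQueue.push({"inspect", [&] {
        const bool exists = std::filesystem::exists(expectedPath);
        std::cout << "inspection: expected file " << (exists ? "exists" : "is missing") << "\n";
    }});

    // The command processor drains the queue in order, mirroring how the
    // resource's command queues are processed sequentially.
    while (!commandQueue.empty()) {
        commandQueue.front().execute();
        commandQueue.pop();
    }
    return 0;
}
```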