| Commit message | Author | Age |
This is simplified progress reporting: it reports the progress of individual tasks only, not of the overall Job chain, which makes it really useful only for single-task Jobs.
TODO: propagate subjob progress to the Future the user gets a copy of
TODO: compound progress reporting (be able to report the progress of the overall Job chain)
Storing the Future and the current Job progress directly in Executors means that we cannot re-execute a finished Job, or even execute the same Job multiple times in parallel. To do so, we need to make Executors stateless and track the state elsewhere.
This change does that by moving the execution state from the Executor to an Execution class. Executors now only describe the tasks to execute, while the Execution holds the current state of execution. A new Execution is created every time Job::exec() is called.
The Execution holds a reference to its result (a Future) and to the Executor which created it. This ensures that the Executor is not deleted when the Job (which owns the Executors) goes out of scope while the execution is still running. At the same time the Future holds a reference to the relevant Execution, so that the Execution is deleted once all copies of the Future referring to its result are deleted.
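The ownership chain described above can be sketched in plain C++ with shared pointers (illustrative types of our own, not the actual KAsync classes; names like describeTask are invented for the example):

```cpp
#include <cassert>
#include <memory>

// Sketch: the Executor only describes the task, every exec() creates a
// fresh Execution owning the mutable state, and the Future shares
// ownership of its Execution.
struct Executor {
    int describeTask() const { return 42; } // stateless task description
};

struct Execution {
    // Keeps the Executor alive even if the Job that owned it is destroyed.
    std::shared_ptr<Executor> executor;
    int result = 0;
};

struct Future {
    // Shared ownership: the Execution lives as long as any Future copy does.
    std::shared_ptr<Execution> execution;
    int result() const { return execution->result; }
};

Future exec(const std::shared_ptr<Executor> &executor) {
    auto execution = std::make_shared<Execution>();
    execution->executor = executor;
    execution->result = executor->describeTask();
    return Future{execution};
}
```

Because the state lives in the Execution rather than the Executor, calling exec() twice on the same Executor yields two independent Executions, which is exactly what re-execution and parallel execution require.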
Its lifetime is limited to the end of the function, so we have to copy it first. I switched to QByteArray because it simplifies the code and shouldn't really cost anything; additionally, the implicit sharing makes copies cheap.
This patch incurs the cost of always copying the buffer instead of writing straight to the socket, but we probably want to keep a copy around anyway, and if this did turn out to be a performance issue (I doubt it), we could still optimize that copy away.
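The lifetime bug being fixed here is the classic "pointer into a stack buffer" shape. A minimal std-only illustration (plain C++ standing in for the original Qt code):

```cpp
#include <cassert>
#include <string>

// Unsafe shape: returning a pointer into a buffer whose lifetime ends with
// the function -- exactly the problem the commit describes.
// const char *message() { char buf[64] = "hello"; return buf; } // dangling!

// Safe shape: copy the bytes into an owned container before the buffer
// goes out of scope (QByteArray plays this role in the actual patch, with
// implicit sharing making subsequent copies cheap).
std::string message() {
    char buf[64] = "hello";
    return std::string(buf); // copies while buf is still alive
}
```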
We do indeed have to keep the facade alive; otherwise this starts crashing.
The only reference to the top Executor (the last in the chain) is held by the Job. If the Job goes out of scope after it is executed, the top Executor is deleted (which in turn deletes the entire Executor chain). By having the top Executor hold a reference to itself during execution we ensure that the Executor chain is not deleted until the job is finished, even when the parent Job object is deleted in the meantime.
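The self-reference trick can be sketched with std::enable_shared_from_this (our own minimal types, not the KAsync implementation; the "event loop" is just a vector of callbacks):

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <vector>

// Sketch: while running, the executor stores a shared_ptr to itself, so it
// survives even if its owner drops the last external reference before the
// task completes.
struct Executor : std::enable_shared_from_this<Executor> {
    std::shared_ptr<Executor> m_self; // self-reference held while running
    bool finished = false;

    void start(std::vector<std::function<void()>> &eventLoop) {
        m_self = shared_from_this(); // keep ourselves alive
        eventLoop.push_back([this] {
            finished = true;
            m_self.reset(); // release the self-reference once done
        });
    }
};
```

The self-reference forms a deliberate, temporary ownership cycle; resetting it on completion is what breaks the cycle and finally frees the chain.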
lifetime tests.
There's a chance that the resource actually wanted to shut down. Instead, ResourceAccess should only reopen the connection if it still has work to do.
| |
Otherwise the client always restarts the resource because of the lost connection.
We currently require this in tests to be able to delete the db, but eventually
we likely want a "disable akonadi" function that shuts resources down,
and keeps clients from restarting them (e.g. via configuration).
It is now possible to use KJob-derived jobs with libasync without having to write lambda wrappers:
auto job = Async::start<ReturnType, MyKJob, MyKJob::result, Args ...>()
               .then<ReturnType, OtherKJob, OtherKJob::result, PrevKJobReturnType>();
job.exec(arg1, arg2, ...);
The reason for this approach (instead of taking a KJob* as an argument) is that we usually want the KJob ctor arguments to depend on the result of the previous job. At least in the case of Async::start(), however, it makes sense to support passing a KJob* as an argument (not yet implemented).
In the future we should also support custom error handlers.
The KJob integration is build-time optional, but enabled by default (pass -DWITH_KJOB=FALSE to CMake to disable).
Adds a KCoreAddons dependency.
It is now possible to chain a job that takes no arguments after a job that returns void.
Unfortunately it is not yet possible to disregard the return value of a previous job.
When the user gets a Job (from a method call, for instance) which is already running, or might even have finished already, they can still append a new Job to the chain and re-execute it. The Job will internally chain up to the last finished Job, use its result, and continue from the next Job in the chain. If a Job in the chain is still running, it will wait for it to finish and pass the result to the next Job in the chain.
multiple times
Now it's possible to do something like:
Job<int, int> job = createSomeJob();
auto main = Async::start<int>(....).then(job);
Previously 'job' would have to be wrapped in a ThenTask-like lambda (which is what we still do internally), but with this new syntax it's possible to append another job chain to an existing chain easily. This syntax is available for all task types.
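The underlying composition is ordinary continuation chaining. A minimal std-only sketch (our own toy Job, not the KAsync one):

```cpp
#include <cassert>
#include <functional>

// Sketch: a Job wraps a function In -> Out, and then() appends another Job
// by composing the two functions into a single chain.
template <typename In, typename Out>
struct Job {
    std::function<Out(In)> run;

    template <typename Next>
    Job<In, Next> then(const Job<Out, Next> &next) const {
        auto first = run;
        auto second = next.run;
        // The appended job consumes the result of the previous one.
        return {[first, second](In in) { return second(first(in)); }};
    }

    Out exec(In in) const { return run(in); }
};
```

Wrapping the appended Job in a lambda internally (as the commit says) is exactly what then() does here; the new syntax just hides that wrapper from the user.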
The initial value can be passed in as an argument to Job::exec(), or by another job in the case of job chaining.
All jobs and executors now accept an ErrorHandler argument which will be invoked on error. Error handling itself has been moved to Executor::exec(), so that we don't have to copy-paste the error handling code into every Executor implementation.
Sync executors don't pass an Async::Future into the user-provided tasks; instead they work with the return values of the task methods, wrapping them into the Async::Future internally. Sync tasks have of course always been possible, but now the API for them is much cleaner: users don't have to deal with a "future" in synchronous tasks, for instance when synchronously processing the results of an async task before passing the data to another async task.
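The difference between the two task shapes can be sketched with a toy future of our own (not Async::Future; the executor function names are invented):

```cpp
#include <cassert>
#include <functional>

// Minimal stand-in for a future holding one result.
template <typename T>
struct Future {
    T value{};
    bool finished = false;
    void setResult(T v) { value = v; finished = true; }
};

// Async-style task: receives the future and must complete it itself.
template <typename T>
void execAsyncTask(const std::function<void(Future<T> &)> &task, Future<T> &f) {
    task(f);
}

// Sync-style task: just returns a value; the executor wraps it into the
// future on the caller's behalf, so the task never touches the future.
template <typename T>
void execSyncTask(const std::function<T()> &task, Future<T> &f) {
    f.setResult(task());
}
```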
Equivalent syntax, but follows the standard idiom we use throughout the code: const char *, not char const *.
We now hold executors in shared pointers. We cannot easily delete them, as they are referenced from two objects (the Job they belong to and the next job), and the lifetime of the jobs is unclear.
innerJob.exec() starts an async job, so once exec() returns, innerJob will go out of scope and be deleted. That, however, does not prevent the QTimer from invoking its lambda slot, which will then crash when dereferencing the deleted Future.
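The fix shape for this class of bug is to give the deferred callback shared ownership of the result instead of a pointer into a scope that ends first. A std-only sketch (a plain callback stands in for the QTimer slot; the names are ours):

```cpp
#include <functional>
#include <memory>

struct Future { int result = 0; };

// Sketch: the returned callback captures the shared_ptr by value, so the
// Future stays alive until the "timer" has actually fired, even though the
// scheduling scope (the innerJob in the commit) is long gone by then.
// Capturing a raw pointer or reference to a stack-local Future instead is
// exactly the dangling access described above.
std::function<void()> scheduleWrite(const std::shared_ptr<Future> &future) {
    return [future] { future->result = 1; }; // owns the Future by value
}
```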
Error handlers don't have access to the future, so they can't mark it as finished; we therefore mark it finished after the error handler has run. This ensures that FutureWatchers will finish.
Work for dvratil.