std::async
Defined in header <future>

(1)
template< class Function, class... Args >
std::future<typename std::result_of<Function(Args...)>::type>
    async( Function&& f, Args&&... args );                                      (since C++11) (until C++14)

template< class Function, class... Args >
std::future<std::result_of_t<std::decay_t<Function>(std::decay_t<Args>...)>>
    async( Function&& f, Args&&... args );                                      (since C++14)

(2)
template< class Function, class... Args >
std::future<typename std::result_of<Function(Args...)>::type>
    async( std::launch policy, Function&& f, Args&&... args );                  (since C++11) (until C++14)

template< class Function, class... Args >
std::future<std::result_of_t<std::decay_t<Function>(std::decay_t<Args>...)>>
    async( std::launch policy, Function&& f, Args&&... args );                  (since C++14)
The template function async runs the function f asynchronously (potentially in a separate thread, which may be part of a thread pool) and returns a std::future that will eventually hold the result of that function call.

1) Behaves as if (2) were called with policy set to std::launch::async | std::launch::deferred; in other words, f may be executed in another thread or it may be run synchronously when the resulting std::future is queried for a value.
2) Calls the function f with arguments args according to a specific launch policy policy (see the sketch following this list):

- If the async flag is set (i.e. (policy & std::launch::async) != 0), then async executes the function f on a new thread of execution (with all thread-locals initialized) as if spawned by std::thread(f, args...), except that if the function f returns a value or throws an exception, it is stored in the shared state accessible through the std::future that async returns to the caller.
- If the deferred flag is set (i.e. (policy & std::launch::deferred) != 0), then async converts args... the same way as the std::thread constructor, but does not spawn a new thread of execution. Instead, lazy evaluation is performed: the first call to a non-timed wait function on the std::future that async returned to the caller will cause f(args...) to be executed in the current thread (which does not have to be the thread that originally called std::async). The result or exception is placed in the shared state associated with the future, and only then is it made ready. All further accesses to the same std::future return the result immediately.
- If both the std::launch::async and std::launch::deferred flags are set in policy, it is up to the implementation whether to perform asynchronous execution or lazy evaluation.
- If neither std::launch::async nor std::launch::deferred, nor any implementation-defined policy flag is set in policy, the behavior is undefined. (since C++14)
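For illustration, a minimal sketch of the cases above (the function compute and its argument are placeholders chosen for this example, not part of the library interface):

#include <future>
#include <iostream>

int compute(int x) { return x * 2; } // placeholder task

int main()
{
    // async flag only: compute(21) may start running on a new thread immediately
    std::future<int> fa = std::async(std::launch::async, compute, 21);

    // deferred flag only: compute(21) runs lazily, on the waiting thread, at the first get()/wait()
    std::future<int> fd = std::async(std::launch::deferred, compute, 21);

    // overload (1): behaves as if called with std::launch::async | std::launch::deferred,
    // so the implementation chooses between the two behaviors
    std::future<int> f = std::async(compute, 21);

    std::cout << fa.get() << ' ' << fd.get() << ' ' << f.get() << '\n'; // prints "42 42 42"
}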
In any case, the call to std::async synchronizes-with (as defined in std::memory_order) the call to f, and the completion of f is sequenced-before making the shared state ready. If the async policy is chosen, the associated thread completion synchronizes-with the successful return from the first function that is waiting on the shared state, or with the return of the last function that releases the shared state, whichever comes first.
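A sketch of what these guarantees mean in practice (the lambda and variable names are arbitrary): any write performed by the task is visible to the caller once a wait on the shared state returns.

#include <cassert>
#include <future>

int main()
{
    int side_effect = 0;

    auto fut = std::async(std::launch::async, [&side_effect] { side_effect = 1; });

    fut.get(); // the task's completion synchronizes-with the return from this wait

    // No data race: the write made by the task is guaranteed to be visible here.
    assert(side_effect == 1);
}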
[edit] Parameters
f       | - | Callable object to call
args... | - | parameters to pass to f
policy  | - | bitmask value, where individual bits control the allowed methods of execution
              Bit                    Explanation
              std::launch::async     enable asynchronous evaluation
              std::launch::deferred  enable lazy evaluation
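As with the std::thread constructor, the arguments are decay-copied into storage owned by the call, so a parameter that should refer to the caller's object must be wrapped in std::ref or std::cref. A minimal sketch (the helper append and its arguments are invented for this example):

#include <functional>
#include <future>
#include <iostream>
#include <string>

void append(std::string& out, const std::string& suffix) { out += suffix; } // placeholder callable

int main()
{
    std::string s = "abc";

    // "def" and std::ref(s) are decay-copied; std::ref keeps the first parameter
    // referring to the caller's string instead of to a private copy.
    auto fut = std::async(std::launch::async, append, std::ref(s), "def");
    fut.get();

    std::cout << s << '\n'; // prints "abcdef"
}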
[edit] Return value
std::future referring to the shared state created by this call to std::async.
[edit] Exceptions
Throws std::system_error with error condition std::errc::resource_unavailable_try_again if the launch policy equals std::launch::async and the implementation is unable to start a new thread (if the policy is async | deferred or has additional bits set, it will fall back to deferred or the implementation-defined policies in this case).
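A sketch of handling that failure mode (the task body is arbitrary): because the policy here is plain std::launch::async, the implementation cannot fall back to deferred execution and reports the failure by throwing.

#include <future>
#include <iostream>
#include <system_error>

int main()
{
    try
    {
        auto fut = std::async(std::launch::async, [] { return 1; });
        std::cout << fut.get() << '\n';
    }
    catch (const std::system_error& e)
    {
        // Thrown with std::errc::resource_unavailable_try_again if no new thread could be started.
        std::cerr << "could not start a thread: " << e.what() << '\n';
    }
}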
[edit] Notes
The implementation may extend the behavior of the first overload of std::async by enabling additional (implementation-defined) bits in the default launch policy.
Examples of implementation-defined launch policies are the sync policy (execute immediately, within the async call) and the task policy (similar to async, but thread-locals are not cleared).
If the std::future obtained from std::async has temporary object lifetime (it is not moved from or bound to a variable), the destructor of the std::future blocks at the end of the full expression until the asynchronous operation completes, essentially making code such as the following synchronous:

std::async(std::launch::async, []{ f(); }); // temporary's dtor waits for f()
std::async(std::launch::async, []{ g(); }); // does not start until f() completes
(note that the destructors of std::futures obtained by means other than a call to std::async never block)
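To let two such calls actually overlap, bind the returned futures to variables so that their destructors block only at the end of the enclosing scope. A minimal sketch, with f and g stood in by placeholder tasks:

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

void f() { std::this_thread::sleep_for(std::chrono::milliseconds(100)); std::cout << "f done\n"; }
void g() { std::cout << "g done\n"; } // f and g are placeholder tasks for this sketch

int main()
{
    auto a = std::async(std::launch::async, []{ f(); });
    auto b = std::async(std::launch::async, []{ g(); }); // may run while f() is still executing
}   // a's and b's destructors wait here for both tasks to complete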
[edit] Example
#include <iostream>
#include <vector>
#include <algorithm>
#include <numeric>
#include <future>

template <typename RAIter>
int parallel_sum(RAIter beg, RAIter end)
{
    auto len = end - beg;
    if (len < 1000)
        return std::accumulate(beg, end, 0);

    RAIter mid = beg + len / 2;
    auto handle = std::async(std::launch::async, parallel_sum<RAIter>, mid, end);
    int sum = parallel_sum(beg, mid);
    return sum + handle.get();
}

int main()
{
    std::vector<int> v(10000, 1);
    std::cout << "The sum is " << parallel_sum(v.begin(), v.end()) << '\n';
}
Output:
The sum is 10000