Reference
async/main.hpp
The easiest way to get started with an async application is to use the co_main
function with the following signature:
async::main co_main(int argc, char *argv[]);
Declaring co_main will add a main function that performs all the necessary steps to run a coroutine on an event loop.
This allows us to write very simple asynchronous programs:
async::main co_main(int argc, char *argv[])
{
auto exec = co_await async::this_coro::executor; (1)
asio::steady_timer tim{exec, std::chrono::milliseconds(50)}; (2)
co_await tim.async_wait(async::use_op); (3)
co_return 0;
}
1 | get the executor main is running on |
2 | Use it with an asio object |
3 | co_await an async operation |
The main promise will create an asio::signal_set and use it for cancellation. SIGINT becomes total cancellation, while SIGTERM becomes terminal cancellation.
The cancellation will not be forwarded to detached coroutines. The user will need to take care to end them on cancellation, since the program otherwise doesn't allow graceful termination.
Executor
It will also create an asio::io_context to run on, which you can get through this_coro::executor. It will be assigned to async::this_thread::get_executor().
Memory Resource
It also creates a memory resource that will be used as a default for internal memory allocations. It will be assigned to the thread_local async::this_thread::get_default_resource().
Promise
Every coroutine has an internal state, called promise (not to be confused with async::promise).
Depending on the coroutine properties, different things can be co_await-ed, as we did in the example above.
They are implemented through inheritance and shared among the different promise types.
The main promise has the following properties.
Specification
- Declaring co_main will implicitly declare a main function.
- main is only present when co_main is defined.
- SIGINT and SIGTERM will cause cancellation of the internal task.
async/promise.hpp
A promise is an eager coroutine that can co_await
and co_return
values. That is, it cannot use co_yield
.
async::promise<void> delay(std::chrono::milliseconds ms)
{
asio::steady_timer tim{co_await async::this_coro::executor, ms};
co_await tim.async_wait(async::use_op);
}
async::main co_main(int argc, char *argv[])
{
co_await delay(std::chrono::milliseconds(50));
co_return 0;
}
Promises are attached by default.
This means that a cancellation is sent when the promise handle goes out of scope.
A promise can be detached by calling detach or by using the prefix + operator.
This is a runtime alternative to using detached.
async::promise<void> my_task();
async::main co_main(int argc, char *argv[])
{
+my_task(); (1)
co_await delay(std::chrono::milliseconds(50));
co_return 0;
}
1 | By using + the task gets detached. Without it, the compiler would generate a nodiscard warning. |
Executor
The executor is taken from the thread_local
get_executor function, unless an asio::executor_arg
is used
in any position followed by the executor argument.
async::promise<int> my_gen(asio::executor_arg_t, asio::io_context::executor_type exec_to_use);
Memory Resource
The memory resource is taken from the thread_local
get_default_resource function,
unless a std::allocator_arg
is used in any position followed by a polymorphic_allocator
argument.
async::promise<int> my_gen(std::allocator_arg_t, pmr::polymorphic_allocator<void> alloc);
Outline
template<typename Return>
struct [[nodiscard]] promise
{
promise(promise &&lhs) noexcept;
promise& operator=(promise && lhs) noexcept;
// enable `co_await`. (1)
auto operator co_await ();
// Ignore the return value, i.e. detach it. (2)
void operator +() &&;
// Cancel the promise.
void cancel(asio::cancellation_type ct = asio::cancellation_type::all);
// Check if the result is ready
bool ready() const;
// Check if the promise can be awaited.
explicit operator bool () const; (3)
// Detach or attach
bool attached() const;
void detach();
void attach();
// Get the return value if ready - otherwise throw
Return get();
};
1 | Supports Interrupt Wait |
2 | This allows spawning promises with a simple +my_task() expression. |
3 | This allows code like while (p) co_await p; |
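For illustration, a small sketch (not from the library docs; my_task is a placeholder coroutine) of how ready() and get() can avoid an unnecessary suspension:
async::promise<int> my_task();
async::main co_main(int argc, char *argv[])
{
    auto p = my_task();        // starts eagerly
    if (p.ready())
        co_return p.get();     // result already available, no suspension
    co_return co_await p;      // otherwise suspend until it completes
}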
Promise
The coroutine promise (promise::promise_type
) has the following properties.
async/generator.hpp
A generator is an eager coroutine that can co_await
and co_yield
values to the caller.
async::generator<int> example()
{
printf("In coro 1\n");
co_yield 2;
printf("In coro 3\n");
co_return 4;
}
async::main co_main(int argc, char * argv[])
{
printf("In main 0\n");
auto f = example(); // call and let it run until the first co_yield
printf("In main 1\n");
printf("In main %d\n", co_await f);
printf("In main %d\n", co_await f);
co_return 0;
}
Which will generate the following output:
In main 0
In coro 1
In main 1
In main 2
In coro 3
In main 4
Values can be pushed into the generator, when Push
(the second template parameter) is set to non-void:
async::generator<int, int> example()
{
printf("In coro 1\n");
int i = co_yield 2;
printf("In coro %d\n");
co_return 4;
}
async::main co_main(int argc, char * argv[])
{
printf("In main 0\n");
auto f = example(); // call and let it run until the first co_yield
printf("In main %d\n", co_await f(3)); (1)
co_return 0;
}
1 | The pushed value gets passed through operator() to the result of co_yield . |
Which will generate the following output:
In main 0
In coro 1
In main 2
In coro 3
Lazy
A generator can be turned lazy by awaiting initial.
This co_await
expression will produce the Push
value.
This means the generator will wait until it’s awaited for the first time,
and then process the newly pushed value and resume at the next co_yield.
async::generator<int, int> example()
{
int v = co_await async::this_coro::initial;
printf("In coro %d\n", v);
v = co_yield 2;
printf("In coro %d\n", v);
co_return 4;
}
async::main co_main(int argc, char * argv[])
{
printf("In main 0\n");
auto f = example(); // call and let it run until the first co_yield
printf("In main 1\n"); // < this is now before the co_await initial
printf("In main %d\n", co_await f(1));
printf("In main %d\n", co_await f(3));
co_return 0;
}
Which will generate the following output:
In main 0
In main 1
In coro 1
In main 2
In coro 3
In main 4
Executor
The executor is taken from the thread_local
get_executor function, unless an asio::executor_arg
is used
in any position followed by the executor argument.
async::generator<int> my_gen(asio::executor_arg_t, asio::io_context::executor_type exec_to_use);
Memory Resource
The memory resource is taken from the thread_local
get_default_resource function,
unless a std::allocator_arg
is used in any position followed by a polymorphic_allocator
argument.
async::generator<int> my_gen(std::allocator_arg_t, pmr::polymorphic_allocator<void> alloc);
Outline
template<typename Yield, typename Push = void>
struct [[nodiscard]] generator
{
// Movable
generator(generator &&lhs) noexcept = default;
generator& operator=(generator &&) noexcept = default;
// True until it co_returns & is co_awaited after (1)
explicit operator bool() const;
// Cancel the generator. (3)
void cancel(asio::cancellation_type ct = asio::cancellation_type::all);
// Check if a value is available
bool ready() const;
// Get the return value. Throws if not ready.
Yield get();
// Cancel & detach the generator.
~generator();
// an awaitable that results in a value of Yield.
using generator_awaitable = unspecified;
// Present when Push != void
generator_awaitable operator()( Push && push);
generator_awaitable operator()(const Push & push);
// Present when Push == void, i.e. can co_await the generator directly.
generator_awaitable operator co_await (); (2)
};
1 | This allows code like while (gen) co_await gen; |
2 | Supports Interrupt Wait |
3 | A cancelled generator may still be resumable |
Promise
The generator promise has the following properties.
async/task.hpp
A task is a lazy coroutine that can co_await
and co_return
values. That is, it cannot use co_yield
.
async::task<void> delay(std::chrono::milliseconds ms)
{
asio::steady_timer tim{co_await async::this_coro::executor, ms};
co_await tim.async_wait(async::use_op);
}
async::main co_main(int argc, char *argv[])
{
co_await delay(std::chrono::milliseconds(50));
co_return 0;
}
Unlike a promise, a task can be awaited or spawned on another executor than it was created on.
Executor
Since a task is lazy, it does not need to have an executor on construction.
It rather attempts to take it from the caller or awaiter if present.
Otherwise, it’ll default to the thread_local executor.
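A short sketch of this (compute is an illustrative coroutine, not part of the library): the lazy task only picks up an executor, here co_main's, once it is awaited.
async::task<int> compute()
{
    // the executor here is taken from the awaiter (co_main below),
    // since the task had none at construction
    auto exec = co_await async::this_coro::executor;
    asio::steady_timer tim{exec, std::chrono::milliseconds(10)};
    co_await tim.async_wait(async::use_op);
    co_return 42;
}
async::main co_main(int argc, char *argv[])
{
    printf("%d\n", co_await compute()); // compute() inherits co_main's executor
    co_return 0;
}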
Memory Resource
The memory resource is NOT taken from the thread_local
get_default_resource function,
but pmr::get_default_resource(),
unless a std::allocator_arg
is used in any position followed by a polymorphic_allocator
argument.
async::task<int> my_gen(std::allocator_arg_t, pmr::polymorphic_allocator<void> alloc);
Outline
template<typename Return>
struct [[nodiscard]] task
{
task(task &&lhs) noexcept = default;
task& operator=(task &&) noexcept = default;
// enable `co_await`
auto operator co_await ();
};
Tasks can be used synchronously from a sync function by calling run(my_task()) .
Promise
The task promise has the following properties.
use_task
The use_task
completion token can be used to create a task from an async_
function.
This is less efficient than use_op as it needs to allocate a coroutine frame,
but has a simpler return type and supports Interrupt Wait.
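A minimal sketch, reusing the timer setup from the earlier examples; since the resulting task is lazy, the wait should only be initiated once it is awaited:
async::main co_main(int argc, char *argv[])
{
    asio::steady_timer tim{co_await async::this_coro::executor,
                           std::chrono::milliseconds(50)};
    auto t = tim.async_wait(async::use_task); // creates a task; nothing started yet
    co_await t;                               // initiates the wait and suspends
    co_return 0;
}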
async/detached.hpp
A detached is an eager coroutine that can co_await
but not co_return
values.
That is, it cannot be resumed and is usually not awaited.
async::detached delayed_print(std::chrono::milliseconds ms)
{
asio::steady_timer tim{co_await async::this_coro::executor, ms};
co_await tim.async_wait(async::use_op);
printf("Hello world\n");
}
async::main co_main(int argc, char *argv[])
{
delayed_print();
co_return 0;
}
Detached is used to run coroutines in the background easily.
async::detached my_task();
async::main co_main(int argc, char *argv[])
{
my_task(); (1)
co_await delay(std::chrono::milliseconds(50));
co_return 0;
}
1 | Spawn off the detached coro. |
A detached can assign itself a new cancellation source like this:
async::detached my_task(asio::cancellation_slot sl)
{
co_await this_coro::reset_cancellation_source(sl);
// do some work
}
async::main co_main(int argc, char *argv[])
{
asio::cancellation_signal sig;
my_task(sig.slot());
co_await delay(std::chrono::milliseconds(50));
sig.emit(asio::cancellation_type::all);
co_return 0;
}
Executor
The executor is taken from the thread_local
get_executor function, unless an asio::executor_arg
is used
in any position followed by the executor argument.
async::detached my_gen(asio::executor_arg_t, asio::io_context::executor_type exec_to_use);
Memory Resource
The memory resource is taken from the thread_local
get_default_resource function,
unless a std::allocator_arg
is used in any position followed by a polymorphic_allocator
argument.
async::detached my_gen(std::allocator_arg_t, pmr::polymorphic_allocator<void> alloc);
Promise
The detached promise has the following properties.
async/op.hpp
An async operation is an awaitable wrapping an asio operation.
E.g. this is an async_operation with the completion signature void():
auto op = asio::post(ctx, async::use_op);
Or the async_operation can be templated like this:
auto op = [&ctx](auto token) {return asio::post(ctx, std::move(token)); };
use_op
The use_op token is the direct way to create an op,
i.e. using async::use_op as the completion token will create the required awaitable.
It also supports as_default_on, so that async_ops can be awaited without the token:
auto tim = async::use_op.as_default_on(asio::steady_timer{co_await async::this_coro::executor});
co_await tim.async_wait();
Depending on the completion signature the co_await
expression may throw.
Signature | Return type | Exception
---|---|---
void() | void | noexcept
void(T) | T | noexcept
void(T...) | std::tuple<T...> | noexcept
void(system::error_code, T) | T | system::system_error
void(system::error_code, T...) | std::tuple<T...> | system::system_error
void(std::exception_ptr, T) | T | any exception
void(std::exception_ptr, T...) | std::tuple<T...> | any exception
use_op will never complete immediately, i.e. await_ready will always return false and the coroutine will always suspend.
Hand coded Operations
Operations are a more advanced implementation of the async/op.hpp feature.
This library makes it easy to create asynchronous operations with an early completion condition, i.e. a condition that avoids suspension of coroutines altogether.
We can for example create a wait_op
that does nothing if the timer is already expired.
struct wait_op : async::op<system::error_code> (1)
{
asio::steady_timer & tim;
wait_op(asio::steady_timer & tim) : tim(tim) {}
void ready(async::handler<system::error_code> h) (2)
{
if (tim.expiry() < std::chrono::steady_clock::now())
h(system::error_code{});
}
void initiate(async::completion_handler<system::error_code> complete) (3)
{
tim.async_wait(std::move(complete));
}
};
1 | Inherit op with the matching signature; await_transform picks it up |
2 | Check if the operation is ready - called from await_ready |
3 | Initiate the async operation if it's not ready. |
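A usage sketch for the wait_op above; if the timer has already expired, the co_await should complete without suspending:
async::main co_main(int argc, char *argv[])
{
    asio::steady_timer tim{co_await async::this_coro::executor,
                           std::chrono::milliseconds(50)};
    co_await wait_op{tim}; // completes immediately if the timer already expired
    co_return 0;
}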
async/concepts.hpp
Awaitable
An awaitable is an expression that can be used with co_await
.
template<typename Awaitable, typename Promise = void>
concept awaitable_type = requires (Awaitable aw, std::coroutine_handle<Promise> h)
{
{aw.await_ready()} -> std::convertible_to<bool>;
{aw.await_suspend(h)};
{aw.await_resume()};
};
template<typename Awaitable, typename Promise = void>
concept awaitable =
awaitable_type<Awaitable, Promise>
|| requires (Awaitable && aw) { {std::forward<Awaitable>(aw).operator co_await()} -> awaitable_type<Promise>;}
|| requires (Awaitable && aw) { {operator co_await(std::forward<Awaitable>(aw))} -> awaitable_type<Promise>;};
Awaitables in this library require that the coroutine promise return their executor by const reference if they provide one. Otherwise it'll use this_thread::get_executor().
Enable awaitables
Inheriting enable_awaitables
will enable a coroutine to co_await anything through await_transform
that would be co_await
-able in the absence of any await_transform
.
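A minimal sketch, assuming the same CRTP style as the enable_await_* bases shown below (my_promise is an illustrative promise type):
struct my_promise : async::enable_awaitables<my_promise>
{
    using async::enable_awaitables<my_promise>::await_transform;
    // ... remaining coroutine promise members
};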
async/this_coro.hpp
The this_coro
namespace provides utilities to access the internal state of a coroutine promise.
Pseudo-awaitables:
// Awaitable type that returns the executor of the current coroutine.
struct executor_t {};
constexpr executor_t executor;
// Awaitable type that returns the cancellation state of the current coroutine.
struct cancellation_state_t {};
constexpr cancellation_state_t cancellation_state;
// Reset the cancellation state with custom or default filters.
constexpr unspecified reset_cancellation_state();
template<typename Filter>
constexpr unspecified reset_cancellation_state(
Filter && filter);
template<typename InFilter, typename OutFilter>
constexpr unspecified reset_cancellation_state(
InFilter && in_filter,
OutFilter && out_filter);
// get & set the throw_if_cancelled setting.
unspecified throw_if_cancelled();
unspecified throw_if_cancelled(bool value);
// Set the cancellation source in a detached.
unspecified reset_cancellation_source();
unspecified reset_cancellation_source(asio::cancellation_slot slot);
// get the allocator of the promise
struct allocator_t {};
constexpr allocator_t allocator;
// get the current cancellation state-type
struct cancelled_t {};
constexpr cancelled_t cancelled;
// make a generator lazy and await its first pushed value
struct initial_t {};
constexpr initial_t initial;
Await Allocator
The allocator of a coroutine supporting enable_await_allocator
can be obtained the following way:
co_await async::this_coro::allocator;
In order to enable this for your own coroutine you can inherit enable_await_allocator
with the CRTP pattern:
struct my_promise : async::enable_await_allocator<my_promise>
{
using allocator_type = __your_allocator_type__;
allocator_type get_allocator();
};
If available, the allocator gets used by use_op.
Await Executor
The executor of a coroutine supporting enable_await_executor can be obtained the following way:
co_await async::this_coro::executor;
In order to enable this for your own coroutine you can inherit enable_await_executor
with the CRTP pattern:
struct my_promise : async::enable_await_executor<my_promise>
{
using executor_type = __your_executor_type__;
executor_type get_executor();
};
If available, the executor gets used by use_op.
Memory resource base
The promise_memory_resource_base base provides a get_allocator function in the promise, taken from either the default resource or one passed following a std::allocator_arg argument.
Likewise, it adds operator new overloads so the coroutine uses the same memory resource for its frame allocation.
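A sketch (names illustrative) of passing a custom resource via std::allocator_arg so the coroutine frame is allocated from it:
async::promise<void> my_task(std::allocator_arg_t, pmr::polymorphic_allocator<void> alloc);
async::main co_main(int argc, char *argv[])
{
    char buffer[4096];
    pmr::monotonic_buffer_resource resource{buffer, sizeof(buffer)};
    co_await my_task(std::allocator_arg, pmr::polymorphic_allocator<void>(&resource));
    co_return 0;
}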
Throw if cancelled
The promise_throw_if_cancelled_base provides the option to have a coroutine throw an exception when it awaits another awaitable while it has already been cancelled.
co_await async::this_coro::throw_if_cancelled;
Cancellation state
The promise_cancellation_base provides a cancellation_state for the coroutine that can be reset with reset_cancellation_state:
co_await async::this_coro::reset_cancellation_state();
For convenience there is also a short-cut to check the current cancellation status:
asio::cancellation_type ct = (co_await async::this_coro::cancellation_state).cancelled();
asio::cancellation_type ct = co_await async::this_coro::cancelled; // same as above
async/this_thread.hpp
Since everything is single threaded this library provides an executor & default memory-resource for every thread.
namespace boost::async::this_thread
{
pmr::memory_resource* get_default_resource() noexcept; (1)
pmr::memory_resource* set_default_resource(pmr::memory_resource* r) noexcept; (2)
pmr::polymorphic_allocator<void> get_allocator(); (3)
asio::io_context::executor_type & get_executor(); (4)
void set_executor(asio::io_context::executor_type exec) noexcept; (5)
}
1 | Get the default resource - will be pmr::get_default_resource unless set |
2 | Set the default resource - returns the previously set one |
3 | Get an allocator wrapping (1) |
4 | Get the executor of the thread - throws if not set |
5 | Set the executor of the current thread. |
The coroutines will use these as defaults, but keep a copy just in case.
The only exception is the initialization of an async operation, which will use the this_thread::executor to rethrow from.
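A rough sketch of a plain main() that sets these thread-local defaults itself before running coroutines on a manually driven io_context (the pool resource is just an example choice):
int main(int argc, char *argv[])
{
    asio::io_context ctx;
    async::this_thread::set_executor(ctx.get_executor());
    pmr::unsynchronized_pool_resource pool;
    auto * previous = async::this_thread::set_default_resource(&pool);
    // ... spawn coroutines that pick up these defaults ...
    ctx.run();
    async::this_thread::set_default_resource(previous); // restore the old resource
    return 0;
}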
async/channel.hpp
Channels can be used to exchange data between different coroutines on a single thread.
Outline
template<typename T>
struct channel
{
// create a channel with a buffer limit, executor & resource.
explicit
channel(std::size_t limit = 0u,
executor executor = this_thread::get_executor(),
pmr::memory_resource * resource = this_thread::get_default_resource());
// movable. moving with active operations is undefined behaviour.
channel(channel && ) noexcept = default;
channel & operator=(channel && lhs) noexcept = delete;
using executor_type = executor;
const executor_type & get_executor();
// Closes the channel
~channel();
bool is_open() const;
// close the channel, will cancel all pending ops, too
void close();
// an awaitable that yields T
using read_op = unspecified;
// an awaitable that yields void
using write_op = unspecified;
// read a value from the channel
read_op read();
// write a value to the channel
write_op write(const T && value);
write_op write(const T & value);
write_op write( T && value);
write_op write( T & value);
// write a value to the channel if T is void
write_op write();
};
Description
Channels are a tool for two coroutines to communicate and synchronize.
const std::size_t buffer_size = 2;
channel<int> ch{buffer_size, exec};
// in coroutine (1)
co_await ch.write(42);
// in coroutine (2)
auto val = co_await ch.read();
1 | Send a value to the channel - will block until it can be sent |
2 | Read a value from the channel - will block until a value is available. |
Both operations may block depending on the channel buffer size.
If the buffer size is zero, a read
& write
will need to occur at the same time,
i.e. act as a rendezvous.
If the buffer is not full, the write operation will not suspend the coroutine; likewise if the buffer is not empty, the read operation will not suspend.
If two operations complete at once (as is always the case with an empty buffer), the second operation gets posted to the executor for later completion.
A channel type can be void, in which case write takes no parameter.
The channel operations can be cancelled without losing data. This makes them usable with select.
generator<variant2::variant<int, double>> merge(
channel<int> & c1,
channel<double> & c2)
{
while (c1.is_open() && c2.is_open())
co_yield co_await select(c1.read(), c2.read());
}
Example
async::promise<void> producer(async::channel<int> & chan)
{
for (int i = 0; i < 4; i++)
co_await chan.write(i);
chan.close();
}
async::main co_main(int argc, char * argv[])
{
async::channel<int> c;
auto p = producer(c);
while (c.is_open())
std::cout << co_await c.read() << std::endl;
co_await p;
co_return 0;
}
Additionally, a channel_reader
is provided to make reading channels more convenient & usable with
BOOST_ASYNC_FOR.
async::main co_main(int argc, char * argv[])
{
async::channel<int> c;
auto p = producer(c);
BOOST_ASYNC_FOR(int value, async::channel_reader(c))
std::cout << value << std::endl;
co_await p;
co_return 0;
}
async/with.hpp
The with
facility provides a way to perform asynchronous tear-down of coroutines.
That is, it is like an asynchronous destructor call.
struct my_resource
{
async::promise<void> await_exit(std::exception_ptr e);
};
async::promise<void> work(my_resource & res);
async::promise<void> outer()
{
co_await async::with(my_resource(), &work);
}
The teardown can be done either by providing an await_exit member function, by providing a tag_invoke function that returns an awaitable, or by passing the teardown as the third argument to with.
using ws_stream = beast::websocket::stream<asio::ip::tcp::socket>;
async::promise<ws_stream> connect(urls::url); (1)
async::promise<void> disconnect(ws_stream &ws); (2)
auto teardown(const boost::async::with_exit_tag & wet, ws_stream & ws, std::exception_ptr e)
{
return disconnect(ws);
}
async::promise<void> run_session(ws_stream & ws);
async::main co_main(int argc, char * argv[])
{
co_await async::with(co_await connect(argv[1]), &run_session, &teardown);
co_return 0;
}
1 | Implement websocket connect & websocket initiation |
2 | Implement an orderly shutdown. |
The std::exception_ptr is null if the scope is exited without an exception.
NOTE: It's legal for the exit functions to take the exception_ptr by reference and modify it.
async/select.hpp
The select
function can be used to co_await
one awaitable out of a set of them.
It can be called as a variadic function with multiple awaitables or on a range of awaitables.
async::promise<void> task1();
async::promise<void> task2();
async::promise<void> do_wait()
{
co_await async::select(task1(), task2()); (1)
std::vector<async::promise<void>> aws {task1(), task2()};
co_await async::select(aws); (2)
}
1 | Wait for a variadic set of awaitables |
2 | wait for a vector of awaitables |
The first parameter to select can be a uniform random bit generator.
extern promise<void> pv1, pv2;
std::vector<promise<void>> pvv;
std::mt19937 rdm{1};
// if everything returns void select returns the index
std::size_t r1 = co_await select(pv1, pv2);
std::size_t r2 = co_await select(rdm, pv1, pv2);
std::size_t r3 = co_await select(pvv);
std::size_t r4 = co_await select(rdm, pvv);
// variant if not everything is void. void becomes monostate
extern promise<int> pi1, pi2;
variant2::variant<monostate, int, int> r5 = co_await select(pv1, pi1, pi2);
variant2::variant<monostate, int, int> r6 = co_await select(rdm, pv1, pi1, pi2);
// a range returns a pair of the index and the result if non-void
std::vector<promise<int>> piv;
std::pair<std::size_t, int> r7 = co_await select(piv);
std::pair<std::size_t, int> r8 = co_await select(rdm, piv);
Interrupt Wait
When arguments are passed as rvalue references, select will attempt to use .interrupt_await
on the awaitable to detach the awaitables that have not completed. If supported, the awaitable must complete immediately.
If the select
doesn’t detect the immediate completion, it will send a cancellation.
This means that you can reuse select like this:
async::promise<void> do_wait()
{
auto t1 = task1();
auto t2 = task2();
co_await async::select(t1, t2); (1)
co_await async::select(t1, t2); (2)
}
1 | Wait for the first task to complete |
2 | Wait for the other task to complete |
The select
will invoke the functions of the awaitable
as if used in a co_await
expression
or not evaluate them at all.
left_select
The left_select
functions are like select
but follow a strict left-to-right scan.
This can lead to starvation issues, which is why this is not the recommended default, but can
be useful for prioritization if proper care is taken.
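A short prioritization sketch, reusing the task1/task2 declarations from the example above; t1 is always checked first:
async::promise<void> do_left_select()
{
    auto t1 = task1(); // preferred: scanned first on every completion
    auto t2 = task2();
    co_await async::left_select(t1, t2);
}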
Outline
// Concept for the random number generator.
template<typename G>
concept uniform_random_bit_generator =
requires ( G & g)
{
{typename std::decay_t<G>::result_type() } -> std::unsigned_integral; // is an unsigned integer type
// Returns the smallest value that G's operator() may return. The value is strictly less than G::max(). The function must be constexpr.
{std::decay_t<G>::min()} -> std::same_as<typename std::decay_t<G>::result_type>;
// Returns the largest value that G's operator() may return. The value is strictly greater than G::min(). The function must be constexpr.
{std::decay_t<G>::max()} -> std::same_as<typename std::decay_t<G>::result_type>;
{g()} -> std::same_as<typename std::decay_t<G>::result_type>;
} && (std::decay_t<G>::max() > std::decay_t<G>::min());
// Variadic select with a custom random number generator
template<asio::cancellation_type Ct = asio::cancellation_type::all,
uniform_random_bit_generator URBG, awaitable ... Promise>
awaitable select(URBG && g, Promise && ... p);
// Ranged select with a custom random number generator
template<asio::cancellation_type Ct = asio::cancellation_type::all,
uniform_random_bit_generator URBG, range<awaitable> PromiseRange>
awaitable select(URBG && g, PromiseRange && p);
// Variadic select with the default random number generator
template<asio::cancellation_type Ct = asio::cancellation_type::all, awaitable... Promise>
awaitable select(Promise && ... p);
// Ranged select with the default random number generator
template<asio::cancellation_type Ct = asio::cancellation_type::all, range<awaitable> PromiseRange>
awaitable select(PromiseRange && p);
// Variadic left select
template<asio::cancellation_type Ct = asio::cancellation_type::all, awaitable... Promise>
awaitable left_select(Promise && ... p);
// Ranged left select
template<asio::cancellation_type Ct = asio::cancellation_type::all, range<awaitable> PromiseRange>
awaitable left_select(PromiseRange && p);
Selecting an empty range will cause an exception to be thrown.
async/gather.hpp
The gather
function can be used to co_await
multiple awaitables
at once with cancellations being passed through.
The function will gather all completions and return them as system::result,
i.e. capture exceptions as values. One awaitable throwing an exception will not cancel the others.
It can be called as a variadic function with multiple awaitables or on a range of awaitables.
async::promise<void> task1();
async::promise<void> task2();
async::promise<void> do_gather()
{
co_await async::gather(task1(), task2()); (1)
std::vector<async::promise<void>> aws {task1(), task2()};
co_await async::gather(aws); (2)
}
1 | Wait for a variadic set of awaitables |
2 | Wait for a vector of awaitables |
The gather
will invoke the functions of the awaitable
as if used in a co_await
expression.
extern promise<void> pv1, pv2;
std::tuple<system::result<int>, system::result<int>> r1 = co_await gather(pv1, pv2);
std::vector<promise<void>> pvv;
pmr::vector<system::result<void>> r2 = co_await gather(pvv);
extern promise<int> pi1, pi2;
std::tuple<system::result<monostate>,
system::result<monostate>,
system::result<int>,
system::result<int>> r3 = co_await gather(pv1, pv2, pi1, pi2);
std::vector<promise<int>> piv;
pmr::vector<system::result<int>> r4 = co_await gather(piv);
Outline
// Variadic gather
template<asio::cancellation_type Ct = asio::cancellation_type::all, awaitable... Promise>
awaitable gather(Promise && ... p);
// Ranged gather
template<asio::cancellation_type Ct = asio::cancellation_type::all, range<awaitable> PromiseRange>
awaitable gather(PromiseRange && p);
async/join.hpp
The join function can be used to co_await multiple awaitables at once with properly connected cancellations.
The function will gather all completions and return them as values, unless an exception is thrown. If an exception is thrown, all outstanding ops are cancelled (or detached if possible) and the first exception gets rethrown.
void will be returned as variant2::monostate in the tuple, unless all awaitables yield void.
It can be called as a variadic function with multiple awaitables or on a range of awaitables.
async::promise<void> task1();
async::promise<void> task2();
async::promise<void> do_join()
{
co_await async::join(task1(), task2()); (1)
std::vector<async::promise<void>> aws {task1(), task2()};
co_await async::join(aws); (2)
}
1 | Wait for a variadic set of awaitables |
2 | Wait for a vector of awaitables |
The join
will invoke the functions of the awaitable
as if used in a co_await
expression.
extern promise<void> pv1, pv2;
/* void */ co_await join(pv1, pv2);
std::vector<promise<void>> pvv;
/* void */ co_await join(pvv);
extern promise<int> pi1, pi2;
std::tuple<monostate, monostate, int, int> r1 = co_await join(pv1, pv2, pi1, pi2);
std::vector<promise<int>> piv;
pmr::vector<int> r2 = co_await join(piv);
Outline
// Variadic join
template<asio::cancellation_type Ct = asio::cancellation_type::all, awaitable... Promise>
awaitable join(Promise && ... p);
// Ranged join
template<asio::cancellation_type Ct = asio::cancellation_type::all, range<awaitable> PromiseRange>
awaitable join(PromiseRange && p);
Joining an empty range will cause an exception.
async/wait_group.hpp
The wait_group
function can be used to manage
multiple coroutines of type promise<void>
.
It works out of the box with async/with.hpp, by having the matching await_exit
member.
Essentially, a wait_group
is a dynamic list of
promises that has a select
function (wait_one
),
a gather
function (wait_all
) and will clean up on scope exit.
struct wait_group
{
// create a wait_group
explicit
wait_group(asio::cancellation_type normal_cancel = asio::cancellation_type::none,
asio::cancellation_type exception_cancel = asio::cancellation_type::all);
// insert a task into the group
void push_back(promise<void> p);
// the number of tasks in the group
std::size_t size() const;
// remove completed tasks without waiting (i.e. zombie tasks)
std::size_t reap();
// cancel all tasks
void cancel(asio::cancellation_type ct = asio::cancellation_type::all);
// wait for one task to complete.
wait_one_op wait_one();
// wait for all tasks to complete
wait_op wait();
// wait for all tasks to complete
wait_op operator co_await ();
// when used with with, this will receive the exception
// and wait for the completion.
// if ep is set, this will use the exception_cancel level,
// otherwise the normal_cancel to cancel all promises.
wait_op await_exit(std::exception_ptr ep);
};
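A usage sketch combining wait_group with async::with (handle_client is an illustrative coroutine, not part of the library); await_exit awaits or cancels the remaining promises on scope exit:
async::promise<void> handle_client(int id);
async::promise<void> serve()
{
    co_await async::with(
        async::wait_group(),
        [](async::wait_group & wg) -> async::promise<void>
        {
            for (int i = 0; i < 3; i++)
                wg.push_back(handle_client(i));
            co_await wg.wait_one(); // wait for the first promise to complete
        });
}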
async/spawn.hpp
The spawn functions allow running a task on an asio executor/execution_context
and consuming the result with a completion token.
auto spawn(Context & context, task<T> && t, CompletionToken&& token);
auto spawn(Executor executor, task<T> && t, CompletionToken&& token);
Spawn will dispatch its initiation and post the completion. That makes it safe to use spawn to run the task on another executor and consume the result on the current one with use_op.
Example
async::task<int> work();
int main(int argc, char *argv[])
{
asio::io_context ctx{BOOST_ASIO_CONCURRENCY_HINT_1};
auto f = spawn(ctx, work(), asio::use_future);
ctx.run();
return f.get();
}
The caller needs to make sure that the executor is not running on multiple threads concurrently, e.g. by using a single-threaded asio::io_context or a strand.
async/run.hpp
The run
function is similar to spawn but running synchronously.
It will internally setup an execution context and the memory resources.
This can be useful when integrating a piece of async code into a synchronous application.
Outline
// Run the task and return its value or rethrow any exception.
T run(task<T> t);
Example
async::task<int> work();
int main(int argc, char *argv[])
{
return run(work());
}
async/thread.hpp
The thread type is another way to create an environment that is similar to main
, but doesn’t use a signal_set
.
async::thread my_thread()
{
auto exec = co_await async::this_coro::executor; (1)
asio::steady_timer tim{exec, std::chrono::milliseconds(50)}; (2)
co_await tim.async_wait(async::use_op); (3)
co_return 0;
}
1 | get the executor the thread is running on |
2 | Use it with an asio object |
3 | co_await an async operation |
A thread can be used like a std::thread:
int main(int argc, char * argv[])
{
auto thr = my_thread();
thr.join();
return 0;
}
A thread is also an awaitable
(including cancellation).
async::main co_main(int argc, char * argv[])
{
auto thr = my_thread();
co_await thr;
co_return 0;
}
Destructing a detached thread will cause a hard stop (io_context::stop) and join the thread.
Nothing in this library, except for awaiting a thread (async/thread.hpp) and spawn (async/spawn.hpp), is thread-safe.
If you need to transfer data across threads, you'll need a thread-safe utility like asio::concurrent_channel.
You cannot share any async primitives between threads,
with the sole exception of being able to spawn a task onto another thread's executor.
Executor
It will also create an asio::io_context to run on, which you can get through this_coro::executor. It will be assigned to async::this_thread::get_executor().
Memory Resource
It also creates a memory resource that will be used as a default for internal memory allocations. It will be assigned to the thread_local async::this_thread::get_default_resource().
Outline
struct thread
{
// Send a cancellation signal
void cancel(asio::cancellation_type type = asio::cancellation_type::all);
// Add the functions similar to `std::thread`
void join();
bool joinable() const;
void detach();
// Allow the thread to be awaited
auto operator co_await() & -> detail::thread_awaitable; (1)
auto operator co_await() && -> detail::thread_awaitable; (2)
// Stops the io_context & joins the thread
~thread();
/// Move constructible
thread(thread &&) noexcept = default;
using executor_type = executor;
using id = std::thread::id;
id get_id() const noexcept;
executor_type get_executor() const;
};
1 | Supports Interrupt Wait |
2 | Always forward cancel |
Promise
The thread promise has the following properties.
async/result.hpp
Awaitables can be modified to return system::result
or
std::tuple
instead of using exceptions.
// value only
T res = co_await foo();
// as result
system::result<T, std::exception_ptr> res = co_await async::as_result(foo());
// as tuple
std::tuple<std::exception_ptr, T> res = co_await async::as_tuple(foo());
Awaitables can also provide custom ways to handle results and tuples,
by providing await_resume overloads using async::as_result_tag and async::as_tuple_tag:
your_result_type await_resume(async::as_result_tag);
your_tuple_type await_resume(async::as_tuple_tag);
This allows an awaitable to provide other error types than std::exception_ptr
,
for example system::error_code
. This is done by op and channel.
// example of an op with result system::error_code, std::size_t
system::result<std::size_t> await_resume(async::as_result_tag);
std::tuple<system::error_code, std::size_t> await_resume(async::as_tuple_tag);
Awaitables are still allowed to throw exceptions, e.g. for critical exceptions such as OOM.
async/async_for.hpp
For types like generators a BOOST_ASYNC_FOR
macro is provided, to emulate an async for
loop.
async::generator<int> gen();
async::main co_main(int argc, char * argv[])
{
BOOST_ASYNC_FOR(auto i, gen())
printf("Generated value %d\n", i);
co_return 0;
}
async/error.hpp
In order to make errors easier to manage, async provides an error_category
to be used with
boost::system::error_code
.
enum class error
{
moved_from,
detached,
completed_unexpected,
wait_not_ready,
already_awaited,
allocation_failed
};
system::error_category & async_category();
system::error_code make_error_code(error e);
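A small sketch of converting one of these enumerators into an error_code and inspecting it:
void print_error()
{
    system::error_code ec = async::make_error_code(async::error::detached);
    printf("%s: %s\n", ec.category().name(), ec.message().c_str());
}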
async/config.hpp
The config header allows configuring some implementation details of boost.async.
executor_type
The executor type defaults to boost::asio::any_io_executor
.
You can change it by defining BOOST_ASYNC_CUSTOM_EXECUTOR
and providing a boost::async::executor type yourself.
Alternatively, BOOST_ASYNC_USE_IO_CONTEXT
can be defined
to set the executor to boost::asio::io_context::executor_type
.
pmr
Boost.async can be used with different pmr implementations, defaulting to std::pmr
.
The following macros can be used to configure it:
- BOOST_ASYNC_USE_STD_PMR
- BOOST_ASYNC_USE_BOOST_CONTAINER_PMR
- BOOST_ASYNC_USE_CUSTOM_PMR
If you define BOOST_ASYNC_USE_CUSTOM_PMR
you will need to provide a boost::async::pmr
namespace,
that is a drop-in replacement for std::pmr
.
Alternatively, the pmr use can be disabled with BOOST_ASYNC_NO_PMR.
In this case, async will use a non-pmr monotonic resource for the synchronization functions (select, gather and join).
use_op uses a small-buffer-optimized resource whose size can be set by defining
BOOST_ASYNC_SBO_BUFFER_SIZE; it defaults to 4096 bytes.
async/leaf.hpp
Async provides integration with boost.leaf. It provides functions similar to leaf that take an awaitable instead of a function object and return an awaitable.
template<awaitable TryAwaitable, typename ... H >
auto try_catch(TryAwaitable && try_coro, H && ... h );
template<awaitable TryAwaitable, typename ... H >
auto try_handle_all(TryAwaitable && try_coro, H && ... h );
template<awaitable TryAwaitable, typename ... H >
auto try_handle_some(TryAwaitable && try_coro, H && ... h );
See the leaf documentation for details.
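A hedged sketch of try_handle_all with an awaitable; e_failure is an illustrative error type, not part of the library, and the final handler is the catch-all that try_handle_all requires:
struct e_failure { int code; };
async::promise<int> might_fail();
async::main co_main(int argc, char * argv[])
{
    int res = co_await async::try_handle_all(
        might_fail(),
        [](const e_failure & e) { return e.code; }, // handle a specific leaf error
        []                      { return -1; });    // catch-all handler
    co_return res;
}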