Asynchronous Tasks

This guide explains how asynchronous tasks work and how to use them.

Overview

Tasks are the smallest unit of sequential code execution in Async. Tasks can create other tasks, and Async tracks the parent-child relationships between tasks. When a parent task is stopped, it will also stop all of its child tasks. The reactor always starts with one root task.

graph LR
	R[Reactor] --> WS[Web Server Task]
	WS --> R1[Request 1 Task]
	WS --> R2[Request 2 Task]
	R1 --> Q1[Database Query Task]
	R1 --> H1[HTTP Client Request Task]
	R2 --> H2[HTTP Client Request Task]
	R2 --> H3[HTTP Client Request Task]
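
For example, stopping a parent task also stops any child tasks it created. Here is a minimal sketch of that behaviour (the sleep durations are illustrative only):

require 'async'

Async do
	parent = Async do
		# This child task belongs to `parent`:
		Async do
			sleep(10)
			puts "This line is never reached."
		end
		
		sleep(10)
	end
	
	# Stopping the parent also stops its child:
	parent.stop
end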

How are they different from fibers?

A fiber is a lightweight unit of execution that can be suspended and resumed at specific points. After a fiber is suspended, it can be resumed later at the same point with the same execution state. Because only one fiber can execute at a time, they are often referred to as a mechanism for cooperative concurrency.

A task provides extra functionality on top of fibers. A task behaves like a promise: it either succeeds with a value or fails with an exception. Tasks keep track of their parent-child relationships, and when a parent task is stopped, it will also stop all of its child tasks. This makes it easier to build complex programs out of many concurrent tasks.
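
To make the distinction concrete, here is a minimal sketch contrasting a bare fiber, which only supports being suspended and resumed, with a task, which also produces a result you can wait on (the values are illustrative only):

require 'async'

# A bare fiber can only be suspended and resumed:
fiber = Fiber.new do
	Fiber.yield "suspended"
	"resumed"
end

puts fiber.resume # => suspended
puts fiber.resume # => resumed

# A task additionally behaves like a promise, producing a value or an exception:
Async do
	task = Async do
		:answer
	end
	
	puts task.wait # => answer
end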

Why does Async manipulate tasks and not fibers?

The Async::Scheduler class actually works directly with fibers for most operations and isn't aware of tasks. However, the reactor does maintain a tree of tasks in order to manage the task and reactor life-cycles.

Task Lifecycle

Tasks represent units of work which are executed according to the following state transition diagram:

stateDiagram-v2
	[*] --> initialized : Task.new
	initialized --> running : run
	running --> failed : unhandled StandardError-derived exception
	running --> complete : user code finished
	running --> stopped : stop
	initialized --> stopped : stop
	failed --> [*]
	complete --> [*]
	stopped --> [*]

Tasks are created in the initialized state, and are run by the reactor. During execution, a task can either complete successfully, fail with an unhandled StandardError-derived exception, or be explicitly stopped. In all of these cases, you can wait for a task to finish by using Async::Task#wait; all three outcomes are sketched in the example after this list.

  1. If the task completed successfully, the result will be the value of the last expression evaluated in the task.
  2. If the task failed with an unhandled StandardError-derived exception, waiting on the task will re-raise that exception.
  3. If the task was stopped, the result will be nil.
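
Here is a minimal sketch of all three outcomes (the values and messages are illustrative only):

require 'async'

Async do
	# 1. Successful completion: the result is the value of the last expression.
	completed = Async{:value}
	puts completed.wait.inspect # => :value
	
	# 2. Failure: waiting on the task re-raises the unhandled exception.
	failed = Async{raise "Boom"}
	begin
		failed.wait
	rescue => error
		puts error.message # => Boom
	end
	
	# 3. Stopped: the result is nil.
	stopped = Async{sleep}
	stopped.stop
	puts stopped.wait.inspect # => nil
end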

Starting A Task

At any point in your program, you can start a reactor and a root task using the Kernel#Async method:

Async do
	1.upto(3) do |i|
		sleep(i)
		puts "Hello World #{i}"
	end
end

This program prints "Hello World" 3 times. Before printing, it sleeps for 1, then 2, then 3 seconds. The total execution time is 6 seconds because the program executes sequentially.

By using a nested task, we can ensure that each iteration of the loop creates a new task which runs concurrently.

Async do
	1.upto(3) do |i|
		Async do
			sleep(i)
			puts "Hello World #{i}"
		end
	end
end

Instead of taking 6 seconds, this program takes 3 seconds in total. The main loop executes, rapidly creating 3 child tasks, and then each child task sleeps for 1, 2 and 3 seconds respectively before printing "Hello World".

graph LR
	R[Reactor] --> TT[Initial Task]
	TT --> H1[Hello World 1 Task]
	TT --> H2[Hello World 2 Task]
	TT --> H3[Hello World 3 Task]

By structuring your program around concurrent tasks, it's easy to implement a concurrent map-reduce:

Async do
	# Map (create several concurrent tasks)
	users_size = Async{User.size}
	posts_size = Async{Post.size}
	
	# Reduce (wait for and merge the results)
	average = posts_size.wait / users_size.wait
	puts "#{users_size.wait} users created #{average} posts on average."
end

Performance Considerations

Task creation and execution have been heavily optimised. Do not add program complexity just to avoid creating tasks; the added complexity will almost always cost more than the tasks you save.

Do consider using the appropriate concurrency primitives, like Async::Semaphore and Async::Barrier, to ensure your program is well-behaved in the presence of large inputs (i.e. don't create an unbounded number of tasks).

Starting a Limited Number of Tasks

When processing potentially unbounded data, you may want to limit the concurrency using Async::Semaphore.

Async do
	# Create a semaphore with a limit of 2:
	semaphore = Async::Semaphore.new(2)
	
	file.each_line do |line|
		semaphore.async do
			# Only two tasks at most will be allowed to execute concurrently:
			process(line)
		end
	end
end

Waiting for Tasks

Waiting for a single task is trivial: simply invoke Async::Task#wait. To wait for multiple tasks, you can either call Async::Task#wait on each in turn, or use an Async::Barrier. Async::Barrier#async creates multiple child tasks, and Async::Barrier#wait waits for them all to complete.
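
Waiting on each task in turn might look like this (a minimal sketch; jobs and process are placeholders for your own data and work):

Async do
	tasks = jobs.map do |job|
		Async do
			process(job)
		end
	end
	
	# Wait for every task and collect the results:
	results = tasks.map(&:wait)
end

Using a barrier instead: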

barrier = Async::Barrier.new

Async do
	jobs.each do |job|
		barrier.async do
			# ... process job ...
		end
	end
	
	# Wait for all jobs to complete:
	barrier.wait
end

Waiting for the First N Tasks

Occasionally, you may only need to wait for the first task (or the first several tasks) to complete. You can use a combination of Async::Waiter and Async::Barrier to control this:

barrier = Async::Barrier.new
waiter = Async::Waiter.new(parent: barrier)

Async do
	jobs.each do |job|
		waiter.async do
			# ... process job ...
		end
	end
	
	# Wait for the first two jobs to complete:
	done = waiter.wait(2)
	
	# You may use the barrier to stop the remaining jobs
	barrier.stop
end

Combining a Barrier with a Semaphore

Async::Barrier and Async::Semaphore are designed to be compatible with each other, and with other tasks that nest #async invocations. There are other similar situations where you may want to pass in a parent task, e.g. Async::IO::Endpoint#bind.

barrier = Async::Barrier.new
semaphore = Async::Semaphore.new(2, parent: barrier)

jobs.each do |job|
	semaphore.async(parent: barrier) do
		# ... process job ...
	end
end

# Wait until all jobs are done:
barrier.wait

Stopping a Task

When a task completes execution, it will enter the complete state (or the failed state if it raises an unhandled exception).

There are various situations where you may want to stop a task (Async::Task#stop) before it completes. The most common case is shutting down a server. A more complex example: you may fan out tens or hundreds of requests, wait for a subset to complete (e.g. the first 5, or all those that complete within a given deadline), and then stop (terminate/cancel) the remaining operations.

Using the above program as an example, we can stop all the tasks before they get a chance to complete:

Async do
	tasks = 3.times.map do |i|
		Async do
			sleep(i)
			puts "Hello World #{i}"
		end
	end

	# Stop all the above tasks:
	tasks.each(&:stop)
end

Stopping all Tasks held in a Barrier

To stop (terminate/cancel) all the tasks held in a barrier:

barrier = Async::Barrier.new

Async do
	tasks = 3.times.map do |i|
		barrier.async do
			sleep(i)
			puts "Hello World #{i}"
		end
	end
	
	barrier.stop
end

Unless your tasks all rescue and suppress StandardError-derived exceptions, Async::Barrier#wait may raise before every task has finished, so be sure to call Async::Barrier#stop in an ensure block to stop the remaining tasks:

barrier = Async::Barrier.new

Async do
	tasks = 3.times.map do |i|
		barrier.async do
			sleep(i)
			puts "Hello World #{i}"
		end
	end
	
	begin
		barrier.wait
	ensure
		barrier.stop
	end
end

Resource Management

In order to ensure your resources are cleaned up correctly, make sure you wrap resources appropriately, e.g.:

Async do
	begin
		socket = connect(remote_address) # May raise Async::Stop

		socket.write(...) # May raise Async::Stop
		socket.read(...) # May raise Async::Stop
	ensure
		socket.close if socket
	end
end

As tasks run synchronously until they yield back to the reactor, you can guarantee this model works correctly. While in theory IO#autoclose allows you to automatically close file descriptors when they go out of scope via the GC, it may produce unpredictable behaviour (exhaustion of file descriptors, flushing data at odd times), so it's not recommended.

Exception Handling

Async::Task captures and logs exceptions. All unhandled StandardError-derived exceptions will cause the enclosing task to enter the failed state. Non-StandardError exceptions are re-raised immediately and will cause the reactor to exit. This ensures that exceptions are always visible and cause the program to fail appropriately.

require 'async'

task = Async do
	# Exception will be logged and task will be failed.
	raise "Boom"
end

puts task.status # failed
puts task.wait # raises RuntimeError: Boom
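
In contrast, here is a minimal sketch of the non-StandardError case, using Interrupt as the example; the exception escapes the Async block rather than being captured:

require 'async'

begin
	Async do
		# Interrupt is not a StandardError, so the task does not capture it:
		raise Interrupt
	end
rescue Interrupt
	puts "The reactor exited because of the Interrupt."
end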

Propagating Exceptions

If a task has finished due to an exception, calling Task#wait will re-raise the exception.

require 'async'

Async do
	task = Async do
		raise "Boom"
	end
	
	begin
		task.wait # Re-raises above exception.
	rescue
		puts "It went #{$!}!"
	end
end

Timeouts

You can wrap asynchronous operations in a timeout. This allows you to put an upper bound on how long the enclosed code will run, rather than potentially blocking indefinitely. If the enclosed code hasn't completed by the timeout, it will be interrupted with an Async::TimeoutError exception.

require 'async'

Async do |task|
	task.with_timeout(1) do
		sleep(100)
	rescue Async::TimeoutError
		puts "I timed out 99 seconds early!"
	end
end

Periodic Timers

Sometimes you need to do some recurring work in a loop. Often it's best to measure the periodic delay end-to-start, so that your process always takes a break between iterations and doesn't risk spending 100% of its time on the periodic work. In this case, simply call sleep between iterations:

require 'async'

period = 30

Async do |task|
	loop do
		puts Time.now
		# ... process job ...
		sleep(period)
	end
end

If you need a periodic timer that runs start-to-start, you can keep track of the run_next time using the monotonic clock:

require 'async'

period = 30

Async do |task|
	run_next = Async::Clock.now
	loop do
		run_next += period
		puts Time.now
		# ... process job ...
		if (remaining = run_next - Async::Clock.now) > 0
			sleep(remaining)
		end
	end
end

Reactor Lifecycle

Generally, the reactor's event loop will not exit until all tasks complete. This is informed by Async::Task#finished? which checks if the current node has completed execution, which also includes all children. However, there is one exception to this rule: tasks flagged as being transient (Async::Node#transient?).
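
For example, the following sketch only prints "Exited." once the child task has finished, even though the outer block never explicitly waits for it:

require 'async'

Async do
	Async do
		sleep(1)
		puts "Child task finished."
	end
	
	puts "Outer block finished."
end
# The top-level Async call does not return until all child tasks have completed.

puts "Exited."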

Transient Tasks

Tasks which are flagged as transient are identical to normal tasks, except for one key difference: they do not keep the reactor alive. They are useful for operations which are not directly related to application concurrency, but are instead an implementation detail of the application. For example, a task which is monitoring and maintaining a connection pool, pruning unused connections or possibly ensuring those connections are periodically checked for activity (ping/pong, etc). If all other tasks are completed, and only transient tasks remain at the root of the reactor, the reactor should exit.

How To Create Transient Tasks

Specify the transient option when creating a task:

@pruner = Async(transient: true) do
	loop do	
		sleep(1)
		prune_connection_pool
	end
end

Transient tasks are similar to normal tasks, except for the following differences:

  1. They are not considered by Async::Task#finished?, so they will not keep the reactor alive (see the sketch after this list). Instead, they are stopped (with an Async::Stop exception) when all other (non-transient) tasks are finished.
  2. As soon as a parent task is finished, any transient child tasks will be moved up to be children of the parent's parent. This ensures that they never keep a sub-tree alive.
  3. Similarly, if you stop a task, any transient child tasks will be moved up the tree as above rather than being stopped.
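
Here is a minimal sketch of the first point: the reactor exits as soon as the non-transient work is done, even though the transient task loops forever:

require 'async'

Async do
	Async(transient: true) do
		loop do
			sleep(1)
			puts "Still alive..."
		end
	end
	
	puts "Main work done."
end
# The reactor exits here; the transient task is stopped rather than waited for.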

Transient tasks are intended for cases where a task is an implementation detail of an object or instance, rather than part of your program's concurrency structure.

Here is an example that caches the current time string: the string only has 1-second granularity, and you could be handling thousands of requests per second. The task that updates it in the background is an implementation detail, so it is marked as transient.

require 'async'
require 'thread/local' # thread-local gem.

class TimeStringCache
	extend Thread::Local # defines `instance` class method that lazy-creates a separate instance per thread
	
	def initialize
		@current_time_string = nil
	end
	
	def current_time_string
		refresh!
		
		return @current_time_string
	end
	
	private
	
	def refresh!
		@refresh ||= Async(transient: true) do
			loop do
				@current_time_string = Time.now.to_s
				sleep(1)
			end
		ensure
			# When the reactor terminates all tasks, `Async::Stop` will be raised from `sleep` and this code will be invoked. By clearing `@refresh`, we ensure that the task will be recreated if needed again:
			@refresh = nil
		end
	end
end

Async do
	p TimeStringCache.instance.current_time_string
end

Upon exiting the top-level Async block, the @refresh task will be stopped and @refresh will be set to nil. Bear in mind that you should not share these resources across threads; doing so would require some form of mutual exclusion.