[2026] Boost.Asio Introduction: io_context, async_read, and Non-Blocking I/O

Key takeaways

Learn Asio: why async beats one-thread-per-connection, io_context and run(), steady_timer, async_read/async_write/async_read_until, async_accept echo server, error_code handling, shared_ptr buffers, and strands.

Introduction: why async I/O?

Blocking read ties up a thread per connection—thousands of connections ⇒ thousands of threads. Asio registers async_* operations; io_context::run() dispatches completions so few threads handle many connections. Prerequisite: socket basics (#28).
Requirements: Boost.Asio or standalone Asio, C++14+.

Blocking vs async (diagrams in original)

Blocking: the thread sleeps inside read_some until data arrives. Async: the thread only runs handlers when data is ready, so waiting on I/O consumes no thread or stack.

io_context, post, dispatch, run

  • post: queue a handler.
  • dispatch: run immediately if already inside run(); else queue.
  • run: block until no pending work.
  • poll / poll_one: non-blocking progress.
  • restart: after run() returns, reset before run() again.

Async timer

steady_timer::async_wait—chain re-arm in the handler for periodic ticks. Combine with socket.cancel() for read timeouts.

async_read / async_write

  • async_read: fills buffer completely (or EOF).
  • async_read_some: completes as soon as some bytes arrive (partial read); typical when parsing protocols yourself.
  • async_read_until: delimiter (e.g. newline) into streambuf. Keep shared_ptr to std::array, std::string, or streambuf across the async call.

Async client flow

async_resolve → async_connect → async_write → async_read: chain lambdas or bind to session methods.

Async server

async_accept → construct Session with shared_ptr + enable_shared_from_this → async_read_until loop → async_write echo → re-issue read. Always schedule the next async_accept in the handler.

Error handling

Check error_code; treat asio::error::eof as normal close; operation_aborted after cancel; log connection_reset, broken_pipe.

Common mistakes

  1. Dangling this in handlers—use shared_from_this.
  2. run() exits after one connection—re-accept or work_guard.
  3. Stack buffer in async—use heap shared_ptr storage.
  4. run() twice without restart—no work processed second time.
  5. Same socket from multiple threads without strand—data races.

Performance (illustrative)

Async often achieves much higher req/s than one-thread-per-connection at large connection counts—always measure on your workload.
