C++ async & launch | Complete Guide to std::async, future & launch policies
Key Takeaways
std::async is a C++11 facility that runs functions asynchronously and delivers results through std::future. This is a complete guide to the launch::async and launch::deferred policies, with practical examples.
Introduction
std::async is a C++11 API that executes functions asynchronously and receives results via std::future. It’s more concise than std::thread and allows controlling execution timing with launch policies.
What You’ll Learn
- Execute async tasks and receive results with std::async
- Understand the differences between the launch::async and launch::deferred policies
- Master patterns for parallel tasks, timeouts, and exception handling
- Implement commonly used async patterns in production
Table of Contents
- Basic Concepts
- Practical Implementation
- Advanced Usage
- Performance Comparison
- Real-World Cases
- Troubleshooting
- Conclusion
Basic Concepts
What is std::async?
std::async executes functions asynchronously and receives results via std::future.
#include <future>
#include <iostream>

int compute(int x) {
    return x * x;
}

int main() {
    auto future = std::async(compute, 10);
    int result = future.get(); // Wait for result
    std::cout << "Result: " << result << std::endl; // 100
    return 0;
}
Launch Policies
| Policy | Execution Time | Thread Creation | Use Case |
|---|---|---|---|
| launch::async | Immediate | ✅ New thread | CPU-intensive tasks |
| launch::deferred | On get() call | ❌ Current thread | Conditional execution |
| async \| deferred | Implementation-defined | Auto-select | General use (default) |
Practical Implementation
1) Basic Usage
#include <future>
#include <iostream>
#include <thread>
#include <chrono>

int compute(int x) {
    std::this_thread::sleep_for(std::chrono::seconds(1));
    return x * x;
}

int main() {
    auto future = std::async(compute, 10);
    std::cout << "Computing..." << std::endl;
    int result = future.get(); // Wait
    std::cout << "Result: " << result << std::endl; // 100
    return 0;
}
2) launch::async - Immediate Execution
#include <future>
#include <iostream>
#include <thread>

int main() {
    auto future = std::async(std::launch::async, []() {
        std::cout << "Async thread ID: "
                  << std::this_thread::get_id() << std::endl;
        return 42;
    });
    std::cout << "Main thread ID: "
              << std::this_thread::get_id() << std::endl;
    int result = future.get();
    std::cout << "Result: " << result << std::endl;
    return 0;
}
Output:
Main thread ID: 140735268771840
Async thread ID: 123145307557888
Result: 42
3) launch::deferred - Deferred Execution
#include <future>
#include <iostream>
#include <thread>

int main() {
    auto future = std::async(std::launch::deferred, []() {
        std::cout << "Deferred thread ID: "
                  << std::this_thread::get_id() << std::endl;
        return 42;
    });
    std::cout << "Main thread ID: "
              << std::this_thread::get_id() << std::endl;
    std::cout << "Before get call" << std::endl;
    int result = future.get(); // Executes here
    std::cout << "Result: " << result << std::endl;
    return 0;
}
Output:
Main thread ID: 140735268771840
Before get call
Deferred thread ID: 140735268771840
Result: 42
Note: with launch::deferred, no new thread is created; the task runs lazily on the thread that calls get() or wait().
4) Multiple Async Tasks
#include <future>
#include <iostream>
#include <thread>
#include <chrono>

int compute1() {
    std::this_thread::sleep_for(std::chrono::seconds(1));
    return 10;
}

int compute2() {
    std::this_thread::sleep_for(std::chrono::seconds(1));
    return 20;
}

int compute3() {
    std::this_thread::sleep_for(std::chrono::seconds(1));
    return 30;
}

int main() {
    auto start = std::chrono::high_resolution_clock::now();
    auto f1 = std::async(std::launch::async, compute1);
    auto f2 = std::async(std::launch::async, compute2);
    auto f3 = std::async(std::launch::async, compute3);
    int total = f1.get() + f2.get() + f3.get();
    auto end = std::chrono::high_resolution_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::seconds>(end - start).count();
    std::cout << "Total: " << total << std::endl; // 60
    std::cout << "Time: " << duration << "s" << std::endl; // 1s
    return 0;
}
Performance: sequential execution takes ~3s; running the three tasks in parallel takes ~1s (roughly a 3x improvement).
5) Exception Handling
#include <future>
#include <iostream>
#include <stdexcept>

int divide(int a, int b) {
    if (b == 0) {
        throw std::runtime_error("Division by zero");
    }
    return a / b;
}

int main() {
    auto future = std::async(divide, 10, 0);
    try {
        int result = future.get(); // Exception rethrown here
        std::cout << "Result: " << result << std::endl;
    } catch (const std::exception& e) {
        std::cout << "Exception: " << e.what() << std::endl;
    }
    return 0;
}
6) Checking Future Status
#include <future>
#include <iostream>
#include <thread>
#include <chrono>

int longCompute() {
    std::this_thread::sleep_for(std::chrono::seconds(3));
    return 42;
}

int main() {
    auto future = std::async(std::launch::async, longCompute);

    // Check status with a 1-second timeout
    auto status = future.wait_for(std::chrono::seconds(1));
    if (status == std::future_status::ready) {
        std::cout << "Complete" << std::endl;
    } else if (status == std::future_status::timeout) {
        std::cout << "Timeout (still running)" << std::endl;
    }

    // Wait for completion
    future.wait();
    std::cout << "Result: " << future.get() << std::endl;
    return 0;
}
Advanced Usage
1) Parallel Downloads
#include <future>
#include <vector>
#include <string>
#include <iostream>
#include <thread>
#include <chrono>

std::string downloadFile(const std::string& url) {
    std::this_thread::sleep_for(std::chrono::seconds(1));
    return "Data from " + url;
}

int main() {
    std::vector<std::string> urls = {
        "http://example.com/file1",
        "http://example.com/file2",
        "http://example.com/file3"
    };
    std::vector<std::future<std::string>> futures;
    for (const auto& url : urls) {
        futures.push_back(std::async(std::launch::async, downloadFile, url));
    }
    for (auto& future : futures) {
        std::cout << future.get() << std::endl;
    }
    return 0;
}
2) Parallel Map-Reduce
#include <future>
#include <vector>
#include <numeric>
#include <iostream>

int sum_range(const std::vector<int>& data, size_t start, size_t end) {
    return std::accumulate(data.begin() + start, data.begin() + end, 0);
}

int parallel_sum(const std::vector<int>& data, size_t num_threads) {
    size_t chunk_size = data.size() / num_threads;
    std::vector<std::future<int>> futures;
    for (size_t i = 0; i < num_threads; ++i) {
        size_t start = i * chunk_size;
        size_t end = (i == num_threads - 1) ? data.size() : (i + 1) * chunk_size;
        futures.push_back(std::async(std::launch::async, sum_range,
                                     std::ref(data), start, end));
    }
    int total = 0;
    for (auto& future : futures) {
        total += future.get();
    }
    return total;
}

int main() {
    std::vector<int> data(10000000, 1);
    int sum = parallel_sum(data, 4);
    std::cout << "Sum: " << sum << std::endl; // 10000000
    return 0;
}
3) Timeout Pattern
#include <future>
#include <iostream>
#include <thread>
#include <chrono>

int slowCompute() {
    std::this_thread::sleep_for(std::chrono::seconds(5));
    return 42;
}

int main() {
    auto future = std::async(std::launch::async, slowCompute);
    auto status = future.wait_for(std::chrono::seconds(2));
    if (status == std::future_status::ready) {
        std::cout << "Result: " << future.get() << std::endl;
    } else {
        std::cout << "Timeout: using default value" << std::endl;
        // The task keeps running (std::future cannot cancel it);
        // return a default value or handle it another way
    }
    return 0;
}
Performance Comparison
async vs thread
Test: Simple computation task
| Method | Code Complexity | Result Return | Exception Handling | Overhead |
|---|---|---|---|---|
| std::async | Low | future | Automatic | Medium |
| std::thread | High | Manual (shared var) | Manual | Low |
Parallel Execution Benchmark
Test: 3 tasks, 1 second each
| Execution | Time | Speedup |
|---|---|---|
| Sequential | 3s | 1x |
| Parallel (async) | 1s | 3x |
Conclusion: roughly 3x improvement with parallel execution.
Real-World Cases
Case 1: Web Server - Parallel Request Processing
#include <future>
#include <vector>
#include <string>
#include <iostream>
#include <thread>
#include <chrono>

struct Request {
    std::string url;
    std::string method;
};

std::string processRequest(const Request& req) {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    return "Response from " + req.url;
}

int main() {
    std::vector<Request> requests = {
        {"http://api.example.com/users", "GET"},
        {"http://api.example.com/posts", "GET"},
        {"http://api.example.com/comments", "GET"}
    };
    std::vector<std::future<std::string>> futures;
    for (const auto& req : requests) {
        futures.push_back(std::async(std::launch::async, processRequest, req));
    }
    for (auto& future : futures) {
        std::cout << future.get() << std::endl;
    }
    return 0;
}
Troubleshooting
Problem 1: Future Destructor Blocking
Symptom: Program waits unexpectedly
// ❌ Ignoring the returned future
std::async(std::launch::async, []() {
    std::this_thread::sleep_for(std::chrono::seconds(5));
}); // The temporary future's destructor blocks here
std::cout << "Next task" << std::endl; // Prints after 5s

// ✅ Store the future
auto future = std::async(std::launch::async, []() {
    std::this_thread::sleep_for(std::chrono::seconds(5));
});
std::cout << "Next task" << std::endl; // Prints immediately
future.get(); // Explicit wait
Problem 2: Calling get Multiple Times
Symptom: std::future_error exception
auto future = std::async([]() { return 42; });
int r1 = future.get(); // OK
// int r2 = future.get(); // Error: std::future_error (no associated state)

// ✅ Call get() only once
// Use shared_future for sharing across threads
Conclusion
std::async enables concise asynchronous task expression and result retrieval via std::future.
Key Summary
- Basic Usage
  - Execute asynchronously with std::async(func, args...)
  - Wait for the result with future.get()
- Launch Policies
  - launch::async: immediate execution on a new thread
  - launch::deferred: deferred execution on the calling thread (at get())
  - Default: async | deferred (implementation-defined)
- Exception Handling
  - Exceptions are rethrown at the get() call
  - Handle with try-catch
- Cautions
  - The future destructor may block
  - get() can only be called once
  - Watch overhead for small tasks
Selection Guide
| Situation | Method |
|---|---|
| Simple async task | std::async |
| Need result return | std::async + future.get() |
| Share result across threads | shared_future |
| Fine-grained thread control | std::thread |
One-line summary: std::async enables concise async task expression, controls execution timing with launch policies, and safely receives results via future.