ThreadPool

(7 mins to read)

A thread pool maintains multiple threads waiting for tasks to be allocated for concurrent execution by the supervising program. By maintaining a pool of threads, the model increases performance and avoids latency in execution due to frequent creation and destruction of threads for short-lived tasks.


The idea is to create a number of threads ahead of time and manage them together in a container. When an asynchronous task needs to run, an idle thread (one not currently executing a task) is taken from the container, woken up, and handed the task. When the task finishes, the thread is not destroyed; it returns to the idle (blocked) state and waits for the next task.

This avoids the performance overhead of frequently creating and destroying threads.

The implementation below is from progschj/ThreadPool:

#include <vector>
#include <queue>
#include <memory>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <future>
#include <functional>
#include <stdexcept>

class ThreadPool {
public:
    ThreadPool(size_t);
    template<class F, class... Args>
    auto enqueue(F&& f, Args&&... args)
        -> std::future<typename std::result_of<F(Args...)>::type>;
    ~ThreadPool();
private:
    // need to keep track of threads so we can join them
    std::vector< std::thread > workers;
    // the task queue; every task is normalized to a void() callable
    std::queue< std::function<void()> > tasks;

    // synchronization
    std::mutex queue_mutex;
    std::condition_variable condition;
    bool stop;
};

// the constructor just launches some amount of workers
inline ThreadPool::ThreadPool(size_t threads)
    : stop(false)
{
    for(size_t i = 0; i < threads; ++i)
        workers.emplace_back(
            [this]
            {
                for(;;)
                {
                    std::function<void()> task;

                    {
                        std::unique_lock<std::mutex> lock(this->queue_mutex);
                        this->condition.wait(lock,
                            [this]{ return this->stop || !this->tasks.empty(); });
                        // the predicate overload of wait() re-checks the condition
                        // and so guards against spurious wakeups; it is equivalent to:
                        // while (!stop_waiting()) {
                        //     wait(lock);
                        // }
                        if(this->stop && this->tasks.empty())
                            return;
                        task = std::move(this->tasks.front());
                        this->tasks.pop();
                    }

                    task();
                }
            }
        );
}

// add new work item to the pool
template<class F, class... Args>
auto ThreadPool::enqueue(F&& f, Args&&... args)
    -> std::future<typename std::result_of<F(Args...)>::type>
{
    using return_type = typename std::result_of<F(Args...)>::type;

    // The shared_ptr keeps the packaged_task alive until the lambda emplaced
    // into the queue has run; a plain local object would already be destroyed
    // by the time a worker picks the task up. Copying the packaged_task
    // instead would not help: the copy would no longer be the object tied to
    // the future obtained below, and packaged_task is move-only anyway, while
    // std::function requires a copyable callable. Wrapping it in a shared_ptr
    // solves both problems.
    auto task = std::make_shared< std::packaged_task<return_type()> >(
        std::bind(std::forward<F>(f), std::forward<Args>(args)...)
    );

    std::future<return_type> res = task->get_future();
    {
        std::unique_lock<std::mutex> lock(queue_mutex);

        // don't allow enqueueing after stopping the pool
        if(stop)
            throw std::runtime_error("enqueue on stopped ThreadPool");

        // packaged_task holds a std::promise internally; once the wrapped task
        // has run, the promise is fulfilled and the result becomes available
        // through future::get(). The extra lambda adapts the call to the
        // void() signature stored in the queue.
        tasks.emplace([task](){ (*task)(); });
    }
    condition.notify_one();
    return res;
}

// the destructor joins all threads
inline ThreadPool::~ThreadPool()
{
    {
        std::unique_lock<std::mutex> lock(queue_mutex);
        stop = true;
    }
    condition.notify_all();
    for(std::thread &worker: workers)
        worker.join();
}
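
To make the interface concrete, here is a minimal usage sketch (not part of the header above, just an illustration assuming the ThreadPool class is available): it creates a pool with four workers, submits several tasks via enqueue, and collects the results through the returned futures.

#include <iostream>
// assumes the ThreadPool class above is available, e.g. via #include "ThreadPool.h"

int main() {
    ThreadPool pool(4);                        // four worker threads
    std::vector< std::future<int> > results;

    for(int i = 0; i < 8; ++i) {
        // enqueue returns a std::future typed by the callable's return value
        results.emplace_back(
            pool.enqueue([i] { return i * i; })
        );
    }

    for(auto &r : results)
        std::cout << r.get() << ' ';           // get() blocks until the task has run
    std::cout << '\n';

    // ~ThreadPool sets stop, wakes every worker, and joins them
    return 0;
}

Because the future's type is deduced from the callable, the caller gets typed results back without any extra plumbing, while the pool itself only ever sees void() tasks.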

Another similar C++ implementation: https://github.com/mtrebi/thread-pool
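
As a closing aside on the shared_ptr trick in enqueue above: std::packaged_task is move-only, the std::function objects stored in the task queue require a copyable callable, and the task must outlive the enqueue call itself. The following standalone sketch (independent of the pool) shows the basic packaged_task/future mechanics that the implementation relies on.

#include <future>
#include <thread>
#include <iostream>

int main() {
    // packaged_task wraps a callable together with the promise for its result
    std::packaged_task<int(int)> task([](int x) { return x + 1; });
    std::future<int> fut = task.get_future();  // grab the future before running it

    // packaged_task is move-only: it is moved into the thread, never copied
    std::thread worker(std::move(task), 41);

    std::cout << fut.get() << '\n';            // prints 42 once the task has executed
    worker.join();
    return 0;
}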