This thread pool implementation keeps a queue of pending tasks to be executed by the pool's threads, so the function PostTask() used to add new tasks never needs to block (i.e. wait for a thread to become available).
It also keeps a stack of the sleeping threads, which lets an idle thread be found and woken efficiently.
Each task is to be executed by a single thread in the thread pool.
The function to post a task either hands the task directly to a sleeping thread or, if no threads are currently available, queues it to be done later. It therefore always returns immediately without blocking.
With a limited number of threads, a large batch of tasks will tend to be executed in sequence. This is ideal for tasks that are only useful once they are completed: executing a batch sequentially lets the user see results as they arrive, which is preferable to running the whole batch in parallel so that none of it completes until the very end.
Clients of the thread pool can only add tasks. They can't remove a task that has previously been added.
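To make the dispatch behaviour concrete, here is a minimal sketch of the mechanism described above, not the library's actual internals: the names (SleepingThread, DispatchSketch, m_sleeping, m_pending) are invented for illustration, and the worker side that waits on the condition variable is omitted.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <stack>

// Illustrative stand-in for a pooled worker thread that is currently asleep.
struct SleepingThread
{
    std::condition_variable wake;
    std::function<void()>   task;   // task handed directly to this worker
};

class DispatchSketch
{
public:
    // Never blocks waiting for a worker: either wakes a sleeping thread
    // or queues the task for later.
    void PostTask(std::function<void()> task)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (!m_sleeping.empty())
        {
            // Hand the task straight to the most recently idled worker.
            SleepingThread* worker = m_sleeping.top();
            m_sleeping.pop();
            worker->task = std::move(task);
            worker->wake.notify_one();
        }
        else
        {
            // No idle worker: leave the task on the pending queue.
            m_pending.push(std::move(task));
        }
    }

private:
    std::mutex                        m_mutex;
    std::stack<SleepingThread*>       m_sleeping;   // stack of sleeping threads
    std::queue<std::function<void()>> m_pending;    // queue of pending tasks
};

The public interface of the pool is declared as follows.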
struct IThreadPool
{
    virtual void Close() = 0;
    virtual ITaskExecuter* CreateTaskExecuter() = 0;
    virtual ssize_t GetNumQueuedTasks() const = 0;

protected:
    ~IThreadPool() {}
};
void Close()
Stops and destroys all threads in the thread pool. A thread pool must eventually be closed, and all the task executors created by this thread pool must have been closed before calling this function (a shutdown sequence in the correct order is shown in the usage sketch at the end of this reference).
WARNING: A created thread pool must be deleted before static uninitialisation begins.
ITaskExecuter* CreateTaskExecuter()
Create an ITaskExecuter, which is used to post tasks to the thread pool.
ssize_t GetNumQueuedTasks() const
Get the current number of queued tasks, i.e. tasks that are still pending, as distinct from tasks that are currently being executed.
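For instance, a client might use this count to throttle its own posting when the backlog grows large. A hypothetical sketch follows; the header name "ThreadPool.h", the threshold, and the ITaskExecuter::PostTask() call are assumptions, not taken from this document.

// Hypothetical backlog-aware posting; "ThreadPool.h", kMaxBacklog and
// ITaskExecuter::PostTask() are assumptions.
#include <functional>
#include "ThreadPool.h"

bool PostIfNotBacklogged(IThreadPool* pool, ITaskExecuter* executer,
                         std::function<void()> task)
{
    const ssize_t kMaxBacklog = 1000;        // arbitrary illustrative threshold
    if (pool->GetNumQueuedTasks() >= kMaxBacklog)
        return false;                        // caller may retry later or drop the task
    executer->PostTask(std::move(task));     // assumed ITaskExecuter member
    return true;
}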
IThreadPool* CreateThreadPool(ssize_t numThreads, int nPriority = 0)
Create a thread pool with the given number of threads and priority level.
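A minimal end-to-end usage sketch follows. The header name "ThreadPool.h" and the ITaskExecuter members used below (PostTask() and Close()) are assumptions; only IThreadPool and CreateThreadPool() are declared in this document, and how the pool object itself is ultimately deleted is not shown here.

// Hypothetical usage sketch; "ThreadPool.h" and the ITaskExecuter members
// used below are assumptions, not taken from this document.
#include "ThreadPool.h"

void RunExample()
{
    // Create the pool and an executer for posting tasks to it.
    IThreadPool* pool = CreateThreadPool(4 /* numThreads */, 0 /* nPriority */);
    ITaskExecuter* executer = pool->CreateTaskExecuter();

    // PostTask() returns immediately: the task is either handed to a
    // sleeping thread or queued until one becomes available.
    executer->PostTask([] { /* do some work */ });   // assumed signature

    // Teardown order required by Close(): every executer created by this
    // pool must be closed before the pool itself is closed.
    executer->Close();   // assumed ITaskExecuter member
    pool->Close();
}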