Typically an application using the cxMessage2 library will create a single IoContextPool when it starts and close it when it ends.
The IoContextPool is passed into the functions that create a TcpMsgClient or TcpMsgServer.
That is all you need to know to use the cxMessage2 library successfully! If you are interested in what the IoContextPool is for, see the under the hood section below.
The following functions are used to create and close IoContextPool objects:
IoContextPool* CreateIoContextPool(int numThreads)
Header: Ceda/Core/cxMessage2/TcpMsg.h
Creates an IoContextPool with its own thread pool of 'numThreads' threads. Typically the number of threads should be at least as large as the number of processors in the system; more are needed if IoContextPool worker threads are made to block in application code.
If numThreads == 0 then the number of threads is set to the value returned by std::thread::hardware_concurrency().
Never returns nullptr.
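For illustration, a minimal sketch of choosing the thread count (it assumes the header shown above has been included; the function name Startup is an arbitrary placeholder):

    #include "Ceda/Core/cxMessage2/TcpMsg.h"

    IoContextPool* Startup()
    {
        // Explicit count: the pool runs 8 worker threads.
        // Passing 0 instead sizes the pool to std::thread::hardware_concurrency().
        return CreateIoContextPool(8);
    }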
void Close(IoContextPool*)
Header: Ceda/Core/cxMessage2/TcpMsg.h
Every call to CreateIoContextPool() must be paired with a call to Close(IoContextPool*).
The IoContextPool must be closed only after all the TcpMsgClients and TcpMsgServers that use it have been closed.
Calling Close(IoContextPool*) while clients or servers that use the pool are still open will typically block indefinitely.
Close(IoContextPool*) only returns after all the worker threads have been stopped.
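Putting the two functions together, the following is a minimal sketch of the typical application lifecycle. The creation and closing of TcpMsgClient/TcpMsgServer objects is indicated only by comments because those functions are documented elsewhere:

    #include "Ceda/Core/cxMessage2/TcpMsg.h"

    int main()
    {
        // One pool for the whole application; 0 requests
        // std::thread::hardware_concurrency() worker threads.
        IoContextPool* pool = CreateIoContextPool(0);

        // ... create TcpMsgServer / TcpMsgClient objects, passing in 'pool' ...
        // ... run the application ...
        // ... close every TcpMsgServer / TcpMsgClient that uses 'pool' ...

        // Only now is it safe to close the pool.  Close() returns once all the
        // worker threads have stopped.
        Close(pool);
        return 0;
    }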
IoContextPool manages a fixed-size pool of Boost.Asio io_contexts and worker threads. By fixed size we mean that the number of io_contexts and threads cannot change over time.
There is one worker thread for each io_context; in other words, each io_context is run for the life of the IoContextPool by a single worker thread. For a given socket we ensure that all the asynchronous I/O handlers are executed by only one of the io_contexts (and hence by only one of the worker threads) for the entire life of the socket.
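This is the common io_context-per-thread pattern in Boost.Asio. The following is a self-contained sketch of that pattern for readers unfamiliar with it; it is not the cxMessage2 implementation, and the class name SimpleIoContextPool and its round-robin assignment of sockets to io_contexts are illustrative assumptions only:

    #include <boost/asio.hpp>
    #include <atomic>
    #include <cstddef>
    #include <memory>
    #include <thread>
    #include <vector>

    class SimpleIoContextPool
    {
    public:
        explicit SimpleIoContextPool(std::size_t numThreads)
        {
            if (numThreads == 0)
                numThreads = std::thread::hardware_concurrency();

            for (std::size_t i = 0; i < numThreads; ++i)
            {
                // Fixed size: every io_context is created up front and the
                // count never changes.  The concurrency hint of 1 reflects
                // that only one thread will ever run this io_context.
                auto ctx = std::make_unique<boost::asio::io_context>(1);

                // Keep run() from returning while the pool has no pending work.
                guards_.push_back(boost::asio::make_work_guard(*ctx));
                contexts_.push_back(std::move(ctx));
            }

            // One worker thread per io_context, for the life of the pool.
            for (auto& ctx : contexts_)
                threads_.emplace_back([c = ctx.get()] { c->run(); });
        }

        // Choose the io_context for a new socket.  All asynchronous I/O
        // handlers for that socket are then executed by the single thread
        // that runs the chosen io_context.
        boost::asio::io_context& GetIoContext()
        {
            return *contexts_[next_++ % contexts_.size()];   // round-robin
        }

        void Close()
        {
            guards_.clear();            // let each run() return once work drains
            for (auto& t : threads_)
                t.join();               // returns only after all workers stop
        }

    private:
        std::vector<std::unique_ptr<boost::asio::io_context>> contexts_;
        std::vector<boost::asio::executor_work_guard<
            boost::asio::io_context::executor_type>> guards_;
        std::vector<std::thread> threads_;
        std::atomic<std::size_t> next_{0};
    };

Because all the handlers for a given socket run on a single thread, they never execute concurrently with one another, so handler code for one socket does not need to synchronise against other handlers for the same socket.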