Using MPI_Init_thread is similar to using MPI_Init in that both functions initialize the MPI environment, and a program must call exactly one of them before most other MPI functions. The thread that makes the call becomes the MPI process's main thread. The key difference is that MPI_Init_thread lets the application request a level of thread support. For example, a single-threaded application can request MPI_THREAD_SINGLE, whereas a program that makes MPI calls concurrently from several threads would request MPI_THREAD_MULTIPLE. The implementation reports the level it actually provides, which may be lower than the level requested.
The four thread levels form an ordered hierarchy. MPI_THREAD_SINGLE promises that only one thread will execute in the process. MPI_THREAD_FUNNELED allows multiple threads, but only the main thread may make MPI calls. MPI_THREAD_SERIALIZED allows any thread to make MPI calls, provided the application guarantees that no two threads call MPI at the same time. MPI_THREAD_MULTIPLE places no such restriction: multiple threads may call MPI concurrently. Requesting a level is a promise by the application; violating it (for example, calling MPI from a worker thread under MPI_THREAD_FUNNELED) makes the program erroneous and may cause deadlock.
When a thread makes a blocking MPI call, only that thread is blocked until the call completes. Other threads in the process continue to execute and, at a sufficient thread level, may make MPI calls of their own. A blocking call in one thread does not block MPI calls made from other threads, and MPI places no time limit on how long a call may take to complete.
A thread blocked in an MPI call can be unblocked by MPI activity in another thread; for example, a thread blocked in a receive completes when another thread posts the matching send. With MPI_THREAD_MULTIPLE, threads may also issue collective calls, including collectives on a window handle or file handle. However, collective operations on the same communicator, window, or file handle must not be issued concurrently from different threads of the same process, and they must be invoked in the same order on all participating processes. The application is responsible for enforcing this with interthread synchronization; otherwise the program is erroneous and may deadlock.
MPI implementations may offer features beyond the standard, such as process migration or network-level fault tolerance. Check the user documentation for your implementation to learn which extensions it supports; if it lacks a feature your application needs, consider a different MPI library that provides it. Thread support is a similar selection criterion: choose a library that provides the level of thread support the application requires.
The language bindings of MPI_Init_thread differ slightly. The C binding takes argc and argv pointers in addition to the required thread level and an output parameter for the provided level. The Fortran binding has no argc or argv arguments; it takes only REQUIRED, PROVIDED, and the usual IERROR parameter. On failure, the function reports an MPI error code rather than MPI_SUCCESS.
MPI_Init_thread does not itself create threads. Threads are created and managed by the application, for example with POSIX threads or OpenMP; MPI_Init_thread only establishes how those threads may interact with the MPI library. All threads belong to the same MPI process and are not individually addressable: a message sent to the process can be received by any thread that is permitted to call MPI.