[gromacs.git] / src / testutils / mpitest.h
/*
 * This file is part of the GROMACS molecular simulation package.
 *
 * Copyright (c) 2016,2019, by the GROMACS development team, led by
 * Mark Abraham, David van der Spoel, Berk Hess, and Erik Lindahl,
 * and including many others, as listed in the AUTHORS file in the
 * top-level source directory and at http://www.gromacs.org.
 *
 * GROMACS is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public License
 * as published by the Free Software Foundation; either version 2.1
 * of the License, or (at your option) any later version.
 *
 * GROMACS is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with GROMACS; if not, see
 * http://www.gnu.org/licenses, or write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA.
 *
 * If you want to redistribute modifications to GROMACS, please
 * consider that scientific software is very special. Version
 * control is crucial - bugs must be traceable. We will be happy to
 * consider code for inclusion in the official distribution, but
 * derived work must not be called official GROMACS. Details are found
 * in the README & COPYING files - if they are missing, get the
 * official version at http://www.gromacs.org.
 *
 * To help us fund GROMACS development, we humbly ask that you cite
 * the research papers on the package. Check out http://www.gromacs.org.
 */
/*! \libinternal \file
 * \brief
 * Helper functions for MPI tests to make thread-MPI look like real MPI.
 *
 * \author Teemu Murtola <teemu.murtola@gmail.com>
 * \inlibraryapi
 * \ingroup module_testutils
 */
#ifndef GMX_TESTUTILS_MPITEST_H
#define GMX_TESTUTILS_MPITEST_H

#include "config.h"

#include <functional>
#include <type_traits>

#include "gromacs/utility/basenetwork.h"
namespace gmx
{
namespace test
{
/*! \brief
 * Returns the number of MPI ranks to use for an MPI test.
 *
 * For thread-MPI builds, this will return the requested number of ranks
 * even before the thread-MPI threads have been started.
 *
 * \ingroup module_testutils
 */
int getNumberOfTestMpiRanks();
//! \cond internal
/*! \brief
 * Helper function for GMX_MPI_TEST().
 *
 * \ingroup module_testutils
 */
bool threadMpiTestRunner(std::function<void()> testBody);
//! \endcond
/*! \brief
 * Declares that this test is an MPI-enabled unit test.
 *
 * \param[in] expectedRankCount Expected number of ranks for this test.
 *     The test will fail if it is run with an unsupported number of ranks.
 *
 * To write unit tests that run under MPI, you need to do a few things:
 *  - Put GMX_MPI_TEST() as the first statement in your test body and
 *    specify the number of ranks this test expects.
 *  - Declare your unit test in CMake with gmx_add_mpi_unit_test().
 *    Note that all tests in the binary should fulfill the conditions above,
 *    and work with the same number of ranks.
 *    TODO: Figure out a mechanism for mixing tests with different rank
 *    counts in the same binary (possibly, also MPI and non-MPI tests).
 *
 * When you do the above, the following will happen:
 *  - The test will be compiled only if thread-MPI or real MPI is enabled.
 *  - The test will be executed on the specified number of ranks.
 *    If you are using real MPI, the whole test binary is run under MPI and
 *    test execution across the processes is synchronized (GMX_MPI_TEST()
 *    actually has no effect in this case; the synchronization is handled at
 *    a higher level).
 *    If you are using thread-MPI, GMX_MPI_TEST() is required: it initializes
 *    thread-MPI with the specified number of threads and runs the rest of
 *    the test body on each of those threads.
 *
 * You need to be extra careful with variables in the test fixture, if you
 * use one: when run under thread-MPI, these are shared across all the
 * ranks, while under real MPI, they are naturally different for each
 * process. Local variables in the test body are private to each rank in
 * both cases.
 *
 * Currently, it is not possible to specify the number of ranks as one,
 * because that will lead to problems with (at least) thread-MPI, but such
 * tests can be written as serial tests anyway.
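 *
 * A minimal usage sketch (the test name and body below are hypothetical,
 * for illustration only; gmx_node_rank() comes from
 * "gromacs/utility/basenetwork.h", which this header already includes):
 * \code
 * TEST(ExampleMpiTest, RunsOnTwoRanks)
 * {
 *     GMX_MPI_TEST(2);
 *     // From here on, the body runs on each of the two ranks.
 *     if (gmx_node_rank() == 0)
 *     {
 *         // Work that only rank 0 should do goes here.
 *     }
 * }
 * \endcode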
 *
 * \ingroup module_testutils
 */
#if GMX_THREAD_MPI
#    define GMX_MPI_TEST(expectedRankCount) \
        do { \
            ASSERT_EQ(expectedRankCount, ::gmx::test::getNumberOfTestMpiRanks()); \
            using MyTestClass = std::remove_reference_t<decltype(*this)>; \
            if (!::gmx::test::threadMpiTestRunner(std::bind(&MyTestClass::TestBody, this))) \
            { \
                return; \
            } \
        } while (0)
#else
#    define GMX_MPI_TEST(expectedRankCount) \
        ASSERT_EQ(expectedRankCount, ::gmx::test::getNumberOfTestMpiRanks())
#endif
} // namespace test
} // namespace gmx

#endif