/*
 * This file is part of the GROMACS molecular simulation package.
 *
 * Copyright (c) 2019, by the GROMACS development team, led by
 * Mark Abraham, David van der Spoel, Berk Hess, and Erik Lindahl,
 * and including many others, as listed in the AUTHORS file in the
 * top-level source directory and at http://www.gromacs.org.
 *
 * GROMACS is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public License
 * as published by the Free Software Foundation; either version 2.1
 * of the License, or (at your option) any later version.
 *
 * GROMACS is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with GROMACS; if not, see
 * http://www.gnu.org/licenses, or write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * If you want to redistribute modifications to GROMACS, please
 * consider that scientific software is very special. Version
 * control is crucial - bugs must be traceable. We will be happy to
 * consider code for inclusion in the official distribution, but
 * derived work must not be called official GROMACS. Details are found
 * in the README & COPYING files - if they are missing, get the
 * official version at http://www.gromacs.org.
 *
 * To help us fund GROMACS development, we humbly ask that you cite
 * the research papers on the package. Check out http://www.gromacs.org.
 */
/*! \internal \file
 *
 * \brief Declaration of interfaces for GPU state data propagator object.
 *
 * This object stores and manages positions, velocities and forces for
 * all particles in the system on the GPU.
 *
 * \todo Add cycle counters.
 * \todo Add synchronization points.
 *
 * \author Artem Zhmurov <zhmurov@gmail.com>
 *
 * \inlibraryapi
 * \ingroup module_mdtypes
 */
#ifndef GMX_MDTYPES_STATE_PROPAGATOR_DATA_GPU_H
#define GMX_MDTYPES_STATE_PROPAGATOR_DATA_GPU_H

#include <tuple>

#include "gromacs/gpu_utils/devicebuffer_datatype.h"
#include "gromacs/gpu_utils/gpu_utils.h"
#include "gromacs/math/vectypes.h"
#include "gromacs/mdtypes/simulation_workload.h"
#include "gromacs/utility/arrayref.h"
#include "gromacs/utility/classhelpers.h"

class GpuEventSynchronizer;

namespace gmx
{

class StatePropagatorDataGpu
{
    public:
        /*! \brief Atom locality indicator: local, non-local, all.
         *
         * \todo This should be managed by a separate object, since the localities
         *       are used here and in buffer ops.
         */
        enum class AtomLocality : int
        {
            Local    = 0, //!< Local atoms
            NonLocal = 1, //!< Non-local atoms
            All      = 2, //!< Both local and non-local atoms
            Count    = 3  //!< The number of atom locality types
        };

        /*! \brief Constructor
         *
         * The buffers are reallocated only at the reinit call; the padding is
         * used there for the coordinates buffer. It is needed for PME and added at
         * the end of the buffer. It is assumed that if the rank has PME duties on the
         * GPU, all coordinates are copied to the GPU and hence, for this rank, the
         * coordinates buffer is not split into local and non-local ranges. For other
         * ranks, the padding size is zero. This works because only one rank ever does
         * PME work on the GPU, and if that rank also does PP work it is the only
         * rank. So all coordinates are always transferred.
         *
         * In OpenCL, only \p pmeStream is used since it is the only stream created in
         * the PME context. The local and non-local streams are only needed when buffer
         * ops are offloaded. This feature is currently not available in OpenCL and
         * hence these streams are not set in these builds.
         *
         * \note In CUDA, the update stream is created in the constructor as a temporary
         *       solution, in place until the stream manager is introduced.
         *       Note that this makes it impossible to construct this object in CUDA
         *       builds executing on a host without any CUDA-capable device available.
         *
         * \note In CUDA, \p deviceContext is unused, hence always nullptr;
         *       all stream arguments can also be nullptr in runs where the
         *       respective streams are not required.
         *       In OpenCL, \p deviceContext needs to be a valid device context.
         *       In OpenCL runs StatePropagatorDataGpu is currently only used
         *       with PME offload, and only on ranks with PME duty. Hence, the
         *       \p pmeStream argument needs to be a valid OpenCL queue object
         *       which must have been created in \p deviceContext.
         *
         * \todo Make a \p CommandStream visible in the CPU parts of the code so we
         *       will not have to pass a void*.
         * \todo Make a \p DeviceContext object visible in CPU parts of the code so we
         *       will not have to pass a void*.
         *
         *  \param[in] pmeStream       Device PME stream, nullptr allowed.
         *  \param[in] localStream     Device NBNXM local stream, nullptr allowed.
         *  \param[in] nonLocalStream  Device NBNXM non-local stream, nullptr allowed.
         *  \param[in] deviceContext   Device context, nullptr allowed.
         *  \param[in] transferKind    H2D/D2H transfer call behavior (synchronous or not).
         *  \param[in] paddingSize     Padding size for coordinates buffer.
         */
        StatePropagatorDataGpu(const void        *pmeStream,
                               const void        *localStream,
                               const void        *nonLocalStream,
                               const void        *deviceContext,
                               GpuApiCallBehavior transferKind,
                               int                paddingSize);
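
        // A minimal construction sketch for a rank with both PP and PME duties in a
        // CUDA build. This is a hedged illustration: the stream handles and the
        // padding value are hypothetical placeholders obtained elsewhere in mdrun
        // setup, not names defined by this header.
        //
        //     StatePropagatorDataGpu stateGpu(pmeStream,      // from PME GPU setup
        //                                     localStream,    // from NBNXM local setup
        //                                     nonLocalStream, // from NBNXM non-local setup
        //                                     nullptr,        // deviceContext is unused in CUDA
        //                                     GpuApiCallBehavior::Async,
        //                                     paddingSize);   // e.g. the PME-required padding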

        /*! \brief Constructor to use on a PME-only rank and in tests.
         *
         * This constructor should be used if only a coordinate device buffer should be managed
         * using a single stream. Any operation on the force or velocity buffers, as well as any
         * copy of non-local coordinates, will exit with an assertion failure. Note that
         * \p pmeStream cannot be nullptr; the constructor will exit with an assertion failure
         * if it is.
         *
         * \todo Currently, unsupported copy operations are blocked by an assertion that the
         *       respective stream is not nullptr. This should be improved.
         *
         *  \param[in] pmeStream       Device PME stream, nullptr is not allowed.
         *  \param[in] deviceContext   Device context, nullptr allowed for non-OpenCL builds.
         *  \param[in] transferKind    H2D/D2H transfer call behavior (synchronous or not).
         *  \param[in] paddingSize     Padding size for coordinates buffer.
         */
        StatePropagatorDataGpu(const void        *pmeStream,
                               const void        *deviceContext,
                               GpuApiCallBehavior transferKind,
                               int                paddingSize);
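
        // A minimal sketch of the PME-only rank / test usage this constructor enables.
        // Hedged illustration: pmeStream, deviceContext, numAtoms and h_x are
        // placeholders for handles and data provided by the PME or test setup code.
        //
        //     StatePropagatorDataGpu stateGpu(pmeStream,     // must not be nullptr
        //                                     deviceContext, // must be valid in OpenCL builds
        //                                     GpuApiCallBehavior::Async,
        //                                     paddingSize);
        //     stateGpu.reinit(numAtoms, numAtoms); // no local/non-local split on this rank
        //     stateGpu.copyCoordinatesToGpu(h_x, StatePropagatorDataGpu::AtomLocality::All);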

        //! Move constructor
        StatePropagatorDataGpu(StatePropagatorDataGpu &&other) noexcept;
        //! Move assignment
        StatePropagatorDataGpu &operator=(StatePropagatorDataGpu &&other) noexcept;
        //! Destructor
        ~StatePropagatorDataGpu();

        /*! \brief Set the ranges for local and non-local atoms and reallocate buffers.
         *
         * \note
         * The coordinates buffer is (re)allocated, when required by PME, with a padding,
         * the size of which is set by the constructor. The padding region clearing kernel
         * is scheduled in the \p pmeStream_ (unlike the coordinates H2D) as only the PME
         * task uses this padding area.
         *
         *  \param[in] numAtomsLocal  Number of atoms in local domain.
         *  \param[in] numAtomsAll    Total number of atoms to handle.
         */
        void reinit(int numAtomsLocal, int numAtomsAll);
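
        // A sketch of the expected call pattern around domain decomposition. Hedged
        // illustration: numHomeAtoms and numHaloAtoms are hypothetical counts coming
        // from the DD partitioning, not part of this API.
        //
        //     // After (re)partitioning, when atom counts may have changed:
        //     stateGpu.reinit(numHomeAtoms, numHomeAtoms + numHaloAtoms);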

        /*! \brief Returns the range of atoms to be copied based on the copy type (all, local or non-local).
         *
         * \todo There are at least three versions of the function with this functionality in the code:
         *       this one and two more in NBNXM. These should be unified into a general function in DD.
         *
         *  \param[in] atomLocality  If all, local or non-local ranges are needed.
         *
         * \returns Tuple, containing the index of the first atom in the range and the total number of atoms in the range.
         */
        std::tuple<int, int> getAtomRangesFromAtomLocality(AtomLocality atomLocality);
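
        // A sketch of consuming the returned range. Hedged illustration: atomStart
        // and numAtomsInRange are local variables of the caller, not part of this API.
        //
        //     int atomStart, numAtomsInRange;
        //     std::tie(atomStart, numAtomsInRange) = stateGpu.getAtomRangesFromAtomLocality(
        //                 StatePropagatorDataGpu::AtomLocality::Local);
        //     // e.g. schedule a copy or a kernel over [atomStart, atomStart + numAtomsInRange)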

        /*! \brief Get the positions buffer on the GPU.
         *
         *  \returns GPU positions buffer.
         */
        DeviceBuffer<float> getCoordinates();

        /*! \brief Copy positions to the GPU memory.
         *
         *  \param[in] h_x           Positions in the host memory.
         *  \param[in] atomLocality  Locality of the particles to copy.
         */
        void copyCoordinatesToGpu(gmx::ArrayRef<const gmx::RVec>  h_x,
                                  AtomLocality                    atomLocality);

        /*! \brief Get the event synchronizer of the coordinates ready for the consumption on the device.
         *
         * Returns the event synchronizer which indicates that the coordinates are ready for the
         * consumption on the device. Takes into account that the producer may be different.
         *
         * If the update is offloaded, and the current step is not a DD/search step, the returned
         * synchronizer indicates the completion of GPU update-constraint kernels. Otherwise, on search
         * steps and if update is not offloaded, the coordinates are provided by the H2D copy and the
         * returned synchronizer indicates that the copy is complete.
         *
         *  \param[in] atomLocality    Locality of the particles to wait for.
         *  \param[in] simulationWork  The simulation lifetime flags.
         *  \param[in] stepWork        The step lifetime flags.
         *
         *  \returns  The event to synchronize the stream that consumes coordinates on device.
         */
        GpuEventSynchronizer* getCoordinatesReadyOnDeviceEvent(AtomLocality              atomLocality,
                                                               const SimulationWorkload &simulationWork,
                                                               const StepWorkload       &stepWork);
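
        // A sketch of the intended synchronization pattern for a device-side consumer
        // of the coordinates. Hedged illustration: consumerStream stands in for a
        // backend-specific stream handle, and AtomLocality abbreviates
        // StatePropagatorDataGpu::AtomLocality.
        //
        //     stateGpu.copyCoordinatesToGpu(h_x, AtomLocality::Local);
        //     GpuEventSynchronizer *xReady = stateGpu.getCoordinatesReadyOnDeviceEvent(
        //                 AtomLocality::Local, simulationWork, stepWork);
        //     xReady->enqueueWaitEvent(consumerStream); // consumer stream waits for the producer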

        /*! \brief Getter for the event synchronizer marking that the coordinates were updated on the GPU.
         *
         *  \returns  The event to synchronize the stream in which the coordinates were updated on device.
         */
        GpuEventSynchronizer* xUpdatedOnDevice();

        /*! \brief Copy positions from the GPU memory.
         *
         *  \param[in] h_x           Positions buffer in the host memory.
         *  \param[in] atomLocality  Locality of the particles to copy.
         */
        void copyCoordinatesFromGpu(gmx::ArrayRef<gmx::RVec>  h_x,
                                    AtomLocality              atomLocality);

        /*! \brief Wait until coordinates are available on the host.
         *
         *  \param[in] atomLocality  Locality of the particles to wait for.
         */
        void waitCoordinatesReadyOnHost(AtomLocality atomLocality);
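
        // A sketch of a blocking device-to-host read-back of the local coordinates,
        // using only members declared above (AtomLocality abbreviates
        // StatePropagatorDataGpu::AtomLocality):
        //
        //     stateGpu.copyCoordinatesFromGpu(h_x, AtomLocality::Local);
        //     stateGpu.waitCoordinatesReadyOnHost(AtomLocality::Local);
        //     // h_x is now safe to read on the host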

        /*! \brief Get the velocities buffer on the GPU.
         *
         *  \returns GPU velocities buffer.
         */
        DeviceBuffer<float> getVelocities();

        /*! \brief Copy velocities to the GPU memory.
         *
         *  \param[in] h_v           Velocities in the host memory.
         *  \param[in] atomLocality  Locality of the particles to copy.
         */
        void copyVelocitiesToGpu(gmx::ArrayRef<const gmx::RVec>  h_v,
                                 AtomLocality                    atomLocality);

        /*! \brief Get the event synchronizer for the H2D velocities copy.
         *
         *  \param[in] atomLocality  Locality of the particles to wait for.
         *
         *  \returns  The event to synchronize the stream that consumes velocities on device.
         */
        GpuEventSynchronizer* getVelocitiesReadyOnDeviceEvent(AtomLocality atomLocality);

        /*! \brief Copy velocities from the GPU memory.
         *
         *  \param[in] h_v           Velocities buffer in the host memory.
         *  \param[in] atomLocality  Locality of the particles to copy.
         */
        void copyVelocitiesFromGpu(gmx::ArrayRef<gmx::RVec>  h_v,
                                   AtomLocality              atomLocality);

        /*! \brief Wait until velocities are available on the host.
         *
         *  \param[in] atomLocality  Locality of the particles to wait for.
         */
        void waitVelocitiesReadyOnHost(AtomLocality atomLocality);

        /*! \brief Get the force buffer on the GPU.
         *
         *  \returns GPU force buffer.
         */
        DeviceBuffer<float> getForces();

        /*! \brief Copy forces to the GPU memory.
         *
         *  \param[in] h_f           Forces in the host memory.
         *  \param[in] atomLocality  Locality of the particles to copy.
         */
        void copyForcesToGpu(gmx::ArrayRef<const gmx::RVec>  h_f,
                             AtomLocality                    atomLocality);

        /*! \brief Get the event synchronizer for the forces ready on device.
         *
         *  Returns one of two event synchronizers, depending on the offload scenario
         *  for the current simulation timestep:
         *  1. The forces are copied to the device (when GPU buffer ops are off)
         *  2. The forces are reduced on the device (when GPU buffer ops are on)
         *
         *  \todo Pass step workload instead of the useGpuFBufferOps boolean.
         *
         *  \param[in] atomLocality      Locality of the particles to wait for.
         *  \param[in] useGpuFBufferOps  If the force buffer ops are offloaded to the GPU.
         *
         *  \returns  The event to synchronize the stream that consumes forces on device.
         */
        GpuEventSynchronizer* getForcesReadyOnDeviceEvent(AtomLocality atomLocality,
                                                          bool         useGpuFBufferOps);
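
        // A sketch of waiting for the forces on the device. Hedged illustration:
        // whether the returned event marks the H2D copy or the buffer-ops reduction
        // depends on the second argument, as described above; consumerStream is a
        // placeholder for a backend-specific stream handle.
        //
        //     GpuEventSynchronizer *fReady = stateGpu.getForcesReadyOnDeviceEvent(
        //                 StatePropagatorDataGpu::AtomLocality::All, stepWork.useGpuFBufferOps);
        //     fReady->enqueueWaitEvent(consumerStream);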

        /*! \brief Getter for the event synchronizer marking that the forces were reduced on the GPU.
         *
         *  \returns  The event to mark when forces are reduced on the GPU.
         */
        GpuEventSynchronizer* fReducedOnDevice();

        /*! \brief Copy forces from the GPU memory.
         *
         *  \param[in] h_f           Forces buffer in the host memory.
         *  \param[in] atomLocality  Locality of the particles to copy.
         */
        void copyForcesFromGpu(gmx::ArrayRef<gmx::RVec>  h_f,
                               AtomLocality              atomLocality);

        /*! \brief Wait until forces are available on the host.
         *
         *  \param[in] atomLocality  Locality of the particles to wait for.
         */
        void waitForcesReadyOnHost(AtomLocality atomLocality);

        /*! \brief Getter for the update stream.
         *
         *  \todo This is temporary here, until the management of this stream is taken over.
         *
         *  \returns  The device command stream to use in update-constraints.
         */
        void* getUpdateStream();

        /*! \brief Getter for the number of local atoms.
         *
         *  \returns  The number of local atoms.
         */
        int numAtomsLocal();

        /*! \brief Getter for the total number of atoms.
         *
         *  \returns  The total number of atoms.
         */
        int numAtomsAll();

    private:
        class Impl;
        gmx::PrivateImplPointer<Impl> impl_;
        GMX_DISALLOW_COPY_AND_ASSIGN(StatePropagatorDataGpu);
};

}      // namespace gmx

#endif // GMX_MDTYPES_STATE_PROPAGATOR_DATA_GPU_H