AMPI has its own target in the build system, which can be specified with
your architecture and other options. For example:

  ./build AMPI net-linux gm -O2

Compiling and Linking AMPI Programs
-----------------------------------

AMPI source files can be compiled with charmc or with the wrappers found
in Charm++'s bin/ directory: mpiCC, mpicxx, mpif77, and mpif90. Note that you
need to specify a Fortran compiler when building Charm++ for Fortran
compilation to work.
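
For instance, a C++ source file could be compiled through one of these
wrappers (the file names here are only illustrative):

  mpicxx -c source1.C
  mpicxx -c source2.C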

To link, use charmc -language ampi. For example:

  charmc -language ampi -o myexecutable source1.o source2.o

Running AMPI Programs
---------------------

AMPI programs can be run with charmrun like any other Charm++ program. In
addition to the number of processes, specified with "+p n", AMPI programs
can also specify the number of virtual processors (VPs) with "+vp n". For more
information on using virtual processors, consult the AMPI manual.
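
For example, the executable linked above might be launched on 2 processors
with 8 virtual processors (the counts here are arbitrary):

  ./charmrun ./myexecutable +p2 +vp8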

Porting to AMPI
---------------

Global and static variables are unusable in virtualized AMPI programs, because
a separate copy would be needed for each VP. Therefore, to run with more than
1 VP per processor, all globals and statics must be modified to use local
storage.
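
One common approach, sketched below with a hypothetical per-VP struct rather
than code taken from the AMPI manual, is to gather former globals into a
structure that each VP allocates locally and passes to the routines that
need it:

  #include <stdlib.h>
  #include "mpi.h"

  /* Formerly: int counter;  -- a global shared by all VPs on a processor. */
  typedef struct {
    int counter;
  } PerVPState;

  int main(int argc, char **argv)
  {
    MPI_Init(&argc, &argv);

    /* Each virtual processor allocates and owns its own copy of the state. */
    PerVPState *state = malloc(sizeof(PerVPState));
    state->counter = 0;

    /* ... pass 'state' to any routine that previously used the global ... */

    free(state);
    MPI_Finalize();
    return 0;
  }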

AMPI has some known flaws and incompatibilities with other MPI implementations:
* MPI_Cancel does not actually cancel pending communication.
* Creating MPI_Requests sometimes fails.
* MPI_Sendrecv_replace gives incorrect results.
* Persistent sends with IRSend don't work.
* MPI_Testall improperly frees requests.
* MPI_Isend/MPI_Irecv do not work when using MPI_LONG_DOUBLE.
* MPI_Get_elements returns the expected number of elements instead of the
  actual number received.
* MPI_Unpack gives incorrect results.
* Data alignment in user-defined types does not match the MPI standard.
* Scatter/gather using noncontiguous types gives incorrect results.
* Datatypes are not reused, freed, or reference counted.