WRF-NMM Model Version 3.1 (April 9, 2009)

----------------------------
WRF-NMM PUBLIC DOMAIN NOTICE
----------------------------
WRF-NMM was developed at the National Centers for
Environmental Prediction (NCEP), which is part of
NOAA's National Weather Service. As a government
entity, NCEP makes no proprietary claims, either
statutory or otherwise, to this version and release of
WRF-NMM and considers WRF-NMM to be in the public
domain for use by any person or entity for any purpose
without any fee or charge. NCEP requests that any WRF
user include this notice on any partial or full copies
of WRF-NMM. WRF-NMM is provided on an "AS IS" basis
and any warranties, either express or implied,
including but not limited to implied warranties of
non-infringement, originality, merchantability and
fitness for a particular purpose, are disclaimed. In
no event shall NOAA, NWS or NCEP be liable for any
damages whatsoever, whether direct, indirect,
consequential or special, that arise out of or in
connection with the access, use or performance of
WRF-NMM, including infringement actions.
================================================

This is the main directory for the WRF Version 3 source code release.

- For directions on compiling WRF for NMM, see below or the WRF-NMM Users' Web page
  (http://www.dtcenter.org/wrf-nmm/users/)
- Read the README.namelist file in the run/ directory (or on the WRF-NMM Users' page),
  and make changes carefully.

For questions, send mail to wrfhelp@ucar.edu
Version 3.1 was released on April 9, 2009.

- For more information on the WRF V3.1 release, visit the WRF-NMM Users' home page
  (http://www.dtcenter.org/wrf-nmm/users/) and read the online User's Guide.
- The WRF V3 executable will work with V3.0 wrfinput/wrfbdy files. As
  always, rerunning the new programs is recommended.

The online User's Guide has also been updated.
================================================
The ./compile script at the top level allows for easy selection of the
NMM and ARW cores of WRF at compile time.

- Specify your WRF-NMM option by setting the appropriate environment variables:

     setenv WRF_NMM_CORE 1
     setenv WRF_NMM_NEST 1     (if nesting capability is desired)

- The Registry files for NMM and ARW are not integrated
  yet. There are separate versions:

     Registry/Registry.NMM       <-- for NMM
     Registry/Registry.NMM_NEST  <-- for NMM with nesting
     Registry/Registry.EM        <-- for ARW (formerly known as Eulerian Mass)
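Taken together, a typical pre-configure environment setup might look like the sketch below. It uses Bourne-shell (sh/bash) syntax rather than the csh `setenv` form shown above, and the netCDF path is purely illustrative:

```shell
# Bourne-shell (sh/bash) equivalents of the csh `setenv` lines above.
# The netCDF path is illustrative -- point NETCDF at your own installation.
export WRF_NMM_CORE=1     # select the NMM core
export WRF_NMM_NEST=1     # optional: enable NMM nesting
export NETCDF=/usr/local/lib32/r4i4

# Quick sanity check before running ./configure
echo "WRF_NMM_CORE=$WRF_NMM_CORE WRF_NMM_NEST=$WRF_NMM_NEST NETCDF=$NETCDF"
```

These variables must be set in the shell that runs ./configure, since the configure script reads them to pick the NMM build options.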
How to configure, compile and run?
----------------------------------

- In the WRFV3 directory, type:

     configure

  This will create a configure.wrf file that has appropriate compile
  options for the supported computers. Edit your configure.wrf file as needed.

  Note: WRF requires the netCDF library. If your netCDF library is installed in
  an unusual directory, set the environment variable NETCDF before you type
  'configure'. For example:

     setenv NETCDF /usr/local/lib32/r4i4
- Type:

     compile nmm_real

- If successful, this command will create nmm_real.exe and wrf.exe
  in the directory main/, and the appropriate executables will be linked into
  the run directories under test/nmm_real, or run/.

- cd to the appropriate test or run directory to run "nmm_real.exe" and "wrf.exe".

- Place the files from WPS (met_nmm.*, geo_nmm_nest*)
  in the appropriate directory and type

     nmm_real.exe

  to produce wrfbdy_d01 and wrfinput_d01. Then type

     wrf.exe

- If you use mpich, type

     mpirun -np number-of-processors wrf.exe
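The run sequence above can be wrapped in a small helper script. `run_nmm_case` below is a hypothetical convenience function (not part of the WRF distribution) that runs the preprocessor and then the model from a given run directory, stopping at the first failure:

```shell
# Hypothetical helper (not part of the WRF distribution): run nmm_real.exe
# and then wrf.exe from a given run directory, aborting on the first error.
run_nmm_case() {
    casedir=$1
    (
        cd "$casedir" || exit 1
        ./nmm_real.exe || exit 1   # writes wrfbdy_d01 and wrfinput_d01
        ./wrf.exe                  # writes the model history output
    )
}

# Usage example:
#   run_nmm_case test/nmm_real
```

The subshell keeps the caller's working directory unchanged regardless of where the case runs.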
=============================================================================

What is in WRF-NMM V3.1?
- The WRF-NMM model is a fully compressible, non-hydrostatic model with a
  hydrostatic option.
- Supports one-way and two-way static nesting.

- The terrain-following hybrid pressure-sigma vertical coordinate is used.

- The grid staggering is the Arakawa E-grid.

- The same time step is used for all terms.
- Horizontally propagating fast waves: Forward-backward scheme
- Vertically propagating sound waves: Implicit scheme
  T, U, V:
    - Horizontal: The Adams-Bashforth scheme
    - Vertical: The Crank-Nicolson scheme
  TKE, water species: Forward, flux-corrected (called every two timesteps) /
    Eulerian, Adams-Bashforth and Crank-Nicolson with monotonization.
- Horizontal: Energy and enstrophy conserving,
  quadratic conservative, second order

- Vertical: Quadratic conservative, second order, implicit

- Tracers (water species and TKE): upstream, positive definite, conservative antifiltering
  gradient restoration; optional, see next bullet.
- Tracers (water species, TKE, and test tracer rrw): Eulerian with monotonization, coupled with the
  continuity equation; conservative, positive definite, monotone, optional. To turn it on or off, set
  the logical switch "euler" in solve_nmm.F to .true./.false. The monotonization parameter
  "steep" in subroutine mono should be in the range 0.96-1.0. For most natural tracers steep=1.
  should be adequate. Smaller values of steep are recommended for idealized tests with very
  steep gradients. This option is available only with Ferrier microphysics.
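If the switch is set in a single assignment, it can also be flipped from the command line before recompiling. The sed pattern below assumes the switch appears in solve_nmm.F as `euler=.false.`; both the file path and the exact form of the assignment are assumptions, so verify them in your own source tree first:

```shell
# Flip the tracer-advection switch "euler" from .false. to .true. in place,
# keeping a backup copy (.bak) of the original file.  The path dyn_nmm/ and
# the assignment form "euler=.false." are assumptions -- check your source.
sed -i.bak 's/euler *= *\.false\./euler=.true./' dyn_nmm/solve_nmm.F
```

Remember to recompile afterwards; the switch is a compile-time setting, not a namelist option.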
- Horizontal diffusion: Forward, second order, "Smagorinsky-type"

- Vertical diffusion:
  See the "Free atmosphere turbulence above surface layer" item
  in the "Physics" section below.
- Explicit microphysics (WRF Single Moment 5 and 6 class /
  Ferrier (used operationally at NCEP) / Thompson [a new version in 2.2])

- Cumulus parameterization (Kain-Fritsch with shallow convection /
  Betts-Miller-Janjic (used operationally at NCEP) / Grell-Devenyi ensemble /
  Simplified Arakawa-Schubert)

- Free atmosphere turbulence above surface layer: Mellor-Yamada-Janjic (used operationally at NCEP)

- Planetary boundary layer: YSU / Mellor-Yamada-Janjic (used operationally at NCEP)

- Surface layer: Similarity theory scheme with viscous sublayers
  over both solid surfaces and water points (Janjic; used operationally at NCEP)
- Soil model: Noah land-surface model (4-level; used operationally at NCEP) /
  RUC LSM (6-level)
- Longwave radiation: GFDL scheme (Fels-Schwarzkopf) (used operationally at NCEP) / RRTM
- Shortwave radiation: GFDL scheme (Lacis-Hansen) (used operationally at NCEP) / Dudhia

- Gravity wave drag with mountain wave blocking (Alpert; Kim and Arakawa)
- Hierarchical software architecture that insulates scientific code
  (Model Layer) from computer architecture (Driver Layer)
- Multi-level parallelism supporting distributed-memory (MPI)
- Active data registry: defines and manages model state fields, I/O,
  nesting, configuration, and numerous other aspects of WRF through a single file,
  the Registry
  Easy to extend: forcing and feedback of new fields specified by
  editing a single table in the Registry
  Efficient: 5-8% overhead on 64 processes of IBM
- Enhanced I/O options:
  NetCDF and Parallel HDF5 formats
  Nine auxiliary input and history output streams separately controllable through the
  namelist
  Output file names and time-stamps specifiable through namelist
- Efficient execution on a range of computing platforms:
  IBM SP systems (e.g. NCAR "bluevista", "blueice", "bluefire" Power5-based systems)
  IA64 MPP (HP Superdome, SGI Altix, NCSA Teragrid systems)
  x86_64 (e.g. TACC's "Ranger", NOAA/GSD "wJet")
  PGI, Intel, Pathscale, gfortran, g95 compilers supported
  Sun Solaris (single threaded and SMP)
  Cray X1, X1e (vector), XT3/4 (Opteron)
  Mac Intel/ppc, PGI/ifort/g95
- RSL_LITE: communication layer, scalable to very large domains, supports nesting
- I/O: NetCDF, parallel NetCDF (Argonne), HDF5, GRIB, raw binary, quilting (asynchronous I/O)
- ESMF Time Management, including exact arithmetic for fractional
  time steps (no drift)
- ESMF integration: WRF can be run as an ESMF component
- Improved documentation, both on-line (web-based browsing tools) and in-line
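The auxiliary output streams mentioned above are configured in the &time_control section of namelist.input. The fragment below is a hedged sketch for stream 2; the file-name pattern, interval, and values are illustrative, and README.namelist in run/ is the authoritative reference for these options:

```
&time_control
 auxhist2_outname    = "auxhist2_d<domain>_<date>",
 auxhist2_interval   = 60,       ! minutes between stream-2 writes
 frames_per_auxhist2 = 1,        ! one time level per output file
 io_form_auxhist2    = 2,        ! 2 = netCDF
/
```

Each stream has its own family of namelist variables, so streams can be written at different intervals and in different formats.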
- Multi-level parallelism supporting shared-memory (OpenMP), distributed-memory (MPI),
  and hybrid shared/distributed modes of execution
- Serial compilation can be used for single-domain runs but not for runs with
  nesting at this time.
- Model start, stop, run length and I/O frequencies are
  specified as times and time intervals
- Testing: Various regression tests are performed on HP/Compaq systems at
  NCAR/MMM whenever a change is introduced into WRF cores.
--------------------------------------------------------------------------