WRF-NMM Model Version 3.2 (March 31, 2010)

----------------------------
WRF-NMM PUBLIC DOMAIN NOTICE
----------------------------
WRF-NMM was developed at the National Centers for
Environmental Prediction (NCEP), which is part of
NOAA's National Weather Service. As a government
entity, NCEP makes no proprietary claims, either
statutory or otherwise, to this version and release of
WRF-NMM and considers WRF-NMM to be in the public
domain for use by any person or entity for any purpose
without any fee or charge. NCEP requests that any WRF
user include this notice on any partial or full copies
of WRF-NMM. WRF-NMM is provided on an "AS IS" basis
and any warranties, either express or implied,
including but not limited to implied warranties of
non-infringement, originality, merchantability and
fitness for a particular purpose, are disclaimed. In
no event shall NOAA, NWS or NCEP be liable for any
damages, whatsoever, whether direct, indirect,
consequential or special, that arise out of or in
connection with the access, use or performance of
WRF-NMM, including infringement actions.
================================================

This is the main directory for the WRF Version 3 source code release.

- For directions on compiling WRF for NMM, see below or the
  WRF-NMM Users' Web page (http://www.dtcenter.org/wrf-nmm/users/)
- Read the README.namelist file in the run/ directory (or on
  the WRF-NMM Users' page), and make changes carefully.

For questions, send mail to wrfhelp@ucar.edu
Version 3.2 was released on March 31, 2010.

- For more information on the WRF V3.2 release, visit the WRF-NMM Users' home page
  http://www.dtcenter.org/wrf-nmm/users/, and read the online User's Guide.
- The WRF V3 executables will work with V3.1 wrfinput/wrfbdy files. As
  always, rerunning the new programs is recommended.

The Online User's Guide has also been updated.
================================================
The ./compile script at the top level allows for easy selection of the
NMM and ARW cores of WRF at compile time.

- Specify your WRF-NMM options by setting the appropriate environment variables:

     setenv WRF_NMM_CORE 1   (selects the NMM core to be compiled)
     setenv WRF_NMM_NEST 1   (if nesting capability is desired)
     setenv HWRF 1           (if HWRF coupling/physics are desired)
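For users in a bash-family shell, the csh setenv lines above translate as follows. This is a sketch: WRF_NMM_CORE is the usual core-selection variable, and only the options you actually want should be set.

```shell
# bash equivalents of the csh setenv lines (set only what you need)
export WRF_NMM_CORE=1   # select the NMM core at compile time
export WRF_NMM_NEST=1   # optional: nesting capability
export HWRF=1           # optional: HWRF coupling/physics
echo "core=$WRF_NMM_CORE nest=$WRF_NMM_NEST hwrf=$HWRF"
```

Remember that these variables must be set in the same shell session in which you later run configure and compile.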
- The Registry files for NMM and ARW are not integrated
  yet. There are separate versions:

     Registry/Registry.NMM       <-- for NMM
     Registry/Registry.NMM_NEST  <-- for NMM with nesting
     Registry/Registry.EM        <-- for ARW (formerly known as Eulerian Mass)
How to configure, compile and run?
----------------------------------

- In the WRFV3 directory, type:

     configure

  This will create a configure.wrf file that has appropriate compile
  options for the supported computers. Edit your configure.wrf file as needed.

  Note: WRF requires the netCDF library. If your netCDF library is installed in
  a non-standard directory, set the environment variable NETCDF before you type
  'configure'. For example:

     setenv NETCDF /usr/local/lib32/r4i4
- Type:

     compile nmm_real

- If successful, this command will create nmm_real.exe and wrf.exe
  in the directory main/, and the appropriate executables will be linked into
  the run directories under test/nmm_real, or run/.

- cd to the appropriate test or run directory to run "nmm_real.exe" and "wrf.exe".

- Place the files from WPS (met_nmm.*, geo_nmm_nest*)
  in the appropriate directory, then run nmm_real.exe
  to produce wrfbdy_d01 and wrfinput_d01, and run wrf.exe.

- If you use mpich, type:

     mpirun -np number-of-processors wrf.exe
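Putting the steps above together, an end-to-end NMM run might look like the following sketch. The directory layout and the processor count are assumptions; the executable names are those named in this README.

```
cd test/nmm_real                  # or run/
# WPS output (met_nmm.*, geo_nmm_nest*) must already be present here
mpirun -np 4 nmm_real.exe         # writes wrfinput_d01 and wrfbdy_d01
mpirun -np 4 wrf.exe              # runs the forecast; output goes to wrfout_* files
```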
=============================================================================

What is in WRF-NMM V3.2?

- The WRF-NMM model is a fully compressible, non-hydrostatic model with a
  hydrostatic option.

- Supports one-way and two-way static and moving nests.

- The terrain-following hybrid pressure-sigma vertical coordinate is used.

- The grid staggering is the Arakawa E-grid.

- The same time step is used for all terms.
- Time stepping:
  - Horizontally propagating fast waves: forward-backward scheme
  - Vertically propagating sound waves: implicit scheme

- Advection (time):
  - Horizontal: the Adams-Bashforth scheme
  - Vertical: the Crank-Nicolson scheme
  - TKE, water species: forward, flux-corrected (called every two timesteps) / Eulerian,
    Adams-Bashforth and Crank-Nicolson with monotonization
- Advection (space):
  - Horizontal: energy- and enstrophy-conserving, quadratic conservative, second order
  - Vertical: quadratic conservative, second order, implicit
  - Tracers (water species and TKE): upstream, positive definite, conservative antifiltering
    gradient restoration; optional, see next bullet.
- Tracers (water species, TKE, and test tracer rrw): Eulerian with monotonization, coupled with the
  continuity equation; conservative, positive definite, monotone, optional. To turn it on/off, set
  the logical switch "euler" in solve_nmm.F to .true./.false. The monotonization parameter
  "steep" in subroutine mono should be in the range 0.96-1.0. For most natural tracers steep=1.
  should be adequate. Smaller values of steep are recommended for idealized tests with very
  steep gradients. This option is available only with Ferrier microphysics.
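As a sketch of how the "euler" switch might be flipped before compiling, the following demonstrates a sed one-liner on a stand-in file. The exact form of the declaration in solve_nmm.F (and its path in the source tree) may differ, so the pattern is an assumption; inspect the file before editing it.

```shell
# Stand-in line mimicking the "euler" logical; the real one lives in solve_nmm.F
printf 'LOGICAL :: euler = .false.\n' > euler_demo.F
# Flip .false. to .true. to enable the monotone Eulerian tracer advection
sed -i 's/euler *= *\.false\./euler = .true./' euler_demo.F
cat euler_demo.F
```

After such a change, the model must be recompiled for the new setting to take effect.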
- Horizontal diffusion: forward, second order, "Smagorinsky-type"

- Vertical diffusion:
  see the "Free atmosphere turbulence above surface layer" entry
  in the "Physics" section below.

- A new, highly conservative passive advection scheme was added in V3.2.
Operational Hurricane WRF (HWRF) components were added in V3.2. These enhancements include:
- Vortex-following moving nest for NMM
- Ocean coupling (with POM)
- Changes in diffusion coefficients
- Modifications/additions to physics schemes (tuned for the tropics)
- Updated existing SAS cumulus scheme
- Updated existing GFS boundary layer scheme
- Added new HWRF microphysics scheme
- Added new HWRF radiation scheme

Please see the WRF for Hurricanes webpage for more details:
http://www.dtcenter.org/HurrWRF/users
Physics options:

- Explicit microphysics: WRF Single Moment 5 and 6 class /
  Ferrier (used operationally at NCEP) / Thompson [a new version in 3.1]
  / HWRF microphysics (used operationally at NCEP for HWRF)

- Cumulus parameterization: Kain-Fritsch with shallow convection /
  Betts-Miller-Janjic (used operationally at NCEP) / Grell-Devenyi ensemble
  / Simplified Arakawa-Schubert (used operationally at NCEP for HWRF)

- Free atmosphere turbulence above surface layer: Mellor-Yamada-Janjic (used operationally at NCEP)

- Planetary boundary layer: YSU / Mellor-Yamada-Janjic (used operationally at NCEP)
  / NCEP Global Forecast System scheme (used operationally at NCEP for HWRF)
  / GFS / Quasi-Normal Scale Elimination

- Surface layer: similarity-theory scheme with viscous sublayers
  over both solid surfaces and water points (Janjic; used operationally at NCEP)
  / GFS / YSU / Quasi-Normal Scale Elimination / GFDL surface layer (used operationally at NCEP for HWRF)

- Soil model: Noah land-surface model (4-level; used operationally at NCEP) /
  RUC LSM (6-level) / GFDL slab model (used operationally at NCEP for HWRF)

- Longwave radiation: GFDL scheme (Fels-Schwarzkopf) (used
  operationally at NCEP) / Modified GFDL scheme (used operationally
  at NCEP for HWRF) / RRTM

- Shortwave radiation: GFDL scheme (Lacis-Hansen) (used operationally
  at NCEP) / Modified GFDL shortwave (used operationally at NCEP
  for HWRF)

- Gravity wave drag with mountain wave blocking (Alpert; Kim and Arakawa)

- Sea surface temperature updates during long simulations
- Hierarchical software architecture that insulates scientific code
  (Model Layer) from computer architecture (Driver Layer)
- Multi-level parallelism supporting distributed-memory (MPI)
- Active data registry: defines and manages model state fields, I/O,
  nesting, configuration, and numerous other aspects of WRF through a single file,
  called the Registry
- Easy to extend: forcing and feedback of new fields specified by
  editing a single table in the Registry
- Efficient: 5-8% overhead on 64 processes of IBM
- Enhanced I/O options:
  NetCDF and Parallel HDF5 formats
  Nine auxiliary input and history output streams separately controllable through the
  namelist
  Output file names and time-stamps specifiable through namelist
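As an illustration, an auxiliary history stream is typically controlled through entries in the &time_control section of namelist.input, as in the following sketch. The stream number, filename pattern, and interval are arbitrary examples; consult README.namelist for the options your build supports.

```
&time_control
 auxhist2_outname    = "auxout_d<domain>_<date>",  ! output file name pattern
 auxhist2_interval   = 60,                         ! minutes between writes
 frames_per_auxhist2 = 1,                          ! one time level per file
 io_form_auxhist2    = 2,                          ! 2 = NetCDF
/
```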
- Efficient execution on a range of computing platforms:
  IBM SP systems (e.g. NCAR "bluevista", "blueice", "bluefire" Power5-based systems)
  IA64 MPP (HP Superdome, SGI Altix, NCSA Teragrid systems)
  x86_64 (e.g. TACC's "Ranger", NOAA/GSD "wJet")
  PGI, Intel, Pathscale, gfortran, g95 compilers supported
  Sun Solaris (single threaded and SMP)
  Cray X1, X1e (vector), XT3/4 (Opteron)
  Mac Intel/ppc, PGI/ifort/g95
- RSL_LITE: communication layer, scalable to very large domains, supports nesting
- I/O: NetCDF, parallel NetCDF (Argonne), HDF5, GRIB, raw binary, quilting (asynchronous I/O)
- ESMF time management, including exact arithmetic for fractional
  time steps (no drift)
- ESMF integration - WRF can be run as an ESMF component
- Improved documentation, both on-line (web-based browsing tools) and in-line
- Hierarchical software architecture that insulates scientific code
  (Model Layer) from computer architecture (Driver Layer)
- Multi-level parallelism supporting shared-memory (OpenMP), distributed-memory (MPI),
  and hybrid shared/distributed modes of execution
- Serial compilation can be used for single-domain runs but not for runs with
  nesting at this time.
- Active data registry: defines and manages model state fields, I/O,
  configuration, and numerous other aspects of WRF through a single file,
  called the Registry
- Enhanced I/O options:
  NetCDF and Parallel HDF5 formats
  Five auxiliary history output streams separately controllable through the namelist
  Output file names and time-stamps specifiable through namelist
- Testing: various regression tests are performed on HP/Compaq systems at
  NCAR/MMM whenever a change is introduced into WRF cores.

- Efficient execution on a range of computing platforms:
  IBM SP systems (e.g. NCAR "bluevista", "blueice" and NCEP's "blue", Power4-based systems)
  HP/Compaq Alpha/OSF workstation, SMP, and MPP systems (e.g. Pittsburgh
  Supercomputing Center TCS)
  IA64 MPP (HP Superdome, SGI Altix, NCSA Teragrid systems)
  Pentium 3/4 SMP and SMP clusters (NOAA/FSL iJet system)
  PGI and Intel compilers supported
  Alpha Linux (NOAA/FSL Jet system)
  Sun Solaris (single threaded and SMP)
- Other ports are under development.

- RSL_LITE: communication layer, scalable to very large domains
- ESMF time management, including exact arithmetic for fractional
  time steps (no drift); model start, stop, run length and I/O frequencies are
  now specified as times and time intervals
- Improved documentation, both on-line (web-based browsing tools) and in-line

--------------------------------------------------------------------------