Chapter 5: WRF Model

Introduction

The WRF model is a fully compressible and nonhydrostatic model (with a run-time hydrostatic option). Its vertical coordinate is a terrain-following hydrostatic pressure coordinate. The grid staggering is the Arakawa C-grid. The model uses the Runge-Kutta 2nd and 3rd order time integration schemes, and 2nd to 6th order advection schemes in both the horizontal and vertical. It uses a time-split small step for acoustic and gravity-wave modes. The dynamics conserves scalar variables.

The WRF model code contains an initialization program (either for real data, real.exe, or idealized data, ideal.exe; see Chapter 4), a numerical integration program (wrf.exe), a program to do one-way nesting (ndown.exe), and a program to do tropical storm bogussing (tc.exe). The WRF model, Version 3, supports a variety of capabilities. These include real-data and idealized simulations, full physics options, one-way, two-way, and moving nests, global simulations, digital filter initialization, and analysis and observation nudging, all described in this chapter.

 


 

Installing WRF

Before compiling the WRF code on a computer, check to see if the netCDF library is installed. This is because one of the supported WRF I/O options is netCDF, and it is the one commonly used and supported by the post-processing programs. If netCDF is installed in a directory other than /usr/local/, find the path and use the environment variable NETCDF to point to it. To do so, type

setenv NETCDF path-to-netcdf-library

Often the netCDF library and its include/ directory are collocated. If this is not the case, create a directory, link both netCDF lib and include directories in this directory, and use the environment variable to set the path to this directory. For example,

netcdf_links/lib -> /netcdf-lib-dir/lib
netcdf_links/include -> /where-include-dir-is/include

setenv NETCDF /directory-where-netcdf_links-is/netcdf_links

If the netCDF library is not available on the computer, it needs to be installed first. NetCDF source code or pre-built binaries may be downloaded from, and installation instructions can be found on, the Unidata Web page at http://www.unidata.ucar.edu/.

Hint: for Linux users:

If the PGI, Intel, gfortran or g95 compiler is used on a Linux computer, make sure netCDF was installed using the same compiler, and use the NETCDF environment variable to point to that compiler-matched netCDF library.
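If your netCDF installation provides the nc-config utility (netCDF 4.x), a quick way to confirm which Fortran compiler built the library is to query it; this sketch assumes NETCDF has already been set as above:

$NETCDF/bin/nc-config --fc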

Hint: If using netCDF-4, make sure that the new capabilities (such as parallel I/O based on HDF5) are not activated at install time, unless you intend to use the compression capability from netCDF-4 (supported since V3.5; more information below).

The WRF source code tar file can be downloaded from http://www.mmm.ucar.edu/wrf/users/download/get_source.html. Once the tar file is gunzipped (gunzip WRFV3.TAR.gz) and untarred (tar -xf WRFV3.TAR), it creates a WRFV3/ directory. This contains:

Makefile             Top-level makefile
README               General information about the WRF/ARW core
README_test_cases    Explanation of the test cases
README.NMM           General information for the WRF/NMM core
README.rsl_output    For NMM
Registry/            Directory for WRF Registry files
arch/                Directory where compile options are gathered
clean                Script to clean created files and executables
compile              Script for compiling the WRF code
configure            Script to create the configure.wrf file for compiling
chem/                WRF chemistry, supported by NOAA/GSD
dyn_em/              Directory for ARW dynamics and numerics
dyn_exp/             Directory for a 'toy' dynamic core
dyn_nmm/             Directory for NMM dynamics and numerics, supported by DTC
external/            Directory that contains external packages, such as those for IO, time keeping and MPI
frame/               Directory that contains modules for the WRF framework
inc/                 Directory that contains 'include' files
main/                Directory for main routines, such as wrf.F, and all executables after compilation
phys/                Directory for all physics modules
run/                 Directory where one may run WRF
share/               Directory that contains mostly modules for the WRF mediation layer and WRF I/O
test/                Directory that contains test case directories; may be used to run WRF
tools/               Directory that contains tools for developers

The steps to compile and run the model are:

1.     configure: generate a configuration file for compilation

2.     compile: compile the code

3.     run the model

Go to the WRFV3 (top) directory and type

./configure

and a list of choices for your computer should appear. These choices range from compiling for a single processor job (serial), to using OpenMP shared-memory (smpar) or distributed-memory parallelization (dmpar) options for multiple processors, or a combination of shared-memory and distributed-memory options (dm+sm). When a selection is made, a second choice for compiling nesting will appear. For example, on a Linux computer, the above steps may look like:

> setenv NETCDF /usr/local/netcdf-pgi
> ./configure

checking for perl5... no
checking for perl... found /usr/bin/perl (perl)
Will use NETCDF in dir: /usr/local/netcdf-pgi
PHDF5 not set in environment. Will configure WRF for use without.
$JASPERLIB or $JASPERINC not found in environment, configuring to build without grib2 I/O...
-----------------------------------------------------------------------
Please select from among the following supported platforms.

1.  Linux i486 i586 i686, gfortran compiler with gcc  (serial)
2.  Linux i486 i586 i686, gfortran compiler with gcc  (smpar)
3.  Linux i486 i586 i686, gfortran compiler with gcc  (dmpar)
4.  Linux i486 i586 i686, gfortran compiler with gcc  (dm+sm)
5.  Linux i486 i586 i686, g95 compiler with gcc  (serial)
6.  Linux i486 i586 i686, g95 compiler with gcc  (dmpar)
7.  Linux i486 i586 i686, PGI compiler with gcc  (serial)
8.  Linux i486 i586 i686, PGI compiler with gcc  (smpar)
9.  Linux i486 i586 i686, PGI compiler with gcc  (dmpar)
10.  Linux i486 i586 i686, PGI compiler with gcc  (dm+sm)
11.  Linux x86_64 i486 i586 i686, ifort compiler with icc (non-SGI installations)  (serial)
12.  Linux x86_64 i486 i586 i686, ifort compiler with icc (non-SGI installations)  (smpar)
13.  Linux x86_64 i486 i586 i686, ifort compiler with icc (non-SGI installations)  (dmpar)
14.  Linux x86_64 i486 i586 i686, ifort compiler with icc (non-SGI installations)  (dm+sm)
15.  Linux i486 i586 i686 x86_64, PathScale compiler with pathcc (serial)
16.  Linux i486 i586 i686 x86_64, PathScale compiler with pathcc  (dmpar)

Enter selection [1-16] : 9

Compile for nesting? (0=no nesting, 1=basic, 2=preset moves, 3=vortex following) [default 0]: 1

Enter the appropriate options that are best for your computer and application.

When the return key is hit, a configure.wrf file will be created. Edit compile options/paths, if necessary.

Hint: It is helpful to start with something simple, such as the serial build. If it is successful, move on to build smpar or dmpar code. Remember to type 'clean -a' between each build, as sketched below.
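For example, assuming the serial build succeeded and a dmpar build is wanted next, the rebuild cycle is a sketch like this (em_real is just an example target):

./clean -a
./configure          (select a dmpar option this time)
./compile em_real >& compile.log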

Hint: If you anticipate generating a netCDF file larger than 2 GB (whether it contains a single or multiple time periods of data [e.g. model history]), you may set the following environment variable to activate netCDF's large-file support option (in c-shell):

setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1

Hint: If you would like to use parallel netCDF (p-netCDF) developed by Argonne National Lab (http://trac.mcs.anl.gov/projects/parallel-netcdf), you will need to install p-netCDF separately, and use the environment variable PNETCDF to set the path:

setenv PNETCDF path-to-pnetcdf-library

Hint:  Since V3.5, compilation may take a bit longer due to the addition of the CLM4 module.  If you do not intend to use the CLM4 land-surface model option, you can modify your configure.wrf file by removing -DWRF_USE_CLM from ARCH_LOCAL.

To compile the code, type

./compile

and the following choices will appear:

  Usage:
 
compile wrf           compile wrf in run dir (Note, no real.exe, ndown.exe or ideal.exe generated)
 
or choose a test case (see README_test_cases for details):
 
compile em_b_wave
compile em_esmf_exp (example only)
compile em_grav2d_x
compile em_heldsuarez
compile em_hill2d_x
compile em_les
compile em_quarter_ss
compile em_real
compile em_seabreeze2d_x
compile em_squall2d_x
compile em_squall2d_y
compile em_tropical_cyclone
compile exp_real (example of a toy solver)
compile nmm_real (NMM solver)
 
   compile -h              help message
 

where em stands for the Advanced Research WRF dynamic solver (which currently is the 'Eulerian mass-coordinate' solver). Type one of the above to compile. When you switch from one test case to another, you must type one of the above to recompile. The recompile is necessary to create a new initialization executable (i.e. real.exe and ideal.exe; there is a different ideal.exe for each of the idealized test cases), while wrf.exe is the same for all test cases.

If you want to remove all object files (except those in the external/ directory) and executables, type 'clean'.

Type 'clean -a' to remove built files in ALL directories, including configure.wrf (the original configure.wrf will be saved to configure.wrf.backup). This is recommended if you make any mistake during the process, or if you have edited the configure.wrf or Registry files.

Hint: If you have trouble compiling routines, like solve_em.F, you can try to run the configure script with the optional argument ‘-s’, i.e.

./configure -s

This will configure to compile solve_em.F and a few other routines with reduced optimization.

If you would like to turn off optimization for all the code, say during code development and debugging, you can run the configure script with option ‘-d’:

./configure -d

Beginning with V3.5, the compression function in netCDF4 is supported. This option will typically reduce the file size by more than 50%. It will require netCDF4 to be installed with the option --enable-netcdf-4. Before compiling WRF, you will need to set the environment variable NETCDF4. In a C-shell environment, type setenv NETCDF4 1, followed by ‘configure’ and ‘compile’.
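In a C-shell environment, the full sequence might look like the following sketch (em_real is just an example target; this assumes your netCDF was built with --enable-netcdf-4):

setenv NETCDF4 1
./configure
./compile em_real >& compile.log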

For more detailed information, visit: http://www.mmm.ucar.edu/wrf/users/wrfv3.5/building-netcdf4.html

a. Idealized case

For any 2D test case (labeled in the case names), serial or OpenMP (smpar) compile options must be used.  Additionally, you must only choose the ‘0=no nesting’ option when you configure. For all other cases, you may use serial or parallel (dmpar) and nesting.  Suppose you would like to compile and run the 2-dimensional squall case, type

./compile em_squall2d_x >& compile.log

After a successful compilation, you should have two executables created in the main/ directory: ideal.exe and wrf.exe. These two executables will be linked to the corresponding test/case_name and run/ directories. cd to either directory to run the model.
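As a quick check that the build succeeded, list the executables; per the text above, ideal.exe and wrf.exe should be present:

ls -l main/*.exe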

It is good practice to save the entire compile output to a file. When the executables are not created, this output is useful for diagnosing the compile errors.

b. Real-data case

For a real-data case, type

./compile em_real >& compile.log &

When the compile is successful, it will create three executables in the main/ directory: ndown.exe, real.exe and wrf.exe.

real.exe: for WRF initialization of real data cases
ndown.exe : for one-way nesting
wrf.exe : WRF model integration

As with the idealized cases, these executables will be linked to the test/em_real and run/ directories. cd to one of these two directories to run the model.

 

 

Running WRF

One may run the model executables in either the run/ directory, or the test/case_name directory. In either case, one should see executables ideal.exe or real.exe (and ndown.exe), and wrf.exe, linked files (mostly for real-data cases), and one or more namelist.input files in the directory.

Hint: If you would like to run the model executables in a different directory, copy or link the files in the test/em_* directory to that directory, and run from there.

a. Idealized case

Suppose the test case em_squall2d_x is compiled. To run, type

cd test/em_squall2d_x

Edit the namelist.input file (see README.namelist in the WRFV3/run/ directory or its Web version) to change length of integration, frequency of output, size of domain, timestep, physics options, and other parameters.

If you see a script in the test case directory, called run_me_first.csh, run this one first by typing:

./run_me_first.csh

This links some physics data files that might be needed to run the case.

*Note:  when running em_fire, you must copy everything from the ‘hill_simple’ directory into your current working directory in order for it to run correctly.

cp hill_simple/* .

To run the initialization program, type

./ideal.exe

This program will typically read an input sounding file located in that directory, and generate an initial condition file, wrfinput_d01. The idealized cases do not require a lateral boundary file, because of the boundary condition choices they use, such as the periodic option. If the job runs successfully, the last thing it prints should be: 'wrf: SUCCESS COMPLETE IDEAL INIT'.

To run the model and save the standard output to a file, type

./wrf.exe >& wrf.out &

or for a 3D test case compiled with MPI (dmpar) option,

mpirun -np 4 ./wrf.exe

If  successful, the wrf output file will be written to a file named
wrfout_d01_0001-01-01_00:00:00.

Pairs of rsl.out.* and rsl.error.* files will appear with any MPI run. These are the standard output and error files, one pair per processor. Note that the execution command for MPI runs may differ between machines and MPI installations; check your system's documentation.

If the model run is successful, the last thing printed in the ‘wrf.out’ or rsl.*.0000 files should be: ‘wrf: SUCCESS COMPLETE WRF’. Output files wrfout_d01_0001-01-01* and wrfrst* should be present in the run directory, depending on how namelist variables are specified for output. The time stamp on these files originates from the start times in the namelist file.

b. Real-data case

To make a real-data case run, cd to the working directory by typing

cd test/em_real (or cd run)

Start with the namelist.input template file in the directory and edit it to match your case.

Running a real-data case requires successfully running the WRF Preprocessing System programs (or WPS). Make sure met_em.* files from WPS are seen in the run directory (either link or copy the files):

cd test/em_real
ls -l ../../../WPS/met_em*
ln -s ../../../WPS/met_em* .

Make sure you edit the following variables in the namelist.input file:

num_metgrid_levels: number of incoming data levels (can be found by running the ncdump command on a met_em.* file; see the example after this list)
num_metgrid_soil_levels: number of incoming soil data levels
eta_levels: model eta levels from 1 to 0, if you choose to specify them. If not, real will compute a set of eta levels. The computed eta levels have 7 half levels in the lowest 1 km or so, and stretch to constant dz.
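For example, using a met_em file name from the nested-run example later in this chapter (any met_em.* file will do), the number of levels can be read from the file header:

ncdump -h met_em.d01.2000-01-24_12:00:00 | grep num_metgrid_levels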

 

 

Other options to assist vertical interpolation are:

use_surface: whether to use surface input data
extrap_type: vertical extrapolation of non-temperature fields
t_extrap_type: vertical extrapolation for potential temperature
use_levels_below_ground: use levels below the input surface level
force_sfc_in_vinterp: force vertical interpolation to use surface data
lowest_lev_from_sfc: place surface data in the lowest model level
p_top_requested: pressure top used in the model; default is 5000 Pa
interp_type: vertical interpolation method: linear in p (default) or log(p)
lagrange_order: vertical interpolation order: linear (default) or quadratic
zap_close_levels: allow surface data to be used if it is close to a constant pressure level
smooth_cg_topo: smooth the topography on the outer rows and columns of domain 1
use_tavg_for_tsk: whether to use the diurnally-averaged surface temperature as the skin temperature. The diurnally-averaged surface temperature can be computed using the WPS utility avg_tsfc.exe. This option can be used when SKINTEMP is not present.

The other minimum set of namelist variables to edit is (an illustrative sketch follows the list):

start_*, end_*: start and end times for data processing and model integration
interval_seconds: input data interval for boundary conditions
time_step: model time step; can be set as large as 6*DX (DX in km, time step in seconds)
e_we, e_sn, e_vert: domain dimensions in west-east, south-north and vertical directions
dx, dy: model grid distance in meters
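The following is a minimal, illustrative &domains sketch for a single 30-km domain; all values are examples only (the time_step follows the 6*DX rule above, and num_metgrid_levels must match your input data):

&domains
 time_step           = 180,
 e_we                = 75,
 e_sn                = 70,
 e_vert              = 30,
 num_metgrid_levels  = 27,
 dx                  = 30000,
 dy                  = 30000,
/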

To run the real-data initialization program, compiled using serial or OpenMP (smpar) options, type

./real.exe >& real.out

Successful completion of the job should have ‘real_em: SUCCESS EM_REAL INIT’ printed at the end of the real.out file. It should also produce wrfinput_d01 and wrfbdy_d01 files. In the real data case, both files are required. 

Run the WRF model by typing

./wrf.exe

A successful run should produce one or several output files with names like wrfout_d<domain>_<date> (where <domain> represents the domain ID, and <date> represents a date string with the format yyyy-mm-dd_hh:mm:ss). For example, if you start the model at 1200 UTC, January 24, 2000, then your first output file should have the name:

wrfout_d01_2000-01-24_12:00:00

The time stamp on the file name is always the first time the output file is written. It is always good to check the times written to the output file by typing:

ncdump -v Times wrfout_d01_2000-01-24_12:00:00

You may have other wrfout files, depending on the namelist options (how often you split the output files by using the namelist option frames_per_outfile). You may also create restart files if you have a restart frequency (restart_interval in the namelist.input file) set within your total integration time. The restart file should have names like

wrfrst_d<domain>_<date>

The time stamp on a restart file is the time at which that restart file is valid.

For DM (distributed memory) parallel systems, some form of the mpirun command will be needed to run the executables. For example, on a Linux cluster, the command to run MPI code, using 4 processors, may look like:

mpirun -np 4 ./real.exe
mpirun -np 4 ./wrf.exe

On some IBMs, the command for a batch job may be:

poe ./real.exe
poe ./wrf.exe

or

mpirun.lsf ./wrf.exe (on NCAR IBM bluefire) 

c. Restart Run

A restart run allows a user to extend a run to a longer simulation period. It is effectively a continuous run made of several shorter runs. Hence the results at the end of one or more restart runs should be identical to a single run without any restart.

 

In order to do a restart run, one must first create a restart file. This is done by setting the namelist variable restart_interval (unit is in minutes) to be equal to or less than the simulation length in the first model run, as specified by run_* variables or start_* and end_* times. When the model reaches the time to write a restart file, a restart file named wrfrst_d<domain>_<date> will be written. The date string represents the time when the restart file is valid.

 

When one starts the restart run, edit the namelist.input file so that the start_* time is set to the restart time (which is the time the restart file is written). The other namelist variable one must set is restart; this variable should be set to .true. for a restart run.

 

In summary, these namelist variables should be modified:

 

start_*, end_*:             start and end times for restart model integration
restart:                   logical to indicate whether the run is a restart or not

If the history and restart intervals are changed in a restart run, and the outcome isn't what is expected, use the namelist option override_restart_timers = .true.

If history output is desired at the time of restart, use the namelist option write_hist_at_0h_rst = .true.
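As an illustrative sketch, for a run that began at 1200 UTC January 24, 2000 and wrote a restart file 6 hours later, the restart job's &time_control might contain (the dates and interval are examples only):

&time_control
 restart          = .true.,
 restart_interval = 360,
 start_year       = 2000,
 start_month      = 01,
 start_day        = 24,
 start_hour       = 18,
/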

Hint: Typically the restart file is much bigger than the history file, so one may find that the model can write a single history output time to a file in netCDF format successfully (frames_per_outfile = 1) but fail to write a restart file. This is because basic netCDF file support is limited to 2 GB. There are two solutions to the problem. The first is to set the namelist option io_form_restart = 102 (instead of 2), which forces the restart file to be written in multiple pieces, one per processor. As long as the model is restarted using the same number of processors, this option works well (and one should restart the model with the same number of processors in any case). The second solution is to recompile the code using the netCDF large-file support option (see the section on "Installing WRF" in this chapter).

d. Two-way Nested Runs

A two-way nested run is a run in which multiple domains at different grid resolutions are run simultaneously and communicate with each other: The coarser domain provides boundary values for the nest, and the nest feeds its calculation back to the coarser domain. The model can handle multiple domains at the same nest level (no overlapping nest), and multiple nest levels (telescoping). 

When preparing for a nested run, make sure that the code is compiled with basic nest options (option 1).

Most of the options to start a nest run are handled through the namelist. All variables in the namelist.input file that have multiple columns of entries need to be edited with caution. Start with a namelist template. The following are the key namelist variables to modify (an illustrative sketch follows the list):

start_*, end_*: start and end simulation times for the nest

input_from_file: whether a nest requires an input file (e.g. wrfinput_d02). This is typically used for a real data case, since the nest input file contains nest topography and land information.

fine_input_stream: which fields from the nest input file are used in nest initialization. The fields to be used are defined in the Registry.EM. Typically they include static fields (such as terrain and landuse), and masked surface fields (such as skin temperature, soil moisture and temperature). Useful for a nest starting at a later time than the coarse domain.

max_dom: the total number of domains to run. For example, if you want to have one coarse domain and one nest, set this variable to 2.

grid_id: domain identifier that is used in the wrfout naming convention. The coarsest grid must have a grid_id of 1.

parent_id: used to indicate the parent domain of a nest; the parent's grid_id value is used.

i_parent_start/j_parent_start: lower-left corner starting indices of the nest domain in its parent domain. These parameters should be the same as in namelist.wps.

parent_grid_ratio: integer parent-to-nest domain grid size ratio. Typically an odd number ratio is used in real-data applications.

parent_time_step_ratio: integer time-step ratio for the nest domain. It may be different from the parent_grid_ratio, though they are typically set the same.

feedback: this is the key setting that defines a two-way nested (or one-way nested) run. When feedback is on, the values of the coarse domain are overwritten by the values of the variables (the average of the cell values for mass points, and the average of the cell-face values for horizontal momentum points) in the nest at the coincident points. For masked fields, only the single point value at the collocated points is fed back. If parent_grid_ratio is even, an arbitrary choice of the southwest corner point value is used for feedback, which is why it is better to use an odd parent_grid_ratio with this option. When feedback is off, the run is equivalent to a one-way nested run, since nest results are not reflected in the parent domain.

smooth_option: this is a smoothing option for the parent domain in the area of the nest, used if feedback is on. Three options are available: 0 = no smoothing; 1 = 1-2-1 smoothing; 2 = smoothing-desmoothing.
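Putting these variables together, an illustrative two-domain &domains sketch might look like the following (the starting indices and ratios are examples only):

&domains
 max_dom                = 2,
 grid_id                = 1,  2,
 parent_id              = 1,  1,
 i_parent_start         = 1,  31,
 j_parent_start         = 1,  17,
 parent_grid_ratio      = 1,  3,
 parent_time_step_ratio = 1,  3,
 feedback               = 1,
 smooth_option          = 0,
/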

 

3-D Idealized Cases

For 3-D idealized cases, no nest input files are required. The key here is the specification of the namelist.input file. What the model does is to interpolate all variables required in the nest from the coarse domain fields. Set

input_from_file = T, F,

 

Real Data Cases

For real-data cases, three input options are supported. The first one is similar to running the idealized cases: all fields for the nest are interpolated from the coarse domain (input_from_file = T, F). The disadvantage of this option is obvious: one will not benefit from the higher-resolution static fields (such as terrain, landuse, and so on).

The second option is to set input_from_file = T for each domain, which means that the nest will have a nest wrfinput file to read in. The limitation of this option is that this only allows the nest to start at the same time as the coarse domain.

The third option is, in addition to setting input_from_file = T for each domain, to also set fine_input_stream = 2 for each domain. Why a value of 2? This is based on the Registry setting, which designates certain fields to be read in from auxiliary input stream number 2. This option allows the nest initialization to use 3-D meteorological fields interpolated from the coarse domain, plus static fields and masked, time-varying surface fields from the nest wrfinput; hence it allows a nest to start at a later time than hour 0. Setting fine_input_stream = 0 is equivalent to the second option. The three options are summarized in the sketch below.
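A sketch of the three options for a two-domain case (only the nest-input variables are shown; these typically live in &time_control):

input_from_file   = .true., .false.,   (option 1: nest interpolated from the coarse domain)

input_from_file   = .true., .true.,    (option 2: nest reads wrfinput_d02; same start time)

input_from_file   = .true., .true.,    (option 3: nest may start later than the coarse domain)
fine_input_stream = 0,      2,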

To run real.exe for a nested run, one must first run WPS and create data for all of the nests. Suppose WPS is run for a 24-hour period for a two-domain nested case starting at 1200 UTC January 24, 2000. Then the following files should be generated in the WPS directory:

met_em.d01.2000-01-24_12:00:00
met_em.d01.2000-01-24_18:00:00
met_em.d01.2000-01-25_00:00:00
met_em.d01.2000-01-25_06:00:00
met_em.d01.2000-01-25_12:00:00
met_em.d02.2000-01-24_12:00:00

Typically only the first time period of the nest input file is needed to create a nest wrfinput file. Link or move all these files to the run directory.

Edit the namelist.input file and set the correct values for all relevant variables, described on the previous pages (in particular, set max_dom = 2, for the total number of domains to run), as well as physics options. Type the following to run:

./real.exe >& real.out
or
mpirun -np 4 ./real.exe

If successful, this will create input files for the coarse as well as the nested domains. For a two-domain example, these are created:

wrfinput_d01
wrfinput_d02
wrfbdy_d01

To run WRF, type

./wrf.exe
or
mpirun -np 4 ./wrf.exe

If successful, the model should create wrfout files for both domains 1 and 2:

wrfout_d01_2000-01-24_12:00:00
wrfout_d02_2000-01-24_12:00:00

e. One-way Nested Run Using ndown

WRF supports two separate one-way nested options. In this section, one-way nesting is defined as a finer-grid-resolution run, made as a subsequent run after the coarser-grid-resolution run, where the ndown program is run in-between the two simulations. The initial and lateral boundary conditions for this finer-grid run are obtained from the coarse grid run, with input from higher resolution terrestrial fields (e.g. terrain, landuse, etc.), and masked surface fields (such as soil temperature and moisture). The program that performs this task is ndown.exe.

*Note that the use of this program requires the code to be compiled for nesting.

When one-way nesting is used, the coarse-to-fine grid ratio is only restricted to be an integer. An integer less than or equal to 5 is recommended. Frequent output (e.g. hourly) from the coarse grid run is also recommended to provide better boundary specifications.

A caveat with using ndown for one-way nesting is that the microphysics variables are not used for boundary conditions; they are only in the initial conditions. If that is important to you, use the two-way nesting option instead.

Step 1: Make a coarse grid run.

This is no different than any of the single-domain WRF runs, as described above.

Step 2:  Run geogrid.exe (gives geo_em.d01 and geo_em.d02 files) and metgrid.exe for two domains (as if you are making a 2-way nested run).

This will generate WPS output files for domain 1 (met_em.d01.<date>) and domain 2 (met_em.d02.<date>). 

Step 3:  Run real.exe for 2 domains.

The purpose of this step is to ingest higher resolution terrestrial fields and corresponding land-water masked soil fields.

- Copy the met_em* files into the directory from which you will be running real.exe.
- Edit the namelist.input file, changing 'max_dom = 2', and making sure columns 1 and 2 are set up for a 2-domain run, editing the correct start time and grid dimensions.
- Run real.exe.  This will produce a wrfinput_d01 file, a wrfinput_d02 file, and a wrfbdy_d01 file.
- Rename the wrfinput_d02 file to wrfndi_d02.

Step 4: Make the final fine-grid initial and boundary condition files, by running ndown.exe

- Since V3.2, one must add io_form_auxinput2 = 2 in the &time_control section of namelist.input to run ndown.exe successfully (a sketch follows). If one desires to refine the vertical resolution when running ndown, set vert_refine_fact = integer (new in V3.2); there are no other changes required in the namelist or in the procedure. Another way to refine the vertical resolution is to use the utility program v_interp (see the 'Utilities and Tools' chapter for details).
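A sketch of these additions (the vert_refine_fact line is optional, and its value is an example only):

&time_control
 io_form_auxinput2 = 2,
/
&domains
 vert_refine_fact  = 3,
/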

- Run ndown.exe, which uses input from the coarse grid wrfout file(s), and the wrfndi_d02 file generated from Step 3 above. This will produce a wrfinput_d02 and wrfbdy_d02 file.

Note that the program ndown may be run serially or in MPI, depending on the selected compile option.  The ndown program must be built to support nesting, however.  To run the program, type

./ndown.exe
or
mpirun -np 4 ./ndown.exe

Step 5: Make the fine-grid WRF run

- Rename wrfinput_d02 and wrfbdy_d02 to wrfinput_d01 and wrfbdy_d01, respectively.
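For example:

mv wrfinput_d02 wrfinput_d01
mv wrfbdy_d02 wrfbdy_d01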

- Rename (or move) the original wrfout_d01* files to something else (or another directory) so as to not overwrite them.
- Edit namelist.input, moving all of the fine-grid domain data from column 2 to column 1, so that this run will be for the fine-grid domain only. Make sure that the time_step is set to comply with the fine-grid domain (typically 6*DX). It may be beneficial to save namelist.input as something else prior to this step, in case you need to repeat this process in the future. Save the newly-edited namelist as namelist.input.

- The WRF model's physics options may be modified between runs (the WRF model run before ndown and the WRF model run after ndown), except generally for the land-surface scheme option, since each scheme uses a different number of soil levels. Users may take advantage of a feature that allows both the initial and lateral boundaries to use the moist and scalar arrays (have_bcs_moist and have_bcs_scalar, respectively). This option is only to be used during the WRF model run that follows the ndown processing. With this option, a user must keep the microphysics options the same between forecasts. The advantage is that the previous WRF model run provides realistic lateral boundary tendencies for all of the microphysical variables, instead of a simple "zero inflow" or "zero gradient outflow".

- Run WRF for this grid.

*Keep in mind that the output from this run will be in the form wrfout_d01* but it will actually be output for domain 2.  It may help to rename these to avoid future confusion.

The figure below summarizes the data flow for a one-way nested run using the program ndown.

[Figure: data flow for a one-way nested run using the program ndown]

f. Moving-Nested Run

Two types of moving-nest runs are allowed in WRF. In the first option, a user specifies the nest movement in the namelist. The second option is to move the nest automatically, based on a vortex-following algorithm. This option is designed to follow the movement of a well-defined tropical cyclone.

To make a specified moving nested run, select the right nesting compile option (option 'preset moves'). Note that code compiled with this option will not support static nested runs. To run the model, only the coarse-grid input files are required. In this option, the nest initialization is defined from the coarse-grid data; no nest input is used. In addition to the namelist options applied to a nested run, the following need to be added to the namelist section &domains (an illustrative sketch follows the list):

num_moves: the total number of moves one can make in a model run. A move of any domain counts against this total. The maximum is currently set to 50, but it can be changed by changing MAX_MOVES in frame/module_driver_constants.F.

move_id: a list of nest IDs, one per move, indicating which domain is to move for a given move.

move_interval: the number of minutes from the beginning of the run until a move is supposed to occur. The nest will move on the next time step after the specified instant of model time has passed.

move_cd_x,move_cd_y: distance in the number of grid points and direction of the nest move (positive numbers indicate moving toward east and north, while negative numbers indicate moving toward west and south).
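An illustrative sketch of these &domains entries, for a nest (domain 2) that moves one grid point northeast 60 minutes into the run and one grid point east after 120 minutes (all values are examples only):

&domains
 num_moves     = 2,
 move_id       = 2,  2,
 move_interval = 60, 120,
 move_cd_x     = 1,  1,
 move_cd_y     = 1,  0,
/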


To make an automatic moving nested run, select the 'vortex-following' option when configuring. Again, note that code compiled this way supports only the auto-moving nest, not the specified moving nested run or a static nested run. Again, no nest input is needed. If one wants to use values other than the defaults, add and edit the following namelist variables in the &domains section (an illustrative sketch follows the list):

vortex_interval: how often the vortex position is calculated in minutes (default is 15 minutes).

max_vortex_speed: used with vortex_interval to compute the search radius for the new vortex center position (default is 40 m/sec).

corral_dist: the distance in the number of coarse grid cells that the moving nest is allowed to get near the mother domain boundary (default is 8). This parameter can be used to center the telescoped nests so that all nests are moved together with the storm.

track_level: the pressure level (in Pa) where the vortex is tracked.

time_to_move: the time (in minutes) to move a nest. This option may help with the case when the storm is still too weak to be tracked by the algorithm.
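An illustrative &domains sketch using the defaults described above (the track_level and time_to_move values are examples only):

&domains
 vortex_interval  = 15,
 max_vortex_speed = 40,
 corral_dist      = 8,
 track_level      = 50000,
 time_to_move     = 0,
/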

When the automatic moving nest is employed, the model dumps the vortex center location, with minimum mean sea-level pressure and maximum 10-m winds in a standard-out file (e.g. rsl.out.0000). Typing ‘grep ATCF rsl.out.0000’ will produce a list of storm information at a 15-minute interval:

ATCF    2007-08-20_12:00:00    20.37   -81.80    929.7    133.9
ATCF    2007-08-20_12:15:00    20.29   -81.76    929.3    133.2

In both types of moving-nest runs, the initial location of the nest is specified through i_parent_start and j_parent_start in the namelist.input file.

The automatic moving nest works best for a well-developed vortex.

g. Analysis Nudging Runs (Upper-Air and/or Surface)

Prepare input data to WRF as usual using WPS. If nudging is desired in the nest domains, make sure all time periods for all domains are processed in WPS. For surface-analysis nudging (new in Version 3.1), OBSGRID needs to be run after METGRID, and it will output a wrfsfdda_d01 file that the WRF model reads for this option.

 

Set the following options before running real.exe, in addition to others described earlier (see the namelists in examples.namelist in the test/em_real/ directory, for guidance):

 

grid_fdda = 1

grid_sfdda = 1

 

Run real.exe as before, and this will create, in addition to wrfinput_d0* and wrfbdy_d01 files, a file named ‘wrffdda_d0*’. Other grid-nudging namelists are ignored at this stage, but it is good practice to fill them all in before one runs real. In particular, set

 

gfdda_inname   =  “wrffdda_d<domain>”
gfdda_interval =  time interval of input data in minutes
gfdda_end_h    =  end time of grid-nudging in hours

 

sgfdda_inname   =  “wrfsfdda_d<domain>”
sgfdda_interval =  time interval of input data in minutes
sgfdda_end_h    =  end time of surface grid-nudging in hours

 

 

See http://www.mmm.ucar.edu/wrf/users/wrfv3.1/How_to_run_grid_fdda.html and README.grid_fdda in WRFV3/test/em_real/ for more information.

 

Spectral Nudging is a new upper-air nudging option since Version 3.1. This selectively nudges the coarser scales only, but is otherwise set up the same way as grid-nudging. This option also nudges geopotential height. The wave numbers defined here are the number of waves contained in the domain, and the number is the maximum one that is nudged.

 

grid_fdda = 2

xwavenum = 3

ywavenum = 3

h. Observation Nudging Run

In addition to the usual input data preparation using WPS, station observation files are required. See http://www.mmm.ucar.edu/wrf/users/wrfv3.1/How_to_run_obs_fdda.html for instructions. The observation file names expected by WRF are OBS_DOMAIN101 for domain 1, and OBS_DOMAIN201 for domain 2, etc.

 

Observation nudging is activated in the model by the following namelists in &fdda:

 

obs_nudge_opt = 1
fdda_start    = 0 (obs nudging start time in minutes)
fdda_end      = 360 (obs nudging end time in minutes)

 

and in &time_control

 

auxinput11_interval_s = 180, 180, 180, (set the interval to be small enough so that all observations will be checked)

 

Look at the file examples.namelist in the test/em_real/ directory for examples of setting other obs nudging namelist variables. See http://www.mmm.ucar.edu/wrf/users/wrfv3.1/How_to_run_obs_fdda.html and README.obs_fdda in WRFV3/test/em_real/ for more information.

i. Global Run

WRFV3 supports a global capability. To make a global run, run WPS, starting with the namelist template namelist.wps.global. Set map_proj = 'lat-lon', and grid dimensions e_we and e_sn, without setting dx and dy, in namelist.wps. The geogrid program will calculate the grid distances, and their values can be found in the global attribute section of the geo_em.d01.nc file. Type
ncdump -h geo_em.d01.nc to find the grid distances, which will be needed for filling out WRF's namelist.input file. Grid distances in the x and y directions may be different, but it is best that they are set similarly or the same. WRF and WPS assume the earth is a sphere with a radius of 6370 km. There are no restrictions on what to use for the grid dimensions, but for effective use of the polar filter in WRF, the east-west dimension should be set to 2^P * 3^Q * 5^R + 1 (where P, Q, and R are any integers, including 0); for example, e_we = 361 works, since 360 = 2^3 * 3^2 * 5.

 

Run the rest of the WPS programs as usual but only for one time period. This is because the domain covers the entire globe, and lateral boundary conditions are no longer needed.

 

Run the program real.exe as usual and for one time period only. The lateral boundary file wrfbdy_d01 is not needed.

 

Copy namelist.input.global to namelist.input, and edit it. Run the model as usual.

Note: since this is an uncommon option in the model, use it with caution. Not all options have been tested; for example, not all filter options have been tested, and the positive-definite options do not work on a lat-lon grid.

As an extension to the global lat-lon grid, the regional domain can also be set using a lat-lon grid. To do so, one needs to set both grid dimensions, and grid distances in degrees. Again geogrid will calculate the grid distance, assuming the earth is a sphere and its radius is 6370 km. Find the grid distance in meters in the netCDF file, and use the value for WRF’s namelist.input file.

j. Using Digital Filter Initialization

Digital filter initialization (DFI) is a new option in V3. It is a way to remove initial model imbalance as measured, for example, by the surface pressure tendency. This can be important when one is interested in the 0 - 6 hour simulation/forecast. It runs a digital filter during a short model integration, backward and forward, and then starts the forecast. In the WRF implementation, this is all done in a single job. With the V3.3 release, DFI can be used for multiple domains with concurrent nesting, with feedback disabled.

There is no special requirement for data preparation.

Start with the namelist template namelist.input.dfi. This namelist file contains an extra namelist record for DFI: &dfi_control. Edit it to match your case configuration. For a typical application, the following options are used:

dfi_opt = 3 (Note:  if doing a restart, this must be changed to 0)
dfi_nfilter = 7 (filter option: Dolph)
dfi_cutoff_seconds = 3600 (should not be longer than the filter window)

For the time specification, the model typically needs to integrate backward for 0.5 to 1 hour, and forward for half of that time (a sketch follows).
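As an illustrative sketch, for a model start at 1200 UTC January 24, 2000 with a 1-hour backward integration and a 30-minute forward integration, the &dfi_control record might contain (all dates are examples only; remember to set dfi_opt = 0 for a restart):

&dfi_control
 dfi_opt            = 3,
 dfi_nfilter        = 7,
 dfi_cutoff_seconds = 3600,
 dfi_bckstop_year   = 2000,
 dfi_bckstop_month  = 01,
 dfi_bckstop_day    = 24,
 dfi_bckstop_hour   = 11,
 dfi_fwdstop_year   = 2000,
 dfi_fwdstop_month  = 01,
 dfi_fwdstop_day    = 24,
 dfi_fwdstop_hour   = 12,
 dfi_fwdstop_minute = 30,
/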

 

If option dfi_write_filtered_input is set to true, a filtered wrfinput file, wrfinput_initialized_d01, will be produced when you run wrf.

 

In Version 3.2, a constant boundary condition option is introduced for DFI. To use it, set constant_bc = 1 in &bdy_control

 

If a different time step is used for DFI, one may use time_step_dfi to set it.

k. Using sst_update option

The WRF model physics does not predict sea-surface temperature, vegetation fraction, albedo or sea ice. For long simulations, the model provides an alternative: reading in time-varying data and updating these fields. In order to use this option, one must have access to time-varying SST and sea-ice fields. Twelve monthly values of vegetation fraction and albedo are available from the geogrid program. Once these fields are processed via WPS, one may activate the following options in the namelist record &time_control before running real.exe and wrf.exe:

 

io_form_auxinput4 = 2
auxinput4_inname = “wrflowinp_d<domain>” (created by real.exe)
auxinput4_interval = 360, 360, 360,

 

and in &physics

 

sst_update = 1

l. Using Adaptive Time Stepping

Adaptive time stepping is a way to maximize the time step that the model can use while keeping the model numerically stable. The model time step is adjusted based on the domain-wide horizontal and vertical stability criterion (called the Courant-Friedrichs-Lewy (CFL) condition). The following set of values would typically work well.

 

use_adaptive_time_step = .true.
step_to_output_time = .true. (but nested domains may still be writing output at the desired time; try adjust_output_times = .true. to compensate)
target_cfl = 1.2, 1.2, 1.2,
max_step_increase_pct = 5, 51, 51, (a large percentage value for the nest allows the nest's time step more freedom to adjust)
starting_time_step = the actual value, or -1 (which means 6*DX at the start time)
max_time_step: use fixed values for all domains, e.g. 8*DX
min_time_step: use fixed values for all domains, e.g. 4*DX
adaptation_domain: which domain is driving the adaptive time step

 

Also see the description of these options in the namelist list later in this chapter.

 

m. Option to stochastically perturb forecast

Since Version 3.3, WRF has an option to stochastically perturb forecasts via a stochastic kinetic-energy backscatter scheme (SKEBS; Shutts, 2005, QJRMS). The scheme introduces temporally and spatially correlated perturbations to the rotational wind components and potential temperature. An application and verification of this scheme for mesoscale ensemble forecasts in the mid-latitudes is available in Berner et al., Mon. Wea. Rev., 139, 1972-1995 (http://journals.ametsoc.org/doi/abs/10.1175/2010MWR3595.1).

 

SKEBS generates perturbation tendency fields ru_tendf_stoch (m/s^2), rv_tendf_stoch (m/s^2) and rt_tendf_stoch (K/s^2) for u, v and t, respectively. For new applications, we recommend outputting these perturbation fields and comparing their magnitude and spatial patterns to the physics tendency fields for the same variables. Within the scheme, these perturbation fields are coupled to mass and added to the physics tendencies of u, v and t. The stochastic perturbation fields for wind and temperature are controlled by the kinetic and potential energy they inject into the flow. The injected energy is expressed as a backscattered dissipation rate for streamfunction and temperature, respectively.

 

Since the scheme uses Fast Fourier Transforms (FFTs) provided in the library FFTPACK, we recommend that the number of gridpoints in each direction be a product of small primes. If the number of gridpoints in one direction is a large prime, the computational cost may increase substantially. Multiple domains are supported by interpolating the forcing from the largest domain for which the scheme is turned on (normally the parent domain) down to all nested domains.

 

At present, the default settings for the scheme have been thoroughly tested on synoptic and mesoscale domains over the mid-latitudes, and as such offer a starting point that users may want to change for their particular application. Relationships between the backscatter amplitudes and the perturbation fields for a given variable are not necessarily proportional, due to the complexity of the scheme; users wishing to adjust the default settings are strongly advised to read the details in the technical document available at http://www.cgd.ucar.edu/~berner/skebs.html, which also contains version history, derivations, and examples. Other defaults currently hard-coded into the scheme, such as the spatial and temporal correlations, might need to be changed for other applications.

 


 

Note that the current version should provide bit-reproducible results except for OpenMP. It does not support restarts.

 

This scheme is controlled via the following physics namelist parameters, for each domain separately (an illustrative sketch follows the list):

 

 

 

stoch_force_opt = 0, 0, 0 : no stochastic parameterization; = 1, 1, 1 : use the SKEB scheme

stoch_vertstruc_opt = 0, 0, 0 : constant vertical structure of the random pattern generator; = 1, 1, 1 : random-phase vertical structure of the random pattern generator

tot_backscat_psi = total backscattered dissipation rate for streamfunction; controls the amplitude of the rotational wind perturbations (default value is 1.0E-5 m^2/s^3)

tot_backscat_t = total backscattered dissipation rate for temperature; controls the amplitude of the potential temperature perturbations (default value is 1.0E-6 m^2/s^3)

ens = random seed for the random number stream (needs to be different for each member in ensemble forecasts)
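An illustrative sketch turning SKEBS on for a single domain with the default amplitudes (placed in the physics namelist record, per the text above; ens should be varied across ensemble members):

stoch_force_opt     = 1,
stoch_vertstruc_opt = 0,
tot_backscat_psi    = 1.0E-5,
tot_backscat_t      = 1.0E-6,
ens                 = 1,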

 

Option to perturb the boundary conditions 

 

This option allows for the addition of perturbations to the boundary tendencies for u and v wind components and potential temperature in WRF stand-alone runs.  Users may provide a pattern or use the pattern generated by SKEBS.

 

The perturb_bdy option runs independently of SKEBS and as such may be run with or without the SKEB scheme, which operates solely on the interior grid.  However, selecting perturb_bdy=1 will require the generation of a domain-size random array, thus computation time may increase. 

 

Selecting perturb_bdy=2 will require the user to provide a pattern. Arrays are initialized and called field_u_tend_perturb, field_v_tend_perturb, and field_t_tend_perturb. These arrays will need to be filled with the desired pattern in spec_bdytend_perturb in share/module_bc.F or spec_bdy_dry_perturb in dyn_em/module_bc_em.F.

 

The namelist parameters to control the perturb boundary conditions option are found in the namelist.input file under the &bdy_control section:

          perturb_bdy = 0 : no boundary perturbations (default)
                      = 1 : use the SKEBS pattern for boundary perturbations
                      = 2 : use a user-provided pattern for boundary perturbations

(From Berner and Smith) 

n. Run-Time IO

With the release of WRF version 3.2, IO decisions may now be updated as a run-time option. Previously, any modification to the IO (such as which variable is associated with which stream) was handled via the Registry, and changes to the Registry always necessitate a cycle of clean -a, configure, and compile. This compile-time mechanism is still available, and it is how most of the WRF IO is defined. However, should a user wish to add (or remove) variables from various streams, that capability is available as a run-time option.

 

First, the user lets the WRF model know where the information for the run-time modifications to the IO is located. This is a text file (e.g. my_file_d01.txt), one for each domain, whose name is given in the namelist.input file in the time_control namelist record.

 

&time_control

iofields_filename = “my_file_d01.txt”, “my_file_d02.txt”

ignore_iofields_warning = .true.,

/

The text file associates a stream ID (0 is the default history and input) with a variable, and indicates whether the field is to be added or removed. The state variables must already be defined in the Registry file. Following are a few examples:

-:h:0:RAINC,RAINNC

would remove the fields RAINC and RAINNC from the standard history file.

 

+:h:7:RAINC,RAINNC

would add the fields RAINC and RAINNC to an output stream #7.

 

The available options are:

            + or -, add or remove a variable
            0-24, integer, which stream
            i or h, input or history
            field name in the Registry - this is the first string in quotes. Note: do not include any spaces in between field names.

 

It is not necessary to remove fields from one stream to insert them in another.  It is OK to have the same field in multiple streams.

The second namelist variable, ignore_iofields_warning, tells the program what to do if it encounters an error in these user-specified files. The default value, .TRUE., is to print a warning message but continue the run.  If set to .FALSE., the program will abort if there are errors in these user-specified files.

Note that any field that can be part of the optional IO (either the input or output streams) must already be declared as a state variable in the Registry.  Care needs to be taken when specifying the names of the variables that are selected for the run-time IO.  The "name" of the variable to use in the text file (defined in the namelist.input file) is the quoted string from the Registry file. Most of the WRF variables have the same string for the name of the variable used inside the WRF source code (column 3 in the Registry file, non-quoted, and not the string to use) and the name of the variable that appears in the netCDF file (column 9 in the Registry file, quoted, and that is the string to use).

 

o. Output Time Series

There is an option to output time series from a model run. To activate the option, a file called “tslist” must be present in the WRF run directory. The tslist file contains a list of locations defined by their latitude and longitude along with a short description and an abbreviation for each location. A sample file looks something like this:

 

#-----------------------------------------------#

# 24 characters for name | pfx |  LAT  |   LON  |

#-----------------------------------------------#

Cape Hallett              hallt -72.330  170.250

McMurdo Station           mcm   -77.851  166.713

 

The first three lines in the file are regarded as header information, and are ignored. Given a tslist file, for each location inside a model domain (either coarse or nested) a file containing time series variables at each model time step will be written with the name pfx.d<domain>.TS, where pfx is the specified prefix for the location in the tslist file. The maximum number of time series locations is controlled by the namelist variable max_ts_locs in the namelist record &domains. The default value is 5. The time series output contains selected variables at the surface, including 2-m temperature, vapor mixing ratio, and 10-m wind components (u and v, rotated to the earth coordinate), etc. More information on time series output can be found in WRFV3/run/README.tslist.

 

Starting in V3.5, in addition to surface variables, vertical profiles of earth-relative U and V, potential temperature, water vapor, and geopotential height are also output. The default number of levels in the output is 15, but it can be changed with the namelist variable max_ts_level.

p. WRF-Hydro

This is a new capability in V3.5. It couples the WRF model with hydrology processes (such as routing and channeling). Using WRF-Hydro requires a separate compile, activated with the environment variable WRF_HYDRO. In a c-shell environment, do

setenv WRF_HYDRO 1

before doing ‘configure’ and ‘compile’. Once WRF is compiled, copy files from the hydro/Run/ directory to your working directory (e.g. test/em_real/). A separately prepared geogrid file is also required. Please refer to the following web site for detailed information: http://www.ral.ucar.edu/projects/wrf_hydro/. (From W. Yu)

q. Using IO Quilting

This option allows a few processors to be set aside to be responsible for output only. It can be useful and performance-friendly if the domain size is large, and/or the time taken to write an output time is becoming significant when compared to the time taken to integrate the model between output times. There are two namelist variables for setting the option (an illustrative sketch follows the list):

 

nio_tasks_per_group: how many processors to use per IO group for IO quilting. Typically 1 or 2 processors should be sufficient for this purpose.

nio_groups: how many IO groups for IO. Default is 1.
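An illustrative sketch of the &namelist_quilt record in namelist.input, setting aside one group of two IO processors:

&namelist_quilt
 nio_tasks_per_group = 2,
 nio_groups          = 1,
/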

 

*Note:  This option is only used for wrf.exe.  It cannot be used for real or ndown.

 

Examples of namelists for various applications

A few physics options sets (plus model top and the number of vertical levels) are provided here for reference. They may provide a good starting point for testing the model in your application. Also note that other factors will affect the outcome; for example, the domain setup, the distributions of vertical model levels, and input data.

a. 1 – 4 km grid distances, convection-permitting runs for 1 – 3 day forecasts (as used for the NCAR spring real-time convection forecast over the US in 2013):

mp_physics                          = 8,
ra_lw_physics                       = 4,
ra_sw_physics                       = 4,
radt                                = 10,
sf_sfclay_physics                   = 2,
sf_surface_physics                  = 2,
bl_pbl_physics                      = 2,
bldt                                = 0,
cu_physics                          = 0,

ptop_requested                      = 5000,
e_vert                              = 40,

b. 20 – 30 km grid distances, 1 – 3 day runs (e.g., NCAR daily real-time runs over the US):

mp_physics                          = 4,
ra_lw_physics                       = 4,
ra_sw_physics                       = 4,
radt                                = 15,
sf_sfclay_physics                   = 1,
sf_surface_physics                  = 2,
bl_pbl_physics                      = 1,
bldt                                = 0,
cu_physics                          = 1,
cudt                                = 5,

ptop_requested                      = 5000,
e_vert                              = 30,

c. Cold region 15 – 45 km grid sizes (e.g. used in NCAR’s Antarctic Mesoscale Prediction System):

mp_physics                          = 4,
ra_lw_physics                       = 4,
ra_sw_physics                       = 2,
radt                                = 15,
sf_sfclay_physics                   = 2,
sf_surface_physics                  = 2,
bl_pbl_physics                      = 2,
bldt                                = 0,
cu_physics                          = 1,
cudt                                = 5,
fractional_seaice                   = 1,
seaice_threshold                    = 0.0,

ptop_requested                      = 1000,
e_vert                              = 44,

d. Hurricane applications (e.g. 36, 12, and 4 km nesting used by NCAR’s real-time hurricane runs in 2012):

mp_physics                          = 6,
ra_lw_physics                       = 4,
ra_sw_physics                       = 4,
radt                                = 10,
sf_sfclay_physics                   = 1,
sf_surface_physics                  = 2,
bl_pbl_physics                      = 1,
bldt                                = 0,
cu_physics                          = 6, (only on the 36/12 km grids)
cudt                                = 0,
isftcflx                            = 2,

ptop_requested                      = 2000,
e_vert                              = 36,

e. Regional climate case at 10 – 30 km grid sizes (e.g. used in NCAR’s regional climate runs):

mp_physics                          = 6,
ra_lw_physics                       = 3,
ra_sw_physics                       = 3,
radt                                = 30,
sf_sfclay_physics                   = 1,
sf_surface_physics                  = 2,
bl_pbl_physics                      = 1,
bldt                                = 0,
cu_physics                          = 1,
cudt                                = 5,
sst_update                          = 1,
tmn_update                          = 1,
sst_skin                            = 1,
bucket_mm                           = 100.0,
bucket_J                            = 1.e9,
ptop_requested                      = 1000,
e_vert                              = 51,

spec_bdy_width                      = 10,
spec_zone                           = 1,
relax_zone                          = 9,
spec_exp                            = 0.33,

 

Check Output

Once a model run is completed, it is good practice to check a couple of things quickly.

If you have run the model on multiple processors using MPI, you should have a number of rsl.out.* and rsl.error.* files. Type ‘tail rsl.out.0000’ to see if you get ‘SUCCESS COMPLETE WRF’. This is a good indication that the model has run successfully.

The namelist options are written to a separate file: namelist.output.

Check the output times written to the wrfout* file by using the netCDF command:

  ncdump -v Times wrfout_d01_yyyy-mm-dd_hh:00:00

Take a look at either the rsl.out.0000 file or another standard-out file. These files log the time taken to compute one model time step, and to write each history or restart output file:


Timing for main: time 2006-01-21_23:55:00 on domain  2:    4.91110 elapsed seconds.

Timing for main: time 2006-01-21_23:56:00 on domain  2:    4.73350 elapsed seconds.

Timing for main: time 2006-01-21_23:57:00 on domain  2:    4.72360 elapsed seconds.

Timing for main: time 2006-01-21_23:57:00 on domain  1:   19.55880 elapsed seconds.

and

Timing for Writing wrfout_d02_2006-01-22_00:00:00 for domain 2: 1.17970 elapsed seconds.

Timing for main: time 2006-01-22_00:00:00 on domain 1: 27.66230 elapsed seconds.

Timing for Writing wrfout_d01_2006-01-22_00:00:00 for domain 1: 0.60250 elapsed seconds.

 

If the model did not run to completion, take a look at these standard output/error files too. If the model has become numerically unstable, it may have violated the CFL criterion (for numerical stability). Check whether this is true by typing the following:

 

grep cfl rsl.error.* or grep cfl wrf.out

You might see something like this:

5 points exceeded cfl=2 in domain            1 at time   4.200000 

  MAX AT i,j,k:          123          48          3 cfl,w,d(eta)= 4.165821

21 points exceeded cfl=2 in domain            1 at time   4.200000 

  MAX AT i,j,k:          123          49          4 cfl,w,d(eta)= 10.66290
 
When this happens, consider using the namelist option w_damping, and/or reducing the time step.
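For example, with a 10-km grid where the typical 6*dx rule gives a 60-second time step, one might set the following (a sketch with illustrative values; w_damping belongs to the &dynamics record and time_step to &domains):

w_damping                           = 1,
time_step                           = 45,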

 

Troubleshooting

If the model aborts very quickly, it is likely that either the computer memory is not large enough to run the specific configuration, or the input data have some serious problem. For the first problem, try typing 'unlimit' or 'ulimit -s unlimited' to see if more memory and/or stack size can be obtained.

For OpenMP (smpar-compiled code), the stack size needs to be set large, but not unlimited. Unlimited stack size may crash the computer.
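For example (treat this as a sketch; which variable a given compiler honors may differ):

ulimit -s unlimited          (bash/ksh; unlimited stack for MPI runs)
setenv OMP_STACKSIZE 512M    (csh; per-thread stack size for OpenMP runs)

OMP_STACKSIZE is the standard OpenMP environment variable; some compilers also use their own variants, such as KMP_STACKSIZE for Intel.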

To check if the input data is the problem, use ncview or another netCDF file browser.
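For a quick command-line check, the netCDF utilities may also be used, e.g.:

ncdump -h wrfinput_d01      (list dimensions, variables and attributes)
ncview wrfinput_d01         (browse the fields interactively)

Look for obviously bad fields, such as constant-zero soil temperatures or missing land use.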

Another frequent error is 'module_configure: initial_config: error reading namelist'. This message means the model has found errors or typos in the namelist.input file. Edit the namelist.input file with caution; if unsure, always start from an available template. In V3 the error message reports the namelist record in which the read error occurs, which should help identify the problem.

 

Physics and Dynamics Options

Physics Options

WRF offers multiple physics options that can be combined in any way. The options typically range from simple and efficient to sophisticated and more computationally costly, and from newly developed schemes to well-tried schemes such as those in current operational models.

The choices vary with each major WRF release, but here we will outline those available in WRF Version 3.

1. Microphysics (mp_physics)

a. Kessler scheme: A warm-rain (i.e. no ice) scheme used commonly in idealized cloud modeling studies (mp_physics = 1).

b. Lin et al. scheme: A sophisticated scheme that has ice, snow and graupel processes, suitable for real-data high-resolution simulations (2).

c. WRF Single-Moment 3-class scheme: A simple, efficient scheme with ice and snow processes suitable for mesoscale grid sizes (3).

d. WRF Single-Moment 5-class scheme: A slightly more sophisticated version of (c) that allows for mixed-phase processes and super-cooled water (4).

e. Eta microphysics: The operational microphysics in NCEP models. A simple efficient scheme with diagnostic mixed-phase processes. For fine resolutions (< 5 km) use option (5) and for coarse resolutions use option (95).

f. WRF Single-Moment 6-class scheme: A scheme with ice, snow and graupel processes suitable for high-resolution simulations (6).

g. Goddard microphysics scheme. A scheme with ice, snow and graupel processes suitable for high-resolution simulations (7). New in Version 3.0.

h.  New Thompson et al. scheme: A new scheme with ice, snow and graupel processes suitable for high-resolution simulations (8). This adds rain number concentration and updates the scheme from the one in Version 3.0. New in Version 3.1.

i. Milbrandt-Yau Double-Moment 7-class scheme (9). This scheme includes separate categories for hail and graupel, with double-moment cloud, rain, ice, snow, graupel and hail. New in Version 3.2.

j. Morrison double-moment scheme (10). Double-moment ice, snow, rain and graupel for cloud-resolving simulations. New in Version 3.0.

k. WRF Double-Moment 5-class scheme (14). This scheme has double-moment rain, cloud and CCN for warm processes, but is otherwise like WSM5. New in Version 3.1.

l. WRF Double-Moment 6-class scheme (16). This scheme has double-moment rain, cloud and CCN for warm processes, but is otherwise like WSM6. New in Version 3.1.

m. Stony Brook University (Y. Lin) scheme (13). This is a 5-class scheme with riming intensity predicted to account for mixed-phase processes. New in Version 3.3.

n. NSSL 2-moment scheme (17, 18). New since Version 3.4, this is a two-moment scheme for cloud droplets, rain drops, ice crystals, snow, graupel, and hail. It also predicts average graupel particle density, which allows graupel to span the range from frozen drops to low-density graupel. There is an additional option to predict cloud condensation nuclei (CCN, option 18) concentration (intended for idealized simulations). The scheme is intended for cloud-resolving simulations (dx <= 2km) in research applications.  Since V3.5, two more one-moment schemes have been added (19 and 21). Option 19 is a single-moment version of the NSSL scheme, and option 21 is similar to Gilmore et al. (2004).

o. CAM V5.1 2-moment 5-class scheme.

 

2.1 Longwave Radiation (ra_lw_physics)

a. RRTM scheme (ra_lw_physics = 1): Rapid Radiative Transfer Model. An accurate scheme using look-up tables for efficiency. Accounts for multiple bands and microphysics species. For trace gases, the volume-mixing ratios are CO2 = 330e-6, N2O = 0 and CH4 = 0 in pre-V3.5 code; in V3.5, CO2 = 379e-6, N2O = 319e-9 and CH4 = 1774e-9. See section 2.3 for the time-varying option.

b. GFDL scheme (99): Eta operational radiation scheme. An older multi-band scheme with carbon dioxide, ozone and microphysics effects.

c. CAM scheme (3): from the CAM 3 climate model used in CCSM. Allows for aerosols and trace gases. It uses yearly CO2, and constant N2O (311e-9) and CH4 (1714e-9). See section 2.3 for the time-varying option.

d. RRTMG scheme (4): A new version of RRTM added in Version 3.1. It includes the MCICA method of random cloud overlap. For major trace gases, CO2=379e-6, N2O=319e-9, CH4=1774e-9. See section 2.3 for the time-varying option.

e. New Goddard scheme (5). Efficient, multiple bands, ozone from climatology. It uses constant CO2=337e-6, N2O=320e-9, CH4=1790e-9. New in Version 3.3.

f. Fu-Liou-Gu scheme (7). Multiple bands, cloud and cloud fraction effects, ozone profile from climatology, and trace gases. CO2 = 345e-6. New in Version 3.4.

 

2.2 Shortwave Radiation (ra_sw_physics)

a. Dudhia scheme: Simple downward integration allowing efficiently for clouds and clear-sky absorption and scattering (ra_sw_physics = 1).

b. Goddard shortwave: Two-stream multi-band scheme with ozone from climatology and cloud effects (2).

c. GFDL shortwave: Eta operational scheme. Two-stream multi-band scheme with ozone from climatology and cloud effects (99).

d. CAM scheme: from the CAM 3 climate model used in CCSM. Allows for aerosols and trace gases (3).

e. RRTMG shortwave. A new shortwave scheme with the MCICA method of random cloud overlap (4). New in Version 3.1.

f. New Goddard scheme (5). Efficient, multiple bands, ozone from climatology. New in Version 3.3.

g. Fu-Liou-Gu scheme (7). Multiple bands, cloud and cloud fraction effects, ozone profile from climatology; can allow for aerosols. New in Version 3.4.

h. Held-Suarez relaxation. A temperature relaxation scheme designed for idealized tests only (31).

i. Slope and shading effects. slope_rad = 1 modifies surface solar radiation flux according to terrain slope. topo_shad = 1 allows for shadowing of neighboring grid cells. Use only with high-resolution runs with grid size less than a few kilometers. Since Version 3.2, these are available for all shortwave options.

j. swrad_scat: scattering tuning parameter for ra_sw_physics = 1. The default value is 1, which is equivalent to 1.e-5 m2/kg. Values greater than 1 increase the scattering.

2.3 Input to radiation options

a. CAM Greenhouse Gases: Provides yearly greenhouse gas concentrations from 1765 to 2500. The option is activated by compiling WRF with the macro -DCLWRFGHG added in configure.wrf. Once compiled, the CAM, RRTM and RRTMG longwave schemes will see these gases. Five scenario files are available: from IPCC AR5, CAMtr_volume_mixing_ratio.RCP4.5, CAMtr_volume_mixing_ratio.RCP6, and CAMtr_volume_mixing_ratio.RCP8.5; from IPCC AR4, CAMtr_volume_mixing_ratio.A1B and CAMtr_volume_mixing_ratio.A2. The default points to the RCP8.5 file. New in Version 3.5.

b. Climatological ozone and aerosol data for RRTMG: The ozone data are adapted from CAM radiation (ra_*_physics = 3), and have latitudinal (2.82 degrees), height and temporal (monthly) variation, as opposed to the default ozone used in the scheme, which varies only with height. This is activated by the namelist option o3input = 2. The aerosol data are based on Tegen et al. (1997), with 6 types: organic carbon, black carbon, sulfate, sea salt, dust and stratospheric aerosol (volcanic ash, which is zero). The data also have spatial (5 degrees in longitude and 4 degrees in latitude) and temporal (monthly) variations. The option is activated by the namelist option aer_opt = 1. New in Version 3.5.
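As an illustration, the following &physics settings would select the climatological ozone and aerosol data described above for RRTMG (a minimal sketch):

o3input                             = 2,
aer_opt                             = 1,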

 

3.1 Surface Layer (sf_sfclay_physics)

a. MM5 similarity: Based on Monin-Obukhov with the Carlson-Boland viscous sub-layer and standard similarity functions from look-up tables (sf_sfclay_physics = 1).

b. Eta similarity: Used in Eta model. Based on Monin-Obukhov with Zilitinkevich thermal roughness length and standard similarity functions from look-up tables (2).

c. Pleim-Xiu surface layer (7). New in Version 3.0.

d. QNSE surface layer. Quasi-Normal Scale Elimination PBL scheme’s surface layer option (4). New in Version 3.1.

e. MYNN surface layer. Nakanishi and Niino PBL’s surface layer scheme (5). New in Version 3.1.

f. TEMF surface layer. Total Energy - Mass Flux surface layer scheme. New in Version 3.3.

g. Revised MM5 surface layer scheme (11): Removes limits and uses updated stability functions (Jimenez et al., 2012, MWR). New in Version 3.4.

h. iz0tlnd: thermal roughness length option for sf_sfclay_physics = 1 or 2. = 1: Chen-Zhang thermal roughness length over land, which depends on vegetation height; = 0: original thermal roughness length in each sfclay option. New in Version 3.2.

3.2 Land Surface (sf_surface_physics)

a. 5-layer thermal diffusion: Soil temperature only scheme, using five layers (sf_surface_physics = 1).

b. Noah Land Surface Model: Unified NCEP/NCAR/AFWA scheme with soil temperature and moisture in four layers, fractional snow cover and frozen soil physics (2). New modifications were added in Version 3.1 to better represent processes over ice sheets and snow-covered areas.

c. RUC Land Surface Model: RUC operational scheme with soil temperature and moisture in six layers, multi-layer snow and frozen soil physics (3).

d. Pleim-Xiu Land Surface Model. Two-layer scheme with vegetation and sub-grid tiling (7). New in Version 3.0. The Pleim-Xiu land surface model (PX LSM; Pleim and Xiu 1995; Xiu and Pleim 2001) was developed and improved over the years to provide realistic ground temperature, soil moisture, and surface sensible and latent heat fluxes in mesoscale meteorological models. The PX LSM is based on the ISBA model (Noilhan and Planton 1989), and includes a 2-layer force-restore soil temperature and moisture model. The top layer is taken to be 1 cm thick, and the lower layer is 99 cm. Grid-aggregate vegetation and soil parameters are derived from fractional coverage of land use categories and soil texture types. There are two indirect nudging schemes that correct biases in 2-m air temperature and moisture by dynamic adjustment of soil moisture (Pleim and Xiu, 2003) and deep soil temperature (Pleim and Gilliam, 2009).

Users should recognize that the PX LSM was primarily developed for retrospective simulation, where surface-based observations are available to inform the indirect soil nudging.  While soil nudging can be disabled using the FDDA namelist.input setting "pxlsm_soil_nudge," little testing has been done in this mode, although some users have reported reasonable results.  Gilliam and Pleim (2010) discuss the implementation in the WRF model and provide typical configurations for retrospective applications.  If soil nudging is activated, modelers must use the Obsgrid objective re-analysis utility to produce a surface nudging file with the naming convention "wrfsfdda_d0*."  Obsgrid takes WPS "met_em*" files and LittleR observation files and produces the "wrfsfdda_d0*" file.  The PX LSM uses 2-m temperature and mixing ratio re-analyses from this file for the deep soil moisture and temperature nudging.  If modelers want to test PX LSM in forecast mode with soil nudging activated, forecasted 2-m temperature and mixing ratio can be used with empty observation files to produce the "wrfsfdda_d0*" files, using Obsgrid, but results will be tied to the governing forecast model.

 f. Noah-MP (multi-physics) Land Surface Model: uses multiple options for key land-atmosphere interaction processes. Noah-MP contains a separate vegetation canopy defined by a canopy top and bottom with leaf physical and radiometric properties used in a two-stream canopy radiation transfer scheme that includes shading effects. Noah-MP contains a multi-layer snow pack with liquid water storage and melt/refreeze capability and a snow-interception model describing loading/unloading, melt/refreeze, and sublimation of the canopy-intercepted snow. Multiple options are available for surface water infiltration and runoff, and groundwater transfer and storage including water table depth to an unconfined aquifer. Horizontal and vertical vegetation density can be prescribed or predicted using prognostic photosynthesis and dynamic vegetation models that allocate carbon to vegetation (leaf, stem, wood and root) and soil carbon pools (fast and slow). New in Version 3.4. (Niu et al. 2011)

g. SSiB Land Surface Model: This is the third generation of the Simplified Simple Biosphere Model (Xue et al. 1991; Sun and Xue, 2001). SSiB was developed for land/atmosphere interaction studies in climate models. The aerodynamic resistance values in SSiB are determined in terms of vegetation properties, ground conditions and bulk Richardson number according to the modified Monin-Obukhov similarity theory. SSiB-3 includes three snow layers to realistically simulate snow processes, including destructive metamorphism, densification due to snow load, and snow melting, which substantially enhances the model's ability for cold-season studies. To use this option, ra_lw_physics and ra_sw_physics should be set to either 1, 3, or 4. The second full model level should be set to no larger than 0.982 so that the height of that level is higher than the vegetation height. New in Version 3.4.

h. Fractional sea ice (fractional_seaice = 1). Treats sea ice as a fractional field. Requires fractional sea-ice input data; sources include GFS and the National Snow and Ice Data Center (http://nsidc.org/data/seaice/index.html). Use XICE for the Vtable entry instead of SEAICE. This option works with sf_sfclay_physics = 1, 2, 5, and 7, and sf_surface_physics = 2, 3, and 7 in the present release. New in Version 3.1.

i. CLM4 (Community Land Model Version 4, Oleson et al. 2010; Lawrence et al. 2010): CLM4 was developed at the National Center for Atmospheric Research with many external collaborators and represents a state-of-the-science land surface process model. It contains sophisticated treatment of biogeophysics, hydrology, biogeochemistry, and dynamic vegetation. In CLM4, the land surface in each model grid cell is characterized into five primary sub-grid land cover types (glacier, lake, wetland, urban, and vegetated). The vegetated sub-grid consists of up to 4 plant functional types (PFTs) that differ in physiology and structure. The WRF input land cover types are translated into the CLM4 PFTs through a look-up table. The CLM4 vertical structure includes a single-layer vegetation canopy, a five-layer snowpack, and a ten-layer soil column. An earlier version of CLM has been quantitatively evaluated within WRF in Jin and Wen (2012; JGR-Atmosphere), Lu and Kueppers (2012; JGR-Atmosphere), and Subin et al. (2011; Earth Interactions). New in Version 3.5.

 

3.3 Urban Surface (sf_urban_physics – replacing old switch ucmcall)

a. Urban canopy model (1): 3-category UCM option with surface effects for roofs, walls, and streets.

b. BEP (2). Building Environment Parameterization: Multi-layer urban canopy model that allows for buildings higher than the lowest model levels. Works only with the Noah LSM and the BouLac and MYJ PBL options. New in Version 3.1.

c. BEM (3). Building Energy Model. Adds to BEP, building energy budget with heating and cooling systems. Works with same options as BEP. New in Version 3.2.

4. Planetary Boundary Layer (bl_pbl_physics)

a. Yonsei University scheme: Non-local-K scheme with explicit entrainment layer and parabolic K profile in unstable mixed layer (bl_pbl_physics = 1).

b. Mellor-Yamada-Janjic scheme: Eta operational scheme. One-dimensional prognostic turbulent kinetic energy scheme with local vertical mixing (2).

c. MRF scheme: Older version of (a) with implicit treatment of entrainment layer as part of non-local-K mixed layer (99).

d. ACM2 PBL: Asymmetric Convective Model with non-local upward mixing and local downward mixing (7). New in Version 3.0.

e. Quasi-Normal Scale Elimination PBL (4). A TKE-prediction option that uses a new theory for stably stratified regions (available since 3.1). The daytime part uses an eddy diffusivity mass-flux method with shallow convection (mfshconv = 1), added in Version 3.4.

f. Mellor-Yamada Nakanishi and Niino Level 2.5 PBL (5). Predicts sub-grid TKE terms. New in Version 3.1.

g. Mellor-Yamada Nakanishi and Niino Level 3 PBL (6). Predicts TKE and other second-moment terms. New in Version 3.1.

h. BouLac PBL (8): Bougeault-Lacarrère PBL. A TKE-prediction option. New in Version 3.1. Designed for use with BEP urban model.

i. UW (Bretherton and Park) scheme (9). TKE scheme from CESM climate model. New in Version 3.3.

j. Total Energy - Mass Flux (TEMF) scheme (10). Sub-grid total energy prognostic variable, plus mass-flux type shallow convection. New in Version 3.3.

k. LES PBL: A large-eddy-simulation (LES) boundary layer is available in Version 3. For this, set bl_pbl_physics = 0 and isfflx = 1, and select sf_sfclay_physics and sf_surface_physics. This uses diffusion for vertical mixing and must use diff_opt = 2, and km_opt = 2 or 3 (see below). Alternative idealized ways of running the LES PBL are chosen with isfflx = 0 or 2. New in Version 3.0. A minimal namelist sketch is given at the end of this list.

l. Grenier-Bretherton-McCaa scheme (12): This is a TKE scheme, tested in cloud-topped PBL cases. New in Version 3.5.

m. topo_wind: = 1: Topographic correction for surface winds to represent extra drag from sub-grid topography and enhanced flow at hill tops (Jimenez and Dudhia, JAMC 2012). Works with YSU PBL only. New in Version 3.4. = 2: a simpler terrain variance-related correction. New in Version 3.5.
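The LES PBL configuration in (k) might, as a sketch with illustrative values, combine the following &physics and &dynamics settings:

bl_pbl_physics                      = 0,
isfflx                              = 1,
sf_sfclay_physics                   = 1,
sf_surface_physics                  = 1,
diff_opt                            = 2,
km_opt                              = 2,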

5. Cumulus Parameterization (cu_physics)

a. Kain-Fritsch scheme: Deep and shallow convection sub-grid scheme using a mass flux approach with downdrafts and CAPE removal time scale (cu_physics = 1).

-  kfeta_trigger = 1 – default trigger; = 2 – moisture-advection modulated trigger function [based on Ma and Tan (2009, Atmospheric Research)]. May improve results in subtropical regions when large-scale forcing is weak.

b. Betts-Miller-Janjic scheme. Operational Eta scheme. Column moist adjustment scheme relaxing towards a well-mixed profile (2).

c. Grell-Devenyi (GD) ensemble scheme: Multi-closure, multi-parameter, ensemble method with typically 144 sub-grid members (moved to option 93 in V3.5).

d. Simplified Arakawa-Schubert (4). Simple mass-flux scheme with quasi-equilibrium closure with shallow mixing scheme (and momentum transport in NMM only). Adapted for ARW in Version 3.3.

e. Grell 3D is an improved version of the GD scheme that may also be used on high resolution (in addition to coarser resolutions) if subsidence spreading (option cugd_avedx) is turned on (5). New in Version 3.0.

f. Tiedtke scheme (U. of Hawaii version) (6). Mass-flux type scheme with CAPE-removal time scale, shallow component and momentum transport. New in Version 3.3.

g. Zhang-McFarlane scheme (7). Mass-flux CAPE-removal type deep convection from CESM climate model with momentum transport. New in Version 3.3.

h. New Simplified Arakawa-Schubert (14). New mass-flux scheme with deep and shallow components and momentum transport. New in Version 3.3.

i. New Simplified Arakawa-Schubert (84, HWRF version). New mass-flux scheme with deep and shallow components and momentum transport. New in Version 3.4.

j. Grell-Freitas (GF) scheme (3): An improved GD scheme that tries to smooth the transition to cloud-resolving scales, as proposed by Arakawa et al. (2004). New in Version 3.5.

k. Old Kain-Fritsch scheme: Deep convection scheme using a mass flux approach with downdrafts and CAPE removal time scale (99).

6. Shallow convection option (shcu_physics)

a. ishallow = 1: shallow convection option on. Works together with the Grell 3D scheme (cu_physics = 5); will move to the shcu_physics category in the future.

b. UW (Bretherton and Park) scheme (2). Shallow cumulus option from CESM climate model with momentum transport. New in Version 3.3.

c. GRIMS (Global/Regional Integrated Modeling System) scheme: it represents the shallow convection process by using eddy-diffusion and the pal algorithm, and couples directly to the YSU PBL scheme. New in Version 3.5.

7. Other physics options

a. Options to use for tropical storm and hurricane applications:

- sf_ocean_physics = 1 (renamed from omlcall in previous versions): Simple 1-D ocean mixed layer model following Pollard, Rhines and Thompson (1972). Two other namelist options specify the initial mixed layer depth (oml_hml0) (although one may ingest real mixed layer depth data) and the temperature lapse rate below the mixed layer (oml_gamma). Since V3.2, this option works with all sf_surface_physics options.

- sf_ocean_physics = 2: New in V3.5. 3D Price-Weller-Pinkel (PWP) ocean model based on Price et al. (1994). This model predicts horizontal advection, pressure gradient force, as well as mixed layer processes. Only simple initialization via namelist variables ocean_z, ocean_t, and ocean_s is available in V3.5.

- isftcflx: Modifies the surface bulk drag (Donelan) and enthalpy coefficients to be more in line with recent research results for tropical storms and hurricanes. This option also includes a dissipative heating term in the heat flux. It is only available for sf_sfclay_physics = 1. There are two options for computing enthalpy coefficients: isftcflx = 1: constant Z0q (since V3.2) for heat and moisture; isftcflx = 2: Garratt formulation, with slightly different forms for heat and moisture. A namelist sketch is given at the end of this list.

b. Other options for long simulations (new in Version 3.1):

- tmn_update: update deep soil temperature (1).

- sst_skin: calculate skin SST based on Zeng and Beljaars (2005) (1)

- bucket_mm: bucket reset value for water equivalent precipitation accumulations (value in mm, -1 = inactive).

- bucket_J: bucket reset value for energy accumulations (value in Joules, -1 = inactive). Only works with CAM and RRTMG radiation (ra_lw_physics = 3 and 4 and ra_sw_physics = 3 and 4) options.

- To drive the WRF model with climate data that has no leap years, there is a compile option: edit configure.wrf and add -DNO_LEAP_CALENDAR to the macro ARCH_LOCAL.

c. usemonalb: When set to .true., it uses monthly albedo fields from geogrid, instead of table values.

d. gwd_opt: Gravity wave drag option. Can be activated when the grid size is greater than 10 km. May be beneficial for simulations longer than 5 days and over a large domain with mountain ranges. It is recommended that this option be used only with unrotated lat/long (e.g., global) or Mercator projections, because the input orographic sub-grid asymmetry arrays assume this grid orientation. New in Version 3.1.

e. windturbines_spec (a character string): Wind turbine drag parameterization scheme. It represents sub-grid effects of specified turbines on the wind and TKE fields. When set to "none" (the default value), the scheme is off. When set to "ideal", an idealized specification of the turbines' geometry and characteristics is set by the namelist variables td_*. When set to a file name (which exists in the run directory), the physical characteristics of the wind farm are described in that file. See README.windturbine in the WRFV3/ directory for more detail. New in Version 3.3; in this version it only works with the 2.5-level MYNN PBL option (bl_pbl_physics = 5).
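As a sketch of the tropical storm options in (a), the &physics record might include the following (the mixed layer depth and lapse rate values are illustrative; oml_hml0 is in meters and oml_gamma in K/m):

sf_ocean_physics                    = 1,
oml_hml0                            = 50.,
oml_gamma                           = 0.14,
isftcflx                            = 2,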

8. Physics sensitivity options

a. no_mp_heating: When set to 1, it turns off latent heating from microphysics. When using this option, cu_physics should be set to 0.
b. icloud: When set to 0, it turns off the cloud effect on optical depth in shortwave radiation options 1 and 4 and longwave radiation options 1 and 4.
c. isfflx: When set to 0, it turns off both sensible and latent heat fluxes from the surface. This option works for sf_sfclay_physics = 1, 5, 7, 11.
d. ifsnow: When set to 0, it turns off snow effect in sf_surface_physics = 1.

Diffusion and Damping Options

Diffusion in WRF is categorized under two parameters: the diffusion option and the K option. The diffusion option selects how the derivatives used in diffusion are calculated, and the K option selects how the K coefficients are calculated. Note that when a PBL option is selected, vertical diffusion is done by the PBL scheme, and not by the diffusion scheme. In Version 3, vertical diffusion is also linked to the surface fluxes.

1.1 Diffusion Option (diff_opt)

a. Simple diffusion: Gradients are simply taken along coordinate surfaces (diff_opt = 1).

b. Full diffusion: Gradients use full metric terms to more accurately compute horizontal gradients in sloped coordinates (diff_opt = 2).

1.2 K Option (km_opt)

Note that when using a PBL scheme, only options (a) and (d) below make sense, because (b) and (c) are designed for 3d diffusion.

a. Constant: K is specified by namelist values for horizontal and vertical diffusion (km_opt = 1).

b. 3d TKE: A prognostic equation for turbulent kinetic energy is used, and K is based on TKE (km_opt = 2).

c. 3d Deformation: K is diagnosed from 3d deformation and stability following a Smagorinsky approach (km_opt = 3).

d. 2d Deformation: K for horizontal diffusion is diagnosed from just horizontal deformation. The vertical diffusion is assumed to be done by the PBL scheme (km_opt = 4). 

1.3 6th Order Horizontal Diffusion (diff_6th_opt)

6th-order horizontal hyperdiffusion (del^6) on all variables, acting as a selective short-wave numerical noise filter. Can be used in conjunction with diff_opt. = 1: simple; = 2: positive definite. Option 2 is recommended.
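For example, a commonly used real-data combination with the recommended filter might look like the following sketch in &dynamics (diff_6th_factor sets the nondimensional filter strength; 0.12 is a typical value):

diff_opt                            = 1,
km_opt                              = 4,
diff_6th_opt                        = 2,
diff_6th_factor                     = 0.12,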

1.4 Nonlinear Backscatter Anisotropic (NBA) (sfs_opt)

Sub-grid turbulent stress option for momentum in LES applications. New in Version 3.2. sfs_opt = 1: diagnostic sub-grid stress, to be used with diff_opt = 2 and km_opt = 2 or 3. sfs_opt = 2: TKE sub-grid stress, to be used with diff_opt = 2 and km_opt = 2.

 

2. Damping Options

These are independently activated choices.

a. Upper Damping: Either a layer of increased diffusion (damp_opt = 1), a Rayleigh relaxation layer (2), or an implicit gravity-wave damping layer (3, new in Version 3.0) can be added near the model top to control reflection from the upper boundary.

b. Vertical velocity damping (w_damping): For operational robustness, vertical motion can be damped to prevent the model from becoming unstable with locally large vertical velocities. This only affects strong updraft cores, so has very little impact on results otherwise.

c. Divergence Damping (smdiv): Controls horizontally-propagating sound waves.

d. External Mode Damping (emdiv): Controls upper-surface (external) waves.

e. Time Off-centering (epssm): Controls vertically-propagating sound waves.
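For example, a real-data configuration using the implicit gravity-wave damping layer together with vertical velocity damping might include the following sketch in &dynamics (zdamp, the damping depth below the model top, is in meters; the values are illustrative):

damp_opt                            = 3,
zdamp                               = 5000.,
dampcoef                            = 0.2,
w_damping                           = 1,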

 

Advection Options

a. Horizontal advection orders for momentum (h_mom_adv_order) and scalars (h_sca_adv_order) can be 2nd to 6th, with 5th order being the recommended one.

b. Vertical advection orders for momentum (v_mom_adv_order) and scalars (v_sca_adv_order) can be 2nd to 6th, with 3rd order being the recommended one.

c. Monotonic transport (option 2, new in Version 3.1) and positive-definite advection option (option 1) can be applied to moisture (moist_adv_opt), scalar (scalar_adv_opt), chemistry variables (chem_adv_opt) and tke (tke_adv_opt). Option 1 replaces pd_moist = .true. etc. in previous versions.

d. WENO (weighted essentially non-oscillatory) (option 3 for 5th-order WENO; option 4 for 5th-order WENO with positive-definite limiter): for moisture (moist_adv_opt), scalars (scalar_adv_opt), chemistry variables (chem_adv_opt) and TKE (tke_adv_opt). For momentum, set momentum_adv_opt = 3.
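For example, to apply positive-definite transport to moisture, scalars and TKE, and monotonic transport to chemistry variables, the &dynamics record might include (a sketch):

moist_adv_opt                       = 1,
scalar_adv_opt                      = 1,
chem_adv_opt                        = 2,
tke_adv_opt                         = 1,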

Some notes about using monotonic and positive-definite advection options:

 

The positive-definite and monotonic options are available for moisture, scalars, chemical scalars and TKE in the ARW solver. Both the monotonic and positive-definite transport options conserve scalar mass locally and globally, and are consistent with the ARW mass conservation equation. We recommend using the positive-definite option for moisture variables in all real-data simulations. The monotonic option may be beneficial in chemistry applications and for moisture and scalars in some instances.

 

When using these options there are certain aspects of the ARW integration scheme that should be considered in the simulation configuration.

 

(1) The integration sequence in ARW changes when the positive-definite or monotonic options are used.  When the options are not activated, the timestep tendencies from the physics (excluding microphysics) are used to update the scalar mixing ratio at the same time as the transport (advection). The microphysics is computed, and moisture is updated, based on the transport+physics update.  When the monotonic or positive definite options are activated, the scalar mixing ratio is first updated with the physics tendency, and the new updated values are used as the starting values for the transport scheme.  The microphysics update occurs after the transport update using these latest values as its starting point. It is important to remember that for any scalars, the local and global conservation properties, positive definiteness and monotonicity depend upon each update possessing these properties.

 

(2) Some model filters may not be positive definite.

i.      diff_6th_opt = 1 is not positive definite nor monotonic.  Use diff_6th_opt = 2 if you need this diffusion option (diff_6th_opt = 2 is monotonic and positive-definite).  We have encountered cases where the departures from monotonicity and positive-definiteness have been very noticeable.

ii.     diff_opt = 1 and km_opt = 4 (a commonly-used real-data case mixing option) is not guaranteed to be positive-definite nor monotonic due to the variable eddy diffusivity, K.  We have not observed significant departures from positive-definiteness or monotonicity when this filter is used with these transport options.

iii.   The diffusion option that uses a user-specified constant eddy viscosity is positive definite and monotonic.

iv.   Other filter options that use variable eddy viscosity are not positive definite or monotonic.

 

(3) Most of the model physics are not monotonic nor should they be - they represent sources and sinks in the system.  All should be positive definite, although we have not examined and tested all options for this property.

 

(4) The monotonic option adds significant smoothing to the transport in regions where it is active.  You may want to consider turning off the other model filters for variables using monotonic transport (filters such as the second and sixth order horizontal filters).  At present it is not possible to turn off the filters for the scalars but not for the dynamics using the namelist - one must manually comment out the calls in the solver.  

Other Dynamics Options

a. The model can be run hydrostatically by setting the non_hydrostatic switch to .false.

b. The Coriolis term can be applied to wind perturbation (pert_coriolis = .true.) only (idealized only).

c. For diff_opt = 2 only, vertical diffusion may act on full fields, not just on the perturbation from the 1D base profile (mix_full_fields = .true.; idealized only).

Lateral Boundary Condition Options

a.     Periodic (periodic_x / periodic_y): for idealized cases.

b.     Open (open_xs, open_xe, open_ys, open_ye): for idealized cases.

c.     Symmetric (symmetric_xs, symmetric_xe, symmetric_ys, symmetric_ye): for idealized cases.

d.     Specified (specified): for real-data cases. The first row and column are specified with external model values (spec_zone = 1, and it should not change). The rows and columns in relax_zone have values blended from an external model and WRF. The value of relax_zone may be changed, as long as spec_bdy_width = spec_zone + relax_zone. This can be used with periodic_x in tropical channel simulations.

spec_exp: exponential multiplier for the relaxation zone ramp, used with a specified boundary condition. 0. = linear ramp, default; 0.33 = ~3*dx exp decay factor. This may be useful for long simulations.

e.     Nested (nested): for real and idealized cases. 
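For a typical real-data case, the corresponding &bdy_control record might look like the following sketch (these are the default widths; the regional climate example earlier in this chapter uses a wider 10-point boundary zone):

specified                           = .true.,
spec_bdy_width                      = 5,
spec_zone                           = 1,
relax_zone                          = 4,
spec_exp                            = 0.,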

 

Summary of PBL Physics Options

bl_pbl_physics  Scheme  Reference                                    Added
1               YSU     Hong, Noh and Dudhia (2006, MWR)             2004
2               MYJ     Janjic (1994, MWR)                           2000
3               GFS     Hong and Pan (1996, MWR)                     2005
4               QNSE    Sukoriansky, Galperin and Perov (2005, BLM)  2009
5               MYNN2   Nakanishi and Niino (2006, BLM)              2009
6               MYNN3   Nakanishi and Niino (2006, BLM)              2009
7               ACM2    Pleim (2007, JAMC)                           2008
8               BouLac  Bougeault and Lacarrere (1989, MWR)          2009
9               UW      Bretherton and Park (2009, JC)               2011
10              TEMF    Angevine, Jiang and Mauritsen (2010, MWR)    2011
12              GBM     Grenier and Bretherton (2001, MWR)           2013
99              MRF     Hong and Pan (1996, MWR)                     2000

 

bl_pbl_physics  Scheme      Cores    sf_sfclay_physics  Prognostic variables  Diagnostic variables                                    Cloud mixing
1               YSU         ARW/NMM  1, 11              -                     exch_h                                                  QC, QI
2               MYJ         ARW/NMM  2                  TKE_PBL               el_myj, exch_h                                          QC, QI
3               GFS (hwrf)  NMM      3                  -                     -                                                       QC, QI
4               QNSE-EDMF   ARW/NMM  4                  TKE_PBL               el_pbl, exch_h, exch_m                                  QC, QI
5               MYNN2       ARW      1, 2, 5, 11        QKE                   Tsq, Qsq, Cov, exch_h, exch_m                           QC
6               MYNN3       ARW      1, 2, 5, 11        QKE, Tsq, Qsq, Cov    exch_h, exch_m                                          QC
7               ACM2        ARW      1, 7, 11           -                     -                                                       QC, QI
8               BouLac      ARW      1, 2, 11           TKE_PBL               el_pbl, exch_h, exch_m, wu_tur, wv_tur, wt_tur, wq_tur  QC
9               UW          ARW      1, 2, 11           TKE_PBL               exch_h, exch_m                                          QC
10              TEMF        ARW      10                 TE_TEMF               *_temf                                                  QC, QI
12              GBM         ARW      1, 11              TKE_PBL               el_pbl, exch_tke                                        QC, QI
99              MRF         ARW/NMM  1, 11              -                     -                                                       QC, QI

 

 

 

 

Summary of Microphysics Options

mp_physics  Scheme                        Reference                                                        Added
1           Kessler                       Kessler (1969)                                                   2000
2           Lin (Purdue)                  Lin, Farley and Orville (1983, JCAM)                             2000
3           WSM3                          Hong, Dudhia and Chen (2004, MWR)                                2004
4           WSM5                          Hong, Dudhia and Chen (2004, MWR)                                2004
5           Eta (Ferrier)                 Rogers, Black, Ferrier, Lin, Parrish and DiMego (2001, web doc)  2000
6           WSM6                          Hong and Lim (2006, JKMS)                                        2004
7           Goddard                       Tao, Simpson and McCumber (1989, MWR)                            2008
8           Thompson                      Thompson, Field, Rasmussen and Hall (2008, MWR)                  2009
9           Milbrandt 2-mom               Milbrandt and Yau (2005, JAS)                                    2010
10          Morrison 2-mom                Morrison, Thompson and Tatarskii (2009, MWR)                     2008
13          SBU-YLin                      Lin and Colle (2011, MWR)                                        2011
14          WDM5                          Lim and Hong (2010, MWR)                                         2009
16          WDM6                          Lim and Hong (2010, MWR)                                         2009
17          NSSL 2-mom                    Mansell, Ziegler and Bruning (2010, JAS)                         2012
18          NSSL 2-mom w/ CCN prediction  Mansell, Ziegler and Bruning (2010, JAS)                         2012
19          NSSL 1-mom                                                                                     2013
21          NSSL 1-momlfo                                                                                  2013

 

mp_physics  Scheme           Cores       Mass Variables     Number Variables
1           Kessler          ARW         Qc Qr
2           Lin (Purdue)     ARW (Chem)  Qc Qr Qi Qs Qg
3           WSM3             ARW         Qc Qr
4           WSM5             ARW/NMM     Qc Qr Qi Qs
5           Eta (Ferrier)    ARW/NMM     Qc Qr Qs (Qt*)
6           WSM6             ARW/NMM     Qc Qr Qi Qs Qg
8           Thompson         ARW/NMM     Qc Qr Qi Qs Qg     Ni Nr
9           Milbrandt 2-mom  ARW         Qc Qr Qi Qs Qg Qh  Nc Nr Ni Ns Ng Nh
10          Morrison 2-mom   ARW (Chem)  Qc Qr Qi Qs Qg     Nr Ni Ns Ng
13          SBU-YLin         ARW         Qc Qr Qi Qs
14          WDM5             ARW         Qc Qr Qi Qs        Nn** Nc Nr
16          WDM6             ARW         Qc Qr Qi Qs Qg     Nn** Nc Nr
17          NSSL 2-mom       ARW         Qc Qr Qi Qs Qg Qh  Nc Nr Ni Ns Ng Nh
18          NSSL 2-mom +CCN  ARW         Qc Qr Qi Qs Qg Qh  Nc Nr Ni Ns Ng Nh Nn Vg
19          NSSL 1-mom       ARW         Qc Qr Qi Qs Qg Qh  Vg***
21          NSSL 1-momlfo    ARW         Qc Qr Qi Qs Qg

* Advects only total condensates   ** Nn = CCN number   *** Vg = graupel volume

 

Summary of Cumulus Parameterization Options

cu_physics  Scheme                           Reference                                           Added
1           Kain-Fritsch                     Kain (2004, JAM)                                    2000
2           Betts-Miller-Janjic              Janjic (1994, MWR; 2000, JAS)                       2002
3           Grell-Freitas                    Grell et al. (2013)                                 2013
4           Old Simplified Arakawa-Schubert  Pan and Wu (1995), NMC Office Note 409              2005/2011
5           Grell-3                          -                                                   2008
6           Tiedtke                          Tiedtke (1989, MWR), Zhang et al. (2011, MWR)       2011
7           Zhang-McFarlane                  Zhang and McFarlane (1995, AO)                      2011
14          New SAS                          Han and Pan (2011, Wea. Forecasting)                2011
84          New SAS (HWRF)                   Han and Pan (2011, Wea. Forecasting)                2012
93          Grell-Devenyi                    Grell and Devenyi (2002, GRL)                       2002
99          Old Kain-Fritsch                 Kain and Fritsch (1990, JAS; 1993, Meteo. Monogr.)  2000

 

cu_physics  Scheme           Cores    Moisture Tendencies  Momentum Tendencies  Shallow Convection
1           Kain-Fritsch     ARW/NMM  Qc Qr Qi Qs          no                   yes
2           BMJ              ARW/NMM  -                    no                   yes
3           GF               ARW      Qc Qi                no                   yes
4           OSAS             ARW/NMM  Qc Qi                yes (NMM)            yes (ARW)
5           G3               ARW      Qc Qi                no                   yes
6           Tiedtke          ARW/NMM  Qc Qi                yes                  yes
7           Zhang-McFarlane  ARW      Qc Qi                yes                  no
14          NSAS             ARW      Qc Qr Qi Qs          yes                  yes
84          NSAS (HWRF)      NMM      Qc Qi                yes
93          GD               ARW      Qc Qi                no                   no
99          old KF           ARW      Qc Qr Qi Qs          no                   no

 

Summary of Radiation Physics Options

ra_sw_physics  Scheme       Reference                                       Added
1              Dudhia       Dudhia (1989, JAS)                              2000
2              Goddard      Chou and Suarez (1994, NASA Tech Memo)          2000
3              CAM          Collins et al. (2004, NCAR Tech Note)           2006
4              RRTMG        Iacono et al. (2008, JGR)                       2009
5              New Goddard  Chou and Suarez (1999, NASA Tech Memo)          2011
7              FLG          Gu et al. (2011, JGR), Fu and Liou (1992, JAS)  2012
99             GFDL         Fels and Schwarzkopf (1981, JGR)                2004

 

ra_sw_physics  Scheme       Cores+Chem              Microphysics Interaction  Cloud Fraction    Ozone
1              Dudhia       ARW, NMM + Chem(PM2.5)  Qc Qr Qi Qs Qg            1/0               none
2              GSFC         ARW + Chem(τ)           Qc Qi                     1/0               5 profiles
3              CAM          ARW                     Qc Qi Qs                  max-rand overlap  lat/month
4              RRTMG        ARW + Chem(τ), NMM      Qc Qr Qi Qs               max-rand overlap  1 profile or lat/month
5              New Goddard  ARW                     Qc Qr Qi Qs Qg            1/0               5 profiles
7              FLG          ARW                     Qc Qr Qi Qs Qg            1/0               5 profiles
99             GFDL         ARW, NMM                Qc Qr Qi Qs               max-rand overlap  lat/date

 

ra_lw_physics  Scheme       Reference                                       Added
1              RRTM         Mlawer et al. (1997, JGR)                       2000
3              CAM          Collins et al. (2004, NCAR Tech Note)           2006
4              RRTMG        Iacono et al. (2008, JGR)                       2009
5              New Goddard  Chou and Suarez (1999, NASA Tech Memo)          2011
7              FLG          Gu et al. (2011, JGR), Fu and Liou (1992, JAS)  2012
31             Held-Suarez                                                  2008
99             GFDL         Fels and Schwarzkopf (1981, JGR)                2004

 

ra_lw_physics  Scheme       Cores+Chem          Microphysics Interaction  Cloud Fraction    Ozone                   GHG
1              RRTM         ARW, NMM            Qc Qr Qi Qs Qg            1/0               1 profile               constant or yearly GHG
3              CAM          ARW                 Qc Qi Qs                  max-rand overlap  lat/month               yearly CO2 or yearly GHG
4              RRTMG        ARW + Chem(τ), NMM  Qc Qr Qi Qs               max-rand overlap  1 profile or lat/month  constant or yearly GHG
5              New Goddard  ARW                 Qc Qr Qi Qs Qg            1/0               5 profiles              constant
7              FLG          ARW                 Qc Qr Qi Qs Qg            1/0               5 profiles              constant
31             Held-Suarez  ARW                 none                      none                                      none
99             GFDL         ARW, NMM            Qc Qr Qi Qs               max-rand overlap  lat/date                constant


Description of Namelist Variables

The following is a description of the namelist variables. Variables that are a function of nests are indicated by (max_dom) following the variable name. Also see the Registry/Registry.EM and run/README.namelist files in the WRFV3/ directory for more detailed information.
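As a reminder of the file format, namelist.input consists of several records, each opened with an ampersand and closed with a slash, and variables marked (max_dom) take one value per domain. A minimal sketch (the values are illustrative):

&time_control
 run_days                 = 1,
 start_year               = 2012, 2012,
 history_interval         = 60, 60,
/
&domains
 max_dom                  = 2,
 e_we                     = 91, 121,
/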

In the list below, each variable name is followed by an example or default value, with the description (and, where applicable, the individual input options) indented beneath it.

&time_control : options for time control

run_days = 1
    run time in days

run_hours = 0
    run time in hours. *Note: if the run time is more than 1 day, you may use both run_days and run_hours, or just run_hours; e.g., if the total run length is 36 hrs, you may set run_days = 1 and run_hours = 12, or run_days = 0 and run_hours = 36

run_minutes = 0
    run time in minutes

run_seconds = 0
    run time in seconds

start_year (max_dom) = 2012
    4-digit year of the starting time

start_month (max_dom) = 06
    2-digit month of the starting time

start_day (max_dom) = 11
    2-digit day of the starting time

start_hour (max_dom) = 12
    2-digit hour of the starting time

start_minute (max_dom) = 00
    2-digit minute of the starting time

start_second (max_dom) = 00
    2-digit second of the starting time. *Note: the start time is used to name the first wrfout file. It also controls the start time for nest domains, and the time to restart

end_year (max_dom) = 2012
    4-digit year of the ending time

end_month (max_dom) = 06
    2-digit month of the ending time

end_day (max_dom) = 12
    2-digit day of the ending time

end_hour (max_dom) = 12
    2-digit hour of the ending time

end_minute (max_dom) = 00
    2-digit minute of the ending time

end_second (max_dom) = 00
    2-digit second of the ending time. *Note: all end times also control when the nest domain integrations end. All start and end times are used by real.exe. You may use either run_days/run_hours/etc. or end_year/month/day/hour/etc. to control the length of model integration, but run_days/run_hours takes precedence over the end times. The program real.exe uses start and end times only

interval_seconds = 10800
    time interval between the incoming real data, which will be the interval between the lateral boundary condition files (for real only)

input_from_file (max_dom) = .true.
    (logical) whether the nested run will have input files for domains other than domain 1

fine_input_stream (max_dom)
    selected fields from nest input
    = 0 : (default) all fields from the nest input are used
    = 2 : only nest input specified from input stream 2 (defined in the Registry) is used. In V3.2, this requires io_form_auxinput2 to be set

history_interval (max_dom) = 60
    history output file interval in minutes (integer only)

history_interval_d (max_dom) = 1
    history output file interval in days (integer only); used as an alternative to history_interval

history_interval_h (max_dom) = 1
    history output file interval in hours (integer only); used as an alternative to history_interval

history_interval_m (max_dom) = 1
    history output file interval in minutes (integer only); used as an alternative to history_interval and is equivalent to history_interval

history_interval_s (max_dom) = 1
    history output file interval in seconds (integer only); used as an alternative to history_interval

frames_per_outfile (max_dom) = 1
    number of output times bundled into each history file; used to split output files into smaller pieces

restart = .false.
    (logical) whether this run is a restart

restart_interval = 1440
    restart output file interval in minutes

override_restart_timers
    = .false. : (default) uses restart output intervals given by the wrfrst files
    = .true. : uses restart output intervals given by the namelist

write_hist_at_0h_rst
    = .false. : (default) does not write a history file at the initial time of restart (prevents overwriting the original history file at this time)
    = .true. : writes a history file at the initial time of restart

reset_simulation_start = .false.
    whether to overwrite the simulation start date with the forecast start time

auxinput1_inname = "met_em.d<domain><date>"
    (default) name of the input file from WPS

auxinput4_inname = "wrflowinp_d<domain>"
    name of the input file for the lower boundary; works with sst_update = 1

auxinput4_interval (max_dom) = 360
    file interval in minutes for the lower boundary file; works with sst_update = 1

io_form_auxinput4 = 2
    IO format for wrflowinp files; required for V3.2; works with sst_update = 1

io_form_history
    the format of the history output files
    = 2 : netCDF
    = 102 : split netCDF files, one per processor. *Note: no supported post-processing software for split files
    = 1 : binary format. *Note: no supported post-processing software available
    = 4 : PHDF5 format. *Note: no supported post-processing software available
    = 5 : GRIB1
    = 10 : GRIB2
    = 11 : parallel netCDF

io_form_restart
    the format of the restart output files
    = 2 : netCDF
    = 102 : split netCDF files, one per processor (must restart with the same number of processors)

io_form_input
    the format of the input files
    = 2 : netCDF
    = 102 : allows the program real.exe to read in split met_em* files, and write split wrfinput files. There is no split file for the wrfbdy file

io_form_boundary
    the format of the wrfbdy file
    = 2 : netCDF format
    = 4 : PHDF5 format
    = 5 : GRIB1 format
    = 10 : GRIB2 format
    = 11 : pnetCDF format

io_form_auxinput2
    IO format for input stream 2 data
    = 2 : netCDF format
    = 4 : PHDF5 format
    = 5 : GRIB1 format
    = 10 : GRIB2 format
    = 11 : pnetCDF format

diag_print = 0
    (default) When set to 1 or 2, allows some simple diagnostic fields to be output
    = 1 : domain-averaged 3-hourly hydrostatic surface pressure tendency (Dpsfc/Dt) and dry-hydrostatic column pressure tendency (Dmu/Dt) will appear in the stdout file
    = 2 : in addition to the above, domain-averaged rainfall, surface evaporation, and sensible and latent heat fluxes will be output to the stdout file

debug_level = 0
    giving this a larger value (50, 100, 200, etc.) increases the debugging print-outs when running WRF

auxhist2_outname = "rainfall_d<domain>"
    file name for writing additional output to a different unit or output stream. If not specified, auxhist2_d<domain>_<date> will be used. Also note that writing variables to an output file other than the history file requires either a change in the Registry.EM_COMMON file, or the use of the iofields_filename option

auxhist2_interval (max_dom) = 10
    the interval in minutes for the output

io_form_auxhist2
    output format for the auxhist2 stream
    = 2 : netCDF format
    = 4 : PHDF5 format
    = 5 : GRIB1 format
    = 10 : GRIB2 format
    = 11 : pnetCDF format

frames_per_auxhist2 (max_dom) = 1000
    how many output times will be in each output file

auxinput11_interval = 10
    interval in minutes for obs nudging input. It should be set to the same (or higher) frequency as obs_ionf (in units of the coarse-domain time step)

auxinput11_end_h = 6
    end of the observation time (in hours), when using the obs nudging option

nocolons = .false.
    when set to .true., replaces the colons with underscores in the output file names

write_input = .true.
    write input-formatted data as output for the 3DVAR application

inputout_interval = 180
    interval in minutes when using the write_input option

input_outname = "wrf_3dvar_input_d<domain>_<date>"
    output file name from 3DVAR

inputout_begin_y = 0
    beginning year to write 3DVAR data

inputout_begin_d = 0
    beginning day to write 3DVAR data

inputout_begin_h = 3
    beginning hour to write 3DVAR data

inputout_begin_m = 0
    beginning minute to write 3DVAR data

inputout_begin_s = 0
    beginning second to write 3DVAR data

inputout_end_y = 0
    ending year to write 3DVAR data

inputout_end_d = 0
    ending day to write 3DVAR data

inputout_end_h = 12
    ending hour to write 3DVAR data

inputout_end_m = 0
    ending minute to write 3DVAR data

inputout_end_s = 0
    ending second to write 3DVAR data

*NOTE: The above example shows that the input-formatted data are output from hour 3 to hour 12 at a 180-min interval.

all_ic_times = .false.
    when set to .true., allows you to output a wrfinput file for all time periods

output_diagnostics
    = 0 : turned off
    = 1 : outputs 36 surface diagnostic arrays (max/min/mean/std) over the specified time interval. The output goes to auxiliary history output stream 3 with the default file name 'wrfxtrm_d<domain>_<date>'. You must also set io_form_auxhist3 = 2, auxhist3_interval = 1440, 1440, and frames_per_auxhist3 = 1000, 1000

nwp_diagnostics
    = 0 : turned off
    = 1 : outputs 7 history-interval maximum or mean diagnostic fields in wrfout: 10 m surface wind max, max positive and negative w, max helicity in the 2-5 km layer, mean w, and max column-integrated graupel

iofields_filename = "my_iofields_list.txt"
    an option to request particular variables to appear in the output if they are not already there, or to remove variables that you do not want. You must also create a text file (my_iofields_list.txt) in which you declare the variables to be added or removed. It is a single line of text, e.g.: +:h:7:RAINC,RAINNC or -:h:0:RAINC,RAINNC

ignore_iofields_warning
    = .true. : tells the model to continue if an error is encountered in the user-specified files
    = .false. : tells the model to abort if an error is encountered in the user-specified files

&domains : dimensions, nesting, parameters

time_step = 60
    time step for integration in seconds (recommended 6*dx in km for a typical case)

time_step_fract_num = 0
    numerator for the fractional time step

time_step_fract_den = 1
    denominator for the fractional time step. E.g., to use 60.3 sec as the time step, set time_step = 60, time_step_fract_num = 3, and time_step_fract_den = 10

time_step_dfi = 60
    time step when setting dfi_opt = 1; may be different from the regular time step

max_dom = 1
    the number of domains over which you are running

s_we (max_dom) = 1
    start index in the x (west-east) direction (leave as is)

e_we (max_dom) = 91
    end index in the x (west-east) direction (staggered dimension)

s_sn (max_dom) = 1
    start index in the y (south-north) direction (leave as is)

e_sn (max_dom) = 82
    end index in the y (south-north) direction (staggered dimension)

s_vert (max_dom) = 1
    start index in the z (vertical) direction (leave as is)

e_vert (max_dom) = 30
    end index in the z (vertical) direction (staggered dimension; this refers to full levels). Most variables are on unstaggered levels. *Note: the vertical dimension needs to be the same for all nests

dx (max_dom) = 30000
    grid length in the x-direction (in meters)

dy (max_dom) = 30000
    grid length in the y-direction (in meters)

ztop (max_dom) = 19000
    height in meters; used to define the model top for idealized cases

grid_id (max_dom) = 1
    domain identifier

parent_id (max_dom) = 0
    ID of the parent domain

i_parent_start (max_dom) = 1
    the starting lower-left corner i-index from the parent domain

j_parent_start (max_dom) = 1
    the starting lower-left corner j-index from the parent domain

parent_grid_ratio (max_dom) = 1
    parent-to-nest domain grid size ratio. *Note: for real-data cases the ratio must be odd; for ideal-data cases, the ratio can be even if feedback is set to 0

parent_time_step_ratio (max_dom) = 1
    parent-to-nest time step ratio; this can be different from parent_grid_ratio

feedback
    = 0 : no feedback
    = 1 : feedback from the nest to its parent domain

smooth_option
    = 0 : no smoothing
    = 1 : 1-2-1 smoothing option for the parent domain; used only with feedback = 1
    = 2 : (default) smoothing-desmoothing option for the parent domain; used only with feedback = 1

hypsometric_opt = 2
    (default) computes height in program real.exe and pressure in the model (ARW only) by