Chapter 2: Software Installation

Introduction

The WRF modeling system software installation is fairly straightforward on the ported platforms listed below. The model-component portion of the package is mostly self-contained: the WRF model includes the source code for a Fortran interface to ESMF and for FFTPACK. Contained within the WRF system is the WRFDA component, which depends on several external libraries that the user must install (for various observation types and linear algebra solvers). Similarly, the WPS package, separate from the WRF source code, has additional external libraries that must be built (in support of Grib2 processing). The one external package that every system requires is the netCDF library, which is one of the supported I/O API packages. The netCDF libraries and source code are available from the Unidata homepage at http://www.unidata.ucar.edu (select DOWNLOADS; registration required).

There are three tar files for the WRF code. The first is the WRF model (including the real and ideal pre-processors). The second is the WRFDA code, and the third is the WRF chemistry code. To run the WRF chemistry code, the WRF model tar file and the chemistry tar file must be combined.

The WRF model has been successfully ported to a number of Unix-based machines. We do not have access to all of them and must rely on outside users and vendors to supply the required configuration information for the compiler and loader options. Below is a list of the supported combinations of hardware and software for WRF.

 

Vendor   Hardware         OS      Compiler
-------  ---------------  ------  ----------------------------------------
Cray     XC30 Intel       Linux   Intel
Cray     XE AMD           Linux   Intel
IBM      Power Series     AIX     vendor
IBM      Intel            Linux   Intel / PGI / gfortran
SGI      IA64 / Opteron   Linux   Intel
COTS*    IA32             Linux   Intel / PGI / gfortran / g95 / PathScale
COTS     IA64 / Opteron   Linux   Intel / PGI / gfortran / PathScale
Mac      Power Series     Darwin  xlf / g95 / PGI / Intel
Mac      Intel            Darwin  gfortran / PGI / Intel
NEC      NEC              Linux   vendor
Fujitsu  FX10 Intel       Linux   vendor

* Commercial Off-The-Shelf systems

The WRF model may be built to run on a single-processor machine, a shared-memory machine (using the OpenMP API), a distributed-memory machine (with the appropriate MPI libraries), or on a distributed cluster (utilizing both OpenMP and MPI). The WRFDA and WPS packages run on the systems listed above.

Required Compilers and Scripting Languages

The majority of the WRF model, WPS, and WRFDA codes are written in Fortran (what many refer to as Fortran 90). The software layer, RSL, which sits between WRF and WRFDA and the MPI interface, is written in C. WPS makes direct calls to the MPI libraries for distributed-memory message passing. There are also ancillary programs written in C that perform the file parsing and file construction required for a default build of the WRF modeling code. Additionally, the WRF build mechanism uses several scripting languages, including perl, C shell, and Bourne shell. The traditional UNIX text/file processing utilities are also used: make, m4, sed, and awk. See Chapter 8: WRF Software (Required Software) for a more detailed listing of the pieces necessary for the WRF build.

Required/Optional Libraries to Download

The only library that is always required is the netCDF package from Unidata (login > Downloads > NetCDF). Most of the WRF post-processing packages assume that the data from the WRF model, the WPS package, or the WRFDA program are in netCDF format. One may also need to add /path-to-netcdf/netcdf/bin to their path so that they may execute netCDF utility commands, such as ncdump. Use a netCDF version that is 3.6.1 or later. WRF does not currently use any of the additional capabilities in the newer versions of netCDF (such as 4.0 and later): compression, chunking, HDF5, etc.
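For example, in csh, assuming netCDF is installed under /usr/local/netcdf (adjust the path for your system), the utilities can be added to the path and exercised as follows:

        # make the netCDF utilities visible
        setenv PATH /usr/local/netcdf/bin:${PATH}
        # dump the header of any netCDF file, e.g. a WRF input file
        ncdump -h wrfinput_d01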


Note 1: If one wants to compile WRF system components on a Linux or Darwin system that has access to multiple compilers, link the correct external libraries. For example, do not link libraries built with PathScale when compiling the WRF components with gfortran. Furthermore, the same options used when building the netCDF libraries must be used when building the WRF code (32- vs. 64-bit, assumptions about underscores in symbol names, etc.).
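A quick, illustrative way to check the underscore convention is to inspect the Fortran symbols in the installed netCDF library with nm (the library path here is an assumption, and newer netCDF releases place the Fortran interface in libnetcdff.a):

        # look for nf_open vs. nf_open_ in the archive's symbol table
        nm /usr/local/netcdf/lib/libnetcdf.a | grep -i nf_open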

 

Note 2: If netCDF-4 is used, be sure that it is installed without activating parallel I/O based on HDF5. The WRF modeling system is able to use either the classic data model from netCDF-3 or the compression options supported in netCDF-4.

If you are going to be running distributed-memory WRF jobs, you need a version of MPI. You can pick up a version of MPICH, but you might want your system group to install the code. A working installation of MPI is required prior to a build of WRF using distributed memory. Either MPI-1 or MPI-2 is acceptable. Do you already have an MPI lying around? Try

        which mpif90
        which mpicc
        which mpirun
 

If these are all defined executables in your path, you are probably OK. Make sure your paths are set up to point to the MPI lib, include, and bin directories.  As with the netCDF libraries, you must build MPI consistently with the WRF source code.
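If no MPI is found, a typical MPICH installation is sketched below; the prefix and compiler selections are illustrative, and they should match the compilers you intend to use for WRF:

        # build and install MPICH with a chosen compiler pair
        ./configure --prefix=/usr/local/mpich CC=gcc FC=gfortran
        make
        make install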

 

Note that to output WRF model data in Grib1 format, Todd Hutchinson (WSI) has provided a complete source library that is included with the software release. However, when linking the WPS, WRF model, and WRFDA data streams together, always use the netCDF format.


Post-Processing Utilities

The more widely used (and therefore supported) WRF post-processing utilities are:

             NCL (homepage and WRF download): interactive or command-file driven

UNIX Environment Settings

There are only a few environment settings that are related to the WRF system. Most of these are not required, but when things start acting badly, test some of them out. In csh syntax:

            setenv WRF_EM_CORE 1

o   explicitly defines which model core to build

            setenv WRF_NMM_CORE 0

            setenv WRF_DA_CORE 0

o   explicitly defines no data assimilation

            setenv NETCDF /usr/local/netcdf (or wherever you have it stored)

o   all of the WRF components want both the lib and the include directories

            setenv OMP_NUM_THREADS n (where n is the number of procs to use)

            setenv MP_STACK_SIZE 64000000

o   OpenMP blows through the stack size, set it large

o   However, if the model still crashes, it may be a problem of over-specifying the stack size. Set the stack size sufficiently large, but not unlimited.

o   On some systems, the equivalent parameter could be KMP_STACKSIZE or OMP_STACKSIZE.

            unlimit

o   especially if you are on a small system

 

Building the WRF Code

The WRF code has a fairly complicated build mechanism. It tries to determine the architecture that you are on, and then presents you with options to allow you to select the preferred build method. For example, if you are on a Linux machine, it determines whether this is a 32 or 64 bit machine, and then prompts you for the desired usage of processors (such as serial, shared memory, or distributed memory).  You select from among the available compiling options in the build mechanism.  For example, do not choose a PGI build if you do not have PGI compilers installed on your system.
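For example, a typical build, run from the top-level WRFV3 directory, looks like the following (em_real is the compile target for real-data cases):

        # choose the platform/compiler and nesting options interactively
        ./configure
        # build the real-data executables; keep a log for debugging
        ./compile em_real >& compile.log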

The WRF code supports a parallel build option, which compiles separate source-code files in the WRF directories at the same time on separate processors (though those processors need to share memory) via a parallel make. The purpose of the parallel build option is to reduce the time required to construct the executables. In practice, users typically see approximately a 2x speed-up, a limit imposed by the various dependencies in the code due to modules and USE association. To enable the parallel build option, the user sets an environment variable, J. In csh, to utilize two processors, issue the following before the ./compile command:

setenv J -j 2

Users may wish to use only a single processor for the build, in which case:

setenv J -j 1

Users wishing to run the WRF chemistry code must first download the WRF model tar file and untar it. Then the chemistry tar file is untarred in the WRFV3 directory (this creates the chem directory structure). Once the source code from the two tar files is combined, users may proceed with the WRF chemistry build.
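As a sketch, assuming release tar files named WRFV3.TAR.gz and WRFV3-Chem.TAR.gz (actual file names vary by release), the combination and build in csh look like:

        tar -xzvf WRFV3.TAR.gz
        cd WRFV3
        # unpacking the chemistry tar file here creates the chem directory
        tar -xzvf ../WRFV3-Chem.TAR.gz
        # tell the build mechanism to include chemistry
        setenv WRF_CHEM 1
        ./configure
        ./compile em_real >& compile.log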

Building the WPS Code

Building WPS requires that WRFV3 be already built.

If you plan to use Grib2 data, additional libraries for zlib, png, and jasper are required.  Please see details in Chapter 3.
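With those libraries in place, a minimal WPS build follows the same configure/compile pattern as WRF, assuming WRFV3 has been compiled in a sibling directory:

        cd WPS
        ./configure
        ./compile >& compile.log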

 

Building the WRFDA Code (for 3DVAR)

WRFDA uses the same build mechanism as WRF; thus, this mechanism must be instructed to configure and build the code for WRFDA rather than WRF. Additionally, the paths to libraries needed by WRFDA code must be set, as described in the steps below.

setenv BUFR 1

o   If you intend to use satellite radiance data, an RTM (Radiative Transfer Model) is required. The RTM versions that the current WRFDA uses are CRTM v2.0.2 and RTTOV v10. WRFDA can be compiled with CRTM only, with RTTOV only, or with both CRTM and RTTOV together.

To compile WRFDA with CRTM: setenv CRTM 1

(Note: the latest available CRTM, version 2.0.2, is included in this release and will be compiled automatically when the appropriate environment variable is set. Users do not need to download and install the CRTM.)

To compile WRFDA with RTTOV: RTTOV must still be downloaded (http://research.metoffice.gov.uk/research/interproj/nwpsaf/rtm/rtm_rttov10.html) and installed using the same compiler that will be used to build WRFDA, since a library produced by one compiler may not be compatible with code compiled by another. Then the necessary environment variable should be set with

setenv RTTOV ${path_for_RTTOV}  

o   If you intend to use the gfortran or Intel compilers, the following environment setting is needed to read BUFR-format radiance data:

For csh:

        gfortran:  setenv GFORTRAN_CONVERT_UNIT "little_endian:94-99"
        ifort:     setenv F_UFMTENDIAN "little:94-99"

For bash:

        gfortran:  export GFORTRAN_CONVERT_UNIT="little_endian:94-99"
        ifort:     export F_UFMTENDIAN="little:94-99"

(Note: users of WRFDA V3.2.1 or earlier should refer to http://www.mmm.ucar.edu/wrf/users/wrfda/Docs/readBufr.htm)
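With the appropriate environment variables set, the WRFDA 3DVAR build itself follows the standard configure/compile pattern, issued from the WRFDA directory:

        ./configure wrfda
        ./compile all_wrfvar >& compile.log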

Building the WRFDA Code (for 4DVAR)

Building WRFDA 4DVAR requires that WRFPLUSV3.4 be already built.
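Once WRFPLUS is built, a typical 4DVAR build sequence is sketched below (the WRFPLUS path shown is illustrative):

        # point the build mechanism at the compiled WRFPLUS code
        setenv WRFPLUS_DIR /path-to/WRFPLUSV3
        cd WRFDA
        ./configure 4dvar
        ./compile all_wrfvar >& compile.log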