c. Setting up WRF-Var

 

In this part of the tutorial you will compile the WRF-Var code.

 

i. Setting up required libraries

The WRF-Var software relies on several external libraries in order to perform its functions. These libraries are not distributed with the WRF or WRF-Var code, and consequently they must be installed separately before WRF-Var can be compiled. Besides the NetCDF libraries, which are also used by the WRF model and the WPS, WRF-Var requires the BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra PACKage) libraries for computation, plus the NCEP BUFR library for reading BUFR-formatted observations. Here, we describe the basic procedure for downloading and installing these libraries on a Linux system with the PGI compilers, with notes on how the procedure might be modified for other systems or compilers. After compiling each of these libraries, it is important that the user set the environment variables that tell the WRF-Var build system where to locate the libraries, as described in the steps below.

To simplify the process of installing the libraries, we recommend that a common directory be created, within which all of the libraries will be installed. In the steps below, we will assume that such a directory has been created, and we will refer to this directory using the environment variable $VARLIB, which can be set, for example, in the csh shell with the command

            setenv VARLIB [full path of the library installation directory]
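
If you use a Bourne-type shell (for example, bash or ksh) rather than csh, the equivalent command would be

            export VARLIB=[full path of the library installation directory]

The remaining setenv commands in this section can be translated to this export form in the same way.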

Installing BLAS:

The BLAS code may be downloaded from http://netlib.org/blas/ by clicking on the link for the file "blas.tgz". After downloading the source code to $VARLIB, the library may be installed as follows. First, we unpack the BLAS source code and change to the newly created BLAS directory:

cd $VARLIB

tar xzvf blas.tgz

cd BLAS

Within the BLAS directory, we compile all of the Fortran source files. Here, it is important to use the same Fortran compiler as will be used to compile WRF-Var; in our example, this is the pgf90 compiler. Also, it is crucial that Fortran real values be represented using 8 bytes, since that is the convention that will be used in WRF-Var. To specify that all real values default to using 8 bytes of storage, we use the "-r8" flag for pgf90. For other compilers, consult the table at the end of this section or check the compiler documentation to find the equivalent flag.

pgf90 -c -O2 -r8 *.f
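
As an illustration, if the Intel ifort compiler were being used instead, the corresponding command (using the flag listed in the table in the Notes at the end of this section) would be

ifort -c -O2 -r8 *.f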

Having compiled all of the Fortran source into object (*.o) files, we create a library from them with the UNIX "ar" command.

ar -ru libblas.a *.o
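
On some systems, the archive's "table of contents" may also need to be generated explicitly, as is done for the LAPACK libraries below; if the WRF-Var link step later has trouble with libblas.a, running ranlib on the archive may help:

ranlib libblas.a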

Finally, we remove all of the object files, which are no longer needed, and set the BLAS environment variable to tell the WRF-Var build system where to find our BLAS library.

rm *.o

cd ..

setenv BLAS $VARLIB/BLAS

Building LAPACK:

The LAPACK source code may be downloaded from http://netlib.org/lapack/ by clicking on the link to "lapack.tgz", which is found a short way down the page. After downloading the lapack.tgz file to $VARLIB, we unpack the source code and prepare to build the libraries with the following commands.

cd $VARLIB

tar xzvf lapack.tgz

cd lapack-3.1.1

Before building the LAPACK libraries, we need to properly set up the make.inc file. Templates for several system types are provided in the INSTALL directory; in our example, we use the make.inc.LINUX file as a basis, modifying it to match our Fortran compiler and compiler options. We copy INSTALL/make.inc.LINUX to the make.inc file in the top-level LAPACK directory before setting several variables. As with the BLAS library, it is crucial that LAPACK be compiled to use 8-byte real values!

cp INSTALL/make.inc.LINUX make.inc

[Edit make.inc, changing both FORTRAN and LOADER to pgf90 and changing OPTS to -O2 -r8 ]
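
After these edits, the relevant lines of make.inc should look something like the following (the other variables can be left at their default values):

FORTRAN  = pgf90
OPTS     = -O2 -r8
LOADER   = pgf90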

After setting up the make.inc file, we can build the LAPACK libraries by issuing the command

make lib

On some systems, the "table of contents" for the libraries may not be correctly created; to ensure that the LAPACK libraries have a proper table of contents, we issue the commands

ranlib lapack_LINUX.a

ranlib tmglib_LINUX.a

Then, we can link the libraries created by the LAPACK makefile to the library names that are used when compiling WRF-Var; essentially, the file names should not be specific to the system that they were compiled on (in our case, Linux).

ln -fs lapack_LINUX.a liblapack.a

ln -fs tmglib_LINUX.a libtmg.a

Having built the libraries, we can remove the object files, which are no longer needed, and set the LAPACK environment variable, which will tell the WRF-Var build system where to find our LAPACK libraries.

rm SRC/*.o

cd ..

setenv LAPACK $VARLIB/lapack-3.1.1

Building BUFR:

Begin by downloading the BUFR source code from http://www.nco.ncep.noaa.gov/sib/decoders/BUFRLIB/ by clicking on the link "Download NCEP BUFRLIB Software". After saving the file to the $VARLIB directory, we create a directory BUFR within $VARLIB before unpacking the BUFRLIB.tar file, since the BUFRLIB.tar file does not create its own subdirectory.

cd $VARLIB

mkdir BUFR

mv BUFRLIB.tar BUFR

cd BUFR

tar xvf BUFRLIB.tar

After unpacking the BUFR source code in the BUFR subdirectory, we compile all of the Fortran files with the same Fortran compiler that will be used to build WRF-Var; as with the BLAS and LAPACK libraries, it is important that we specify 8-byte real values using, in the case of pgf90, the "-r8" flag.

pgf90 -c -O2 -r8 *.f

Similarly, we compile all of the C files using the same C compiler that will be used to build WRF-Var. So that the C subroutines are given internal names compatible with subroutine calls from Fortran code, we must specify "-DUNDERSCORE" when the Fortran compiler is pgf90; the specific flag to be given when compiling the C routines varies depending on which Fortran compiler is used, and at the end of this section we provide a table specifying which flag should be used in place of "-DUNDERSCORE" for different compilers.

pgcc -c -O2 -DUNDERSCORE *.c
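
As an illustration, if gcc were used as the C compiler with one of the other Fortran compilers listed in the table in the Notes below, the analogous command would be

gcc -c -O2 -DUNDERSCORE *.c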

Having compiled all of the source code, we create a library from the object (*.o) files using the "ar" command.

ar -ru libbufr.a *.o

Finally, since the object files are now no longer needed, we can remove them to save space. Then, we must set the BUFR environment variable to point the WRF-Var build system to the location where the BUFR library has just been installed.

rm *.o

cd ..

setenv BUFR $VARLIB/BUFR
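
At this point, it is worth verifying that all three libraries were built and that the environment variables point to them; a quick check is

ls -l $BLAS/libblas.a $LAPACK/liblapack.a $LAPACK/libtmg.a $BUFR/libbufr.a

All four files should be listed; if any is missing, revisit the corresponding build step above.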

ii. Compiling WRF-Var

Once the BUFR, BLAS, and LAPACK libraries have been completely installed, and the environment variables $BUFR, $BLAS, and $LAPACK have been set to point to the locations of the libraries, we can configure and compile WRF-Var. Since WRF-Var uses the same build system as the WRF model, this build system must be explicitly instructed to compile WRF-Var (if this is not done, the build system will attempt, unsuccessfully, to build WRF-Var with the WRF model's Registry and build options). We instruct the build system by setting the environment variable $WRF_DA_CORE to 1.

setenv WRF_DA_CORE 1
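
Note that this variable, like $BLAS, $LAPACK, and $BUFR, must be set in the same shell session in which the configure and compile scripts are run; the settings can be confirmed with, for example,

echo $WRF_DA_CORE $BLAS $LAPACK $BUFR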

Now, the configuration script can be run. This script will present a set of compilation options, one of which must be chosen. Generally, there are several options for each combination of Fortran and C compilers, and since the configure script does not check whether all of the listed compilers are actually installed, it is important to choose only among the options for compilers that are available.

./configure

After choosing a configuration option, the script will prompt for a choice of nesting options. Since WRF-Var does not deal with nested domains, we recommend simply choosing the default option, 0 (no nesting) in the case of a "serial" or "smpar" option, or 1 (basic) in the case of a "dmpar" or "sm+dm" option.

            [ Choose the default option when prompted ]

We recommend running WRF-Var in serial (single-processor) mode first; later, if you wish, you can run WRF-Var on distributed-memory machines by recompiling it with the appropriate option.
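
If you do recompile later with a different configuration option, the clean script provided with the WRF build system can be used to start over (keep in mind that $WRF_DA_CORE must still be set to 1 when you reconfigure); a typical sequence would be

./clean -a

./configure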

Having configured the build system to use an available Fortran and C compiler (and parallelism option, e.g., distributed-memory parallelism), we can build the WRF-Var executables by running the compile script with the argument "all_wrfvar".

./compile all_wrfvar

(or, to create a log of the compilation for later reference, ./compile all_wrfvar >& compile.log &)
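
If the compilation output was saved to compile.log as shown above, a quick way to check for problems once the build finishes is

grep -i error compile.log

A successful build should show no fatal compiler or linker errors in the log.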

Successful compilation will produce several executables in the “var/da” directory, including “da_wrfvar.exe”. You can list these executables by issuing the following command from the main WRFV3 directory:

            ls -l var/da/*exe

Hopefully, you'll see the following executables:

da_wrfvar.exe

da_advance_time.exe

da_verif_obs.exe

da_verif_anal.exe

da_update_bc.exe

da_tune_obs_desroziers.exe

da_tune_obs_hollingsworth1.exe

da_tune_obs_hollingsworth2.exe

da_rad_diags.exe

gen_be_stage4_global.exe -> ../gen_be/gen_be_stage4_global.exe

gen_be_stage2.exe -> ../gen_be/gen_be_stage2.exe

gen_be_stage1_1dvar.exe -> ../gen_be/gen_be_stage1_1dvar.exe

gen_be_stage1.exe -> ../gen_be/gen_be_stage1.exe

gen_be_ep1.exe -> ../gen_be/gen_be_ep1.exe

gen_be_ensrf.exe -> ../gen_be/gen_be_ensrf.exe

gen_be_diags_read.exe -> ../gen_be/gen_be_diags_read.exe

gen_be_cov2d.exe -> ../gen_be/gen_be_cov2d.exe

gen_be_stage4_regional.exe -> ../gen_be/gen_be_stage4_regional.exe

gen_be_stage3.exe -> ../gen_be/gen_be_stage3.exe

gen_be_stage2a.exe -> ../gen_be/gen_be_stage2a.exe

gen_be_stage2_1dvar.exe -> ../gen_be/gen_be_stage2_1dvar.exe

gen_be_stage0_wrf.exe -> ../gen_be/gen_be_stage0_wrf.exe

gen_be_etkf.exe -> ../gen_be/gen_be_etkf.exe

gen_be_ep2.exe -> ../gen_be/gen_be_ep2.exe

gen_be_ensmean.exe -> ../gen_be/gen_be_ensmean.exe

gen_be_diags.exe -> ../gen_be/gen_be_diags.exe

gen_be_cov3d.exe -> ../gen_be/gen_be_cov3d.exe

After successfully compiling WRF-Var, you are ready to run WRF-Var for the test case. The WRF-Var system is run through a suitable wrapper script, which invokes various standard scripts residing in the “WRFV3/var/scripts” directory.

Thus, to run any case, including the tutorial test case in this exercise, the user should write a suitable wrapper script and execute it. For example, the wrapper script for running “con200”, the tutorial test case, is located at:

            WRFV3/var/scripts/wrappers/da_run_suite_wrapper_con200.ksh
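
A typical workflow, therefore, is to copy this script and adapt it to your own case; for example (the file name my_wrapper.ksh below is purely illustrative):

cd WRFV3/var/scripts/wrappers

cp da_run_suite_wrapper_con200.ksh my_wrapper.ksh

[ Edit my_wrapper.ksh to set the environment variables described below ]

./my_wrapper.ksh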

You need to modify this wrapper script, in which various job details are defined via environment variables. The user can also run WRF-Var with namelist options other than the pre-set defaults, which are defined in the WRFV3/Registry/Registry.wrfvar file; this is done by setting the appropriate environment variables for your own application. Examples include changing the data directory, the source code location, the experiment run area, and the east-west, south-north, and vertical dimensions of your domain. All non-default namelist options that the user sets via the wrapper script will appear in the WRF-Var namelist file (namelist.input), which is created in the run directory ($RUN_DIR) defined in the wrapper script.

Note: As a rule, any WRF-Var namelist option should be set in the wrapper script using uppercase letters preceded by the prefix “NL_”. For example, for the “con200” case the grid spacing in the x-direction is 200000 m, and the corresponding WRF-Var namelist variable is “dx”; this is specified in the wrapper script as “export NL_DX=200000”.

The WRF-Var system uses the WRF framework to define and perform its parallel and I/O functions; this is fairly transparent in the WRF-Var code. At run time, WRF-Var requires the grid dimensions to be specified. For this case, they are communicated to the WRF-Var system via the wrapper script as follows.

export NL_E_WE=45

export NL_E_SN=45

export NL_E_VERT=28

Thus, users need to change these parameters to run their own cases.
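
For example, for a hypothetical domain of 100 x 120 grid points with 35 vertical levels (purely illustrative values), the corresponding settings would be

export NL_E_WE=100

export NL_E_SN=120

export NL_E_VERT=35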

Note: If these grid dimensions do not match the dimensions written in the first guess (FG) input file, WRF-Var will abort with an appropriate diagnostic message.
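
Since the first guess file is in NetCDF format, one way to check its grid dimensions before running is with the ncdump utility (the file name below is illustrative; substitute your actual first guess file):

ncdump -h fg_file | grep -i GRID_DIMENSION

The global attributes WEST-EAST_GRID_DIMENSION, SOUTH-NORTH_GRID_DIMENSION, and BOTTOM-TOP_GRID_DIMENSION reported by ncdump should match the values of e_we, e_sn, and e_vert, respectively.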


iii. What next?

Having compiled WRF-Var and familiarized yourself with the script and namelist files, it's time to run WRF-Var!

d. Run WRF-Var “con200” Case Study


Notes:

1. When compiling the C source files for the BUFR library, the correct flag (technically, a pre-processor variable) must be specified so that the C routines will have internal names that are compatible with the names used in Fortran calls to these routines. The following table summarizes the flags required when compiling the C and Fortran source files in BUFRLIB for the supported combinations of Fortran and C compilers, i.e., what should be used in place of "-DUNDERSCORE" and "-r8" in the commands above.

Fortran compiler   C compiler   C flags in BUFRLIB   Fortran flags in BUFRLIB
----------------   ----------   ------------------   ------------------------
pgf90              pgcc, gcc    -DUNDERSCORE         -r8
g95, gfortran      gcc          -DUNDERSCORE         -r8 -fno-second-underscore
ifort              icc, gcc     -DUNDERSCORE         -r8
xlf90              xlc          (none)               -qrealsize=8

 2. The configure script of the WRF-Var build system will present several options for each possible compiler set. These options permit different types of parallelism to be employed when running WRF-Var, and the table below summarizes the meaning of each option.

Option   Meaning                                             Notes
------   -------------------------------------------------   ------------------------------------------------------------
serial   No parallelism ("serial" code)                      WRF-Var runs can only use a single processor.
smpar    Shared-memory parallelism                           WRF-Var runs are constrained to a single node.
dmpar    Distributed-memory parallelism                      WRF-Var runs can span multiple nodes; MPI must be installed.
sm+dm    Hybrid shared- and distributed-memory parallelism   Runs can span multiple nodes, using threads within a node and MPI between nodes; MPI must be installed.

 


Troubleshooting:

1. If you have questions, contact wrfhelp@ucar.edu.


 

Return to the WRF-Var Tutorial Page