Frequently Asked Questions

1. WRF QUESTIONS


1.1 WRF Installation
  • How do I ensure that my compilers and netCDF are compatible and ready to compile WRF?

  • Why do I get the following error message when building WRF: ERROR: Undefined symbol nf_put_vara_real?

  • Why do I get the following error message when compiling WRF using OpenMPI: error: expected declaration specifiers or '...' before 'MPI_Comm'?

  • When I compile WRF/WPS, there is an error saying undefined reference and redefined big_endian. What is wrong?

  • When I compile WRF, in the linking phase, there are some undefined references to 'curl_version_info.' How do I fix this problem?

  • I keep getting the error "can't open module file module_ext_internal..." What is the problem?

  • How do I turn off optimization and add debugging options in WRF?

  • What is the difference between Serial, SMPar and DMPar compiles of WRF?

  • Do I need to build WPS with MPI/OMP mode?


    1.2 WRF Physics

    1.2.1 Surface and PBL Schemes

  • What are the physical meanings of the variables used in LANDUSE.TBL?

  • Is the category of SCT_DOM the same as the category of LU_INDEX?

  • Is tsk (surface temperature) used outside the LSM other than to diagnose the 2m temperature?

  • How are VEGPARM.TBL and LANDUSE.TBL used in WRF?

  • Surface roughness is given in both LANDUSE.TBL and VEGPARM.TBL. Which one is used in WRF?

  • How does WRF produce the 10m wind speed?

  • I got the error message "Warning ZR + 2m is larger than the 1st WRF level." What does this mean?

  • How will the SST updates in the parent domain affect the moving nest? Does WRF update the SST throughout the nest as it moves, or does it just change the SST along the boundaries as the nest moves?

  • How can I revise initial SST?

  • When and how do I apply the tmn_update option?

  • How do I incorporate a separate SEAICE field by using a small seaice_threshold to set ice properly on smaller inland water bodies?

  • How do I turn off surface fluxes in WRF for a sensitivity test?

  • When I turn on the no_mp_heating option, how can I get rid of latent heating from cumulus?

  • Where can I get more details about the UCM scheme in WRF?

  • Where are the surface exchange coefficients computed? What is the name of the variable that represents the surface exchange coefficient for heat and moisture that the model actually uses?

  • How can I output the values of the vertical eddy viscosity and the vertical thermal diffusivity at each time-step if I activate the PBL scheme?

  • How can I get PBL tendencies?

  • Does WRF calculate the surface energy budget?

  • What is the TKE variable that represents TKE computed from the PBL scheme?

  • If the water vapor mixing ratio is too low at the surface analysis of the input data, how can I solve this problem?

  • Why is SNOWC always zero in my wrfout file?

  • How can I change the land cover in my innermost domain?

  • Where in the code or namelist do I need to make changes in order to create a dry (adiabatic) model run? Is there a way to reduce the q values by a certain percentage of its original value?

  • If our input data do not have SKINTEMP, what field can I use to avoid the 'mandatory SKINTEMP not found' error?

  • When and how do I use the sst_update option?

  • How do I use the sst_skin option?

  • If my input data has no surface information, how can I fill the surface level data with the lowest model level data?

  • What is the physical meaning of RSMIN in the NOAH LSM?

  • How does WRF calculate albedo? Does it use values provided in LANDUSE.TBL?

  • Can I run both an ideal and real case LES simulation? For the real case, is there anything special I need to do?

  • How does the LSM in WRF handle the seaice process?

  • back to top



    1.2.2 Radiation, Cumulus and Microphysics Schemes

  • How can I uncouple coupled variables, such as rqvblten and rqvcuten?

  • How do I estimate the heating profile?

  • How do I calculate dBZ if I run WRF with microphysical schemes?

  • Can I get information about the height at the bottom of precipitation clouds, as well as convective precipitation clouds, in the WRF output?

  • How can I make a "fake dry run" in WRF?

  • Why do the eta model heights vary over time?

  • How is vertical velocity defined in WRF?

  • wrf.exe fails with a segmentation fault. The model prints error messages like: WOULD GO OFF TOP: KF_ETA_PARA I,J,DPTHMX,DPMIN. What does this mean?

  • What is the physical meaning of "mp_zero_out?"

  • Is CUDT used for all the cumulus schemes?

  • back to top



    1.2.3 FDDA and OBS Nudging

  • How often is FDDA nudging done?

  • How do I set fdda_start and fdda_end for an observational nudge? How about the variable obs_ionf?

  • How does the moving nest work to track storms?

  • Can I use both grid-nudge and obs-nudge at the same time, over the same domain?

  • Can I apply FDDA for a one-way nested run?

  • How do I set obs_twindo and what is its impact?

  • How do I define the wave number for spectral nudging?

  • back to top



    1.2.4 Miscellaneous Questions

  • How can I find pressure and height at each WRF vertical level?

  • What is the difference between a preset moving nest and vortex-following?

  • How can I check whether my domain setting is appropriate?

  • What are the variables, such as U_BXS/U_BXE/U_BYS/U_BYE etc., in wrfbdy?

  • When using the moving-nest option, I have a nested domain that is not specified to move, but it moved anyway during the model run. Why did this happen?

  • Why can I not run tc.exe?

  • Is there a typical number for the spin-up time, or does it change from case to case?

  • What is the error "TBOUND exceeds table limit?"

  • What value should be used for the radiation timestep?

  • Why do I see several netCDF complaints when I set debug to a higher level?

  • How do I output a multi-time wrfinput file?

  • How can I make the sea-breeze case run in parallel (Open-MP) on a shared-memory cluster? At present it only runs on one thread.

  • back to top


    1.3 Run-Time Problems

  • How many processors should I use to run wrf.exe?

  • If I want to see how a particular program executes, can I add a 'stop' statement into this program?

  • What do variables like ims, jms, kms, ips, jps, kps, ... mean in rsl files?

  • Running the held-suarez idealized case with a nested domain, I found that when domain two is initialized, the grid of domain one (XLONG and XLAT) changes in the region where Domain 2 is located. Why does this happen?

  • What are the reasonable lowest eta levels for running WRF?

  • How do I use auxiliary files to output variables other than those in wrfout?

  • What is the objective in setting quilting in WRF?

  • What is the most common reason for a segmentation fault?

  • What is the format for output of a time series variable?

  • I am getting CFL errors due to strong vertical velocity. How can I get past these CFL errors?

  • I got the following error message when running real.exe: frame/module_domain.f: Failed to allocate grid%xkmhd(sm31:em31, sm32:em32, sm33:em33). What are the possible reasons?

  • I have shared libraries installed correctly but the computer cannot find them at run-time. How can I find shared libraries at run time?

  • How can I output some local variables in specific physics?

  • How can I turn off the coriolis force via the namelist when I am trying to run an idealized case?

  • I tried to write a wrfout file to GRIB2 format and got a 'glibc detected corrupted memory' error. How do I fix this?

  • How can I fix the problem of negative moisture variables?

  • Why are my wrfout and/or wrfrst files not generated when the model runs normally, without any errors?

  • In my namelist, the options for printing obs information are set to true, but when I check the rsl files, I cannot find any information about the observational nudging.

  • How can I override the automatic domain decomposition and specify my own?

  • I got "error in the grid%tmn" when I ran real.exe, but it finishes ok. Show I ignore this or not?

  • How can I define a time step that is smaller than 1 second?

  • back to top



    1.4 Adding New Features in WRF

  • How can I create a new ideal case?

  • How do I add a new variable in tslist?

  • I am trying to add the variables GMT, JULYR, and JULDAY to the SFLX call within the Noah Driver (module_sf_noahdrv.F). How can I do that?

  • How do I output Registry variables if they are not included in wrfout files?

  • How can I output some new single variables?

  • How do I output the 2d fields every hour or so, but the 3d fields once every 12 hours?

  • How can I introduce new variables into the program "metgrid" and later use them in the "real" program?

  • How can I estimate run time?

  • back to top


    2. WPS QUESTIONS

    2.1 Input Data

  • If my data is Earth relative, do I need to convert them to WRF grid-relative coordinates before I can use these data as input for WRF?

  • How can I get RH when the input data does not provide RH at high levels?

  • Why does landuse_5m data have 24 in the z-dimension, as indicated by its index file? The USGS land use types have values from 1 to 24.

  • How representative of present-day land use are WPS's input datasets?

  • If my input data has a landmask that is different from the one generated by the geogrid program, how do I know which one should be used?

  • How do I handle a mismatch between landsea mask and soil data?

  • If I want to process earth-relative wind fields in WPS, do I need to change the is_wind_grid_rel logical flag in the WPS source code from all .true. values to all .false. values?

  • When I use RUC 13km data, real.exe complains that "tmn" is zero and, as a consequence, surface temperatures quickly drop in the model runs. How do I fix this?

  • What types of map projections are supported by WPS?

  • Where can I find servers that provide wrfinput data to use in WRF?

  • back to top



    2.2 Revision of Landuse Type / Terrain Height

  • Is there any way to change desert land to agriculture land over a specific area?

  • How can I change the land use type inside codes?

  • Why does plotgrids.exe not compile?

  • How can I set the entire domain to be water?

  • How do I fix the inland lakes problem?

  • How can I insert 12-month LAI data into geogrid?

  • How can I implement soil initialization from different source data?

  • How can I combine data from different sources for WPS?

    2.3 Intermediate Format Data

  • Can you give more details about the intermediate format data?

  • Is the map projection type "regular_ll" equal to the geometric lat lon?

  • back to top

     

    1. General WRF QUESTIONS

    1.1 WRF Installation

    Q. How do I ensure that my compilers, libraries, and netCDF are compatible and ready to compile WRF?

    A. We have put together a script that will do several tests on your compilers, libraries, and netCDF. This script currently only allows you to use Version 3.4.1 of WRF and WPS. If you want to compile other versions, you should only use this script to test the compatibility of your compilers and netCDF. Please download the tar file here. You will need to unpack the tar file and then follow the instructions in README.compile.

    Q. Why do I get the following error message when building WRF?

    ld: 0711-317 ERROR: Undefined symbol: .nf_put_vara_real
    ld: 0711-317 ERROR: Undefined symbol: .nf_get_vara_real
    ld: 0711-317 ERROR: Undefined symbol: .nf_put_vara_double
    ld: 0711-317 ERROR: Undefined symbol: .nf_get_vara_double
    ld: 0711-317 ERROR: Undefined symbol: .nf_put_vara_int
    ld: 0711-317 ERROR: Undefined symbol: .nf_get_vara_int

    A. These errors indicate that the compiler used to install netCDF is not the same as the one used to compile WRF. Reinstalling netCDF with the same compiler will solve the problem.

    Q. Why do I get the following error message when compiling WRF with OpenMPI?

    mpicc -DFSEEKO64_OK -w -O3 -c -DLANDREAD_STUB -DDM_PARALLEL -DMAX_HISTORY=25 -c buf_for_proc.c
    In file included from buf_for_proc.c:63:0:
    /usr/include/mpi.h:1083:25: error: expected identifier or ‘(’ before ‘int’
    /usr/include/mpi.h:1097:25: error: ‘MPI_Comm’ redeclared as different kind of symbol
    In file included from buf_for_proc.c:63:0:
    /usr/include/mpi.h:319:37: note: previous declaration of ‘MPI_Comm’ was here
    In file included from buf_for_proc.c:63:0:
    /usr/include/mpi.h:1099:34: error: expected declaration specifiers or ‘...’ before ‘MPI_Comm’
    /usr/include/mpi.h:1100:38: error: expected declaration specifiers or ‘...’ before ‘MPI_Comm’
    /usr/include/mpi.h:1102:44: error: expected declaration specifiers or ‘...’ before ‘MPI_Comm’

    A. Please add -DMPI2_SUPPORT to the DM_CC line in configure.wrf. Do a 'clean -a' first, then recompile.

    Q. When I compile WRF/WPS, there is an error saying undefined reference and redefined big_endian. What is wrong?

    A. You need to add a compile option to ensure data are written out in big-endian format. Check configure.wps to see what option is used for the Fortran compiler. It could be -byteswapio for the PGI compiler, '-convert big_endian' for ifort, or '-fendian=big' for g95.

    Q. When I compile WRF, in the linking phase there are some undefined references to 'curl_version_info.' How do I fix this problem?

    A. When installing netCDF4, you must do it without activating any of the new netCDF-4 features (such as HDF5 support). To do so, simply reconfigure netCDF with '--disable-netcdf-4'.

    Q. I keep getting the error "can't open module file module_ext_internal..." What is the problem?

    A. Please do the following:

    1. Type 'clean -a' first to get the code back to its original state.
    2. Type 'configure' and select a compile option.
    3. Type 'compile -j 1 em_real >& compile.log'.


    The option -j 1 will compile serially, one routine at a time.

    Q. How do I turn off optimization and add debugging options in WRF?

    A. Type './configure -d' to turn off optimization and add debugging options.
    For the PGI compiler, to catch any errant memory accesses you will also need to add the flags
    -Mbounds -Mchkptr, which catch out-of-bounds array accesses and uses of null pointers.

    Manually add (in configure.wrf): FCOPTIM         = -Mbounds -Mchkptr  #        -O2 -fast

    The debug option that can be used on the IBM with the xlf compiler:
    FCBASEOPTS      =       -w -qspill=20000 -qmaxmem=32767 $(FCDEBUG) $(FORMAT_FREE) $(BYTESWAPIO) -qinitauto=7FF7FFFF -qsigtrap -C

    Q. What is the difference between Serial, SMPar and DMPar compiles of WRF?

    A. Serial is for a single CPU, SMPar is for multi-core/multi-CPU shared-memory machines, and DMPar is for clusters. "SMPar" means "Shared-Memory Parallelism." In practice this means OpenMP directives are enabled and the resulting binary will only run within a single shared-memory system. "DMPar" means "Distributed-Memory Parallelism," which means MPI is used in the build. The resulting binary will run within and across multiple nodes of a distributed-memory system, better known these days as a cluster. You can also configure for a build that includes both SMPar and DMPar. The resulting binary can be run hybrid, using OpenMP for parallelism within nodes (total or partial) and MPI across nodes. Getting a hybrid build to have its OpenMP threads and MPI tasks placed properly on the processors can be tricky, though.

    Q. Do I need to build WPS with MPI/OMP mode?

    A. No, you do not need to. Even for multiple domains, WPS runs quickly enough that serial mode is sufficient, unless you are processing data over multiple domains with hundreds of grid points. WPS does not support OpenMP compilation.

    back to top



    1.2 WRF Physics

    1.2.1 Surface and PBL Schemes

    A. ALBD: Surface albedo     
    SLMO: Surface moisture availability    
    SFEM: Surface emissivity     
    SFZ0: Surface roughness     
    THERIN: Surface thermal inertia     
    SFHC: Soil heat capacity  
    A. No, SCT_DOM represents dominant soil categories, while LU_INDEX represents dominant vegetation types. SCB_DOM is the field of dominant soil categories in the bottom layer of the soil data, and SCT_DOM is the field of dominant soil categories in the top layer. Most schemes use the top-layer value.
    A. Yes, TSK is used in the PBL and radiation codes. The 2 m fields (T and Q) do not affect model-level fields: they are diagnosed from model-level fields and surface (skin-level) fields, which come from the land model. In WRF, the surface and 2 m diagnostics are computed in module_sf_sfcdiags.F, which is called from module_surface_driver.F after the Noah LSM.
    A. LANDUSE.TBL was used by all LSM options prior to version 3.1. VEGPARM.TBL is used by the Noah and RUC LSMs. The PX LSM uses data defined in module_sf_pxlsm_data.F, and SSIB uses data defined in module_sf_ssib.F. The model reads fields from LANDUSE.TBL first, but they may be overwritten by values in VEGPARM.TBL if there is an overlap.
    A. The roughness was given by the table value from LANDUSE.TBL in versions of WRF prior to 3.1. Since 3.1, it has been calculated for the Noah LSM from the vegetation fraction and the min and max values defined in VEGPARM.TBL. Look for details in phys/module_sf_noahlsm.F. The simple slab model (sf_surface_physics=1) still uses the values defined in LANDUSE.TBL.
    A. Generally it computes 10m wind based on Monin-Obukhov similarity theory. For MYJ PBL, please see phys/module_sf_myjsfc.F. For the MM5 surface layer option, see phys/module_sf_sfclay.F. The ARW Tech note should have the published papers listed if you would like to read more.
    A. The error means that your lowest model level is too low to accommodate the urban canopy model. The ZR value is set in URBPARM.TBL. If your lowest level (0.9975, middle of 1 and 0.995) is lower than 10 + 2 m, the code will complain. The solution is either to reduce ZR in the TBL file, or raise your lowest model level.  
    A. In general you cannot use the sst_update option with a moving nest, since you cannot have SST pre-processed for the moving domain. If you have SST updated in the fixed domains, it can only affect the moving nest at its leading edges, where SST from the coarse domain is interpolated. It cannot influence the center part of the moving domain at the SST update time. Once the nest moves over an area previously occupied by the coarse domain, the updated SST will have some impact. Since V3.4, there is a way to do this.
    A. You need to make a small change in dyn_em/module_initialize_real.F. Look for this loop:

          DO j = jts, MIN(jde-1,jte)
             DO i = its, MIN(ide-1,ite)
                IF ( ( grid%landmask(i,j) .LT. 0.5 ) .AND. ( flag_sst .EQ. 1 ) .AND. &
                     ( grid%sst(i,j) .GT. 170. ) .AND. ( grid%sst(i,j) .LT. 400. ) ) THEN
                   grid%tsk(i,j) = grid%sst(i,j)
                ENDIF
             END DO
          END DO

    and add your sst change here. This will work if your met_em files contain an SST field, or if your netcdf file global attribute contains FLAG_SST = 1. 

    One can also use NCL or read_wrf_nc.f to modify gridded data.

    A. The tmn_update option was added to improve long simulations. This option allows the lower boundary condition for the Noah LSM to be updated from the averaged forecast skin temperature over the past 150 days (the lagday option). When this option is turned off, the climatological deep soil temperature is used, which means you have the same lower boundary for the Noah LSM every year.
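    As a rough sketch of where these settings go (assuming the option names used in recent versions; check the namelist description for your release), the relevant &physics entries would look like:

    &physics
     tmn_update = 1,
     lagday     = 150,
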
    A. By 'small', we mean that seaice_threshold should be smaller than any of the SST values, so that when you have good seaice input, the threshold will have no effect. Using seaice_threshold is an alternative to setting seaice directly, because not all datasets provide seaice.
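    For illustration, a hedged &physics sketch; the value shown is only an example of something below any realistic open-water SST:

    &physics
     seaice_threshold = 100.,
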
    A. To run the model without surface fluxes, you can simply set the namelist variable isfflx = 0. Please note this option only works for surface physics options 1, 5, 7, and 11.
    A. When you turn off mp heating, we usually recommend that you turn off the cumulus scheme as well - otherwise it doesn't make physical sense.
    A. Please refer to these documents on the following pages and see if they are helpful:

    http://www.ral.ucar.edu/research/land/technology/urban.php

    http://www.rap.ucar.edu/research/land/technology/lsm.php  
    A. The exchange coefficients are computed in the surface layer and PBL schemes. For example, for the MYJ PBL they are calculated in myjsfc (phys/module_sf_myjsfc.F), and for the YSU PBL in sfclay (phys/module_sf_sfclay.F). The variable names are chs, chs2, and cqs2.
    A. The vertical thermal diffusivity in MYJ, YSU, QNSE, MYNN and BouLac comes out in exch_h. WRF does not output the eddy viscosity. It could be extracted from the MYJ PBL with some coding similar to that for exch_h in that module, plus a new array. AKMK is a local array in the MYJ PBL that is proportional to the eddy viscosity. The XKMV and XKHV arrays are used by other PBL options, not MYJ.
    A.   RUBLTEN, RVBLTEN, RTHBLTEN, RQVBLTEN, RQCBLTEN, RQIBLTEN are coupled tendencies of wind, temperature, vapor and cloud due to PBL parameterization.
    A. Since WRF V3.1, there is a calculation for the surface energy budget for the Noah LSM. You can find the variable, noahres, and its calculation in phys/module_sf_noahdrv.F.  
    A. The TKE field name is tke_pbl (grid%tke_pbl) since WRF V3.3; in previous versions it is grid%tke_myj. grid%tke_1 in solve_em represents another TKE field, which you would only invoke when the grid size is very small (e.g. in the LES range). The physics calls have been moved out of solve_em.F itself and are now in module_first_rk_step_part1.F; solve_em calls first_rk_step_part1.
    A. It is possible to fix this by re-running metgrid with an edited METGRID.TBL. This file is located in the metgrid/ directory. Search for RH, and edit the following line in that section to say:

            fill_lev=200100:const(91)

    Alternatively, when you run real.exe, add the following line in the &domains section of the namelist:

     use_surface = .false.,

    The problem is that the surface moisture field is filled with value 0, and when doing vertical interpolation, that affects the lowest model level. Either of the above can help with the problem. We will be looking for a more permanent fix in the future.
    A. SNOWC is a flag value (either 1 or 0) that indicates whether there is snow cover.
    A. If you are interested in changing the innermost nest, make the change in wrfinput_d0X (where X is your inner nest domain ID). When you run wrf.exe, you should use input_from_file = .t., .t., .t., so that the model uses data from the nest domain input. If you made changes in the geogrid files (geo_em.d0X.nc), set surface_input_source = 3 before running program REAL.
    A. To turn off latent heating, there is a namelist switch: no_mp_heating = 1. This needs to be used with cu_physics  = 0. If you would like to turn off surface sensible and latent heat fluxes, set isfflx = 0. This may not work with all surface options, but it works for some of the common ones.
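    Putting these switches together, a minimal &physics sketch for a two-domain run (all other physics settings omitted) might look like:

    &physics
     no_mp_heating = 1,
     cu_physics    = 0, 0,
     isfflx        = 0,
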
    A. There are several ways you can do this.

    1. Use SKINTEMP from a regular GFS forecast. This means you will need to run a separate ungrib job just for SKINTEMP from that dataset.

    2. Create a Vtable that will ungrib a surface temperature field from your dataset, and name it 'SKINTEMP' for metgrid.

    3. If you are using V3.1.1 or a later version, SKINTEMP is no longer mandatory. You may run the utility program avg_tsfc.exe multiple times and create an averaged surface temperature field, TAVGSFC. Add this field to the &metgrid section in namelist.wps:
    &metgrid
     fg_name         = 'FILE'
     constants_name  = './TAVGSFC'

    Also edit metgrid/METGRID.TBL. Find the section for SKINTEMP and remove 'mandatory=yes'. This should allow you to finish metgrid.exe, and later when you run real.exe, it will use this field instead of SKINTEMP. It is OK to do any of these since SKINTEMP is not critical to the model, and it will be modified quickly during model integration.
    A. The WRF model physics do not predict sea-surface temperature, vegetation fraction, albedo or sea ice. For long simulations, the model provides an alternative to read in these data and update the fields. In order to use this option, one must have access to time-varying SST and sea ice fields. Twelve monthly values of vegetation fraction and albedo are available from the geogrid program. Once these fields are processed via WPS, one may activate the following options before running programs real.exe and wrf.exe:

    sst_update = 1
    auxinput4_inname = "wrflowinp_d<domain>" (created by real.exe)
    auxinput4_interval = 360, 360, 360,

    A. The sst_skin option needs to be used together with the sst_update option prior to version V3.4. In other words, one must set sst_update = 1 in order to use the sst_skin option. To properly use the sst_update option, one needs to have access to a time-varying SST field. If you have it, then you can set sst_update = 1 when you run real, which produces a wrflowinp_d* file for use in the model. If you do not think you need to have SST updated during your 40 - 50 day run, you can modify the code (phys/module_surface_driver.F) and move the call to sst_skin_update outside the sst_update conditional (this is necessary only for versions prior to WRFV3.4).
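    For versions prior to V3.4, a hedged sketch of the combined namelist settings (the auxinput4 entries follow the sst_update example given above):

    &physics
     sst_update = 1,
     sst_skin   = 1,
    &time_control
     auxinput4_inname   = "wrflowinp_d<domain>"
     auxinput4_interval = 360, 360, 360,
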
    A. If your input data does not have any surface fields (e.g. 2 m T, 10m U, V, etc.), you can simply edit metgrid/METGRID.TBL, and add a line like the following for TT, QV, UU and VV:

             fill_lev=200100:TT(X)

    where X will be the number of the lowest model level in your input data. For example, you can set

    fill_lev=200100:TT(100000)

    In this example, the 1000 mb temperature is filled in as the surface temperature. In program REAL, you should then set use_surface = .false.

    A. RC = RSMIN / (LAI * stress functions): the leaf minimum resistance (RSMIN) is increased by the stress functions and then divided by LAI to obtain the canopy resistance, so for a dense canopy the canopy resistance is smaller than a single-leaf resistance.
    A. If you map data to the 24-category USGS dataset, then yes, the albedo is set at the model start time, with the initial value depending on the land-use type. However, the Noah LSM (since V3.1) no longer uses the albedo (or any other land properties) from LANDUSE.TBL. It uses values from VEGPARM.TBL: albedo is computed from the vegetation fraction and is bounded by the max and min values given in VEGPARM.TBL.
    A. WRF can do both real and ideal LES cases; however, for ideal cases you need to compile the ideal LES case, select periodic boundary conditions, and use single profiles of wind, temperature, and mixing ratio or dewpoint temperature/RH (please check), etc. as input. For real LES, you need to be cautious about case selection. It is computationally intensive, as your time scales are very small (maybe a second or less, depending on the resolution you run at). To have some realistic turbulence, you need to go down to very high resolution, like 30-60 m. When you run LES, you are explicitly resolving the eddies that carry most of the energy (i.e., you resolve a part of the turbulence spectrum that is otherwise not resolved in a mesoscale model). Some of the PBL schemes are derived from LES studies. It is a good idea to use YSU in the outer domain when you run LES in the inner domain. In the upper levels you need at most 500 m resolution, which means many vertical layers and increased compute time. You could use a PBL scheme in the coarser domains and LES in the finer ones.
    A. The LSM modules in WRF will update ice temperature, skin temperature, snowpack water content, snow depth and all surface energy terms. However, they do not include any physics for seaice generation, disappearance, or thickness change.

    back to top


    1.2.2 Radiation, Cumulus and Microphysics Schemes

    A. They should be divided by (mu + mub).
    Q. How do I estimate the heating profile?
    A. You need to add up RTHCUTEN (from cumulus), RTHRATEN (from radiation), RTHBLTEN (from the PBL parameterization), RTHNDGDTEN (if nudging is applied) and H_DIABATIC (from microphysics).
    A. Please see module_mp_radar.F in WRFV3.4.1.
    A. If you are using cu_physics option 1 or 2, then it is possible to output HTOP and HBOT for convective clouds. These are level numbers, and one has to match them with the height values in post-processing. These variables are not currently in the wrfout file. To output these variables, you need to edit Registry/Registry.EM (or Registry.EM_COMMON since V3.4) and change the lines for HTOP and HBOT to

    state    real    HTOP            ij     misc        1         -      rh        "HTOP"                 "TOP OF CONVECTION LEVEL"         ""
    state    real    HBOT            ij     misc        1         -      rh        "HBOT"                 "BOT OF CONVECTION LEVEL"         ""

    Type 'clean -a', and recompile. Note also that these do not include information from explicit precipitation, though it may not matter much.
    A. There is an option similar to IFDRY in MM5 (fake dry), which is no_mp_heating (in &physics). You will also need to set cu_physics = 0. This option turns off heating from microphysics, but water vapor etc. will still be advected in the model and clouds will be produced.
    A. The height of the eta levels will change during model integration, since the eta levels are defined with dry hydrostatic pressure. As the pressure changes from time step to time step, the height of the levels changes with it. At each model time, (PH+PHB)/9.81 gives an estimate of the level heights. If you have many levels near the surface and you are near topography, this could severely limit the time step you are using (hence CFL errors appear).
    A. w is the true vertical velocity by its conventional definition (dz/dt). WRF has another variable ww that is used for advection and accounts for the coordinate.
    A. This error is generated specifically by the KFeta cumulus scheme. It occurs because the presumed parcel path for deep convection exceeds the model top. This can happen because the low level instability is too large (SST too large and low level lapse rates too steep). It can also happen if the model top is too low (200 mb or sometimes 100 mb). This can also be a problem if the model is unstable and the vertical velocity is too large due to CFL condition violations (these warnings should appear in the rsl output files). A couple of things to try:

    1. If the model stops only after a few time steps, it is likely that the input data may have a problem. Check the input data, including TSK, TSLB.

    2. If you see many CFL warnings, decrease the time step, e.g. from 6 x dx to 4 x dx (dx in km, time step in seconds).

    A. If you set mp_zero_out = 1, it checks all of the microphysics species except vapor. If the value is set to 2, then it checks everything including vapor. You also need to set mp_zero_out_thresh for the value it checks against, but we really recommend using the positive-definite option (pd_moist = .true. for code prior to v3.1, and moist_adv_opt = 1 for v3.1 and later code). This option should keep these variables positive-definite.
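    As an illustration (the threshold shown is only an example value, and these options are documented under &dynamics in recent versions; check the namelist description for your release):

    &dynamics
     mp_zero_out        = 2,
     mp_zero_out_thresh = 1.e-8,
     moist_adv_opt      = 1, 1,
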
    A. cudt should be set to zero for all cumulus schemes except KF-eta.

    cudt is used independently of adaptive time stepping. The GD and G3 schemes do not use it.

    back to top


    1.2.3 FDDA and OBS Nudging

    A. Nudging is done at every model time step. The data are interpolated to every model time step and used to nudge. Please note that gfdda_interval is the input data interval, not the nudging interval.
    Q. How do I set fdda_start and fdda_end for an observational nudge? How about the variable obs_ionf?

    A. fdda_start and fdda_end are starting and ending times for nudging. For example, you can set fdda_start =0 and fdda_end = 1440 for 24-hr nudging. The namelist variable obs_ionf is actually the frequency in timesteps for reading obs input and calculating innovation. A compatible setting for auxinput11_interval_s would be to simply set it to the same value as the namelist variable time_step. This simply acts as an "alarm time," so that WRF will check at multiples of this time interval to see if a read is needed (per obs_ionf). The read will be done at the obs_ionf frequency. Because there are probably not many studies done at this kind of model resolution with nudging, you may have to do some experimentation. The general concept is that one should only assimilate data that represents the weather on the scales that can be resolved by the model. For example, 1-min data may contain information about turbulence, which the model will not resolve. Hence forcing it into the model will only cause problems. It will also depend on what type of data you are assimilating. If it is only point data, you may not be very successful in assimilating it.   If the coverage is sufficiently large, then the chance for success is greater.
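    A hedged sketch of the related namelist entries for a 24-hour observation nudging period on two domains (values are purely illustrative, and a 60 s time_step is assumed so that auxinput11_interval_s matches it):

    &fdda
     obs_nudge_opt = 1, 1,
     fdda_start    = 0., 0.,
     fdda_end      = 1440., 1440.,
     obs_ionf      = 2, 2,
    &time_control
     auxinput11_interval_s = 60, 60,
     auxinput11_end_h      = 24, 24,
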
    A. A moving nest can only move one coarse grid point at a time, whether it is a specified or an automatic move. This capability is only useful for tracking tropical storms, and when used for tracking tropical storms, one should use the automatic moving nest option. You may change the maximum number of moves from 50 to 1000 when the specified move is used. This is done in frame/module_driver_constants.F.
    A. Yes you can, but using FDDA over a fine-grid domain may result in overly smooth fields without a solid physical basis (e.g. in the PBL). Typically it is better to use analysis nudging over coarser domains (dx > 30 km) and obs nudging for finer domains.
    A. No, you cannot, because the 'ndown' program does not support the option to generate a wrffdda file.
    A. For example, say you have an observation file with 12-hour-interval radiosonde soundings. If the nudging time is 1200 UTC and you set obs_twindo = 0.666667, then WRF will use data in the time window 11.33-12.67 UTC (decimal hours) to nudge.
    A. The wave number defined here is simply (domain width in x or y) / (wavelength to be nudged), so if you would like to nudge any waves longer than 1000 km in your 6000 km domain, then your wave number would be 6.
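    For example, a hedged &fdda sketch for spectral nudging of waves longer than 1000 km in a 6000 km by 3000 km domain (grid_fdda = 2 selects spectral nudging; the usual gfdda input file and interval settings are omitted here):

    &fdda
     grid_fdda = 2,
     xwavenum  = 6,
     ywavenum  = 3,
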

    back to top


    1.2.4 Miscellaneous Questions

    A. They can be computed from (P + PB) * 0.01 (for pressure, in hPa, at half-sigma levels)
    and (PH + PHB) / 9.81 (for geopotential height at full levels). The model levels are not at constant pressure or height surfaces because they follow the terrain.
    A. Both have moving-nest capability. The preset-move option requires the user to know a point toward which the feature of interest will be moving, and to specify a set of increments to the nest location that will allow the nest to stay centered over that feature. The vortex-following option makes the nest automatically adjust its position as the vortex moves.
    A. One of the things you could check is the range of your map scale factors (MAPFAC_M, for example). A good setup should minimize the range of values. If you have values much larger than 1, that will limit your time step more.
    A. X/Y means the array is along the x/y-direction, and S/E mean the starting and ending rows.
    These are values of U at the valid boundary time. A field name containing the letter T holds the change (tendency) of the field over the period up to the next valid boundary time. The following site has a document regarding the lateral boundaries, which may be helpful:

    http://www.mmm.ucar.edu/wrf/WG2/software_2.0/index.html
    A. The nested domain may be forced to move if its child domain (which is specified to move) moves too close to its boundary. The namelist variable corral_dist is used to set the maximum number of grid points allowed between a nested domain boundary and its parent's boundary. The default value is 8 parent grid points.
    A. Prior to V3.4, this executable must be compiled serially with the no-nesting option; since V3.4, it must be built with the serial compile option.
    A. Typically the model requires anywhere between 6 - 12 hours to fully spin up. It will depend on the grid distances and time steps: the finer the grid size, the more time steps you have in a given time window, hence faster spin-up. You may calculate the kinetic energy spectra at various model times to see how the model spins up (see Skamarock, W. C., 2004: Evaluating Mesoscale NWP Models Using Kinetic Energy Spectra. Mon. Wea. Rev., 132, 3019-3032). You may also compute d(surface pressure)/dt to see how the values settle down in time by turning on the option diag_print. One way to shorten the spin-up time is to use digital filter initialization.
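    For reference, diag_print is a single namelist switch (listed under &time_control in recent versions; check your release):

    &time_control
     diag_print = 1,
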
    A. These messages indicate that the skin temperature at some grid point(s) exceeds 340 K, which is physically possible in hot desert regions in summer (e.g. the Sahara and Arabian deserts). If part of your model domain covers one of these deserts and you are running WRF in the summer season, it is normal for these messages to occur in the course of a successful run. They do not necessarily indicate that the model is becoming unstable. If the reported values frequently exceed 345 K, you may want to increase the frequency of radiation calls (set radt to a smaller value). If you see these messages during a run in which skin temperatures above 340 K are implausible (e.g. in the winter season, or when the domain does not cover any desert regions), then something is going wrong; in such cases the model may be blowing up for a reason completely unrelated to the TBOUND messages.
    A. It should coincide with the finest domain resolution (1 minute per km dx), but it usually is not necessary to go below 5 minutes. All domains should use the same value, so that radiation forcing is applied at the same time for all domains.
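    As an illustration, for a run whose finest domain has dx = 3 km, one might set the same value for all domains in &physics:

    &physics
     radt = 3, 3, 3,
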
    A. These netCDF complaints are not a problem; they are not errors. If you set debug_level = 0, they will go away.
    A. Simply add "all_ic_times = .true.," in namelist.input (&time_control).
    A. If you compiled wrf.exe with the OpenMP option (smpar), try to use the following options in the &domains section of the namelist to run it in OpenMP:

     tile_sz_x = 20,
     tile_sz_y = 3

    for the 2d in x case.

    back to top



    1.3 WRF Run-Time Problems

    A. Each decomposed domain should have no fewer than about 15 x 15 grid points per processor. For example, a 450 x 300 domain should use at most roughly (450/15) x (300/15) = 600 processors. Increasing the number of processors beyond this will not improve performance, because the workload per processor is so small that any improvement is overshadowed by the increase in communication. In addition, if too many processors are used, some dimensions of the decomposed domains may become too small for the model to run.
    A. The proper way to stop the model is to use WRF calls to wrf_error_fatal. If you grep for it in, say, the phys/ directory, you will see how it is used. This will kill the model on all processors.
    A. The 'm' indices (ims, jms, kms) are the memory dimensions of the decomposed domains, which include a communication (halo) zone, and the 'p' indices (ips, jps, kps) are the physical (patch) sizes of the decomposed domains. More details can be found on the following page:

    http://www.mmm.ucar.edu/wrf/users/tutorial/tutorial_presentation_winter.htm      

    A. Part of the explanation may be that when you run an idealized case, you have no input for the nest; hence the model interpolates everything from the coarse domain to the nest, including latitude and longitude. If you set feedback = 1, nest data will overwrite the coarse domain in the area of the nest, including XLAT and XLONG. This is necessary since the calculated values from the nest replace the coarse-domain values (because of radiation, for example). Unless the interpolation is really bad, one would think an odd ratio should do reasonably well, since nest and coarse-domain grid points coincide. For an even ratio there are no co-located grid points, so some kind of averaging is done when feedback is on.
    A. Typically the second full level should be set to 0.993-0.996.
    A. You will need to add these variables to Registry.EM (Registry.EM_COMMON since V3.4), then recompile and revise namelist.input by adding the options below:

    auxhistN_outname = "rainfall_d<domain>"
    auxhistN_interval = 10, 10,
    frames_per_auxhistN = 1000, 1000,
    io_form_auxhistN = 2

    Note N is the stream number for your extra output variables.

    A. Without quilting, and with regular netCDF, the head processor will gather and write out the data. With quilting, N processors will be set aside to do the data gathering (while the head processor writes). For a large number of grid points, this can improve wallclock time.
    A. A segmentation fault error often means there is a memory issue. Try typing one of the following to see if it helps:

    1. setenv MP_STACK_SIZE 64000000 (or OMP_STACKSIZE, depending on the system)

    2. If you are using csh or tcsh, try this: limit stacksize unlimited

    3. If you are using sh or bash, use this command: ulimit -s unlimited

    This may not solve your problem but the default stack size is often quite small and may result in segmentation faults due to insufficient stack memory.

    A. The default output format of the ts (time series) is a simple text file, one per station per domain.  
    A. Try reducing the time step; sometimes this works, but not always. Another thing you could try is to add:

    smooth_cg_topo = .true.

    under &domains in the namelist when you run real. This smooths the model topography to match the low-resolution topography that comes with the driving data, which helps if the CFL violations happen along the boundary zones. If the CFL violations occur near complex terrain, you may try setting epssm = 0.2 (up to 0.5) to see if that helps.
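    For reference, a hedged sketch of where these two settings live (smooth_cg_topo is read by real.exe, and epssm by wrf.exe):

    &domains
     smooth_cg_topo = .true.,
    &dynamics
     epssm = 0.2,
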
    A. 1) Your machine does not have enough memory to run real.exe on your domain.

    2) If your machine has enough memory and multiple CPUs, but a single CPU can't access all of the memory, you need to build WRF with the dmpar (MPI) option and run it on multiple processors (mpirun -np N real.exe with N > 1) to utilize more of the available memory.

    A. To find shared libraries at run-time, the Unix operating system usually looks at the environment variable LD_LIBRARY_PATH to search the directories in the path for the library. 
    So one should try issuing these commands:
    echo $LD_LIBRARY_PATH
    If the path does not include /usr/local/lib, then (in ksh, for example):
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
    You can put this in your login script so that you do not have to issue this command every time.

    A. Local variables will not be seen by the output routines. You will need to declare the variables you would like to see in the wrfout file as state variables in the Registry. These variables then become available to solve_em.F, which calls microphysics_driver. You will need to pass the new variables down to the driver, and then down to the physics scheme called in the driver. These state variables should be declared as 'OUT' variables and dimensioned by the patch dimensions (ime, jme). You then need to fill these arrays with the fields you want to output. The 'Registry and Examples' material from the tutorial lectures can be helpful too (http://www.mmm.ucar.edu/wrf/users/tutorial/tutorial_presentation_summer.htm).
    A. The namelist variable 'do_coriolis' is not used in the code. You can set grid%f = 0 to turn off the Coriolis forcing in module_initialize_<ideal-case>.F.
    A. This option only works for a 32-bit system. It fails on some 64-bit systems, and unfortunately the developer does not yet know what the problem is.  
    A. You can use the namelist option mp_zero_out to remove negative moisture variables. Negative moisture will not affect model integration in general, as far as we know. Using the moist_adv_opt = 1 option will keep moisture variables positive-definite.
    A. This is possibly because your wrfout/wrfrst file is too large (i.e. larger than 2 GB). In the C shell you can use the following command to enable netCDF large-file support:

    setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1

    A. Try to add the following in &time_control:

    auxinput11_interval_s = 180, 180, 180, 180, 180,
    auxinput11_end_h = 6, 6, 6, 6, 6,

    and change the values as appropriate for your case. The interval here should be smaller than the data interval in your obs file.
    A. The namelist options nproc_x and nproc_y can be used to do so, if you compile with dmpar. You can use numtiles if you compile WRF with OpenMP.
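    For example, a hedged &domains sketch for a 64-process MPI run (nproc_x * nproc_y must equal the total number of MPI tasks), or an OpenMP run using 8 tiles:

    &domains
     nproc_x  = 8,
     nproc_y  = 8,
     numtiles = 8,
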
    A. Check your output and see if tmn is filled with reasonable values. This may be triggered because tmn (which is computed in geogrid from low-resolution (1 degree) data) does
    not match the landmask coming from higher-resolution data (e.g. 30 arc-second).
    A. To define a time step smaller than 1 sec, use all three of the following in the namelist:

    time_step                = 0,
    time_step_fract_num      = 1,
    time_step_fract_den      = 3,

    For example, the above sets the time step to 1/3 sec.

    back to top

     

    1.4 Adding New Features in WRF

    A. To create a new idealized case, you need to create a new module_initialize_<case-name>.F, modify dyn_em/Makefile (see the examples inside the file), and modify the top-level Makefile (the Makefile in the WRFV3 directory - again, see the examples inside the file). You should also add a new test case directory in the test/ directory. No Registry change is needed here. If you would like to input a 2D or 3D file instead of a single sounding, you can structure your data any way you like, since you can write your own code to read it (just like read_input_jet in module_initialize_b_wave.F). If you can run the initialization module on a single processor, then you do not need to worry about MPI issues. If you do want to consider that option, then you need to be careful how you read the data (you can most likely direct that part of the program to use the monitor processor only), and broadcast the data to the other processors.
    A. Please use the module share/wrf_timeseries.F as a reference. It would be easiest to simply fill one of the existing time-series output arrays (e.g. tslb) with your own information; all model data are available in that routine. If you would like to add a new one, you will need to modify Registry.EM (or Registry.EM_COMMON in V3.4 or later), and then make more changes in wrf_timeseries.F. Here is an example. Suppose you want to add ALBEDO and SWDOWN to the WRFV3.2 time series output.

    (1) revise share/wrf_timeseries.F:


    - after line 337 add (in #if EM_CORE block):
            grid%ts_albedo(n,i) = grid%albedo(ix,iy)
            grid%ts_swdown(n,i) = grid%swdown(ix,iy)

    - after new line 442 add (in the #if EM_CORE block):
      ts_buf(:,:) = grid%ts_swdown(:,:)
      CALL wrf_dm_min_reals(ts_buf(:,:),grid%ts_swdown(:,:),grid%ts_buf_size*grid%max_ts_locs)
      ts_buf(:,:) = grid%ts_albedo(:,:)
      CALL wrf_dm_min_reals(ts_buf(:,:),grid%ts_albedo(:,:),grid%ts_buf_size*grid%max_ts_locs)

    - after new line 493 add (in the #if block, in the write statement):
                    ..., &
                                 grid%ts_albedo(n,i),               &
                                 grid%ts_swdown(n,i)

    (2) revise Registry/Registry.EM (or Registry.EM_COMMON)
    - after line 580 add
    state    real   ts_albedo       ?!     misc      -       -      -      "TS_ALBEDO"      "Albedo"
    state    real   ts_swdown     ?!    misc      -     -      -      "TS_SWDOWN"      "shortwave down"

    Remember the writes should be added in the #if (EM_CORE == 1) block, and hence after line 485. You will also need to change the format statement from 14(f13.5,1x) to 16(f13.5,1x).

    A. These three variables are declared as namelist variables in Registry.EM (or in Registry.EM_COMMON in V3.4 or later). If you'd like to pass them to SFLX, you will need to modify the following modules:

    dyn_em/module_first_rk_step_part1.F (argument list)
    phys/module_surface_driver.F (argument list, variable declaration, and argument list to the lsm call)
    phys/module_sf_noahdrv.F (argument list, variable declaration)
    phys/module_sf_noahlsm.F (argument list, variable declaration, and use of these variables)

    There are examples you can follow in module_first_rk_step_part1.F: look for the variables with config_flags, which are all namelist variables. You can see how they are passed from module_first_rk_step_part1 to surface_driver, etc.
    A. You can revise Registry.EM (prior to WRFV3.3) or Registry.EM_COMMON (since WRFV3.4) by adding 'h' in the 8th column of the line corresponding to the variable you want to output. You will then need to do a 'clean -a' and then reconfigure and recompile WRF.

    Another way is to adopt the run-time output method. See http://www.mmm.ucar.edu/wrf/users/docs/user_guide_V3/users_guide_chap5.htm#runtimeio

    A. You can refer to phys/module_diagnostics.F to see how some variables are written out at every time step. This module is called by solve_em.F.
    A. Here is an example of a 2d state variable declared in registry (Registry.EM) from V3.1 code:

    state    real   T2      ij     misc        1         -     irhd      "T2"                   "TEMP at 2 M"       "K"

    to change it so that it is not written to the normal history file, but to another output stream (say, I/O stream 3), this line would become

    state    real   T2     ij     misc        1         -     irh3d      "T2"                   "TEMP at 2 M"       "K"

    (simply add 3 after the letter 'h' in the 8th column). To do this for all 2D state variables, you will need to find those in the Registry.EM file (or Registry.EM_COMMON in V3.4 and later).

    These are the namelist items needed:

     auxhist3_outname = "my_output_d<domain>_<date>"
     auxhist3_interval = 30, 30,
     frames_per_auxhist3 = 1000, 1000,
     io_form_auxhist3 = 2

    Recompile after 'clean -a', and make the run.

    A. You need to add the GRIB parameters of the new fields to the Vtable so that ungrib will extract these new fields and write them to the intermediate file. Metgrid will process every field in the intermediate file.

    In order for real to recognize the new field, you need to first work with the Registry.EM (or Registry.EM_COMMON in V3.4 or later). Look for an example of field declaration for UU (a WPS variable) in the file. You can follow this to correctly declare your own, but you may need to modify some code as well. See main/real_em.F, and look for grid_fdda, which will show you some code. See if you can follow it. Once you get real to work with your data, you will add your own code to use the data in the model.

    A. On some machines such as IBM, you can always find the CPU and Wall Clock time the job takes from the output file. On other machines, like Linux, you can estimate the Wall Clock time by looking at the time when the first and last wrfout files are generated. The difference in between is a rough estimate of the Wall Clock time the job takes.

    back to top


    2. WPS QUESTIONS

    2.1 Input Data

    A. We put in an "option" so that the wind vectors can be specified in Earth coordinates, at least as far back as V3.1. The user may specify winds in Earth coordinates and have the WRF model internally rotate them to the WRF grid. To activate this capability, the user must specify a u- and v-component QC flag value of 129 for each wind vector that is to be rotated; otherwise, the model assumes the wind vector to be WRF grid-relative.
    A. When RH data are missing for an entire level, ungrib fills in the values. Since V3.0, if RH is missing from 300 to 100 mb, the 'real' program fills it with 10%, and for pressures less than 100 mb the values are the pressure values / 1000 - this gives 7% at 70 mb and 5% at 50 mb. Since V3.2, RH is filled to 5% from 300 to 70 mb and 0% from 70 mb to the model top. If you would like to change this, please see ungrib/src/rrpr.F.
    A. This is because we calculate the percentage of each landuse type inside the model grid, then determine the type as the one with the largest percentage.
    A. MODIS has been flying for several years now, giving us different land-use datasets for different years. The data contained inside the "standard" MODIS_geog.tar.gz has timestamps from Oct 2008, while the newer landuse_with_lakes.tar.gz has timestamps from Mar 2011.
    A. To do any masked interpolation correctly, you will need a field that defines the landmask of the input data. If you look at any of the Vtables, we name the landmask field from the incoming data LANDSEA. You need a pair of these landmask fields to do masked interpolation properly. If you do not yet have a landmask field from your data, you should create one and call it LANDSEA. You can use this as a constant input for metgrid (use the namelist variable 'constants_name'). Then in your METGRID.TBL, you should set (using SST as an example):

    name=SST
            interp_option=nearest_neighbor
            masked=land
    #        masked=both
    #        interp_land_mask  = LANDMASK(1)
            interp_mask = LANDSEA(1)
            fill_missing=273.
            flag_in_output=FLAG_SST

    In addition, if your SST is masked by some unique value over land (such as e-30), then having a landmask field is important. Having a matching landmask field for the SST ensures that SST is properly interpolated along water/land boundaries.
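    The corresponding namelist.wps entry would look roughly like this (the path to the LANDSEA intermediate file is just a placeholder):

    &metgrid
     fg_name         = 'FILE'
     constants_name  = './LANDSEA'
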
    A. Try to add this line

            missing_value=-1.E+30

    to name=SM000010, SM010200, ST000010, and ST010200 in METGRID.TBL so that they look something like this:

    name=SM000010
            interp_option=sixteen_pt+four_pt+average_4pt+search
            masked=water
            interp_mask=LANDSEA(0)
            fill_missing=1.
            missing_value=-1.E+30
            flag_in_output=FLAG_SM000010

    Re-run metgrid.exe, and see if you can get rid of the bad values in soil temp and moisture fields.
    A. WPS rotates the boundary-condition meteorological wind fields from source-grid-relative to earth-relative by having the logical flag 'is_wind_grid_rel' set to .true. in several places in the WPS source code. It is set automatically based on the input data, so you do not need to change anything.
    A. The solution is to modify the METGRID.TBL.ARW.ruc file that you use in metgrid. Set the ST and SM fields to mimic SOILT and SOILM, as below:
    ========================================
    name=ST
            z_dim_name=num_soilt_levels
            derived=yes
            fill_lev =   0 : SOILT000(200100)
            fill_lev =   5 : SOILT005(200100)
            fill_lev =  20 : SOILT020(200100)
            fill_lev =  40 : SOILT040(200100)
            fill_lev = 160 : SOILT160(200100)
            fill_lev = 300 : SOILT300(200100)
    ========================================
    name=SM
            z_dim_name=num_soilm_levels
            derived=yes
            fill_lev =   0 : SOILM000(200100)
            fill_lev =   5 : SOILM005(200100)
            fill_lev =  20 : SOILM020(200100)
            fill_lev =  40 : SOILM040(200100)
            fill_lev = 160 : SOILM160(200100)
            fill_lev = 300 : SOILM300(200100)

    The issue arises because the contents of st_input are used in module_soil_pre.F (subroutine process_soil_real) to set tmn to the soil level nearest 30 cm depth. Without ST defined in this way, the array contains zeroes. See the code below for an example:

                      DO j = jts , MIN(jde-1,jte)
                         DO i = its , MIN(ide-1,ite)
                            tmn(i,j) = st_input(i,closest_layer+1,j)
                         END DO
                  END DO
    A. WPS can ingest data in map projections of Lambert, Polarstereographic, Mercator, regular latitude-longitude, rotated latitude-longitude and Gaussian grid. For ARW, accepted projections are 'lambert', 'polar', 'mercator', and 'lat-lon'; for NMM, a projection of 'rotated_ll' must be specified. The default value is 'lambert'.

    2.2 Revision of Landuse Type / Terrain Height

    A. There are a number of ways to do it. If you want to change all desert categories to agricultural land, simply modify LANDUSE.TBL and/or VEGPARM.TBL to change the physics properties from desert to the land category you would like to use. If you would like to change the desert area in only part of your domain, you may have to use read_wrf_nc.f or an NCL script that reads the data and writes them back out after modification. To use read_wrf_nc.f, see the information in the User's Guide (Chapter 10, current version). For NCL, see page 20 of the tutorial talk on 'WRF Utilities' from
    http://www.mmm.ucar.edu/wrf/users/tutorial/tutorial_presentation_winter.htm

    If you modify input data to program real, use the namelist option surface_input_source=3 to avoid recalculation of the dominant land category (lu_index), as shown below.
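
    For reference, this option goes in the &domains record of namelist.input; a minimal excerpt (other required &domains entries omitted):

        &domains
         surface_input_source = 3,
        /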

    A. In the V3.4.1 code, for example, find the lines around 1627 in dyn_em/module_initialize_real.F:

            !  Land use assignment.
            DO j = jts, MIN(jde-1,jte)
               DO i = its, MIN(ide-1,ite)
                  grid%lu_index(i,j) = grid%ivgtyp(i,j)

    and change the line beginning with grid% to

                  grid%ivgtyp(i,j) = grid%lu_index(i,j)

    Recompile, then check wrfinput_d01 after re-running real. The change should not have any effect on other fields in the input file.

    A. In order to get this, you must have NCAR Graphics installed on your computer; you then need to specify its location directly in this line:

    NCARG_LIBS              =       -L$(NCARG_ROOT)/lib -lncarg -lncarg_gks -lncarg_c \
                                   -L/usr/X11R6/lib -lX11

    Here, NCARG_ROOT is passed to the script as an environment variable; an example of setting it is shown below.
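
    For example, before running the configure script you might set NCARG_ROOT as follows; the installation path shown is only a placeholder for wherever NCAR Graphics lives on your system:

        setenv NCARG_ROOT /usr/local/ncarg      # csh/tcsh (placeholder path)
        export NCARG_ROOT=/usr/local/ncarg      # bash/ksh equivalent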
    A. One way to do this is to edit phys/module_physics_init.F in the subroutine landuse_init.
    Try adding a line in this part of the code, just before the last line quoted here:

    ! Set arrays according to lu_index
          itf = min0(ite, ide-1)
          jtf = min0(jte, jde-1)
          IF(usemonalb)CALL wrf_message ( 'Climatological albedo is used instead of table values' )
          DO j = jts, jtf
            DO i = its, itf
              IS=nint(lu_index(i,j))

    so the new section reads:

    ! Set arrays according to lu_index
          itf = min0(ite, ide-1)
          jtf = min0(jte, jde-1)
          IF(usemonalb)CALL wrf_message ( 'Climatological albedo is used instead of table values' )
          DO j = jts, jtf
            DO i = its, itf
          lu_index(i,j) = 16   ! override the input land-use category at every point
              IS=nint(lu_index(i,j))

    Recompile the code after making the change. You may want to save the default wrf.exe somewhere before you do this.
    Check the LU_INDEX field in the wrfout file.
    A. Follow this procedure:

    1) obtain the special 30-arc-second land-use data set that has inland lake water defined as category 28 (ocean-connected water is still category 16). The purpose is to have a separate land-use category that tells real.exe to treat lake points differently from ocean points.

    2) put the data in the same place as the normal landuse_30s data file.

    3) edit the GEOGRID.TBL file.  Under the section called "name=LANDUSEF", make the changes shown below:

    name=LANDUSEF
            priority=1
            dest_type=categorical
            z_dim_name=land_cat
            landmask_water = modis_30s:17                               # Calculate a landmask from this field
            landmask_water =   default:16,28                            # Calculate a landmask from this field   # CHANGE THIS
            dominant=LU_INDEX
            interp_option = modis_30s:nearest_neighbor
            interp_option =   lake30s:average_gcell(3.0)+search         # ADD THIS LINE
            interp_option =       30s:nearest_neighbor+search
            interp_option =        2m:four_pt
            interp_option =        5m:four_pt
            interp_option =       10m:four_pt
            interp_option =   default:nearest_neighbor+search
            rel_path= modis_30s:modis_landuse_20class_30s/
            rel_path=   lake30s:landuse_30s_with_lakes/                 # ADD THIS LINE
            rel_path=       30s:landuse_30s/
            rel_path=        2m:landuse_2m/
            rel_path=        5m:landuse_5m/
            rel_path=       10m:landuse_10m/
            rel_path=   default:landuse_30s/

    The first change tells the landmask calculation to recognize both categories 16 and 28 as water; this is not strictly part of the lake fix, but without it the landmask will be wrong. The second change tells geogrid how to interpolate when lake30s is requested. The third change tells geogrid where to find the lake30s data.

    4) In namelist.wps, change geog_data_res as follows:

     geog_data_res       = 'lake30s+5m','lake30s+2m','lake30s+30s',

    This tells geogrid to use lake30s, if available (it will only occur for landuse, since that is the only field for which it is defined in GEOGRID.TBL).

    Also add the following line to the "&metgrid" section:

     constants_name = 'TAVGSFC'

    This tells metgrid to look for the TAVGSFC field when metgrid is eventually run. TAVGSFC needs to be generated after running ungrib, using the utility program avg_tsfc.exe.

    5) After running ungrib.exe, also run util/avg_tsfc.exe (a recap of the WPS run order is given after step 6).

    This creates the TAVGSFC file, which contains, at every grid point, the diurnally averaged surface air temperature.  This is the temperature that inland lakes will be assigned, which is typically better than what the lakes would otherwise be assigned if they were categorized as ocean, but in some situations it may be a degradation of the lake temperature.  So use at your own risk.

    6) Before running WRF, add the following line to the "&physics" section of namelist.input:

      num_land_cat             = 28,

     WRF will stop with an error if you do not do this.
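
    As a quick recap of the run order implied by steps 3)-5) above, executed from the WPS directory (these are the standard WPS programs, listed here only for orientation):

        ./geogrid.exe              # uses the edited GEOGRID.TBL and geog_data_res settings
        ./ungrib.exe               # degribs the input GRIB files
        ./util/avg_tsfc.exe        # writes the TAVGSFC intermediate file (step 5)
        ./metgrid.exe              # reads TAVGSFC via constants_name = 'TAVGSFC'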

    A. For 12 monthly values, you need to add code similar to that used for the vegetation fraction in the initialization routine. Edit dyn_em/module_initialize_real.F and look for

            CALL monthly_interp_to_date ( grid%greenfrac , current_date , grid%vegfra , &
                                          ids , ide , jds , jde , kds , kde , &
                                          ims , ime , jms , jme , kms , kme , &
                                          its , ite , jts , jte , kts , kte )

    You need to make the same call for your LAI. You will also need to name your 12 monthly fields something other than LAI, since that is the 2-D field name used by the model after the interpolation. For example, name the field LAI12M, then add a call in dyn_em/module_initialize_real.F:


      CALL monthly_interp_to_date ( grid%lai12m , current_date , grid%lai , &
                                          ids , ide , jds , jde , kds , kde , &
                                          ims , ime , jms , jme , kms , kme , &
                                          its , ite , jts , jte , kts , kte )

    You need to add LAI12M to the Registry in the "WPS Variables" section, as below:
    state    real   lai12m         imj      dyn_em      1        Z     i1  "LAI12M" "monthly leaf area index" "dimensionless"

    You can refer to the WRF documentation on 'How to Write Static Data to the Geogrid Binary Format'; a rough GEOGRID.TBL sketch is given below.
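
    As a rough sketch only (the interpolation options, relative path, and numbers below are placeholders that you will need to adapt to your prepared static data), a GEOGRID.TBL entry for the new 12-month field might look something like this:

        name = LAI12M
                priority = 1
                dest_type = continuous
                z_dim_name = month
                fill_missing = 0.
                interp_option = default:average_gcell(4.0)+four_pt+average_4pt
                rel_path = default:lai12m_30s/          # placeholder directory under geog_data_path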
    A. When you bring masked data into WPS, it is important to have a corresponding landmask field; with a properly defined landmask, metgrid knows which points to use when interpolating the data (a sketch of the LANDSEA entry follows the example below). Another option is to set the missing values (negative values over water, in this case) to some large value, -1.E30 for example, and then add missing_value in metgrid/METGRID.TBL so that the program ignores those points when interpolating. An example for the field SM000010 is shown below (note the use of missing_value):

    name=SM000010
            interp_option=sixteen_pt+four_pt+average_4pt+search
            masked=water
            interp_mask=LANDSEA(0)
            missing_value=-1e30
            fill_missing=1.
            flag_in_output=FLAG_SM000010
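
    For reference, the landmask itself also has an entry in METGRID.TBL; in the distributed table it looks roughly like the sketch below (check your own METGRID.TBL, since details may differ):

    name=LANDSEA
            interp_option=nearest_neighbor
            fill_missing=-1.
            flag_in_output=FLAG_LANDSEA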
    A. This is what you can typically do. Assuming you want to use GFDL data for the initial conditions and GFS data for the boundary conditions, first run the GFS data for the entire time period, all the way through program real (later you will use the wrfbdy_d01 file from this run and discard its wrfinput_d01). Then run the GFDL data for the initial time only. Since the GFDL data do not include complete soil data, you can supplement them with GFS surface data (this is only needed if you would like to use an LSM option in the model later); to get the GFS surface data only, create a Vtable that contains just those surface variables. Ingest the GFDL and GFS surface data together in metgrid to obtain a full dataset for program real. Running real for the first time period only then gives you a wrfinput_d01 based on the GFDL data.
    You can then run the model with this wrfinput_d01 and the GFS-based wrfbdy_d01; a namelist.wps sketch for the ungrib/metgrid step is given below.
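
    One way to wire this together in namelist.wps is to ungrib the GFS surface fields with a second prefix and then list both prefixes in fg_name for metgrid. The sketch below uses illustrative prefixes ('GFDL' and 'GFS_SFC' are just example names); see the WPS Users' Guide for how metgrid resolves a field found in more than one source:

        &ungrib
         out_format = 'WPS',
         prefix     = 'GFS_SFC',          ! example prefix used when ungribbing the GFS surface-only Vtable
        /

        &metgrid
         fg_name = 'GFDL', 'GFS_SFC',     ! example prefixes; metgrid reads both sets of intermediate files
         io_form_metgrid = 2,
        /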




    2.3 Intermediate Format Data

    A. The intermediate-format data may be on constant pressure levels (most common) or on height surfaces. If your data are on pressure levels, you need the matching height data on those pressure levels; if your data are on height levels, you need the matching pressure fields on those height levels. This ensures that interpolation to the model's vertical grid can be done. For pressure-level data, the pressure value itself is provided in the 'header' written before each 2-D slab; it is the variable xlvl in the format description in Chapter 3 of the User's Guide (for example, an 850 hPa field carries xlvl = 85000., in Pa).
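
    To make the header layout concrete, below is a minimal, self-contained sketch (not code taken from WPS) of writing a single 850 hPa slab in the version-5 intermediate format on a regular lat/lon grid; all data values are placeholders, and Chapter 3 of the User's Guide remains the authoritative description of the record layout and of the other projections:

        program write_one_slab
        ! Sketch only: write one pressure-level slab in WPS intermediate format (version 5).
           implicit none
           integer, parameter :: ounit = 10
           integer            :: version, nx, ny, iproj
           character(len=24)  :: hdate
           character(len=32)  :: map_source
           character(len=9)   :: field
           character(len=25)  :: units
           character(len=46)  :: desc
           character(len=8)   :: startloc
           real               :: xfcst, xlvl, startlat, startlon, deltalat, deltalon, earth_radius
           logical            :: is_wind_earth_rel
           real, allocatable  :: slab(:,:)

           version = 5 ; nx = 360 ; ny = 181 ; iproj = 0        ! iproj = 0 : regular lat/lon
           allocate( slab(nx,ny) ) ; slab = 270.0               ! placeholder temperatures (K)
           hdate = '2000-01-01_00:00:00' ; xfcst = 0.0
           map_source = 'Example source' ; field = 'TT' ; units = 'K' ; desc = 'Temperature'
           xlvl = 85000.                                        ! pressure level in Pa; 200100. denotes surface fields
           startloc = 'SWCORNER' ; startlat = -90.0 ; startlon = 0.0
           deltalat = 1.0 ; deltalon = 1.0 ; earth_radius = 6367.47
           is_wind_earth_rel = .false.

           ! Intermediate files are unformatted, big-endian Fortran records; 'convert=' is a
           ! common compiler extension -- otherwise use your compiler's byte-swap flag.
           open(ounit, file='FILE:2000-01-01_00', form='unformatted', convert='big_endian')
           write(ounit) version
           write(ounit) hdate, xfcst, map_source, field, units, desc, xlvl, nx, ny, iproj
           write(ounit) startloc, startlat, startlon, deltalat, deltalon, earth_radius
           write(ounit) is_wind_earth_rel
           write(ounit) slab
           close(ounit)
        end program write_one_slab

    If you write your own converter along these lines, the util/rd_intermediate.exe program is a convenient way to check the resulting file.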
    A. The 'regular_ll' projection in the index file refers to a regular lat/lon (i.e., geographic) projection that assumes the earth is a sphere rather than an ellipsoid. The known_lat and known_lon values refer to the center of the grid cell; an illustrative index file is sketched below.
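
    For illustration, the index file for a regular_ll static data set might contain entries like those below; all of the values are placeholders, and known_lat / known_lon here give the latitude and longitude of the center of the grid cell referenced by known_x and known_y:

        type = continuous
        projection = regular_ll
        dx = 0.00833333
        dy = 0.00833333
        known_x = 1.0
        known_y = 1.0
        known_lat = -89.99583
        known_lon = -179.99583
        wordsize = 2
        tile_x = 1200
        tile_y = 1200
        tile_z = 1
        units = "dimensionless"
        description = "Example static field"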
