Description:
WRF initialization routine. Program_name, a global variable defined in frame/module_domain.F, is set; then the routine init_modules is called, which invokes the init routines provided by each of the modules linked into WRF, including initialization of the external I/O packages. Some key initializations for distributed-memory parallelism also occur here when 'DM_PARALLEL' is specified at compile time: setting up the I/O quilt processes that act as I/O servers, dividing the MPI communicators among them, and initializing external communication packages such as 'RSL' or 'RSL_LITE'.
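The start of this sequence can be sketched as follows (a simplified excerpt in the spirit of main/wrf.F; the exact program text, error handling, and preprocessor branches are omitted):

```fortran
! Sketch of the top of the WRF main program. init_modules runs in two
! phases so that MPI initialization can occur between them.
CALL init_modules(1)   ! phase 1: returns after MPI_INIT (if it is called)
! ... time-manager (WRFU/ESMF) initialization happens here ...
CALL init_modules(2)   ! phase 2: remaining module and I/O package init
```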
The WRF namelist.input file is read and stored in the USE-associated structure model_config_rec, defined in frame/module_configure.F, by the call to initial_config. On distributed-memory parallel runs this is done on only one processor and then broadcast to the others: the configuration information is packed into a buffer (get_config_as_buffer), the buffer is broadcast, and each task then unpacks it (set_config_as_buffer).
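The read-and-broadcast sequence just described looks roughly like this for a 'DM_PARALLEL' compile (a sketch; CONFIG_BUF_LEN is a Registry-generated size, and the surrounding preprocessor guards are omitted):

```fortran
! Read namelist.input on the monitor task only, then replicate the
! resulting model_config_rec to all tasks via a flat buffer.
INTEGER, PARAMETER :: configbuflen = 4 * CONFIG_BUF_LEN
INTEGER :: configbuf( configbuflen ), nbytes

IF ( wrf_dm_on_monitor() ) THEN
   CALL initial_config                            ! read namelist.input
ENDIF
CALL get_config_as_buffer( configbuf, configbuflen, nbytes )
CALL wrf_dm_bcast_bytes  ( configbuf, nbytes )    ! broadcast the buffer
CALL set_config_as_buffer( configbuf, configbuflen )
```

On a serial (non-DM) build, only the plain call to initial_config is made.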
Among the configuration variables read from the namelist is debug_level, retrieved with nl_get_debug_level (Registry-generated and defined in frame/module_configure.F). Its value sets the debug-print information level used by wrf_debug throughout the code. A debug_level of zero (the default) causes no information to be printed when the model runs; the higher the number (up to 1000), the more information is printed.
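In code, the retrieval and use of debug_level can be sketched as:

```fortran
! Fetch debug_level for domain 1 from the stored configuration and
! install it as the threshold used by wrf_debug.
CALL nl_get_debug_level ( 1, debug_level )
CALL set_wrf_debug_level ( debug_level )

! A message prints only when its level does not exceed debug_level,
! e.g. this appears for debug_level >= 100:
CALL wrf_debug ( 100, 'wrf: calling alloc_and_configure_domain' )
```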
'RSL' is required for the WRF nesting options; the only non-MPI build that allows nesting is one compiled with the -DSTUBMPI option. The routine checks whether a multi-domain run is being requested (max_dom > 1 in the namelist) and, if so, verifies that the model is running under the parallel option or on an acceptable machine. The top-most domain in the simulation is then allocated and configured by calling alloc_and_configure_domain. For this root domain, the routine is passed the globally accessible pointer to TYPE(domain), head_grid, defined in frame/module_domain.F; the parent pointer is null and the child index is given as negative, signifying none. Afterwards, because the call to alloc_and_configure_domain may modify the model's configuration data stored in model_config_rec, the configuration information is again repacked into a buffer, broadcast, and unpacked on each task (for 'DM_PARALLEL' compiles). The call to setup_timekeeping for head_grid relies on this configuration information and must occur after this second broadcast.
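These steps can be sketched as follows (condensed from main/wrf.F; the re-broadcast of the buffer reuses the same pack/broadcast/unpack calls shown earlier):

```fortran
! Refuse a nested run unless we have an MPI (or stub-MPI) build.
CALL nl_get_max_dom( 1, max_dom )
IF ( max_dom > 1 ) THEN
#if ( ! defined(DM_PARALLEL) && ! defined(STUBMPI) )
   CALL wrf_error_fatal( &
     'nesting requires either an MPI build or use of the -DSTUBMPI option' )
#endif
ENDIF

! Allocate and configure the root domain: null parent, child index -1.
NULLIFY( null_domain )
CALL alloc_and_configure_domain ( domain_id = 1 ,           &
                                  grid      = head_grid ,   &
                                  parent    = null_domain , &
                                  kid       = -1 )

! ... repack, broadcast, and unpack model_config_rec here ...
CALL Setup_Timekeeping ( head_grid )
```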
The head grid is initialized with read-in data through the call to med_initialdata_input, which is passed the pointer head_grid and a locally declared configuration data structure, config_flags, that is set by a call to model_to_grid_config_rec. It is also necessary that the indices into the 4-D tracer arrays (such as moisture) be set, with a call to set_scalar_indices_from_config, prior to the call that initializes the domain. Both calls are told which domain they are setting up through the integer id of the head domain, head_grid%id, which is 1 for the top-most domain.
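Put together, the head-grid initialization reduces to three calls (a sketch; idum1 and idum2 are scratch integers returned by the indexing routine):

```fortran
! Set the 4-D tracer (e.g. moisture) indices for this domain first,
! then build the local config_flags and read in the initial data.
CALL set_scalar_indices_from_config ( head_grid%id , idum1 , idum2 )
CALL model_to_grid_config_rec ( head_grid%id , model_config_rec , &
                                config_flags )
CALL med_initialdata_input ( head_grid , config_flags )
```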
If write_restart_at_0h is set to .true. in the namelist, the model simply generates a restart file from the just-read-in data and then shuts down. This is used for ensemble breeding and is not typically enabled.
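As a namelist fragment this would look like the following (assuming the flag resides in the &time_control group of namelist.input, which is where the restart-related options live):

```fortran
&time_control
 write_restart_at_0h = .true.,   ! write restart from initial data, then stop
/
```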
Called by :
Arguments:
1. no_init1 :: LOGICAL , INTENT( IN )
WRF_INIT calls :