WRF Cluster Notes

Note: Throughout this document, "cluster>" at the start of a line indicates a command to be executed at the command line. The "—" markers set off sections of text files that need to be added or edited.

Document created by G. Mullendore for ATSC 405 (2009)

A. Setting up WRF/WPS

The first time you use WRF, you'll need to copy and untar the model and visualization software.

WRF: Weather Research & Forecasting Model

WPS: WRF Preprocessing System

RIP: “Read/Interpolate/Plot” Visualization Software

1. Copy the files to your home directory and untar them. The programs are located in

/home/data/wrf311_cluster.tar.gz (both WRF and WPS)

/home/data/RIP4.tar.gz

Note: If you ran WRF & RIP for Mesoscale class, you still need to copy the wrf311 file as it’s a newer version and includes the preprocessing system. You do not need to copy RIP.
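
For example, the copy and untar steps might look like this (a sketch, assuming you untar directly in your home directory):

cluster>cp /home/data/wrf311_cluster.tar.gz ~/
cluster>cp /home/data/RIP4.tar.gz ~/
cluster>cd ~
cluster>tar -xzf wrf311_cluster.tar.gz
cluster>tar -xzf RIP4.tar.gz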

2. Update your .bashrc

# User specific aliases and functions

#aliases

alias rm='rm -i'

alias cp='cp -i'

alias mv='mv -i'

#PORTLAND

export PGI=/opt/pgi

PATH=$PGI/linux86/7.2/bin:$PATH

PATH=$PGI/linux86/7.2/mpi2/mpich/bin:$PATH

MANPATH=$MANPATH:$PGI/linux86/7.2/mpi2/mpich/man

export PATH MANPATH

export LM_LICENSE_FILE=$PGI/license.dat

#NETCDF

export NETCDF=/usr/local/netcdf

export PATH=$NETCDF/bin:$PATH

#NCAR Graphics

export NCARG_ROOT=/usr/local

export PATH=$NCARG_ROOT/bin:$PATH

# for WPS

export JASPERLIB='/usr/local/lib'

export JASPERINC='/usr/local/include'

#RIP- Replace “full_path” with the path to your RIP4 directory

export RIP_ROOT=/full_path/RIP4

export PATH=$RIP_ROOT:$PATH

cluster>source .bashrc
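
To confirm that the new environment is active, you can check a few of the variables and paths set above (a quick sanity check; the exact output will depend on your account):

cluster>which pgf90
cluster>echo $NETCDF
cluster>echo $RIP_ROOT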


B. Setting up your account to work with multiple processors - DO ONCE FOR EACH FORECAST CITY

If you always used the same set of processors, you would not need to reset your daemon for each city. However, because we are sharing the cluster with other users, your assigned set of nodes may change for each extra credit group.

1. Start the mpd, or message passing daemon. This daemon is a program that controls the way the processors communicate when working together on the same problem.

1) Shut down any message passing daemons currently running

cluster> mpdallexit

SUBSECTION: ONLY IF FIRST FORECAST

2) Create configuration file.

cluster>gvim ~/.mpd.conf

MPD_SECRETWORD=abra2-du10o

Note: Make up a new secret word! Anything is fine for the secret word. If you have already run models, you will already have this file in your directory.

3) Change the permissions of the configuration file.

cluster>chmod 600 ~/.mpd.conf

END SUBSECTION: ONLY IF FIRST FORECAST

4) Start daemon.

cluster>mpdboot -n # -f ~/machines.boot

Replace # with the number of nodes in your machines.boot file plus one.
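
For example, if your ~/machines.boot file lists five compute nodes, # would be 6 (the five nodes plus the head node):

cluster>mpdboot -n 6 -f ~/machines.boot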

5) Test daemon.

cluster>mpdtrace

You should see a list of your subset of nodes (the same names as in machines.execute, though possibly in a different order, plus the head node, called "cluster"); example output is shown below. If you do not see a list, start again at step 1 above and double-check each step. If you still do not get the right list, email Gretchen the output you are getting from "mpdtrace".
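
For example, with five assigned nodes the trace might look like the following (the node names here are hypothetical; yours will match the names in your machines.execute file):

cluster>mpdtrace
cluster
node03
node01
node05
node02
node04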

C. Initialize forecast

SUBSECTION: ONLY IF NEW LOCATION

1) Set domain

Change to the WPS directory:

cluster>cd wrf311/WPS

namelist.wps contains a list of settings for the forecast run.

a) It is always a good idea to keep a copy of the original namelist file before editing it:

cluster>cp namelist.wps namelist.wps.orig

b) Edit namelist.wps

To set the domain for the region we want to simulate, edit the "&geogrid" section of the file. Notice that some entries have two columns. The first column is for the main simulation; additional columns are for "nested" grids. We will not be using nested grids for these forecasts.

geog_data_path = this is the location of the geographic data

set as '/home/data/WPS_GEOG'

dx, dy = this is the grid spacing in x (e-w) and y (n-s) in meters

the input data is 12km, so I recommend choosing 6 or 4 km resolution

geog_data_res = resolution of geographical data

set as '2m'

ref_lat, ref_lon = the center latitude and longitude locations for the forecast

e_we = ending grid point in the e-w direction

for example, if you want to run a forecast for a 100 km region, and your dx is set to 4000, this would be 25

e_sn = ending grid point in the n-s direction

In &share section:

max_dom = 1,

Note on domain size and time step:

-I tried some small domain runs (25×25 grid, 4 km resolution), and the near-boundary pressure gradients were very noisy. It is normal to get some noise near the model boundaries, and output from right next to the boundary should never be used, but a 25×25 grid is so small that the noise may reach the area of interest. There is still noise in the 50×50 grid (6 km resolution, 24 s time step), but the domain is larger, giving more confidence in the central area of the domain.
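
Putting these settings together, a sketch of the edited &geogrid entries for a hypothetical 50×50, 6 km domain might look like the lines below. Only the entries discussed above are shown; ref_lat and ref_lon are placeholder values to be replaced with your forecast city, and the map projection and parent-grid entries should be left as they are in the original file.

&geogrid
 e_we           = 50,
 e_sn           = 50,
 geog_data_res  = '2m',
 dx             = 6000,
 dy             = 6000,
 ref_lat        = 47.9,
 ref_lon        = -97.1,
 geog_data_path = '/home/data/WPS_GEOG'
/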

c) Check domain

./util/plotgrids.exe

idt gmeta

-use the right arrow key to scroll to the plot

-plotgrids.exe only shows state outlines, so for a small domain you may not see much. You can increase your domain size to check that you are centered in the right spot, and then decrease the domain size again to the region you want. Note: plotgrids.exe works from what is in "namelist.wps", so you do not need to run geogrid.exe first.

d) After checking that the domain is set up correctly, run the domain setup:

mpiexec -l -machinefile ~/machines.execute -n # /full_path/WPS/geogrid.exe > geogrid.log 2>&1 < /dev/null &

-replace full_path with the path to your WPS directory and # with the number of nodes you want to run on (this cannot be higher than the number of nodes listed in ~/machines.execute). Strictly speaking, # is the number of processors you want to use; because each node on cluster has only one processor, the two numbers are the same here. On machines where they differ (e.g. hpc has 8 processors per node), you must give the actual number of processors after '-n'.

-The above command runs on the processor list set in your own machines.execute file.

ssh to a node to check your job.

-When the job has completed successfully, you'll get a message saying so at the end of geogrid.log.
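
For example, you can check the tail of the log file; the last lines should include a message about successful completion of geogrid (the exact wording depends on the WPS version):

cluster>tail geogrid.log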

SUBSECTION: ONLY IF NEW TIME

3) Link 12 km NAM files

Note: We will always be running a forecast from 00Z (19 CDT). Because of the lag due to the NAM having to complete first, and then the time it takes “cluster” to download the NAM files, initialization files will not be available before midnight CDT.

NAM files are downloaded into /home/data/nam/YYYYMMDD, one for each day (the 00Z forecasts).

a) Edit namelist.wps

start_date = 'YYYY-MM-DD_00:00:00'

end_date = 'YYYY-MM-DD_00:00:00'

-set to 2 days after start_date

interval_seconds = 10800

-this is the time between NAM input files = 3 hours
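
For example, for a hypothetical run initialized at 00Z on 3 November 2009 and ending two days later, these lines would read:

start_date = '2009-11-03_00:00:00'
end_date   = '2009-11-05_00:00:00'
interval_seconds = 10800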

b) link Vtable

ln -s ungrib/Variable_Tables/Vtable.NAM Vtable

c) link to NAM files

./link_grib.csh /home/data/nam/YYYYMMDD/nam.*tm00.grib2

-use the date the forecast starts from

d) pull needed variables from NAM data

mpiexec -l -machinefile ~/machines.execute -n # /full_path/WPS/ungrib.exe > ungrib.log 2>&1 < /dev/null &

-replace “full_path” with the path to your WPS directory

-replace # with the number of processors you want to run on (maximum is number in your machines.execute file)

-this step takes about 4 hours on 5 processors; the long run time is due to the high resolution of the NAM input files

NOTE: If this step does not work, try running ungrib without mpiexec: ./ungrib.exe > ungrib.log 2>&1 &

SUBSECTION: IF NEW LOCATION OR NEW TIME

4) Create WRF init files

mpiexec -l -machinefile ~/machines.execute -n # /full_path/WPS/metgrid.exe > metgrid.log 2>&1 < /dev/null &

-for a 50×50 grid, this step takes about 30 minutes on 5 processors

5) Move init files to WRF run directory

mv met_em.d01.*.nc ../WRFV3/run

D. Run forecast

1) Edit namelist.input

cd ../WRFV3/run

namelist.input:

basic explanations of variables:

http://www.mmm.ucar.edu/wrf/OnLineTutorial/Basics/WRF/namelist.input.htm

more details on physics options (&physics):

http://www.mmm.ucar.edu/wrf/users/docs/user_guide/users_guide_chap5.html#Phys

At minimum, edit the following lines in namelist.input:

run_hours = 48,

start_year… end_hour: change to match the time period you are simulating

interval_seconds = 10800

time_step: change to be 6*dx (with dx in km), but it is also good to pick a time_step that divides evenly into the output time step (history_interval). For example, if using dx=4000, 6*dx (in km) = 24. If history_interval = 180 (minutes), then 24 s is good, because it divides evenly into 10800 s.

e_we, e_sn, dx, dy = change to match setup values

num_metgrid_levels = 40

cu_physics = 0,

-turns off cumulus parameterization. However, 4 or 6km is at the edge of resolving convection, so in some cases you may get a better answer if you leave this on.

pd_moist = .false.,

pd_scalar = .false.,
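
As a sketch only, here is roughly where the edited lines above sit in namelist.input, using the same hypothetical 50×50, 6 km domain with a 24 s time step (24 divides evenly into the 10800 s history interval). All other entries in the stock namelist.input are left as distributed, and the start/end date entries must still be changed to match your run.

&time_control
 run_hours        = 48,
 interval_seconds = 10800,
 history_interval = 180,
! start_year ... end_hour set to match your initialization and end times
/

&domains
 time_step          = 24,
 e_we               = 50,
 e_sn               = 50,
 dx                 = 6000,
 dy                 = 6000,
 num_metgrid_levels = 40,
/

&physics
 cu_physics = 0,
/

&dynamics
 pd_moist  = .false.,
 pd_scalar = .false.,
/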

2) Run WRF init

mpiexec -l -machinefile ~/machines.execute -n # /full_path/WRFV3/run/real.exe > real.log 2>&1 < /dev/null &

NOTE: if they exist, delete rsl files first: \rm rsl.*

3) Run forecast

mpiexec -l -machinefile ~/machines.execute -n # /full_path/WRFV3/run/wrf.exe > wrf.log 2>&1 < /dev/null &

NOTE: if they exist, delete rsl files first: \rm rsl.*

-test run of size 50×50 took 80 minutes on 5 nodes

E. Plot/analyze forecast.

1) Quick check

ncdump -h wrfout_d01_YYYY-MM-DD_00:00:00 | less

-header of netCDF file

2) Create new directory for RIP files

RIP will create many files, so create a new directory and then change to that directory to run RIP. For example, I created a directory /home/gretchen/wrf311/MMDD, substituting in the month and day of the forecast.
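
For example, for the 29 October case used in the next step:

cluster>mkdir -p ~/wrf311/1029
cluster>cd ~/wrf311/1029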

3) Convert the model output data file(s) to RIP-format data files

ripdp_wrfarw NAME basic FULL_PATH/wrfout_d01_YYYY-MM-DD_00:00:00

-replace NAME with a nickname for this run, FULL_PATH with the path to your run directory, and YYYY-MM-DD with your initialization time. For example, “ripdp_wrfarw 1029 basic /home/gretchen/wrf/WRFV3/run/wrfout_d01_2008-10-29_00:00:00”

4) Create plots

a) temperature

cp /home/data/rip_temp.in .
rip NAME rip_temp.in

-creates multi-panel plot of temperature/SLP/winds → rip_temp.ps

-you can copy rip_temp.in to your own directory to change the plot parameters

-see the Appendices at http://www.mmm.ucar.edu/wrf/users/docs/ripug.htm for more help with RIP

b) precipitation

cp /home/data/rip_pcp.in .

rip NAME rip_pcp.in

-plots 24-hr accumulated pcp from 24hrs to 48hrs, every 6hr → rip_pcp.ps

-the contours are fixed to go from .25 mm (0.01 in) to 2.5 mm (0.1 in)

-if you need to plot higher rain rates, you will need to change the cbeg (beginning contour), cint (contour interval), and cend (ending contour) values in the file
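
As a sketch, and assuming the contour settings appear as keyword=value pairs on the precipitation line of rip_pcp.in (check your copy for the actual line and its current values), handling heavier rain might mean replacing entries such as cbeg=.25; cend=2.5 with hypothetical values like:

cbeg=2.5; cint=2.5; cend=25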
