Scratch Folder for CSEM Inversions
A scratch folder (or directory) for temporary file storage needs to be specified when running CSEM inversions. This can be done in one of two ways: using the command line argument -scratch <scratchfolder> (see Command Line Arguments) or by specification in the mare2dem.settings file (see MARE2DEM Settings File). Any scratch folder specified in the settings file takes precedence over one given on the command line. If neither is specified, MARE2DEM uses the default scratch folder /tmp.
For example, starting MARE2DEM with:

    mpirun -n 12 MARE2DEM -scratch /local Demo.0.resistivity

will use /local for temporary storage of the scratch files.
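Alternatively, the scratch folder can be set in the mare2dem.settings file. The exact keyword is documented in the MARE2DEM Settings File section; settings entries take a keyword: value form, so the entry would look something like the following (the keyword shown here is an illustrative assumption, not authoritative):

    scratch folder: /local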
Note

Since huge volumes of scratch data can be written, it is recommended to use a scratch folder on the local hard drive of each compute node in the cluster, since that gives the fastest speed. If instead a network-mounted directory (such as your home directory) is used, all of that data must be passed across the cluster network, which can result in much slower file I/O and can potentially overwhelm the cluster network. Most clusters have the directories /tmp or /local available for local file storage. When in doubt, check the user documentation for your cluster or ask someone from the IT team.
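For example, on most Linux systems you can check how much space is free on a candidate scratch location with:

    df -h /tmp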
Scratch File Rationale
When inverting CSEM data (i.e. from dipole sources), MARE2DEM writes scratch files to disk to temporarily store the large arrays formed by the wavenumber domain sensitivity functions

\[\frac{\partial {\hat F}_i}{\partial m_j},\]

where \({\hat F}_i\) contains the wavenumber domain electric or magnetic field for the ith requested data response at wavenumber \({k}_x\) and \(m_j\) is the jth free model parameter. These are computed serially over a discrete set of wavenumbers and files are written for each wavenumber. Once all wavenumbers have been computed, the sensitivities are Fourier transformed to the x spatial domain.
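To make this last step concrete, here is a minimal sketch in Python (MARE2DEM itself is Fortran, so this is illustrative only) of transforming a stored wavenumber-domain quantity back to the x spatial domain by quadrature over the discrete wavenumbers. The wavenumber sampling and the placeholder spectrum are assumptions, not MARE2DEM's actual scheme:

    import numpy as np

    def inverse_kx_transform(F_hat, kx, x):
        # Approximate F(x) = 1/(2*pi) * Integral of F_hat(kx) * exp(i*kx*x) dkx
        # with trapezoidal quadrature over the discrete wavenumbers kx.
        integrand = F_hat * np.exp(1j * np.outer(x, kx))   # shape (n_x, n_kx)
        dk = np.diff(kx)
        avg = 0.5 * (integrand[:, 1:] + integrand[:, :-1])
        return (avg * dk).sum(axis=1) / (2.0 * np.pi)

    # Illustrative sampling: 30 positive wavenumbers plus their negatives
    # (the factor of 2 in the element count discussed below).
    kx_pos = np.logspace(-5.0, 0.0, 30)
    kx = np.concatenate([-kx_pos[::-1], kx_pos])
    F_hat = 1.0 / (1.0 + kx**2)             # placeholder spectrum
    x = np.linspace(-1000.0, 1000.0, 5)     # output positions
    F_x = inverse_kx_transform(F_hat, kx, x)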
For a given modeling task running on a single processing core, the total number of CSEM sensitivity function elements is

\[N = 2\, n_{kx}\, n_r\, n_p,\]

where \({n_{kx}}\) is the number of wavenumbers (MARE2DEM uses 30 by default), \(n_r\) is the number of responses, \({n_{p}}\) is the number of free parameters, and the factor 2 arises from the negative wavenumbers also being computed by MARE2DEM. \(n_r\) depends on the number of receivers \({n_{rx}}\) and transmitters \({n_{tx}}\) used in the parallel data decomposition, as well as the particular requested responses and pairs of transmitters and receivers given in the input data file's data table. For example, if the data table has all receiver-transmitter pairs and all x, y, z components of the electric and magnetic fields, then \(n_r\) will be

\[n_r = 6\, n_{rx}\, n_{tx}.\]
For most real CSEM data sets, \(n_r\) can be much smaller than this since often not all receiver-transmitter pairs have data, for example at long offsets where the transmitter is too far away from the receiver. Further, usually only the inline horizontal electric field, and possibly the crossline magnetic and vertical electric fields, are inverted (since the other components are nominally nil).
To give an idea of the total size of the sensitivity functions above, suppose a given problem inverts only the inline electric field and there are 100,000 free parameters (\({n_{p}=10^5}\)). Further, suppose \({n_{rx}=20,n_{tx}=20}\) and all receiver-transmitter pairs have data. Combined with MARE2DEM's default wavenumber sampling (\({n_{kx}=30}\)), this gives \({n_r = 400}\) and thus

\[N = 2 \times 30 \times 400 \times 10^5 = 2.4 \times 10^{9}.\]
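As a quick sanity check of this arithmetic, a few lines of Python (all numbers come directly from the example above):

    n_kx = 30              # MARE2DEM's default wavenumber count
    n_rx, n_tx = 20, 20    # receivers and transmitters in the decomposition
    n_p = 10**5            # free parameters

    n_r = n_rx * n_tx                  # inline E-field only: 400 responses
    n_elements = 2 * n_kx * n_r * n_p  # factor 2 for negative wavenumbers
    print(n_elements)                  # 2400000000, i.e. 2.4e9 elements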
Since the sensitivities use complex double precision values requiring 16 bytes of storage per value, this example requires about 36 GB of storage; if held in memory rather than in files, this could exceed the available memory of a cluster node. While most modern cluster nodes have at least 128 GB of memory, consider a 24-core node running 24 independent modeling tasks as part of the parallel data decomposition approach. When all 24 cores are running, a total of 864 GB of memory would be needed, which likely exceeds the available memory on many readily accessible cluster nodes (other than the most state-of-the-art systems). Hence MARE2DEM writes the wavenumber sensitivity functions to scratch space on the fly so that memory is conserved. Further, the sensitivity data are written to separate files containing no more than 10 responses per wavenumber. Thus, once all the wavenumbers have been computed and written to scratch files, the Fourier transformation to the x domain can proceed serially over these file subsets, requiring only a modest amount of memory.
Another subtle aspect is that MARE2DEM's memory usage during CSEM inversion depends on \({n_r}\), so the parallel data decomposition settings for \({n_{rx}}\) and \({n_{tx}}\) can be adjusted to reduce memory usage as needed.
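As a hedged illustration of that trade-off (the function name is ours, not MARE2DEM's), converting the element count above into gibibytes at 16 bytes per complex double shows how shrinking the decomposition shrinks the per-task memory:

    def sensitivity_gib(n_rx, n_tx, n_p, n_kx=30, n_components=1):
        # Storage for the wavenumber-domain sensitivities of one modeling
        # task: 2 * n_kx * n_r * n_p complex doubles at 16 bytes each.
        n_r = n_components * n_rx * n_tx
        return 2 * n_kx * n_r * n_p * 16 / 2**30

    print(sensitivity_gib(20, 20, 10**5))  # ~35.8 GiB (the ~36 GB above)
    print(sensitivity_gib(10, 20, 10**5))  # ~17.9 GiB: halving n_rx halves it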