Running CFOUR in Parallel
A parallel CFOUR run is carried out via
xcfour > output_file
where this command can be executed either interactively (recommended only for very short runs), in the background (i.e., with &),
or within a shell script (required when using a queuing system). However, please do not run xcfour via mpirun -whateveroption xcfour, mpiexec -whateveroption xcfour, or similar commands: xcfour is a driver program that automatically launches the parallel subprograms itself with mpiexec or similar commands.
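As a sketch, a minimal job script might look as follows. All paths are hypothetical, and the CFOUR_NUM_CORES environment variable used here to set the number of MPI processes is an assumption; check the documentation of your CFOUR version and adapt the script to your queuing system.

```shell
#!/bin/sh
# Sketch of a batch-style CFOUR run. Paths are hypothetical;
# CFOUR_NUM_CORES (number of MPI processes) is an assumption --
# verify against your CFOUR version's documentation.
CFOUR_NUM_CORES=8
export CFOUR_NUM_CORES

cd /scratch/$USER/cfour_job        # hypothetical scratch directory
cp $HOME/cfour_job/ZMAT .          # input file
cp $HOME/cfour_job/GENBAS .        # basis-set file

# Start the driver directly -- NOT via mpirun/mpiexec;
# xcfour launches the parallel subprograms itself.
xcfour > output_file
```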
As input files, a CFOUR run requires
(a) an input file ZMAT with all information concerning geometry, requested quantum-chemical method, basis set, etc.;
(b) the basis-set file GENBAS.
If the MPI-parallel computation uses more than one file system (e.g., local disks of multiple nodes), the ZMAT and GENBAS files have to be present on all of them.
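One way to stage the files is to copy them to the node-local scratch directories before starting the run. The following sketch assumes a hostfile provided by the queuing system ($PBS_NODEFILE here) and a hypothetical local scratch path; substitute whatever your cluster actually provides.

```shell
# Copy ZMAT and GENBAS to the local disk of every node in the job.
# $PBS_NODEFILE and /scratch/$USER/cfour_job are assumptions --
# use the hostfile and scratch layout of your own cluster.
for node in $(sort -u "$PBS_NODEFILE"); do
    ssh "$node" "mkdir -p /scratch/$USER/cfour_job"
    scp ZMAT GENBAS "$node:/scratch/$USER/cfour_job/"
done
```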
With the current public version:
(a) HF-SCF, CCD, and CCSD can be run in parallel using RHF, UHF, and ROHF references for energies, first, and second derivatives, and therefore for all properties. This requires only that the calculation is carried out with ABCDTYPE=AOBASIS and that either CC_PROG=ECC or CC_PROG=VCC is used.
Note: The option ECC is not recommended for ROHF gradients. That is, if you are doing a
geometry optimization with ROHF as your reference wave function, it is safe to use the option VCC.
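As an illustration, a ZMAT for a parallel ROHF-CCSD geometry optimization could combine these keywords as follows; the molecule, geometry, and basis set are arbitrary examples, not prescriptions.

```
OH radical, ROHF-CCSD geometry optimization in parallel (illustrative)
O
H 1 R

R=0.97

*CFOUR(CALC=CCSD,BASIS=PVDZ,REF=ROHF,MULT=2
ABCDTYPE=AOBASIS,CC_PROG=VCC)
```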
(b) CCSD(T) can be run in parallel using RHF and UHF references for energies, first, and second derivatives, and therefore for all properties. This requires only that the calculation is carried out with ABCDTYPE=AOBASIS and CC_PROG=ECC.
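For example, a parallel closed-shell CCSD(T) energy calculation could use the following ZMAT; again, molecule, geometry, and basis set are arbitrary illustrations.

```
Water, CCSD(T) energy in parallel (illustrative)
O
H 1 R
H 1 R 2 A

R=0.96
A=104.5

*CFOUR(CALC=CCSD(T),BASIS=PVDZ,REF=RHF
ABCDTYPE=AOBASIS,CC_PROG=ECC)
```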
(c) In EOM-CCSD (EOMEE, EOMIP, EOMEA), the MPI parallelization works as usual for the underlying CCSD calculation and covers only the ABCD term in the AO basis, provided that term is used by the EOM method. Other contributions may eventually be parallelized in the future. These calculations work as long as ABCDTYPE is set to AOBASIS. EOMEA computations require the use of CC_PROG=VCC.
(d) Because of the structure of CFOUR, it currently does not make sense to run MP2 in parallel; there are certainly better programs for parallel MP2 calculations.