

The Questaal Suite

 

A collection of electronic structure codes based mainly on the LMTO method

The package contains four semi-autonomous parts with various submodules. Two of them are all-electron DFT implementations: a full-potential (FP) code based on smooth Hankel functions and a more traditional atomic spheres approximation (ASA) code. The third implements empirical tight-binding models (TBE). The fourth is the QSGW implementation.

Introduction

The basis set used to represent crystal eigenfunctions is composed of atom-centered functions, linear muffin-tin orbitals (LMTO), rather than the more commonly used augmented plane wave basis. This has advantages: basis sets are much smaller for a given level of accuracy, but it also requires somewhat more knowledge on the user's part to operate. It is also possible to take plane waves and atom-centered functions in combination. Another feature is the augmentation, which is carried out in a manner somewhat resembling the PAW method, though with the proper convergence that all-electron methods possess.

The package contains a variety of useful special-purpose programs, e.g. the ability to calculate magnetic exchange interactions, and an implementation of the coherent potential approximation.

Perhaps the most important is the connection to an implementation of GW calculations in an all-electron framework. GW is usually implemented as an extension to the LDA, i.e. G and W are generated from the LDA. You can use this package for LDA-based GW calculations, but it also implements the Quasiparticle Self-consistent GW approximation (QSGW).

See doc/README for a brief description and further pointers.

Installation

Prerequisites


  • C and Fortran compilers (GCC v7+ and Intel v17+ are known to work)
  • BLAS, LAPACK and FFTW3 implementations
  • libxc v2.2+
  • HDF5 v1.10+
  • Python v2.7+
  • ninja build system v1.7+ (https://ninja-build.org)
  • hwloc, tcsh, bc and make (for running the tests and other scripts)

MPI, ScaLAPACK, etc. are optional.

If there is no prepackaged version of the ninja build system, you can compile it locally with the following three lines:

git clone --branch release git://github.com/ninja-build/ninja.git
cd ninja && CXX=g++ ./configure.py --bootstrap
ln -s -f `pwd`/ninja ~/bin/
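
The last step assumes ~/bin exists and is on your PATH; if it is not, you can create it and add it for the current session before linking, for example:

mkdir -p ~/bin
export PATH="$HOME/bin:$PATH"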

Downloading and Installation


First of all, clone the repository to your local machine, for example with the command below (remember to change <username> to your Bitbucket username).

git clone https://<username>@bitbucket.org/lmto/lm.git

If you'd rather not be asked for a password every time, you can import a public ssh key to your account.
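
For example (a minimal sketch; the key type and the ssh clone URL are assumptions to check against your Bitbucket account settings):

ssh-keygen -t ed25519                        # generate a key pair if you do not have one
cat ~/.ssh/id_ed25519.pub                    # paste this public key into your Bitbucket account
git clone git@bitbucket.org:lmto/lm.git      # then clone over ssh instead of https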

In an empty folder (we use 'build' here), create a file flags.mk which contains the variables and flags necessary for the build.

mkdir lm/build                           # Choose any directory name
cd lm/build
../genflags.py intel opt > flags.mk      # assuming you are using the intel compiler ifort

If using GNU compilers, run genflags.py with gcc (intel and cray are also options):

../genflags.py gcc   opt > flags.mk      # For gnu compilers

genflags.py will attempt to find include files, Fortran modules and library files for libxc in the places they are commonly kept. The environment variables INCLUDE and CPATH will be searched for .h and .mod files, and LIBRARY_PATH for the relevant .a or .so files. Beware that the libxc .mod files have to be generated by the same compiler you are using now; if they are not, the compilation will most likely fail later.
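
If libxc lives in a non-standard location, you can point these variables at it before running genflags.py. A sketch, assuming a hypothetical install prefix $HOME/opt/libxc:

export CPATH="$HOME/opt/libxc/include:$CPATH"            # .h and .mod files
export LIBRARY_PATH="$HOME/opt/libxc/lib:$LIBRARY_PATH"  # .a or .so files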

An attempt is made to use Intel's MKL even with gcc; however, if the MKLROOT variable is not found, the attempt is abandoned and the generic -lblas -llapack -lfftw3 are generated instead. If you have more favourable libraries implementing BLAS and LAPACK (+ BLACS and ScaLAPACK for the MPI case), please specify them manually.
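
As an illustration only (the exact variable holding the link line may differ in your generated flags.mk; the ldflags variable is mentioned further below, and the OpenBLAS path here is made up), pointing the link step at a local OpenBLAS might look like:

ldflags = -L$(HOME)/opt/openblas/lib -lopenblas -lfftw3   # replaces the generic -lblas -llapack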

To obtain an MPI-enabled example flags.mk, add an argument to genflags.py e.g.

../genflags.py intel openmpi opt > flags.mk      # Using openmpi
../genflags.py intel intelmpi opt > flags.mk     # Using intelmpi

The MPI version changes the interface to the Intel MKL cluster libs and the default compiler wrappers.

Owing to the wide-ranging variability of software environments on compute clusters, we are not attempting to fully autoconfigure a compilation. flags.mk is a template which can be easily customised to any particular environment.

Inspect flags.mk. If you have a more or less standard Linux build, it may be fine as is. In an MPI-enabled compilation, ensure the "mpirun" variable in the flags.mk file is set to the appropriate value for your cluster; it will be used in the GW-related scripts. If the launcher is to be changed afterwards, the compilation default can be overridden by an mpicmd file placed either in the same path as the binaries or in the run path of a calculation, with the latter taking precedence. For example, `set mpirun = "aprun"` will use the Cray launcher aprun instead of the more conventional mpirun. This is handy when even non-MPI but OpenMP-enabled binaries have to be run on a Cray compute node. For the build to succeed you must have the libraries and utilities described at the beginning of this documentation. If you are using another OS, notably Mac OS, you will need to make some adjustments; see below.
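
For instance, to override the launcher for one particular calculation (assuming the mpicmd file takes the same csh-style 'set' line quoted above, and with a placeholder run directory):

cd /path/to/calculation                   # the run directory of the calculation
echo 'set mpirun = "aprun"' > mpicmd      # overrides the compiled-in default launcher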

In the folder where flags.mk resides, generate a build file with the configure.py utility from the source path.

../configure.py

Compiling the executables

The ninja build tool uses the generated build file to compile and link all executables as quickly as possible.

ninja

will build the target all, using all available cores for simultaneous compilation/linking processes. On a well-integrated system, autocompleting after 'ninja' should list all available targets.
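
Alternatively, ninja's built-in tool can list the targets defined in the generated build file:

ninja -t targets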

It is advisable to test the build. Each module has a related set of tests which can be invoked with the following command after replacing '<modulename>' with one of the modules: lm, fp, gf, pgf, sx, gwd, optics, tb, mol, dmft, gw or all.

ninja test-<modulename>

Just ninja test will start most of the tests, while ninja test-all will include even the heaviest tests. Since this may take a while, the progress can be monitored with ninja stat-tests. The tests can be run on a queuing system by setting the qcmd and qhrd variables in the flags.mk file; they are described there. If qcmd is not set, the test jobs will be load balanced internally and can be stopped with ninja stop-tests.
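
For example, to run only the full-potential module's tests and watch their progress from another terminal:

ninja test-fp       # run the fp test set
ninja stat-tests    # check progress (e.g. from a second terminal)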

Additional Notes


If compiler optimisations cause erroneous results, one may wish to set a lower level of optimisation for certain sensitive files only, or to have a number of different flags applying to different groups of files. If such is the case, the flags.mk file may be modified to contain the special flags for a group of files in a variable beginning with the prefix lessflags and ending in a word of one's choosing. Then a variable beginning with lessfiles and ending in the same word chosen for the lessflags variable must be defined, containing the questionable file names relative to the lm/ path. Many pairs of special-flags/sensitive-files groups are allowed, so long as the contents of the lessfiles variables do not overlap.

To obtain an example containing special flags try

genflags.py intel opt
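
Such a pair of variables might look something like the following (the flags and file names here are made up, shown only to illustrate the naming convention):

lessflagsfragile = -O0 -g
lessfilesfragile = subs/fragile_routine.f fp/another_fragile_file.f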

On Cray installations it is advisable to use Cray's compiler wrappers 'cc', 'ftn' and 'CC' with the Intel backend. Assuming the default backend is Cray,

module swap PrgEnv-cray PrgEnv-intel

will switch your environment to Intel. The wrappers bundle pretty much all standard system libs (BLAS, LAPACK, ScaLAPACK, FFTW3, MPI etc.) except libxc.

genflags.py cray opt

will give a reasonable starting point for such an endeavour.

The executables (and possibly other files in the future) can be installed in a path defined in a variable named prefix in the flags.mk file. A subsequent ninja install will copy only the binaries already built, without introducing any additional dependencies.
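
A sketch, assuming flags.mk uses make-style assignments and using a made-up install path:

prefix = $(HOME)/questaal/bin    # added to flags.mk; the path is only an example

ninja install                    # copies the binaries built so far into $prefix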

Shortcuts and workarounds for specific systems

Apple Mac OS X

An easy way to obtain all prerequisites is to set up Homebrew from https://brew.sh/ and then install the necessary packages with the following command:

brew install gcc git grep gnu-sed awk diffutils gzip ninja make hdf5 libxc scalapack openblas fftw

The default Homebrew HDF5 does not support MPI, so do not use MPI with the build unless you have a custom HDF5 with MPI support.

If you'd like to try Apple's vecLib framework or Intel's MKL, you may need the f2c-style complex function wrapper from https://github.com/mcg1969/vecLibFort. Edit the ldflags variable in flags.mk accordingly.
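
A possible sketch, assuming vecLibFort was built and installed under a hypothetical prefix and that ldflags carries the link line:

ldflags = -L$(HOME)/opt/veclibfort/lib -lvecLibFort    # link the wrapper ahead of, or instead of, the generic BLAS/LAPACK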

GNU/Linux

The use of high-performance linear algebra libraries is highly recommended. Intel distributes the MKL under a freeware-style license, and it includes decent FFTW3 and ScaLAPACK implementations in addition to the core BLAS/LAPACK; it is one of the best performing libraries overall in our experience.

The following instructions are likely to produce slow, underperforming binaries, but they can be handy for getting started quickly, carrying out some tutorials and running other small calculations.

PREPARATORY STEPS

OpenSUSE Leap 15.0

sudo zypper -n ar -f https://download.opensuse.org/repositories/science/openSUSE_Leap_15.0/science.repo
sudo zypper -n --gpg-auto-import-keys in openmpi3-devel fftw3-openmp-devel libxc-devel ninja gcc-fortran libscalapack2-openmpi3-devel hdf5-openmpi3-devel libopenblas_openmp-devel python make git tcsh bc Modules hwloc pkg-config
. /etc/profile.d/modules.sh
module load gnu-openmpi

Fedora 29

sudo dnf -y install openmpi-devel fftw-devel libxc-devel ninja-build gcc-gfortran scalapack-openmpi-devel hdf5-openmpi-devel openblas-devel python make git tcsh bc hwloc pkg-config
. /etc/profile.d/00-modulepath.sh
. /etc/profile.d/modules.sh
module load mpi/openmpi-x86_64

Ubuntu 18.04 LTS, 18.10 and 19.04

sudo apt-get update
sudo apt-get install -y libopenmpi-dev libscalapack-openmpi-dev libhdf5-openmpi-dev gfortran libfftw3-dev libxc-dev libopenblas-dev ninja-build python make git tcsh bc hwloc-nox pkg-config

Archlinux (~2019.03)

sudo pacman --noconfirm -Sy --needed gcc-fortran openmpi hdf5-openmpi openblas fftw ninja make python git tcsh bc hwloc file awk diffutils fakeroot which patch pkgconf

# Read carefully all files from the following clones before proceeding, they may not have been vetted and may contain exploits!
git clone https://aur.archlinux.org/libxc.git libxc-aur
pushd libxc-aur; makepkg -sif --noconfirm --needed; popd
git clone https://aur.archlinux.org/scalapack.git scalapack-aur
pushd scalapack-aur; makepkg -sif --noconfirm --needed; popd

The default GCC versions in Debian 9, CentOS 7 and Ubuntu 18.04 LTS are too old and lack necessary features. Installation is only possible with custom packages in this case.

DOWNLOAD AND BUILD STEPS

Common to all of the above systems:

git clone https://<username>@bitbucket.org/lmto/lm.git
mkdir lm/build
cd lm/build

../genflags.py gcc opt openmpi openblas > flags.mk
../configure.py

ninja

Some of the systems have rather outdated gfortran and may issue warnings about unrecognised flags. On some of the systems genflags.py may warn about not finding a common roof for libraries and modules or include files.



https://bitbucket.org/lmto/lm/src/master/README.md
