The HPC Challenge benchmark, or HPCC, expands on HPL by adding six extra benchmarks to provide a more complete, yet still fairly synthetic, picture of a cluster's performance.
I managed to build HPCC without issues using the Atlas library once I read the instructions, but I struggled significantly with MKL until someone pointed me to this tool, which generates the appropriate link lines and compiler flags.
I shall concentrate on installing HPCC with MKL, as the results from running various HPL tests (see this post) showed significantly better performance than with the Atlas library. MKL is not free, but it's probably worth its price given the performance boost in HPL. We shall find out whether the same is true for HPCC.
- Install the Intel MKL library, which can be downloaded from here. Installation is fairly straightforward: unpack the tarball, run install.sh and follow the instructions.
- Download HPCC from here and extract it to ../hpcc-1.4.1
- Create a Make.intel file in ../hpcc-1.4.1/hpl. See the relevant content below:
- From ../hpcc-1.4.1 run make arch=intel to build HPCC (a consolidated command sketch follows the Make.intel listing below).
# ----------------------------------------------------------------------
# - HPL Directory Structure / HPL library ------------------------------
# ----------------------------------------------------------------------
#
TOPdir = ../../..
INCdir = $(TOPdir)/include
BINdir = $(TOPdir)/bin/$(ARCH)
LIBdir = $(TOPdir)/lib/$(ARCH)
#
HPLlib = $(LIBdir)/libhpl.a
#
# ----------------------------------------------------------------------
# - Message Passing library (MPI) --------------------------------------
# ----------------------------------------------------------------------
#
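# These entries are left commented out because the mpicc wrapper used below
# already supplies the MPI include and library paths.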
#MPdir = /usr/bin/mpi
#MPinc = -I$(MPdir)/include
#MPlib = /usr/lib64/mpich2/lib/libmpich.a
#
# ----------------------------------------------------------------------
# - Linear Algebra library (BLAS or VSIPL) -----------------------------
# ----------------------------------------------------------------------
#
LAdir = /opt/intel/mkl/lib/intel64
LAinc = /opt/intel/mkl/include
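# The static MKL libraries are wrapped in --start-group/--end-group so the
# linker can resolve their circular dependencies in a single pass.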
LAlib = -Wl,--start-group $(LAdir)/libmkl_cdft_core.a $(LAdir)/libmkl_intel_lp64.a $(LAdir)/libmkl_sequential.a $(LAdir)/libmkl_core.a $(LAdir)/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -lpthread -lm
# ----------------------------------------------------------------------
# - F77 / C interface --------------------------------------------------
# ----------------------------------------------------------------------
#
F2CDEFS =
#
# ----------------------------------------------------------------------
# - HPL includes / libraries / specifics -------------------------------
# ----------------------------------------------------------------------
#
HPL_INCLUDES = -I$(INCdir) -I$(INCdir)/$(ARCH) -I$(LAinc)
HPL_LIBS = $(HPLlib) $(LAlib) $(MPlib)
#
# - Compile time options -----------------------------------------------
#
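# HPL_CALL_CBLAS makes HPL call the C (CBLAS) interface of the BLAS, which
# MKL provides.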
HPL_OPTS = -DHPL_CALL_CBLAS
#
# ----------------------------------------------------------------------
#
HPL_DEFS = $(F2CDEFS) $(HPL_OPTS) $(HPL_INCLUDES)
#
# ----------------------------------------------------------------------
# - Compilers / linkers - Optimization flags ---------------------------
# ----------------------------------------------------------------------
#
CC = /usr/bin/mpicc
CCNOOPT = $(HPL_DEFS)
CCFLAGS = $(HPL_DEFS) -fomit-frame-pointer -O3 -funroll-loops -DMKL_ILP64 -m64
#
# On some platforms, it is necessary to use the Fortran linker to find
# the Fortran internals used in the BLAS library.
#
LINKER = /usr/bin/mpicc
LINKFLAGS = $(CCFLAGS)
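For reference, the whole procedure boils down to something like the following shell session. This is only a sketch: the tarball names, version placeholders and the need (or not) for root privileges are assumptions and will differ depending on the MKL and HPCC versions you download.

tar xzvf l_mkl_<version>.tgz            # MKL installer tarball; exact name is an assumption
cd l_mkl_<version> && ./install.sh      # follow the interactive prompts (root may be needed for /opt/intel)
cd ..
tar xzvf hpcc-1.4.1.tar.gz              # HPCC source tarball
$EDITOR hpcc-1.4.1/hpl/Make.intel       # paste in the Make.intel listed above
cd hpcc-1.4.1
make arch=intel                         # leaves the hpcc binary in this top-level directory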
Before running, rename the sample input file found in the top-level directory so hpcc will pick it up:
mv _hpccinf.txt hpccinf.txt
You can now run hpcc with:
mpiexec.hydra -n 4 ./hpcc
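The run is driven by hpccinf.txt, which has the same layout as HPL's HPL.dat, and the sample values are deliberately tiny. The fragment below shows the lines most worth editing; the numbers are illustrative assumptions only, not recommendations, and P x Q must match the process count given to mpiexec (2 x 2 = 4 above).

1            # of problems sizes (N)
10000        Ns
1            # of NBs
192          NBs
0            PMAP process mapping (0=Row-,1=Column-major)
1            # of process grids (P x Q)
2            Ps
2            Qs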
Results can be found here.