TUM LRZ Compute Cluster Commands and SPARTA Compilation

Linux Notes

Posted by 陈陈 on December 9, 2020

Login

ssh -Y lxlogin2.lrz.de -l ge54cel2

The nodes lxlogin2 to lxlogin8 may all work, so just try one of them. A VPN is not needed to connect from outside the campus network.
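
If the login command is needed often, an entry in the local ~/.ssh/config can shorten it. A minimal sketch; the host alias lrz is just an arbitrary name chosen here, not anything defined by LRZ:

    # ~/.ssh/config on the local machine; the alias "lrz" is an arbitrary choice
    Host lrz
        HostName lxlogin2.lrz.de
        User ge54cel2
        # roughly the same effect as the -Y flag (trusted X11 forwarding)
        ForwardX11 yes
        ForwardX11Trusted yes

Afterwards ssh lrz is enough to log in.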

Introduction to the cluster

https://doku.lrz.de/display/PUBLIC/Linux+Cluster

Check the status of the cluster

https://doku.lrz.de/display/PUBLIC/Linux+Cluster+node+usage
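
Besides the web page, the node states can also be queried directly from a login node with the standard Slurm tools; a small sketch, using the cluster name from the batch script below:

    # partition and node states for one cluster (adjust the cluster name as needed)
    sinfo --clusters=tum_aer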

Upload or download files

  1. Send data from LRZ to the local computer

    scp ge54cel2@lxlogin2.lrz.de:/dss/dsshome1/lxc0E/ge54cel2/testcase/circle2D_M10vib/tmp_grid1000.plt ~/incoming/circle2D_M10vib

  2. Send data from the local computer to LRZ

    scp -r sparta/ ge54cel2@lxlogin7.lrz.de:/dss/dsshome1/lxc0E/ge54cel2
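
For large or repeated transfers, rsync is a possible alternative to scp, since it can resume interrupted copies and only sends changed files. A sketch reusing the paths from the two scp commands above:

    # download a whole case directory from LRZ, keeping partially transferred files
    rsync -avz --partial ge54cel2@lxlogin2.lrz.de:/dss/dsshome1/lxc0E/ge54cel2/testcase/circle2D_M10vib/ ~/incoming/circle2D_M10vib/

    # upload the local sparta/ directory to the LRZ home directory
    rsync -avz sparta/ ge54cel2@lxlogin7.lrz.de:/dss/dsshome1/lxc0E/ge54cel2/sparta/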

Compile SPARTA

  1. Check the modules

    module list
    module show mpi.intel/2019

  2. Find the MPI library paths

    echo $MPI_LIB

  3. Set the paths in Makefile.foo

    MPI_INC = -I/lrz/sys/intel/impi2019u6/impi/2019.6.154/intel64/include
    MPI_PATH = -L/lrz/sys/intel/impi2019u6/impi/2019.6.154/intel64/lib/
    MPI_LIB = /lrz/sys/intel/impi2019u6/impi/2019.6.154/intel64/lib/libmpicxx.so.12

  4. Compile the code

    make foo
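
Put together, a build session on a login node could look like the following sketch. It assumes the usual SPARTA layout, where the machine makefile Makefile.foo sits in sparta/src/MAKE/ and the build produces the executable spa_foo used in the batch script below:

    # load the MPI module whose paths are referenced in Makefile.foo
    module load mpi.intel/2019

    # go to the SPARTA source directory (machine makefiles sit in MAKE/)
    cd /dss/dsshome1/lxc0E/ge54cel2/sparta/src

    # build spa_foo from MAKE/Makefile.foo
    make foo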

The batch script clusterscript.md

#!/bin/bash
#SBATCH -o ./myjob_output.%j.%N.out
#SBATCH -D ./
#SBATCH -J test
#SBATCH --get-user-env
#SBATCH --clusters=tum_aer
#SBATCH --partition=tum_aer_batch
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
# --- multiples of 28 for mpp2 ---
#SBATCH --mail-type=end,begin
#SBATCH --mail-user=song.chen@tum.de
#SBATCH --export=NONE
#SBATCH --time=04:00:00
#
module load slurm_setup
#
#
#
#module load gcc/7
#
#cd $SCRATCH/mydata
#
cd /dss/dsshome1/lxc0E/ge54cel2/sparta/examples/circle_test2
mpirun -n $SLURM_NTASKS ./spa_foo < inputfile.dat
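
The comment about multiples of 28 refers to the 28 cores per node on mpp2, so the run scales by changing the node count only; Slurm sets $SLURM_NTASKS from nodes times ntasks-per-node. A sketch of the two header lines that would differ for a 2-node run:

    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=28
    # mpirun -n $SLURM_NTASKS then starts 2 * 28 = 56 MPI ranks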

Frequently used commands

  1. Submit the job

    sbatch clusterscript.md

  2. Check the queue

    squeue --clusters=mpp2

  3. Cancel a job

    scancel --clusters=mpp2 194595
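
Two further commands that can be useful; --user and sacct are standard Slurm, and the cluster name passed to --clusters should match the one used at submission (tum_aer in the script above):

    # list only your own jobs on the cluster
    squeue --clusters=tum_aer --user=$USER

    # accounting summary of a job, e.g. the job id 194595 used above
    sacct --clusters=tum_aer -j 194595 --format=JobID,State,Elapsed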