Anvil setup: submitted this way the job runs in parallel with no loss of speed; just use srun.
#!/bin/bash
# Copy/paste this job script into a text file and submit with the command:
# sbatch thefilename
# job standard output will go to the file slurm-%j.out (where %j is the job ID)
#SBATCH -A mch220024
#SBATCH -p wholenode # the default queue is "wholenode" queue
#SBATCH --time=24:00:00 # walltime limit (HH:MM:SS)
#SBATCH --nodes=1 # number of nodes; set nodes and cpus-per-task (below) consistently to avoid communication problems
#SBATCH --cpus-per-task=2
#SBATCH --ntasks=64 # fine as long as the job stays within one node (128 cores); e.g. n=128 with nodes=4 for larger runs
#SBATCH --job-name="lmp"
#SBATCH -o out%j # Name of stdout output file
#SBATCH -e err%j # Name of stderr error file
#SBATCH --mail-user=kluo@iastate.edu
#SBATCH --mail-type=all # Send email to above address at begin and end of job
# Set environment variables
export OMP_NUM_THREADS=1
export TF_INTRA_OP_PARALLELISM_THREADS=1
export TF_INTER_OP_PARALLELISM_THREADS=1
# With only 1008 atoms, this is currently the most efficient setting; re-benchmark for each new job, and increase ntasks when the atom count is large.
source ~/.bashrc
source /anvil/projects/x-phy220096/kluo/Mg/dis18/lmp2210.sh
# Execute the job with srun
srun --mpi=pmi2 -n $SLURM_NTASKS lmp -i eqNVT.in
#mpirun -np 256 lmp < in.lammps
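The key constraint above is that ntasks × cpus-per-task must fit within one Anvil wholenode (128 cores), since the note says communication problems appear once the layout spills past a node. A minimal sketch of that sanity check, assuming the 64/2 values from this script (the variable names here are illustrative, not Slurm-provided):

```shell
#!/bin/bash
# Sketch: check that the MPI task layout fits a single Anvil wholenode.
# CORES_PER_NODE, NTASKS, CPUS_PER_TASK are hypothetical local variables
# mirroring the #SBATCH values in the script above.
CORES_PER_NODE=128
NTASKS=64
CPUS_PER_TASK=2

TOTAL=$((NTASKS * CPUS_PER_TASK))
if [ "$TOTAL" -le "$CORES_PER_NODE" ]; then
    echo "OK: $TOTAL cores fit on one node"
else
    echo "WARNING: $TOTAL cores exceed one node ($CORES_PER_NODE)"
fi
```

Running a check like this before submission makes the "re-benchmark per job" advice cheaper: change NTASKS, confirm the layout still fits one node, then submit.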