= Task 8 - Molecular Dynamics Simulations =
== Intro ==
In this section we will simulate the wildtype protein and two interesting mutants with molecular dynamics (MD), using the GROMACS package. For this we will use an automatic pipeline. As the final simulations will take a while, we will post the analysis part at a later point. The pipeline is available as a git repository. All the work now needs to be done on the LRZ.
The slides for this task: [[File:MD talk.pdf]]
== LRZ ==
=== Prepare Environment ===
* Log in to the LRZ: <code>ssh -XY username@lx64ia2.lrz.de</code> or <code>ssh -XY username@lx64ia3.lrz.de</code>
* In order to use git you have to load the corresponding software module first: http://www.lrz.de/services/compute/supermuc/software/
* Go to a designated directory and clone the repository from https://github.com/offmarc/AGroS
* Add the directory containing the scripts to the PATH environment variable
* Get a license for SCWRL4 and install it into the same directory as the scripts: http://dunbrack.fccc.edu/scwrl4/
* Finally, copy the wildtype and the two mutant structures to the LRZ via scp (see the command sketch after this list)
* '''IMPORTANT:''' Before you continue you should have a look at the scripts and check what they do!
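A minimal sketch of these preparation steps, assuming the pipeline lives in <code>~/test</code> (as in the job script below); the git module name and the mutant file names are assumptions and may differ in your setup:

<pre>
# log in to one of the LRZ login nodes (with X forwarding)
ssh -XY username@lx64ia2.lrz.de

# load git and clone the pipeline (module name is an assumption)
module load git
mkdir -p ~/test && cd ~/test
git clone https://github.com/offmarc/AGroS

# make the pipeline scripts visible to the shell
export PATH="$HOME/test/AGroS:$PATH"

# from your local machine: copy the three structures to the LRZ
# (mutant1.pdb and mutant2.pdb are placeholder names)
scp 1whz_new.pdb mutant1.pdb mutant2.pdb username@lx64ia2.lrz.de:~/test/
</pre>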
=== Prepare Job Scripts ===
General information about preparing job scripts can be found at http://www.lrz.de/services/compute/linux-cluster/batch_parallel/
Submission can only be done from lxia4-1 and lxia4-2.
For each of the three structures you will have to create a separate job script.
Here is an example that, together with the information on the LRZ page linked above, should give you an idea of how to do it.
<pre>
#!/bin/bash
# stdout/stderr of the job and its working directory
#SBATCH -o /home/hpc/pr32fi/lu32xul/test/info.out
#SBATCH -D /home/hpc/pr32fi/lu32xul/test/
# job name
#SBATCH -J 1whz_MD
# test queue; the production runs use --clusters=mpp1 instead
#SBATCH --partition=mpp1_inter
#SBATCH --get-user-env
# number of cores
#SBATCH --ntasks=32
# send a mail when the job has finished
#SBATCH --mail-type=end
#SBATCH --mail-user=offman@lrz.de
# do not export the environment of the submitting shell
#SBATCH --export=NONE
# wall-clock limit of the test run
#SBATCH --time=02:00:00

source /etc/profile.d/modules.sh
module load gromacs
# make the pipeline scripts and SCWRL4 available
export PATH="$HOME/test/AGroS:$PATH"
export PATH="$HOME/apps/bin/:$PATH"

AGroS 1whz_new.pdb -dir /home/hpc/pr32fi/lu32xul/test -threads 32
</pre>
In this script we do not use the standard cluster (<code>--clusters=mpp1</code>) but a test queue, to get a quicker answer as to whether the simulation works at all.
=== Submit Job ===
Submission is done with the command <code>sbatch job.script</code>.
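On success, <code>sbatch</code> prints the ID of the newly queued job, roughly like this (the job ID here is made up):

<pre>
$ sbatch job.script
Submitted batch job 123456
</pre>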
If the test simulation fails due to a GROMACS problem, try using only 16 cores, and change this both in the job script and in the command-line call of AGroS.
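Concretely, two lines of the script above would change; a sketch of the 16-core variant:

<pre>
#SBATCH --ntasks=16
...
AGroS 1whz_new.pdb -dir /home/hpc/pr32fi/lu32xul/test -threads 16
</pre>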
In the real script you choose the standard cluster, and instead of the 2-hour limit you set something like 16-32 hours, depending on the size of your protein.
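A sketch of how the header of a production script might differ from the test script, replacing the test-queue line; the exact wall-clock limit within the 16-32 hour range is your choice:

<pre>
# production run on the standard cluster instead of the test queue
#SBATCH --clusters=mpp1
# longer wall-clock limit for the full simulation
#SBATCH --time=32:00:00
</pre>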
=== Waiting ===
The state of the job, and whether it really sits in the queue, can be checked with the command <code>squeue -u <username> <queue></code>, where the queue can either be <code>--clusters=mpp1</code> or <code>--partition=mpp1_inter</code>.
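For example, with the username from the script above:

<pre>
# job submitted to the test queue
squeue --partition=mpp1_inter -u lu32xul
# job submitted to the standard cluster
squeue --clusters=mpp1 -u lu32xul
</pre>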
Once all of this works you have to wait; in the meantime, write a bit about the different steps of the simulation. We also want you to look at the intermediate PDB files created in the workflow, visualize them, and explain what is special or different about them and why we need them.
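A minimal sketch of how you could fetch and inspect those intermediate files; the PDB file names here are placeholders, the real names depend on what the pipeline writes into the working directory:

<pre>
# from your local machine: fetch the intermediate PDB files
scp "username@lx64ia2.lrz.de:test/*.pdb" .

# open them together in a molecular viewer, e.g. PyMOL
pymol 1whz_new.pdb minimized.pdb solvated.pdb
</pre>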