The MATLAB Parallel Computing Toolbox (PCT) lets you solve computationally and data-intensive problems using multicore processors, GPUs, and computer clusters.
The following is a brief tutorial; more details are given in the PDF attachment to this page.
An additional PCT example is available on the vortex front-end under /util/academic/matlab/example.
In that directory, the files MyMatlabScript.m and slurmMATLAB contain a simple example of running PCT on the CCR cluster.
- A "dart-throwing" Monte Carlo calculation of pi, piMCserial.m (serial code):
function [time]=piMCserial(N)
% Monte Carlo estimate of pi: throw N random darts at the unit square
% and count how many land inside the quarter circle.
tic
n=0;
for j=1:N
    x=rand;
    y=rand;
    if (x^2+y^2)<=1, n=n+1; end
end
mypi=4*n/N
error=abs(pi-mypi)
time=toc;
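For example, the serial version can be run at the MATLAB prompt with a sample size of your choosing (the value 1e7 below is only illustrative):

t = piMCserial(1e7)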
- A parallelized version, piMCparallel.m, is:
function [time]=piMCparallel(N)
tic
n=0;
% each worker handles its own contiguous block of the N trials
% (assumes N is divisible by numlabs)
for j=(labindex-1)*(N/numlabs)+1:labindex*(N/numlabs)
    x=rand;
    y=rand;
    if (x^2+y^2)<=1, n=n+1; end
end
mypi=4*gplus(n)/N;    % global sum across cores
error=abs(pi-mypi)
time=gop(@max, toc);  % global max across cores
filename=strcat('stuff',int2str(labindex),'.mat');
save(filename);
Note: The same code is executed at the same time on every worker; labindex identifies the worker. Each worker has its own workspace. The code above is written so that one mat file is stored per worker: stuff1.mat, stuff2.mat, etc. The per-worker counts are summed globally with the gplus parallel command to form the final answer. The total number of workers (numlabs) is determined by the matlabpool.
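As a small illustration of the reduction commands used above (the pool size of 4 is arbitrary), each worker contributes its own value and every worker receives the reduced result:

matlabpool 4
spmd
    mine = labindex;            % 1, 2, 3, 4 on the four workers
    total = gplus(mine);        % global sum: 10 on every worker
    biggest = gop(@max, mine);  % global max: 4 on every worker
end
matlabpool close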
- To initialize a matlabpool across 12 workers, write the following wrapper "runjob.m" for the above code:
%% runjob.m
N = 1e7;        % problem size (example value; set as needed)
matlabpool 12
spmd
    [time]=piMCparallel(N)
end
matlabpool close
- Write the following SLURM script "slurm-MATLAB-pi" to submit runjob.m, which calls the code above. Make sure that all files and subroutines reside in the same directory. In the script, note the use of "export HOME" to reassign the HOME directory before running MATLAB. This can boost MATLAB performance and avoid file-access issues when running multiple simultaneous PCT jobs.
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --constraint=CPU-E5645
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
#SBATCH --ntasks-per-node=12
#SBATCH --mail-user=username@buffalo.edu
#SBATCH --mail-type=END
#SBATCH --job-name=MatlabPi
#SBATCH --output=matlab.out
#SBATCH --error=matlab.err

cd $SLURM_SUBMIT_DIR
echo "working directory = "$SLURM_SUBMIT_DIR

module load matlab
module list
ulimit -s unlimited
#
# adjusting the home directory reportedly yields a 10x MATLAB speedup;
# it also avoids potential problems with MATLAB when running multiple
# simultaneous PCT jobs
export HOME=$SLURMTMPDIR

matlab < runjob.m
#
echo "All Done!"
- Submit the job:
[user@vortex:~]$ sbatch slurm-MATLAB-pi
- After the job finishes, collect your .mat files.
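A minimal sketch for gathering the per-worker results (it assumes the 12 workers and the stuff*.mat file names used above; each file already contains the globally reduced mypi and time):

nWorkers = 12;   % matches the matlabpool size used above
for w = 1:nWorkers
    S = load(strcat('stuff', int2str(w), '.mat'));
    fprintf('worker %d: mypi = %f, time = %f s\n', w, S.mypi, S.time);
end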
- Note: The above method works as long as all workers are on a single node. To use workers spanning multiple nodes, submit your jobs through the MATLAB Distributed Computing Server (MDCS).
The files below are from a workshop held on 09/25/2014. The workshop focused on using the MATLAB Parallel Computing Toolbox (PCT) in the SLURM environment.