See below to download non-shared MDCS setup instructions.
Follow these steps to set up your personal (i.e. non-shared) MATLAB installation to use MDCS on the academic cluster (rush):
- Copy the contents of $MATLABROOT\toolbox\distcomp\examples\integration\pbs\nonshared to $MATLABROOT\toolbox\local
NOTE: You will need administrative privileges on your computer to install these files.
- Download and unzip the CCR customization files (attached below) to $MATLABROOT\toolbox\local
- Download the latest mbatchNonSharedSlurm.m file (attached below) to your working directory. Rename if desired and modify as needed for your application.
--- you should modify the following variables:
remoteDataLocation : where to store files at CCR
StorageLocation : where to store files locally
ppn : processors per node
time : requested walltime
email : who to e-mail when the job completes
pjob.NumWorkersRange : set the high value to the desired number of CPUs
AttachedFiles : list of files needed by your application
--- in the createTask() function
Set the 2nd argument to "@" followed by the name of your properly formatted MATLAB function, as seen in the image below. Match the 3rd argument to the number of outputs of your function and the 4th argument to its inputs.
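As a rough sketch, the portion of the script you edit might look like the following. All of the values, and the function name myAnalysis, are placeholders; the actual mbatchNonSharedSlurm.m template may organize these differently, so use it as the authoritative reference:

```matlab
% Hypothetical values -- adjust all of these for your environment.
remoteDataLocation = '/scratch/myuser/mdcs';   % where to store files at CCR
StorageLocation    = '/home/myuser/mdcs';      % where to store files locally
ppn   = 8;                                     % processors per node
time  = '01:00:00';                            % requested walltime
email = 'myuser@example.edu';                  % notified when the job completes

pjob.NumWorkersRange = [1 16];                 % high value = desired # of CPUs
pjob.AttachedFiles   = {'myAnalysis.m'};       % files needed by your application

% createTask(job, @function, number of outputs, cell array of inputs)
task = createTask(pjob, @myAnalysis, 1, {inputArg1, inputArg2});
```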
NOTE: Your code will not automatically run in parallel just because you are using MDCS! You'll need to make code changes to take advantage of the additional resources. For example, see pages 17-26 of CCR's MDCS workshop. See also the official Mathworks MDCS page.
In MATLAB, adjust the path settings as shown in this image:
Submit your MDCS job by typing "mbatchNonSharedSlurm" in the MATLAB command window.
MATLAB will retrieve results from "remoteDataLocation" when the job is complete and copy them to "StorageLocation".
See the CCR SLURM guidelines for instructions on monitoring the progress of the job.
NOTE: $MATLABROOT is the root of your MATLAB installation. It can be determined by typing "matlabroot" in the MATLAB command window.
NOTE: The SLURM sbatch command does not accept scripts that have DOS end-of-line (CR+LF) delimiters. If you get an error related to this restriction, you'll likely need to run "dos2unix" on the slurmParallelJobWrapper.sh file.
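If dos2unix is not installed, stripping the carriage returns with tr works as well. A minimal sketch (the first line only fabricates an example file with DOS line endings for demonstration):

```shell
# Create an example script with DOS (CR+LF) line endings for demonstration.
printf '#!/bin/bash\r\necho hello\r\n' > slurmParallelJobWrapper.sh

# dos2unix slurmParallelJobWrapper.sh   # preferred, if installed
tr -d '\r' < slurmParallelJobWrapper.sh > tmp && mv tmp slurmParallelJobWrapper.sh

# Verify that no carriage returns remain.
if grep -q $'\r' slurmParallelJobWrapper.sh; then
    echo "still has CRs"
else
    echo "clean"
fi
```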
NOTE: Be sure the directories specified by "remoteDataLocation" and "StorageLocation" exist before submitting your job.
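Both directories can be created up front. A sketch using example paths (replace them with your own; the remote directory must be created on rush, for example over ssh):

```shell
# Local results directory ("StorageLocation") -- example path.
mkdir -p "$HOME/mdcs_results"

# Remote job directory ("remoteDataLocation") must exist on the cluster, e.g.:
# ssh rush 'mkdir -p /scratch/$USER/mdcs'

test -d "$HOME/mdcs_results" && echo "local directory ready"
```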