The documentation for Slurm commands can be found on the official Slurm website; of special note is the documentation for the sbatch command. Below is a reference table comparing Slurm to our previously used SGE scheduler.

| User Commands         | SGE                     | Slurm                           |
|-----------------------|-------------------------|---------------------------------|
| Job submission        | qsub [script_file]      | sbatch [script_file]            |
| Job deletion          | qdel [job_id]           | scancel [job_id]                |
| Job status (by job)   | qstat -u \* [-j job_id] | squeue -j [job_id]              |
| Job status (by user)  | qstat [-u user_name]    | squeue -u [user_name]           |
| Queue list            | qconf -sql              | scontrol show partition         |
| Node list             | qhost                   | sinfo -N OR scontrol show nodes |
| Cluster status        | qhost -q                | sinfo                           |
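For day-to-day use, the command table above translates directly at the prompt. A short illustrative session follows; the job ID 12345 and the script name job.sh are placeholders:

```shell
# Submit a batch script (SGE: qsub job.sh)
sbatch job.sh

# Check your own jobs (SGE: qstat -u $USER)
squeue -u "$USER"

# Cancel a job by ID (SGE: qdel 12345)
scancel 12345

# Show all nodes and their state (SGE: qhost)
sinfo -N
```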
| Environment           | SGE                           | Slurm                                            |
|-----------------------|-------------------------------|--------------------------------------------------|
| Script directive      | #$                            | #SBATCH                                          |
| Queue                 | -q [queue]                    | -p [partition]                                   |
| Node Count            | N/A                           | -N [min[-max]]                                   |
| CPU Count             | -pe [PE] [count]              | -n [count]                                       |
| Wall Clock Limit      | -l h_rt=[seconds]             | -t [min] OR -t [days-hh:mm:ss]                   |
| Standard Output File  | -o [file_name]                | -o [file_name] OR --output=[file_name]           |
| Standard Error File   | -e [file_name]                | -e [file_name] OR --error=[file_name]            |
| Combine stdout/stderr | -j yes                        | (use -o without -e)                              |
| Copy Environment      | -V                            | --export=[ALL \| NONE \| variables]              |
| Event Notification    | -m abe                        | --mail-type=[events]                             |
| Email Address         | -M [address]                  | --mail-user=[address]                            |
| Job Name              | -N [name]                     | --job-name=[name]                                |
| Job Restart           | -r [yes\|no]                  | --requeue OR --no-requeue                        |
| Working Directory     | -wd [directory]               | -D [directory] OR --chdir=[directory]            |
| Resource Sharing      | -l exclusive                  | --exclusive OR --shared                          |
| Memory Size           | -l mem_free=[memory][K\|M\|G] | --mem=[mem][M\|G\|T] OR --mem-per-cpu=[mem][M\|G\|T] |
| Account to charge     | -A [account]                  | --account=[account]                              |
| Tasks Per Node        | (Fixed allocation_rule in PE) | --ntasks-per-node=[count]                        |
| CPUs Per Task         | N/A                           | --cpus-per-task=[count]                          |
| Job Dependency        | -hold_jid [job_id \| job_name] | --dependency=[state:job_id]                     |
| Job Project           | -P [name]                     | --wckey=[name]                                   |
| Quality Of Service    | N/A                           | --qos=[name]                                     |
| Job Arrays            | -t [array_spec]               | --array=[array_spec]                             |
| Generic Resources     | -l [resource]=[value]         | --gres=[resource_spec]                           |
| Licenses              | -l [license]=[count]          | --licenses=[license_spec]                        |
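Putting the directive table together, an SGE batch script can usually be converted line by line. The sketch below assumes a hypothetical four-slot, one-hour SGE job; the partition name short, the job name myjob, and the email address are placeholders to adapt to your cluster:

```shell
#!/bin/bash
# SGE original (for comparison):
#   #$ -N myjob
#   #$ -q short
#   #$ -pe smp 4
#   #$ -l h_rt=3600
# Slurm equivalent:
#SBATCH --job-name=myjob
#SBATCH -p short               # SGE queue -> Slurm partition
#SBATCH -n 4                   # PE slot count -> task count
#SBATCH -t 01:00:00            # h_rt=3600 seconds -> 1 hour
#SBATCH --output=myjob-%j.out  # %j expands to the Slurm job ID
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=user@example.com

# Launch the job step across the allocated tasks
srun hostname
```

Submit it with `sbatch script.sh`; unlike SGE, Slurm directives must appear before the first executable line of the script.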