
OIIT Research Computing Services

The Office of Innovation and Information Technology (OIIT) Research Computing Services offers extensive assistance in the areas of high-performance computing, data analysis, and scientific simulations. Our team is responsible for maintaining the infrastructure and software applications of the HPC clusters.

 

High Performance Computing (HPC) refers to the use of advanced computing techniques and technologies to solve complex computational problems or perform large-scale simulations at significantly higher speeds and with greater efficiency than conventional computing methods.

Selachii - Academic Cluster – Provides computational resources for faculty and students to enhance the academic curriculum.

Sphyrna - Research Cluster – Provides researchers, scholars, and their collaborators access to computational resources for projects involving large datasets within a secure and controlled environment.

Sphyrna cluster

Welcome to the Sphyrna Research Cluster! This guide is designed to help members of the NSU research community access and utilize the computational resources available. Whether you are new to high-performance computing or looking to refresh your knowledge, this guide will provide the essential information you need to get started.

 

Obtaining a NSU HPC account

In order to have an account created on the HPC, please submit a ServiceNow request. NSU Faculty and their research group members can have access to the research cluster.

 

Cluster Access:

The HPC is located behind the protection of the NSU campus firewall. If you are not connecting from a secure campus network, you will need to be connected to the NSU network via the Shark VPN to access the HPC. For more details, see https://sharkvpn.nova.edu.

Access to the cluster is via a command-line interface; this requires a UNIX-like terminal (e.g., Terminal app on macOS) or a secure shell client (e.g., PuTTY on Windows).

To log in to researchhpc.nova.edu from a UNIX-like terminal, use the following command, replacing “username” with your NSU username:  ssh username@researchhpc.nova.edu 
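If you connect often, an optional entry in your local ~/.ssh/config file shortens the login command. A sketch (the host alias "sphyrna" is an arbitrary choice, not a name used by the cluster):

```
# ~/.ssh/config — optional shortcut for the login command above
Host sphyrna
    HostName researchhpc.nova.edu
    User username    # replace with your NSU username
```

With this entry in place, `ssh sphyrna` is equivalent to `ssh username@researchhpc.nova.edu`.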

 

File Transfer:

Use a secure copy client (e.g., WinSCP on Windows) or the secure copy (scp) command in a UNIX-like terminal.

To copy a file from your local disk to your home directory on the cluster, use the following command:

scp path_to_file username@researchhpc.nova.edu:~
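The same command handles whole directories and transfers in the other direction; a sketch (the names "mydata" and "results.txt" are hypothetical, and "username" is your NSU username):

```shell
# Copy an entire local directory to your home directory on the cluster
scp -r mydata username@researchhpc.nova.edu:~

# Copy a file from your home directory on the cluster back to the
# current directory on your local machine
scp username@researchhpc.nova.edu:~/results.txt .
```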

Home Directory:

Your home directory on the cluster exists in a logical volume on a parallel file system. This directory is mounted on the login node and every compute node. Any file you copy using the above command will be visible to every computer in the cluster.

Alternative File Transfer Methods:

Use SFTP via command line or GUI-driven client software such as WinSCP.

Note: FTP access is disabled on the HPC for security reasons.

Storage Allocation:

Users are allocated up to 100 GB of storage within their home directories. For temporary storage needs and jobs requiring extensive input and output operations, scratch space is available.

Additional archive storage is available for short-term retention of large data sets.
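To see how much of the 100 GB home-directory allocation you are using, standard Linux tools suffice:

```shell
# Free space on the filesystem that holds your home directory
df -h "$HOME"

# Total size of your home directory (may take a while on large directories)
du -sh "$HOME"
```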

Loading Software as Modules

Pre-installed Software:

Many open-source software applications are pre-installed on the HPC and made available as environment modules, which allow users to readily load and use these tools. Modules grant access to various software packages, libraries, and environments; any software beyond the basic operating system is provided through the module utility.
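A typical module session looks like the following (the module name "python" is an assumption for illustration — substitute a name reported by `module avail` on the cluster):

```shell
# List all modules available on the cluster
module avail

# Load a module into your environment
module load python

# Show which modules are currently loaded
module list

# Unload all loaded modules when finished
module purge
```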

 

Academic HPC (Selachii) Cluster Login Instructions:

Obtaining a NSU HPC account

In order to have an account created on the HPC, please submit a ServiceNow request. NSU Faculty and students can have access to the academic cluster.

  1. VPN access: go to https://sharkvpn.nova.edu and follow the prompts to download the VPN client, then log in with SSO.
  2. Access to the NSU HPC cluster is through a login node. Login nodes are accessed using SSH (such as OpenSSH on Mac/Linux or PuTTY on Windows) to connect to academichpc.nova.edu.

macOS: Open Terminal by opening the Applications folder, then the Utilities folder, and then clicking Terminal.app.

PuTTY: In the Host Name field, enter academichpc.nova.edu and click Open.

 

  3. Upon first login, you may be required to change your password; follow the system prompts to set a new, secure password.
  4. Access to the Academic HPC Linux cluster is via the Secure Shell protocol (e.g., PuTTY) to the respective login nodes, and access to compute nodes must be via the job scheduler (Slurm). Direct access to compute nodes is not permitted.
  5. The purpose of a login node is to provide access to the cluster via SSH and to prepare for running a program (e.g., editing files, compiling, and submitting jobs).
  6. Any important data/files should be copied to your local computer. OIIT HPC is not responsible for backing up any user data at this time.

OIIT Research Computing Services does not purchase licensed software, except for basic software commonly used by most researchers. Individual research groups are responsible for buying their own software, which OIIT Research Computing Services will then install and maintain on the cluster. We will ensure that access to the software is restricted to members of the licensed user group.

  1. SAS and MATLAB are typical examples of licensed software. To gain access to the MATLAB Parallel Server installed on the cluster, research groups must demonstrate that they have purchased the necessary workstation license.
  2. Departments or labs can collectively purchase shared licensed software by contributing to the license fee. Our team will offer guidance on the necessary license requirements for installation on the HPC cluster.
  3. To request the installation of new software or upgrades to existing applications, please submit a request. Our team will collaborate with you to identify the software requirements and the most effective deployment method.

Selachii Academic Cluster:  Research Computing accounts on the academic cluster are available for faculty and students. Faculty members can sponsor students, postdoctoral fellows, or colleagues with whom they are collaborating on research projects or teaching a class.

Sphyrna Research Cluster: NSU Faculty and their research group members can request accounts to access the research cluster. Please note that student access is not permitted for this cluster.

Authorized Activities

Research Computing resources are for official NSU research only and are not to be used for personal activities of any kind.

Account Deactivation and Termination

Research computing accounts are subject to the NSU account removal policy.

Research computing accounts on the Sphyrna research cluster will be deactivated after 180 days of inactivity. Accounts will be removed from the clusters after one year of inactivity, and all associated data will be deleted.

Accounts created specifically for use in a class on the academic cluster will expire when that class section concludes and will be removed shortly thereafter, including the deletion of all associated data.

Account Sharing

As mandated by the NSU Acceptable Use Policy, sharing your account and access information with another user is not permitted.

 

                            Slurm Command Reference

Command    Purpose                                            Example
sinfo      View information about Slurm nodes and partitions  sinfo --partition investor
squeue     View information about jobs                        squeue -u myname
sbatch     Submit a batch script to Slurm                     sbatch myjob
scancel    Signal or cancel jobs, job arrays, or job steps    scancel jobID
srun       Run an interactive job                             srun --ntasks 4 --partition investor --pty bash
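These commands fit together in a typical batch workflow; a sketch (the script name "myjob" and the username "myname" are placeholders from the examples above):

```shell
# Submit the batch script; Slurm replies "Submitted batch job <jobID>"
sbatch myjob

# Check the job's state (PD = pending, R = running)
squeue -u myname

# Cancel the job if needed, using the ID reported by sbatch
scancel jobID
```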

Common options to use in your sbatch submission scripts:

Option                      Purpose
#SBATCH --qos               Request access to the resources available to your group
#SBATCH --account           Charge resources used by this job to the specified account
#SBATCH --partition         Place the job in the group of servers appropriate for your request
#SBATCH --nodes             Number of nodes to be allocated to this job
#SBATCH --ntasks            Number of tasks for this job (default is 1 core per task)
#SBATCH --ntasks-per-node   Number of tasks to invoke on each node
#SBATCH --cpus-per-task     Number of cores for each task (default is 1)
#SBATCH --mem               Total memory requested for this job, in MB
#SBATCH --mem-per-cpu       Memory required per allocated core, in MB
#SBATCH --job-name          Name for the job allocation, shown when querying running jobs
#SBATCH --output            File for the batch script's standard output; the default is "slurm-%j.out", where "%j" is the job ID
#SBATCH --error             File for the batch script's error output
#SBATCH --mail-type         Notify the user by email when certain events occur; valid values are BEGIN, END, FAIL
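Several of these options are typically combined at the top of a submission script. A minimal sketch, assuming the "investor" partition from the examples above and a hypothetical "python" module and script name:

```shell
#!/bin/bash
#SBATCH --job-name=example_job     # name shown by squeue
#SBATCH --partition=investor       # partition from the examples above
#SBATCH --nodes=1
#SBATCH --ntasks=4                 # 4 tasks, 1 core each by default
#SBATCH --mem=4096                 # total memory for the job, in MB
#SBATCH --output=slurm-%j.out      # %j expands to the job ID
#SBATCH --mail-type=END,FAIL       # email when the job ends or fails

# Load required software as a module (module name is a placeholder)
module load python

# Launch the workload
srun python my_script.py
```

Save the script (e.g., as myscript.sh) and submit it with `sbatch myscript.sh`.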

 

NSU Research Cluster Policies and Procedures: 

Users of the Research cluster agree to abide by all policies listed here. By requesting a user account, users acknowledge that they have read and understood these policies.

Authentication and Access Policy

The Research cluster is restricted to facilitate research within the University. This policy applies to any person with a research HPC account.

Accounts

HPC research cluster accounts are available to NSU faculty and staff. Sponsored guests must first be onboarded as NSU affiliates to obtain a university userID.

Account Categories

Principal Investigator (PI): An NSU faculty sponsor leading a research effort. PIs may sponsor any number of accounts, but these accounts must be used for research only.

Research Assistant/Research Scientist: A researcher supporting a PI’s research.

Staff: NSU employee supporting a PI’s research.

Sponsored Guest: An individual who is not already affiliated with NSU and who collaborates closely with a Principal Investigator (PI) on research. Sponsored guests may be enrolled as NSU Affiliates by the PI to obtain a university userID before requesting an account on the research cluster.

Account Guidelines

  • Research cluster resources are designated for official NSU research only and may not be used for personal activities of any kind.
  • Accounts must be sponsored by NSU faculty or staff who have been designated as Principal Investigators (PIs) conducting research for NSU. PIs are responsible for overseeing the use of ALL sponsored accounts under their projects.
  • Account requests will be verified with the associated PI.
  • Users may be asked to renew their account every twelve months.
  • Users are responsible for promptly notifying OIIT research computing services if their sponsor needs to be changed (e.g. sponsor leaves the project or otherwise loses affiliation with NSU).
  • Research computing accounts will adhere to the NSU account removal policy. Accounts on the research cluster will be deactivated after 180 days of inactivity and removed after one year of inactivity, at which point all associated data will be deleted. 

Access Controls

  • Shared accounts are not allowed on the research cluster. Sharing passwords or credentials is a violation of the Information Security Acceptable Use Policy.
  • All user accounts must operate under the principle of “least privilege” to ensure that processes operate at privilege levels no higher than necessary to accomplish required functions.
  • Access is via Secure Shell protocol (SSH) to the respective login nodes. Direct access to compute nodes is not permitted.
  • Users may request an exception to have elevated permissions on their project instances to accomplish legitimate research tasks. Approval is at the discretion of OIIT Research Computing Services and authorized by IT Security.
  • Users with the capability for elevated permissions are not permitted to enable elevated permissions for other accounts.
  • A maximum number of concurrent login sessions will be enforced on login nodes. 
  • SSH sessions that have been idle for a specified amount of time will be automatically disconnected.
  • Account holders are not permitted to enable “Guest” accounts or anonymous access to data or services hosted on the research cluster resources.