This page serves the Slurm community, which consists of:

  1. Many users of Slurm
  2. Many system administrators who manage Slurm
  3. Organizations that have Slurm running on their systems
  4. Companies that provide products and services based on Slurm

Some organizations in the Slurm community include:

  1. Bright Computing – Bright Computing is a specialist in cluster management software and services for high-performance computing. Its flagship product, Bright Cluster Manager, with its intuitive graphical user interface and powerful cluster management shell, makes clusters of any size easy to install, use and manage, including systems with GPGPU or Xeon Phi technology. Slurm is the default workload manager for Bright Cluster Manager, with all other major workload managers available as pre-configured options. Bright Computing also provides commercial support for Slurm, offering its customers a one-stop solution.
  2. Bull – The only truly European IT company capable of designing, integrating and implementing supercomputers, Bull has made Extreme Computing one of its key strategic priorities. Bull Extreme Computing solutions are based on bullx, a range of innovative systems designed for uncompromised performance, which has gained worldwide recognition. The bullx software stack, bullx scs (SuperComputer Suite), comprises many open source components, including the Slurm Workload Manager; it is used today in many supercomputers around the globe, including three petaflop-scale sites: Tera100 at CEA, Curie at GENCI and Helios at IFERC.
  3. CEA – The French Alternative Energies and Atomic Energy Commission (CEA) leads research, development and innovation in four main areas: low-carbon energy sources, global defense and security, information technologies and healthcare technologies. The CEA’s leadership position in the world of research is built on a cross-disciplinary culture of engineers and researchers, ideal for creating synergy between fundamental research and technology innovation. Many years of experience in designing and operating supercomputing centers have led CEA to select a rich portfolio of open source software for its petascale supercomputers: Slurm is one of the flagship products used to manage all CEA HPC resources. Because Slurm is considered a strategic component, CEA is deepening its knowledge of and involvement in the project, contributing improvements and new developments to the community.
  4. CSCS – As the Swiss National Supercomputing Centre, CSCS has a diverse range of computational science systems, varying in size from small test clusters up to large Cray supercomputers and including production meteorological machines. Across this diverse range of systems CSCS runs the Slurm Workload Manager as the sole batch/queuing system in a site-wide ecosystem serving a diverse community of some 1000 users. CSCS was the first site to run Slurm on Cray supercomputers; the current XE6 flagship system has been using Slurm for nearly two years, and CSCS is very happy with the results. The design and implementation of the Slurm code base are of high quality, and the support we receive from the developers is exemplary. Slurm has a rich feature set along with a plugin architecture and scripting facility that enable the functionality to be tailored to our needs. CSCS is more than happy with its decision to move to Slurm and has no hesitation in recommending it.
  5. Greenplum – Greenplum, a division of EMC, is driving the future of Big Data analytics with breakthrough products that harness the skills of data science teams to help global organizations realize the full promise of business agility and become data-driven, predictive enterprises. The division’s products include Greenplum Unified Analytics Platform, Greenplum Data Computing Appliance, Greenplum Analytics Lab, Greenplum Database, Greenplum HD and Greenplum Chorus. They embody the power of open systems, cloud computing, virtualization and social collaboration, enabling global organizations to gain greater insight and value from their data than ever before. Recent efforts have focused on expanding the range of the analytics to run on any cluster, as opposed to only those configured specifically for Hadoop. Greenplum continues to work with the Slurm team to extend Slurm to support such applications.
  6. Intel
  7. Lawrence Livermore National Laboratory – Founded in 1952, Lawrence Livermore National Laboratory (LLNL) provides solutions to the USA’s most important national security challenges through innovative science, engineering and technology. The Slurm Workload Manager was originally developed at LLNL and is now the default workload manager on all the HPC systems at LLNL, including the Sequoia supercomputer, which was ranked number one on the June 2012 Top500 list.
  8. SchedMD – SchedMD was founded in 2010 by the primary architects and developers of the Slurm Workload Manager, Moe Jette and Danny Auble, to satisfy the development and support needs of the broader open-source community. SchedMD personnel have decades of High Performance Computing (HPC) expertise, have written about 80% of Slurm (over 500,000 lines of code total) and maintain the canonical version of Slurm, which is freely available. As the developers of Slurm, SchedMD personnel are uniquely qualified to provide training, installation, configuration, custom development and support (including joint Slurm support with Bull and Cray for their customers). Our vision is to extend workload management functionality to address Exascale requirements and beyond in an open and collaborative fashion. SchedMD also works to bring HPC technology to the world of Big Data, with early results yielding orders of magnitude improvement in performance.
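For readers new to Slurm, the batch/queuing workflow these organizations describe typically starts with a job script submitted via sbatch. Below is a minimal sketch of such a script; the partition name and resource values are placeholders chosen for illustration, not taken from any site mentioned above.

```shell
#!/bin/bash
#SBATCH --job-name=example        # job name shown in squeue output
#SBATCH --partition=debug         # placeholder partition; names are site-specific
#SBATCH --nodes=2                 # number of nodes to allocate
#SBATCH --ntasks-per-node=4       # tasks (e.g. MPI ranks) per node
#SBATCH --time=00:10:00           # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out        # stdout file: jobname-jobid.out

# Launch the tasks across the allocation with Slurm's srun.
srun hostname
```

Submitting the script with `sbatch job.sh` returns a job ID, and `squeue -u $USER` shows the job's state in the queue.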
