A Unified Interface
for AI Supercomputers

on-prem or in the cloud

First-class support for all popular deep learning frameworks. Most other frameworks can be used without any changes.

Schedule GPU workloads on the most scalable and fault-tolerant cluster managers available.

Train & deploy ML models with ease.

Schedule ML jobs via command line.
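For illustration, a command-line job definition might look like the following sketch. The file name `riseml.yml` and every field shown here are assumptions for the sake of the example, not the documented RiseML schema:

```yaml
# riseml.yml -- illustrative job definition (field names are assumptions)
project: resnet-example
train:
  framework: tensorflow        # pick one of the supported frameworks
  resources:
    cpus: 2
    mem: 4096                  # MB
    gpus: 1                    # GPUs reserved for this experiment
  run:
    - python train.py --epochs 10
```

With a definition like this in place, a single CLI command (e.g. something like `riseml train`) would submit the job to the cluster and stream its output back to the console.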

Run even the most demanding distributed deep learning jobs spanning dozens of GPUs effortlessly.

  • Train and deploy deep learning models
  • Observe progress via console and web app
  • Manage the full lifecycle of your ML jobs
    Reproducible & Comparable Experiments

    Code and data are kept in a versioned history, so you can go back in time and rerun any experiment.

  • No local dependencies
  • Experiments run in reproducible containers
  • Easily compare results across different runs
    Single & Multi-GPU Workloads

    Train your models on a single GPU or on multiple GPUs. Scale to multiple GPU servers with distributed training frameworks.
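Multi-server training with TensorFlow relies on each worker knowing the cluster layout. As a minimal sketch of the mechanism, a scheduler can hand that layout to each container via TensorFlow's standard `TF_CONFIG` environment variable (the hostnames, ports, and cluster shape below are made-up examples):

```python
import json
import os

# TF_CONFIG is the environment variable TensorFlow reads to discover the
# cluster in multi-worker training. A scheduler sets one value per container,
# identical except for the "task" entry identifying that container's role.
cluster = {
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
    "ps": ["ps0.example.com:2222"],  # parameter server
}

def tf_config_for(task_type, task_index):
    """Build the TF_CONFIG value for one task in the cluster."""
    return json.dumps({
        "cluster": cluster,
        "task": {"type": task_type, "index": task_index},
    })

# Each container gets its own role before the training process starts:
os.environ["TF_CONFIG"] = tf_config_for("worker", 0)
```

Inside the container, TensorFlow's distributed runtime parses this variable to decide which role the local process plays and where its peers are.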

  • Supports NVIDIA GPUs
  • Multi-GPU support
  • Use distributed TensorFlow to scale to multiple servers
    Schedule a Demo

    Are you interested in learning more about RiseML? Schedule a free demo with one of our ML experts.

    RiseML Features

    Framework Support

    Supports all major deep learning frameworks.

    Versioned Code & Data

    All code and data are kept in a versioned history.

    Kubernetes & DC/OS

    Built on top of the best cluster managers available.

    Reproducible Setup

    All experiments run in reproducible containers.

    Unified Interface

    Switch between command-line and web interfaces seamlessly.

    Data Privacy

    Keep your code and data private with on-prem deployments.

    Management Dashboard

    Keep an overview of your GPU cluster.

    Distributed Training

    Run deep learning workloads spanning multiple servers.

    Hyperparameter Optimization

    Automatically schedule experiments in parallel.
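To make the idea concrete, here is a minimal, self-contained sketch of grid-based hyperparameter search with parallel scheduling. The search space and the mock `run_experiment` function are illustrative, not RiseML's API; on the platform, each configuration would become its own containerized experiment on the cluster:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Hypothetical search space: every combination becomes one experiment.
search_space = {
    "learning_rate": [0.1, 0.01, 0.001],
    "batch_size": [32, 64],
}

def expand_grid(space):
    """Yield one parameter dict per point in the grid."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

def run_experiment(params):
    # Placeholder for a real training run; returns a mock "accuracy"
    # so the example is runnable without a GPU.
    return {"params": params, "accuracy": 1.0 - params["learning_rate"]}

configs = list(expand_grid(search_space))          # 3 x 2 = 6 experiments
with ThreadPoolExecutor(max_workers=4) as pool:    # run them in parallel
    results = list(pool.map(run_experiment, configs))
best = max(results, key=lambda r: r["accuracy"])
```

The scheduler's job is exactly this fan-out: expand the grid, run the independent experiments concurrently, and collect the results for comparison.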