RiseML allows you to easily create a demo for your machine learning model. We strongly believe that demos are great for several reasons:
- They provide the user with an immediate understanding of the problem your model solves. Instead of talking about it, you can simply show a demo to your friends, colleagues, or audience. This is far more intuitive and engaging than describing it with words or formulae.
- They let users form an intuition of how well the problem is solved. Examples are great, but feeding a model with your own inputs is better. Although this may take away from the fear of an imminent AGI, in general this is a good thing.
- They can show you where the model still has problems. This is a great way to identify failure modes and can help you improve your model. It is also a great source for new research challenges.
As a researcher, you can use RiseML demos for your talks at conferences and to promote your GitHub repository. As a developer or job applicant, you can use demos to showcase your machine learning expertise to potential employers.
Building demos usually involves a lot of hassle: hacking together a user interface, an API to talk to the model, and an infrastructure to actually run the UI and model. RiseML tries to simplify this so that you can focus on the interesting part: machine learning.
RiseML allows you to build demos for all kinds of machine learning problems. The platform itself is framework-agnostic. This means you can use Keras, TensorFlow, Torch, or even your own framework. For convenience, we ship an SDK that contains commonly used components.
Follow the Getting Started guide to find out how easy it is!
Let's get you started with setting up your first demo. It only takes a few minutes. All you need is your web browser and an account on GitHub. As an example, we will deploy a real-time demo for artistic style transfer, which can also be queried using an API.
Create RiseML account
If you don't have an account on RiseML already, sign up here. Your account will be linked to your GitHub account.
Create GitHub repository
The code and configuration for your project must reside on GitHub. RiseML will pull the code from there and run it on the RiseML infrastructure according to your configuration.
Fork example project
For our example, fork this neural style project to your GitHub account. It contains code by the original author to perform style transfer. In our fork, we have made two simple additions:
- code in `demo.py` to run an API endpoint
- a `riseml.yml` which describes how to deploy the code and run the demo
If you want to enable your own repository, you need to provide those two additions.
The code in `demo.py` consists of two important parts.

The first part is a `transfer_style` function that takes an image as input, applies the neural network, and returns a stylized version.
```python
def transfer_style(input_image):
    input_image = Image.open(BytesIO(input_image)).convert('RGB')
    # original code starts here
    ...
    y = model(x)
    ...
    # original code ends here
    med.save(output_image, format='JPEG')
    return output_image.getvalue()
```
The function wraps existing code to call the model. It also adds boilerplate to accept and return images via parameters instead of reading and writing them from/to disk.
Machine learning projects usually provide code to call and use their machine learning model.
Therefore, implementing a function analogous to `transfer_style` for your own project is easy!
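To illustrate the same bytes-in, bytes-out contract without the neural style code, here is a hypothetical endpoint function for a text "model" (the function name and the fake uppercasing model are our illustration, not part of the neural style project):

```python
def shout(input_text):
    """Hypothetical endpoint function: raw bytes in, raw bytes out.

    Like transfer_style, it decodes the raw request payload, applies
    the "model" (here just uppercasing), and re-encodes the result.
    """
    text = input_text.decode('utf-8')   # parse the raw input bytes
    result = text.upper()               # stand-in for the model call
    return result.encode('utf-8')       # serialize the output bytes

print(shout(b'hello'))  # prints b'HELLO'
```

Given matching `text/plain` input and output types in the `riseml.yml`, `riseml.serve(shout)` would expose such a function in the same way as `transfer_style`.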
The second important part in `demo.py` is a (blocking) call to our SDK:

```python
import riseml
...
riseml.serve(transfer_style)
```
This serves the `transfer_style` function as an endpoint according to the API specification in the `riseml.yml`.

The `riseml.yml` is the central configuration piece of your project, so let's have a quick look:
```yaml
deploy:
  image:
    name: nvidia/cuda:8.0-cudnn5-devel
    install:
      - apt-get update && apt-get install -y python-pip wget
      - pip install riseml numpy scipy chainer Pillow
      - wget https://github.com/gafr/chainer-fast-neuralstyle-models/raw/master/models/kanagawa.model
  gpu: yes
  run:
    - python demo.py --model kanagawa.model --gpu 0
  input:
    image: image/jpeg
  output:
    image: image/jpeg
  demo:
    title: Real-Time Style Transfer
    description: Demo for Chainer implementation of "Perceptual Losses for Real-Time Style Transfer and Super-Resolution".
```
This tells RiseML to:
- use the docker image `nvidia/cuda:8.0-cudnn5-devel` (`image: ...`)
- provide a GPU (`gpu: yes`)
- start the API endpoint with `python demo.py --model kanagawa.model --gpu 0` (`run: ...`)
The install commands in the image section customize the image. Here, we install Chainer, the RiseML SDK, and download a pre-trained model. For your own project you can use any docker image which fits your needs and install the RiseML SDK in the same way (see Configuration).
Executing the run command starts an API endpoint on the RiseML infrastructure. The endpoint serves the pre-trained model; its interface is defined via the input and output directives. In our example, both the input and the output are images (the output being the stylized version). Based on the input, output, and demo directives, RiseML can generate a demo web page for your endpoint.
Deploy and link to RiseML
To start a deployment all you need to do is activate your GitHub repository by linking it in the RiseML app. Navigate to your repositories. From the list of your GitHub repositories select the repository you want to link. (If you have already activated repositories you need to navigate to "Repositories" to see this list).
This starts the deployment and you will be forwarded to the status page.
The current status will automatically change while deploying your model.
You can click on the console tab to see a live console output of your deployment and commands.
View demo and access API
Wait for the status to become green, which signals a successful deployment. You can now view the demo by clicking on the demo button on your status page. The demo calls your API endpoint with input provided by the user.
If you want, you can also query the API directly. Instructions for Python and Curl are provided on the bottom of the status page.
Push a commit
Each commit to the GitHub repository automatically triggers a re-deployment of your project. RiseML will pull the new changes, terminate the existing deployment, and start a new deployment.
To test this, let's change our project to use a different style.
Open the `riseml.yml` in the fork of your GitHub repository and click the edit button.
Tell RiseML to download another model by adding the following command to the install section:
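Following the pattern of the existing install commands, the download step could look like this (the exact URL and the filename `starrynight.model` are assumptions based on the model repository and the run command used in this guide):

```yaml
install:
  - wget https://github.com/gafr/chainer-fast-neuralstyle-models/raw/master/models/starrynight.model
```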
and change the run command to use the new model:
```
python demo.py --model starrynight.model --gpu 0
```
If you want, you can also choose another pre-trained model. A list of different styles is available here.
Save and commit the change, and navigate to your demo repository in your repositories. You will see the status change while your project is re-deployed. Once the re-deployment is successful, navigate to the demo page to see the demo with the new style.
If you want to stop the demo you can unlink the repository in the list of your repositories:
This will terminate the deployment. You can re-start it at any time by adding the GitHub repository to RiseML again.
How RiseML works
This section gives you an overview of the components of RiseML and how they work.
The code and configuration for your project resides in your GitHub repository. You can activate a GitHub repository for deployment by linking it in the RiseML user interface. Under the hood we are using GitHub webhooks.
If you link a repository which contains a `riseml.yml`, RiseML will immediately start deploying the latest revision.
Whenever you push a new revision to your GitHub repository, it is automatically deployed on RiseML.
Unlinking the repository from RiseML stops the deployment and also deletes the webhook.
RiseML deploys an API and a demo for your machine learning project. Two components are required for a successful API deployment:
- a `riseml.yml` file which describes the API endpoint and its deployment
- an implementation of the API endpoint

The `riseml.yml` file must reside in the root of your repository.
It defines in what environment to run your code (e.g., which framework to use), what steps to perform before running your code (like downloading a pre-trained model), and how to start your code.
It also defines what kind of messages your API can process and return.
See Configuration for a detailed description of the directives.
The code for the implementation of the endpoint also needs to reside in your GitHub repository. Typically, you already have code to call your machine learning model. We make it very easy to wrap this into an API endpoint using our SDK:
```python
import riseml

riseml.serve(predict_function)
```
This serves your `predict_function` as an API endpoint on the network.
Requests are parsed and forwarded to the `predict_function` as specified via the schema in the `riseml.yml`.
Return values are also validated according to this specification.
This means you do not need to bother with writing network or schema related code.
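To see what the SDK handles for you, here is a rough standard-library sketch of serving a predict function over HTTP. This is purely conceptual and not RiseML's actual implementation: it omits schema validation, content-type handling, and error handling, and all names in it are our own.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def serve(predict, port=0):
    """Expose predict (bytes -> bytes) as a POST endpoint on localhost."""
    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            # read the raw request body and hand it to the predict function
            body = self.rfile.read(int(self.headers['Content-Length']))
            result = predict(body)
            self.send_response(200)
            self.send_header('Content-Length', str(len(result)))
            self.end_headers()
            self.wfile.write(result)

        def log_message(self, *args):  # keep the console quiet
            pass

    server = HTTPServer(('127.0.0.1', port), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Usage: serve a toy "model" that reverses its input, then query it once.
server = serve(lambda data: data[::-1])
url = 'http://127.0.0.1:%d/' % server.server_address[1]
response = urllib.request.urlopen(url, data=b'abc').read()
server.shutdown()
print(response)  # prints b'cba'
```

The SDK's `riseml.serve` plays the role of this `serve` function, with request parsing and schema validation added on top.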
See API Schemas for a more detailed description of schemas.
The Python RiseML SDK can be installed with `pip install -U riseml`.
SDKs for other languages will be coming soon.
A successful deployment puts an API in place and RiseML can also provide a demo page for it.
To enable a demo, you need to extend the `riseml.yml` with a demo directive (see Configuration).
Based on the defined input and output, RiseML can generate a demo page (note: while in beta, we only serve demos for APIs that accept and return images).
The demo page allows a user to feed the model with her own input, like images from a webcam or file upload. This is highly engaging and a great showcase for new projects. You can add the link to your demo on your GitHub repository. The link to your demo is:
You can also find the link to the demo on the status page of your repository.
You configure RiseML by adding directives to the `riseml.yml` file. The `riseml.yml` file must be placed in the root of your git repository.
When GitHub receives a push request on an activated repository, RiseML checks out the latest configuration and follows its directives.
The `riseml.yml` file is written in YAML format.
Here is a minimal `riseml.yml` based on Ubuntu 16.04 LTS:

```yaml
deploy:
  image:
    name: ubuntu:16.04
  run: /bin/echo "hello world"
```
And here is a `riseml.yml` with GPU option based on Ubuntu 16.04 LTS with CUDA 8 runtime libraries:

```yaml
deploy:
  gpu: yes
  image:
    name: nvidia/cuda:8-runtime
  run: nvidia-smi
```
Its content is structured into the following sections.

deploy

The deploy section contains directives relevant to code deployment.

deploy / gpu

The gpu flag requests a GPU instance when set to true / yes.
deploy / image
The image section contains directives relevant to a machine image.
deploy / image / name
The image name is a string that references a base image on Docker Hub. Recommended images are:
- Ubuntu 16.04 LTS
- Ubuntu 16.04 LTS with CUDA 8 development libraries
- Ubuntu 16.04 LTS with CUDA 8 runtime libraries
You can customize these images using install commands as described below.
In particular, you may want to install the Python RiseML SDK by adding `pip install -U riseml`.
deploy / image / install
The install directive contains a string or a list of strings with shell commands that are executed to prepare a final image. Before the install steps are performed, a copy of the git repository is imported at the `/code` mountpoint. This section is optional.
deploy / run
The run directive contains a string or a list of strings with shell commands that are executed in sequence when the final image gets deployed. The commands should start an API endpoint according to the defined input and output (see below).
deploy / input
A set of key / value pairs that map input parameter names to content types. Basic content types are image/jpeg, video/mpeg, audio/mpeg and text/plain. This section is optional.
deploy / output
A set of key / value pairs that map output parameter names to content types. Basic content types are image/jpeg, video/mpeg, audio/mpeg and text/plain. Additional content types are curated in the schemas repository. This section is optional.
deploy / demo
The demo section contains directives relevant to demos.
deploy / demo / title
A string that overwrites the default demo title.
deploy / demo / description
A string that overwrites the default demo description.
deploy / demo / readme
The readme section contains directives for adding a readme on the demo page.
deploy / demo / readme / content
A markdown string that sets the content for the readme on the demo page.
Below is an example of a real-world `riseml.yml`:

```yaml
deploy:
  gpu: yes
  image:
    name: riseml/base:latest-squashed
    install:
      - apt-get -y update
      - apt-get -y install python3-minimal python3-pip
      - pip3 install -r requirements.txt
  run:
    - python3 demo.py
  input:
    image: image/jpeg
  output:
    image: image/jpeg
  demo:
    title: "Getting started with RiseML"
    description: "rotate by 180°"
    readme:
      content: |
        # header
        ## subheader
        ### subsubheader
        text
```
A machine learning model can be seen as a component that receives an input and generates an output. For example, in the case of object recognition the input could be an image and the output a list of rectangles, one for each identified object in the image. RiseML offers the ability to define and validate the content type of input and output when using the RiseML SDK to deploy an API. That is why we call a pair of content types for input and output an API schema.
Running a demo on RiseML requires an API schema.
Using an API schema
In the `riseml.yml` file there is a section for input and output types (see Configuration for reference):

```yaml
deploy:
  ...
  input:
    input name: content type
  output:
    output name: content type
  ...
```
Choose input/output names and content types that reflect the nature of your machine learning model. Here's an example from the neural style transfer from the Getting Started section.
```yaml
deploy:
  ...
  input:
    image: image/jpeg
  output:
    image: image/jpeg
  ...
```
Basic input and output types are:
- image/jpeg
- video/mpeg (coming soon)
- audio/mpeg (coming soon)
- text/plain (coming soon)
Besides basic types for unstructured data, RiseML allows you to process structured data. You can either choose from a predefined list of content types or define your own content type. All structured data must be in JSON format and adhere to a JSON schema.
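To make this concrete, here is a small sketch of what structured output and a schema check could look like. The schema below is our own illustration, not one of the actually supported schemas from the RiseML schema repository, and the hand-rolled check only covers required fields:

```python
import json

# Hypothetical JSON schema for object-detection output (illustrative only;
# the actually supported schemas live in RiseML's public schema repository).
schema = {
    "type": "array",
    "items": {
        "type": "object",
        "required": ["label", "box"],
    },
}

# An endpoint returning structured data would emit JSON text like this:
prediction = [{"label": "cat", "box": [10, 20, 110, 220]}]
payload = json.dumps(prediction)

# A client parses the payload and can check the required fields by hand
# (a full validator, like the one in the SDK, would check the whole schema).
parsed = json.loads(payload)
ok = all(set(schema["items"]["required"]) <= set(obj) for obj in parsed)
print(ok)  # prints True
```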
Supported JSON schemas are collected in our public GitHub schema repository. Here's a list of currently supported JSON schemas in the latest RiseML SDK:
Custom content types
If you want to create your own content type, simply create a JSON schema and issue a pull request.
My deployment failed
If your `riseml.yml` or code contains an error, your deployment may fail.
Check the status page for errors and then check the console log.
If everything seems fine but your deployment is still not running, we may be waiting for free resources.
Please contact us on gitter or via email.
Do you support machine learning models which require a GPU?
Yes. If you deploy a project using the `gpu: yes` flag, it will run on servers with Nvidia Titan X Pascal or Tesla K80 GPUs.
Which frameworks do you support?
RiseML works with any machine learning framework, e.g., Keras, TensorFlow, Torch, Caffe, Theano, MXNet, CNTK, Paddle, DyNet, Deeplearning4j... All you need is the RiseML SDK in the framework's programming language, e.g., Python for Keras.
Do I need to use the RiseML SDK?
No, you can also provide your own code to run the endpoint, as long as it behaves the same way our SDK-based endpoint does. However, certain functionality is only available with our SDK, e.g., schema validation, so we recommend using it when possible.
How do I install and use the RiseML SDK?
You can install the Python RiseML SDK using `pip install -U riseml`. You should add this to the install commands in your `riseml.yml`.
To serve your `predict_function` as an endpoint, call the SDK as follows:

```python
import riseml
...
riseml.serve(predict_function)
```
Does RiseML work with my machine learning problem?
You can deploy any kind of model to RiseML and get an API endpoint. Currently, we provide demo pages for image-based machine learning models. Support for other kinds of demo pages will come soon!
How much traffic can you handle?
We can easily scale your demo to run on several GPUs. If you expect a lot of traffic, notify us!