Notebooks

On-Demand, GPU-enabled JupyterLab Environments

trainML Notebooks are full instances of JupyterLab running on up to 4 dedicated GPUs. Our pre-built conda environments are designed specifically for machine learning model training on GPUs, with the latest TensorFlow, PyTorch, MXNet, and other frameworks pre-installed.



Prebuilt, Optimized Environments

trainML's Notebooks run in your choice of pre-built, conda-based Python environments, configured with all the popular deep learning frameworks, their dependencies, and a multitude of other packages to facilitate data analysis and model training. Everything you need for GPU acceleration is already installed to ensure version compatibility with the deep learning frameworks. Pre-installed frameworks include:

  • PyTorch
  • TensorFlow
  • Apache MXNet
  • LightGBM
  • JAX
  • Theano
  • Keras

GPU acceleration libraries include:

  • NVIDIA Driver
  • CUDA
  • cuDNN
  • cuBLAS
  • Apex
  • OpenCV
  • NCCL
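
For example, a quick sanity check from a notebook cell confirms that the pre-installed frameworks can see the attached GPUs. Depending on which environment you selected, one or both of these imports will be available:

    # Run in a notebook cell to confirm the frameworks see the attached GPU(s).
    import torch
    import tensorflow as tf

    print("PyTorch CUDA available:", torch.cuda.is_available())
    print("PyTorch GPU count:", torch.cuda.device_count())
    print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))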

Notebooks That Don't Erase Themselves

With other cloud Notebook providers, any changes to the environment are discarded every time you stop the notebook instance (or they stop it for you). This leaves users with an unfortunate trade-off: keep the instance running (and keep paying), or reinstall their additional libraries every time they restart the instance (and pay for that time). With trainML Notebooks, no such trade-off is necessary. All modifications to the initial Python environment are retained through each job run. No matter how many times you stop and start the Notebook, it will always pick up right where you left off, saving you time and money.
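
For example, a one-time install in a notebook cell stays with the environment, so there is no need to rerun it after every stop and start (the package shown is purely illustrative):

    # Installed packages persist with the notebook's environment, so this
    # cell only needs to run once -- not after every stop/start cycle.
    # ("transformers" is just an illustrative example package.)
    %pip install transformers

    import transformers
    print(transformers.__version__)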

Load Models and Data for Free

Why waste money and time loading your data and model code into the instance's local storage? trainML's unique job environment can automatically download a git repository, configure access keys, and attach datasets for you before billing begins. Attached datasets are fully cached on local NVMe storage and incur no additional storage charges, no matter how many notebook instances they are attached to.
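
As a rough sketch of what this looks like from inside a running notebook: platforms like trainML typically expose the attached locations through environment variables. The variable names below are assumptions for illustration only; check the trainML documentation for the exact names used in your environment.

    import os

    # Illustrative only: the variable names below are assumptions, not
    # confirmed trainML identifiers. Each should point at a locally cached
    # copy of your model repository, datasets, or output location.
    for var in ("TRAINML_MODEL_PATH", "TRAINML_DATA_PATH", "TRAINML_OUTPUT_PATH"):
        path = os.environ.get(var)
        print(var, "->", path)
        if path and os.path.isdir(path):
            print("  contents:", os.listdir(path)[:5])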

Dynamic Instance Type Changes

Unlike other cloud Notebook providers, your notebook instance isn't locked into a specific number of GPUs. With trainML, you can start testing your model on 1 GPU, and when you are ready to scale up for training, you can add more GPUs to the existing notebook job. Once training is done, you can scale back down for analysis. Since all trainML notebook instances are fully persistent, the notebook transitions seamlessly through these resource changes, allowing you to minimize your expense without slowing down your work.
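
Because the GPU count can change between sessions, it helps to write the notebook so it adapts to whatever is currently attached. A minimal sketch using PyTorch (one of the pre-installed frameworks):

    import torch

    # Detect however many GPUs this notebook session currently has attached.
    n_gpus = torch.cuda.device_count()
    device = torch.device("cuda" if n_gpus > 0 else "cpu")
    print(f"Running on {device} with {n_gpus} GPU(s)")

    model = torch.nn.Linear(128, 10).to(device)
    if n_gpus > 1:
        # Simple data parallelism across all attached GPUs;
        # DistributedDataParallel is the heavier-duty alternative.
        model = torch.nn.DataParallel(model)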

Forking and Converting for Rapid Parallel Experimentation

Running a new copy of a notebook on a new dedicated GPU isn't as simple as copying a file. Normally, you have to provision a whole new GPU-enabled computing environment to run the notebook. If you had staged data or installed new packages or libraries, all of that work has to be replicated in the new environment.
With trainML notebooks, you can create a new copy of a notebook environment in only 3 clicks. Unlike other cloud notebooks, when you fork a trainML notebook, the entire working directory and computing environment are copied: all installed packages, datasets, checkpoints, and configuration settings carry over to the new notebook automatically. If your code is set up to run as a script instead of interactively, you can even convert the notebook to an independent training job that runs autonomously, sends its output to a location you specify, and automatically terminates when training finishes.
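
A minimal sketch of what "script-ready" notebook code can look like, so a converted job can run unattended. The TRAINML_OUTPUT_PATH variable name is an assumption used for illustration; use whatever output location the platform documents for training jobs:

    import os

    def train():
        # ... your training loop, exactly as developed interactively ...
        return {"status": "done"}  # placeholder result

    def main():
        # With no human in the loop, write results to the job's output
        # location instead of relying on interactive inspection.
        # TRAINML_OUTPUT_PATH is assumed here for illustration only.
        output_dir = os.environ.get("TRAINML_OUTPUT_PATH", "./output")
        os.makedirs(output_dir, exist_ok=True)
        results = train()
        with open(os.path.join(output_dir, "results.txt"), "w") as f:
            f.write(str(results))

    if __name__ == "__main__":
        main()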



Read about the process for starting and using a trainML Notebook.

Learn More

Get started creating a trainML Notebook.

Try It Now

Find out more about instance and storage billing and credits.

Learn More