Medical Imaging

Deep learning is enabling game-changing innovations in medical imaging, diagnostics, and personalized medicine. From diagnosing Alzheimer's, to evaluating stroke risk, to matching specific drugs to specific cancer presentations, the potential of these innovations is literally life-saving. However, actually bringing them to market has proven a substantial challenge.


The Research Commercialization Gap

Much of the popular press around deep learning in healthcare has been reporting on analytical results published in academic papers. Many of these studies have shown exciting results, and have done so for several years now. Yet these techniques still have not been adopted for actual care delivery, even at cutting-edge academic medical centers. The numerous startups attempting to commercialize the technology struggle to move beyond the proof-of-concept phase and frequently fail.


There are a number of issues that make the research-to-commercialization gap even wider for healthcare deep learning startups. As Andrew Ng points out, the heterogeneous nature of healthcare imaging systems can dramatically affect model results. Building a commercially viable model means a lot more than implementing a reference architecture from another paper.
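To make that heterogeneity concrete: two scanners can encode the same anatomy with different rescale parameters, photometric interpretations, and bit depths. Below is a minimal sketch using the open-source pydicom library of the kind of per-study normalization a production pipeline needs before pixels ever reach a model; the windowing values are illustrative assumptions, not clinical recommendations.

    import numpy as np
    import pydicom

    def to_model_input(path, window_center=40.0, window_width=400.0):
        """Normalize one DICOM slice so that images from heterogeneous
        scanners land in a consistent intensity range."""
        ds = pydicom.dcmread(path)
        pixels = ds.pixel_array.astype(np.float32)

        # Apply the scanner-specific rescale (defaults to identity).
        slope = float(getattr(ds, "RescaleSlope", 1.0))
        intercept = float(getattr(ds, "RescaleIntercept", 0.0))
        pixels = pixels * slope + intercept

        # Some scanners store inverted grayscale (MONOCHROME1).
        if getattr(ds, "PhotometricInterpretation", "") == "MONOCHROME1":
            pixels = pixels.max() - pixels

        # Window to a fixed range (values here are illustrative
        # assumptions), then scale to [0, 1] for the model.
        low = window_center - window_width / 2
        high = window_center + window_width / 2
        pixels = np.clip(pixels, low, high)
        return (pixels - low) / (high - low)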


In order to turn that architecture into a commercial product, the startup will need to obtain training data that is truly representative of the image types and quality the model will see at prediction time. They will need to train the model(s), likely consuming thousands or tens of thousands of GPU hours. Once they finally have a model ready for inference, they need to integrate the inference process with the customer's PACS, securely and in accordance with all laws and regulations governing patient data. On top of that, they need a process to reliably update the model in production, track customer usage, and prevent the leakage of their hard-earned intellectual property.
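To make the PACS-integration step concrete, here is a minimal sketch of a DICOM storage receiver built with the open-source pynetdicom library. The AE title and port are illustrative assumptions, and a real deployment would add encryption, authentication, and audit logging.

    from pynetdicom import AE, evt, AllStoragePresentationContexts

    def handle_store(event):
        """Receive one study from the PACS and stage it for inference."""
        ds = event.dataset
        ds.file_meta = event.file_meta
        # In production this would be encrypted, access-controlled
        # storage feeding the inference queue.
        ds.save_as(ds.SOPInstanceUID + ".dcm")
        return 0x0000  # DICOM "Success" status

    ae = AE(ae_title="INFER_SCP")  # illustrative AE title
    ae.supported_contexts = AllStoragePresentationContexts
    # 11112 is a common, but illustrative, DICOM listener port.
    ae.start_server(("0.0.0.0", 11112), block=True,
                    evt_handlers=[(evt.EVT_C_STORE, handle_store)])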

Money is Time

Training these models is no small task. A single training run can consume terabytes of storage and take over two weeks to complete. The compute for a single training run can cost over $1,500 on the major cloud providers, the same price as buying a GPU from Best Buy. Numerous startups have already failed because they ran out of compute credits before producing a commercially viable model.


trainML gives these startups more time for their money. With CloudBender™, they can seamlessly use credits on their cloud provider, or run workloads on their own GPU systems. They get all the job scheduling and automation capabilities they expect from the cloud, but with the much more affordable cost structure of on-prem and zero IT overhead.


Instead of paying $1,500 per training run, they can buy a GPU workstation for less than $20,000 and perform 6-8 training runs every month. The workstation pays for itself in less than 3 months and gives the startup the runway they need to iterate more effectively with their customers.
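The payback arithmetic, using the numbers quoted above, is straightforward:

    CLOUD_COST_PER_RUN = 1_500   # USD, per the estimate above
    WORKSTATION_COST = 20_000    # USD, one-time purchase
    RUNS_PER_MONTH = 6           # conservative end of the 6-8 range

    monthly_cloud_spend = CLOUD_COST_PER_RUN * RUNS_PER_MONTH  # $9,000
    months_to_break_even = WORKSTATION_COST / monthly_cloud_spend
    print(f"Break-even in {months_to_break_even:.1f} months")  # ~2.2 months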

Federated Inference

An even bigger challenge than training the model is deploying it for inference. For most other types of inference, a startup can simply host a web service and work with the customer to build a process to call it. With medical imaging data, however, many customers will be unwilling to transmit that data outside their physical site or cloud account. For international customers, many countries have regulations that prohibit patient data from leaving their borders.


If the data can't come to the model, the only other option is to send the model to the data. But how? Few startups (or enterprises, for that matter) have the skills, staff, or tools to support and manage numerous on-premise software installations. How do they release model updates? How do they make sure customers download and install the updates? How do they know customers are in compliance with their licensing model? Can they prevent customers from reverse engineering their model? Does the customer even have a GPU-enabled system to run inference on?


With Federated Inference, trainML customers can seamlessly and securely integrate their inference services with other customers of the trainML platform. By installing a CloudBender-managed trainML server inside the hospital's site or cloud, the hospital can grant an inference service provider Federated Inference access to their trainML resources. This allows the startup to execute Inference Jobs using their models on data stores local to the hospital's site, without that data ever leaving the site.


The startup maintains their own web service to initiate and track inference jobs, ensuring they remain in complete control of model access, billing, and the user experience. The hospital stays in complete control of their data, and can firewall the inference job from communicating externally while the data is attached. The startup never has access to the data, the hospital never has access to the model, and the inference results still get delivered.
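The control flow might look like the sketch below. InferenceRequest and launch_inference are hypothetical stand-ins for the platform API, not the actual trainML SDK; they are used only to illustrate who holds what: the startup references its model by ID, the hospital's data is referenced only by a path local to its CloudBender-managed server, and only job status and results flow back.

    from dataclasses import dataclass

    @dataclass
    class InferenceRequest:
        model_id: str     # the startup's model; never exposed to the hospital
        provider_id: str  # the hospital site granting Federated Inference access
        input_path: str   # data store path local to the hospital's server
        output_path: str  # where results are written, also local

    def launch_inference(client, req: InferenceRequest) -> str:
        """Submit an Inference Job that runs on the hospital's own
        CloudBender-managed hardware; 'client' is a hypothetical
        stand-in for the platform API."""
        job = client.jobs.create(
            type="inference",
            model=req.model_id,
            provider=req.provider_id,
            data={"input": req.input_path, "output": req.output_path},
        )
        return job.id  # the startup tracks this ID for billing and UX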


This feature is only available on Enterprise plans. Please contact us for more details.




Read about the process for starting and monitoring a trainML Inference Job.

Learn More

Get started creating a trainML Inference Job.

Try It Now

Walk through a full model development workflow from data loading to inference.

Learn More

Find out more about instance and storage billing and credits.

Learn More