Build Reproducible and Scalable Computational Biology Systems
This post introduces high-level trends at the intersection of biology and AI, discusses new (and old) technical challenges in building reproducible and scalable systems for AI-driven computational biology, and explains how frameworks like Metaflow can help address them.
Develop ML and AI with Metaflow, Deploy with NVIDIA Triton Inference Server
Learn how Outerbounds and NVIDIA are collaborating to provide an end-to-end system for developing, scaling, deploying, and improving AI systems.
Making Large Language Models Uncool Again
We sat down with Jeremy Howard, the creator of fast.ai, to discuss the rapidly evolving AI landscape.
Better, Faster, Stronger LLM Fine-tuning
Reactive, configurable, cheaper LLM fine-tuning with Metaflow.
Retrieval-Augmented Generation: How to Use Your Data to Guide LLMs
Learn how to use Retrieval-Augmented Generation to reduce hallucinations and get more relevant responses from LLMs.
Fine-tuning a Large Language Model using Metaflow, featuring LLaMA and LoRA
A workflow template built with Metaflow for fine-tuning LLMs for custom use cases.
Large Language Models and the Future of Custom, Fine-tuned LLMs
An overview of instruction tuning for large language models using the LLaMA family of models.
Training a Large Language Model With Metaflow, Featuring Dolly
We use Metaflow to train Dolly, demonstrating how to fine-tune LLMs and what it takes to use these models in practice.
Large Language Models and the Future of the ML Infrastructure Stack
A peek into how Outerbounds views the ongoing evolution of the machine learning stack, in the wake of recent LLM waves.