Developer | Evaluating Llama Guard with MAX 24.6 and Hugging Face
Imagine unlocking a world of open innovation while ensuring secure, reliable, and enterprise-ready Gen AI deployments: MAX 24.6 enables enterprise AI teams to seamlessly run a vast range of cutting-edge AI models from Hugging Face on NVIDIA GPUs.
December 19, 2024 | Bill Welense
Developer | Build a Continuous Chat Interface with Llama 3 and MAX Serve
December 17, 2024 | Ehsan M. Kermani
Product | Introducing MAX 24.6: A GPU Native Generative AI Platform
MAX 24.6 release blog featuring MAX GPU
December 17, 2024 | Modular Team
Engineering | MAX GPU: State of the Art Throughput on a New GenAI Platform
Measuring state-of-the-art GPU performance compared to vLLM on Modular's MAX 24.6
December 17, 2024 | Max Hutchinson, Tyler Kenney
Developer | Chat with Documents Using Llama3.1, RAG, and MAX
What if you could interact with your documents and get real-time, accurate answers directly from them? In this post, we dig into how we built a RAG app backed by MAX, our framework for GenAI, with Streamlit for the UI.
November 11, 2024 | Bill Welense
Developer | Why Magic?
November 5, 2024 | Bill Welense
Developer | Understanding SIMD: Infinite Complexity of Trivial Problems
A deep dive into the complexities of optimizing code for SIMD instruction sets across multiple platforms.
October 25, 2024 | Ash Vardanian
Community | Community Spotlight: Writing Mojo with Cursor
October 10, 2024 | Julian Acero, Caroline Frasca
Developer | Hands-on with Mojo 24.5
Get hands-on with Mojo 24.5 and learn how to apply its new language features in your code.
October 1, 2024 | Ehsan M. Kermani
Product | MAX 24.5 - With SOTA CPU Performance for Llama 3.1
We're excited to announce the release of MAX 24.5, which ships with significant improvements to Llama 3.1 CPU performance, new Python graph API bindings, our biggest update to Mojo ever, industry-standard packaging, and a clarified license.