
Deploying MAX on Amazon SageMaker
Model deployment is often the domain of IT professionals and cloud infrastructure experts who understand how to securely and reliably host model endpoints that scale with usage demand. Thankfully, Amazon SageMaker is fully managed and handles all the underlying infrastructure, allowing developers and data scientists like you and me, who are not IT experts, to use simple APIs to host secure, low-latency, and highly scalable model endpoints.
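To make that concrete, here is a minimal sketch of what hosting a model on SageMaker can look like with the SageMaker Python SDK. The container image, S3 artifact, IAM role, endpoint name, and payload below are placeholders, not values from this post.

```python
# Minimal sketch: deploy a containerized model to a SageMaker endpoint.
# The image URI, S3 model artifact, IAM role, and payload are placeholders.
import sagemaker
from sagemaker.model import Model
from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

model = Model(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/my-serving-image:latest",  # placeholder
    model_data="s3://my-bucket/model-artifacts/model.tar.gz",                      # placeholder
    role=role,
    sagemaker_session=session,
)

# SageMaker provisions and manages the instances behind the endpoint.
model.deploy(
    initial_instance_count=1,
    instance_type="ml.c6i.4xlarge",
    endpoint_name="max-demo-endpoint",
)

# Invoke the endpoint once it is in service; the request format depends on
# the serving container, so JSON here is only an assumption.
predictor = Predictor(
    endpoint_name="max-demo-endpoint",
    sagemaker_session=session,
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
)
print(predictor.predict({"inputs": "hello world"}))
```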

Semantic Search with MAX Engine
In the field of natural language processing (NLP), semantic search focuses on understanding the context and intent behind queries, going beyond mere keyword matching to provide more relevant and contextually appropriate results. This approach relies on advanced embedding models to convert text into high-dimensional vectors, capturing the complex semantics of language.
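As a sketch of the core idea (not the exact pipeline from this post), the snippet below ranks a few documents against a query by cosine similarity between their embedding vectors. The embedding model and texts are illustrative placeholders; any encoder, including one served by MAX Engine, could produce the vectors.

```python
# Sketch of semantic search: embed texts, then rank by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

documents = [
    "How do I reset my account password?",
    "Our store is open from 9am to 5pm on weekdays.",
    "Refunds are processed within five business days.",
]
query = "I forgot my login credentials"

# Encode texts into dense vectors that capture semantic meaning.
doc_vecs = model.encode(documents)        # shape: (num_docs, dim)
query_vec = model.encode([query])[0]      # shape: (dim,)

# Cosine similarity: higher means more semantically related.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)

for idx in np.argsort(scores)[::-1]:
    print(f"{scores[idx]:.3f}  {documents[idx]}")
```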

Getting Started with MAX Engine C API
In this blog post, we introduce the MAX Engine C API and gradually build up a picture of its capabilities. The C API lets you integrate MAX Engine into high-performance application code, so you can run inference with PyTorch, TensorFlow, and ONNX models in environments that have no Python dependencies.

Mojo🔥 ❤️ Pi 🥧: Approximating Pi with Mojo🔥 using Monte Carlo methods
March 14th, a.k.a. 3/14 or 3.14, is known as $\pi$ Day, and it honors the mathematical constant $\pi$ (pi), which represents the ratio of a circle's circumference to its diameter. On this special day, I wanted to dedicate a blog post to the beauty of mathematics, numerical methods, $\pi$, and Mojo. So join me on this journey as I implement a fast, vectorized Monte Carlo method for approximating $\pi$. Happy $\pi$ Day!
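The post implements this in Mojo; as a language-agnostic sketch of the same Monte Carlo idea, the NumPy version below samples random points in the unit square and uses the fraction that lands inside the quarter circle to estimate $\pi$.

```python
# Vectorized Monte Carlo estimate of pi: the fraction of random points in the
# unit square that fall inside the quarter circle approximates pi/4.
import numpy as np

def estimate_pi(num_samples: int = 10_000_000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    x = rng.random(num_samples)
    y = rng.random(num_samples)
    inside = np.count_nonzero(x * x + y * y <= 1.0)
    return 4.0 * inside / num_samples

print(estimate_pi())  # converges toward 3.14159... as num_samples grows
```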

Evaluating MAX Engine inference accuracy on the ImageNet dataset
MAX Engine is a high-performance AI compiler and runtime designed to deliver low-latency, high-throughput inference for AI applications. We've shared how you can get started quickly with MAX in this getting started guide, and how you can deploy MAX Engine optimized models as a microservice using MAX Serving.
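As a generic illustration of what "evaluating inference accuracy" boils down to (not the exact evaluation harness used in the post), the helper below computes top-1 and top-5 accuracy from model scores and ground-truth labels.

```python
# Generic top-1 / top-5 accuracy computation over a batch of predictions.
import numpy as np

def topk_accuracy(logits: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    """logits: (N, num_classes) scores; labels: (N,) integer class ids."""
    topk = np.argsort(logits, axis=1)[:, -k:]        # indices of k highest scores
    hits = (topk == labels[:, None]).any(axis=1)     # is the true label among the top k?
    return float(hits.mean())

# Toy example with 4 samples and 10 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))
labels = np.array([3, 7, 1, 9])
print("top-1:", topk_accuracy(logits, labels, k=1))
print("top-5:", topk_accuracy(logits, labels, k=5))
```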

Optimize and deploy AI models with MAX Engine and MAX Serving
The MAX Developer Edition preview is now available to developers worldwide, and in case you missed it, feel free to check out our getting started with MAX blog post. Today, I’d like to dive a little deeper and show how to build an end-to-end application using MAX.

Getting started with MAX Developer Edition
Today we’re thrilled to announce that MAX Developer Edition is now available in preview for developers worldwide! 🥳🎉. In this developer blog post, we'll take an in-depth look at MAX, its key features and capabilities, and how to use it to deploy your first MAX optimized model. Using code examples we’ll illustrate its benefits, cover key concepts, and share additional resources to continue your MAX journey.
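To preview the shape of that workflow, here is a minimal sketch using the MAX Engine Python API as documented around the Developer Edition preview. The model path and input name are placeholders, and the API surface may have changed since, so treat the names below as assumptions rather than a definitive reference.

```python
# Minimal sketch of loading and running a model with the MAX Engine Python API
# (names follow the Developer Edition preview docs; treat them as assumptions).
import numpy as np
from max import engine

session = engine.InferenceSession()

# Placeholder path: a TorchScript, TensorFlow SavedModel, or ONNX model.
model = session.load("path/to/model.onnx")

# Input names, shapes, and dtypes depend entirely on the model being loaded.
outputs = model.execute(input=np.zeros((1, 3, 224, 224), dtype=np.float32))
print(outputs)
```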

Mojo🔥 ♥️ Python: Calculating and plotting a Valentine’s day ♥️ using Mojo and Python
On Valentine’s Day yesterday, I wanted to create something special to celebrate my love for Mojo and Python. My search on the interwebs led me to a nifty little equation that plots a heart. The equation is quite simple, and I’ll refer to it as the “heart equation” throughout the rest of this blog post.
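The excerpt doesn't reproduce the equation itself, but one classic parametric heart curve, which may or may not be the exact one used in the post, can be plotted in a few lines:

```python
# One well-known parametric "heart" curve -- possibly different from the exact
# equation in the post, shown here only to illustrate the idea.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 1000)
x = 16 * np.sin(t) ** 3
y = 13 * np.cos(t) - 5 * np.cos(2 * t) - 2 * np.cos(3 * t) - np.cos(4 * t)

plt.plot(x, y, color="red")
plt.axis("equal")
plt.title("Happy Valentine's Day")
plt.show()
```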

Mojo🔥 SDK v0.7 now available for download!
Mojo SDK v0.7 is the first big release of Mojo🔥 in 2024, and it’s chock full of new language and standard library feature goodness. In this blog post, I’ll share some of the key highlights from this release with examples, and discuss what they are and when to use them. I’m only going to cover the new features; for a complete list of what’s new, what’s changed, what’s removed, and what’s fixed in this release, be sure to check out the changelog in the Mojo documentation.