Mojo🔥 - A journey to 68,000x speedup over Python - Part 3
We started this blog post series to describe how to use Mojo🔥 on the Mandelbrot set to achieve over 35,000x speedup over Python. To recap the optimizations so far: in part 1 we ported the code into Mojo for roughly a 90x speedup, and in part 2 we vectorized and parallelized the code to reach a 26,000x speedup. This blog post continues the journey with another performance technique that takes us well beyond our promised 35,000x speedup goal.
How Mojo🔥 gets a 35,000x speedup over Python – Part 2
In this blog post (part 2 of our 3-part blog post series), we continue our optimization journey and describe how to go from a 90x to a 26,000x speedup over Python. We will share insights into the techniques we use and discuss why Mojo is well positioned for these kinds of optimizations.
We’ve raised $100M to fix AI infrastructure for the world's developers
We are excited to announce that we have raised $100 million in new funding, led by General Catalyst and with participation from existing investors GV (Google Ventures), SV Angel, Greylock, and Factory. This second round of funding follows our first $30 million round from last year and will enable us to supercharge our vision for the future of AI infrastructure for the world's developers.
How Mojo🔥 gets a 35,000x speedup over Python – Part 1
When we announced Mojo, we claimed that Mojo can be 35,000 times faster than Python, and we demonstrated this using a specific compute-bound problem: generating a Mandelbrot set. This is an impressive number that has garnered some scrutiny (as it should). In this blog post series, we will show you how we arrived at this figure and the optimizations we implemented to achieve this performance.
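The compute-bound kernel behind this benchmark is the standard Mandelbrot escape-time iteration z = z² + c. As a rough sketch of what a pure-Python baseline looks like (names and parameter defaults here are illustrative, not the exact code from the series):

```python
def mandelbrot_kernel(c: complex, max_iters: int = 200) -> int:
    """Return the number of iterations before z = z*z + c escapes |z| > 2."""
    z = c
    for i in range(max_iters):
        z = z * z + c
        if abs(z) > 2:
            return i  # point escaped: it is outside the Mandelbrot set
    return max_iters  # point never escaped within the budget


# Points inside the set exhaust the iteration budget; points far
# outside escape almost immediately.
print(mandelbrot_kernel(0 + 0j))  # → 200
print(mandelbrot_kernel(2 + 2j))  # → 0
```

Evaluating this kernel over every pixel of an image is trivially parallel and heavily arithmetic-bound, which is what makes it such a useful target for the vectorization and parallelization techniques the series walks through.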
An easy introduction to Mojo🔥 for Python programmers
Learning a new programming language is hard. You have to learn new syntax, keywords, and best practices, all of which can be frustrating when you’re just starting. In this blog post, I want to share a gentle introduction to Mojo from a Python programmer’s perspective.
What’s the difference between the AI Engine and Mojo?
On May 2nd, we announced our next-generation AI developer platform with two exciting breakthrough technologies — the Mojo programming language and the Modular AI Engine. In just over two months, more than 110k developers have signed up for the Mojo Playground to learn Mojo and experience its performance firsthand, over 30k developers have signed up to our waitlist for the AI engine, and our Modular community on Discord has grown to 17k developers! We’re incredibly excited to see developers sharing their experience with Mojo, providing product feedback, and learning from each other.
Modular natively supports dynamic shapes for AI workloads
Today’s AI infrastructure is difficult to evaluate, so many teams converge on simple, quantifiable metrics like QPS, latency, and throughput. This is one reason why today’s AI industry is rife with bespoke tools that provide high performance on benchmarks but have significant usability challenges in real-world AI deployment scenarios.
Do LLMs eliminate the need for programming languages?
We’re very excited about the positive reception of Mojo since its launch as well as the community of people building around it. Given new Large Language Model (LLM) powered developer tools like Copilot and Ghostwriter, many developers are wondering about the future of programming – do programming languages still matter when AI writes the code?
Accelerating AI model serving with the Modular AI Engine
A few weeks ago, we announced the world’s fastest unified AI inference engine. The Modular AI Engine provides significant usability, portability, and performance gains for the leading AI frameworks — PyTorch and TensorFlow — and delivers world-leading execution performance for all cloud-available CPU architectures.
Our launch & what's next
Last week, we launched Modular to the world after more than 16 months in stealth. We started Modular with a deep conviction: after 6+ years of building and scaling AI infrastructure to billions of users, and 20+ years of building foundational compute infrastructure, it was clear the world needed a better path forward. Everyone wants less complexity, better access to compute and hardware, and the ability to develop and deploy AI faster.