Last week, Chris Lattner sat down for an interview on the Developer Voices podcast with Kris Jenkins. It was a wide-ranging episode that explored a variety of topics, including the motivations behind creating Mojo, what it offers Python and non-Python programmers alike, how it is built for performance, and which performance features actually matter. This post recaps a number of highlights from the podcast, edited for clarity and brevity. You can find the full 90-minute interview on YouTube.
Why are you building a new language? Summarized from 3:29 into the video
We started with: how do we make GPUs go brrrr? How do we make the most of these crazy high-performance CPUs that have matrix operations, bfloat16, and other AI extensions? How do we rationalize this wide array of different hardware and put it into a system that we can program? It used to be that a new CPU would come out every year and it would be slightly better than the old one. But now we have these really weird dedicated hardware blocks which are very specialized. The most extreme is the GPU; programming these things requires fundamentally different kinds of software.
We weren't originally intending to build a language at Modular. We started building a very fancy code generator that had no frontend. We wrote everything using a new compiler framework named MLIR. It's now part of the LLVM family, but it's a next-generation replacement in many ways; it makes it really fast to build domain-specific compilers. So we were writing everything in pure IR. We did that for quite some months just to prove that the code generation philosophy in the stack could work and deliver the results we wanted. When we had confidence in that, we said, "Now what do we do about syntax?"
In the case of Mojo, we don't build an AST in the traditional way; we generate MLIR directly from the parser. Once we had decided this novel code generation approach was working and useful, we had to decide how we were going to produce that IR. The obvious thing to reach for is a domain-specific language embedded in some other language. For example, you embed it in Python with a decorator-based approach that generates IR by walking the host language's AST. The other approach is to go build a language. We decided let's do the hard thing, and we think the cost-benefit trade-off is worth it.
Why build Mojo as a superset of Python? Summarized from 11:49 into the video
What I realized is that I want to control some of the risk, but also meet developers where they are. For AI in particular, Python is "the thing"; everybody uses Python. If you pick anything else, you have to justify why it's better than Python. I care about the hundreds of millions of developers who already know Python; not having to retrain them is a huge feature. From an engineering and a design perspective, it's very useful because we already know what the language looks like. We don't have to bikeshed, which is way easier than rationalizing everything from first principles. Mojo is a very extended version of Python. There's still design work, but at least we can focus our energy, as most of those early decisions are made for us.
In the case of Swift, we progressively migrated the Objective-C community. We did that by making it so both worlds could live together: you can call one from the other happily, for example using an Objective-C package but writing your UI in Swift. In the case of Mojo, we're building it into a full superset of Python. And so all the Python idioms, whether they're a good idea or not, will work in Mojo. Even today you can import arbitrary Python packages, mixing and matching them directly. We're continuing to make progress on more dynamic features in particular. Mojo will soon be a really good replacement for CUDA, we're close to being a really good replacement for languages like Rust, and eventually we'll be a good superset of what Python is loved for. But we have to do those steps incrementally and build out the features as we go.
Objective-C and Python share a dark truth: all the important stuff is written in C or C++. If you look at NumPy as just one example, it has a very nice Python API, but it's all written in C and C++. The consequence of using Mojo is that you don't have to throw out your Python knowledge. Programmers are busy people; they have things going on. Most people are not going to learn a new thing just for the fun of it. On the other hand, you can meet people where they are and provide something familiar so they don't have to retrain from scratch. The programmers I've seen generally love growing and learning new tricks. That's why people are excited about what's going on with Mojo.
How does Mojo's type system work? Summarized from 20:02 into the video
If you ask the typical programmer, they would tell you Python doesn't have types. If you talk to a more advanced developer, they'd say it has dynamic types. And so there is list, dict, and str, but it's all at runtime. If you go further, you can say Python has one static type: a reference to a Python object. In Mojo we said let's give this thing a name, which became PythonObject. Now you can create a Mojo type like Int with conversions to and from PythonObject, so they integrate and interoperate. An Int can live on the stack instead of being boxed and heap allocated; it can be the size of a machine word, or have whatever properties you need. So you have the ability to opt into a fully static world. CPU and GPU high-performance numeric programmers never want anything dynamic; they want full control over the machine and access to very fiddly low-level optimizations. But we also allow you to remove types, so you can get fully dynamic behavior if you'd like.
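To make that spectrum concrete, here's a minimal sketch contrasting the two (Mojo syntax circa 2024; the exact conversion surface between PythonObject and native types has evolved across releases):

```mojo
from python import PythonObject

fn main() raises:
    # Fully dynamic: a reference-counted, heap-allocated CPython object.
    var obj: PythonObject = 42
    obj = obj + 1  # dispatched through the CPython runtime

    # Fully static: a machine-word integer that lives on the stack.
    var n: Int = 42
    n = n + 1  # compiles down to a single add instruction

    print(obj)
    print(n)
```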
There's a whole class of languages, including TypeScript and even Python itself, where the fundamental nature of the universe is untyped, but you can provide type hints that can be used to inform error messages. Typing in Mojo is actually enforced; this is very important. We've built a modern statically typed, generic, higher-order functional type system, like many modern languages. It's pretty familiar to people, and I think familiarity is good.
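As a small illustration of that static, generic checking (a sketch assuming the 2024-era standard library's Stringable trait and str() function):

```mojo
fn describe[T: Stringable](label: String, value: T):
    # T is bound at compile time, so mistakes surface before the program runs.
    print(label + ": " + str(value))

fn main():
    describe("count", 42)   # T is inferred as Int
    describe("ratio", 1.5)  # T is inferred as Float64
```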
The other side of it, though, is that we are very hardcore about pushing language features into the library. I'll pick on C++ here because it's an easy victim, but it's a very powerful language. For example, float and double are built into the language, so there are certain behaviors and conversions that only work with builtins and can't be expressed by a library. Another example: you can't overload the dot operator in C++, so you can't build a smart reference. That's been driving people nuts for decades now. In Mojo we take a very aggressive approach to this, which is to push everything we can into libraries and push the magic out of the compiler. For example, Int is not built into the language, the float types are not built into the language; these are all just libraries, so the type system composes, and we use it for all the builtin types. That forces the language to provide only the core essentials for expressing libraries. We want everything to be ergonomic and very dynamic and flexible and powerful for the Python folks.
Most people will build on top of higher-level libraries. And so we have the MAX engine, which lets you just run a graph if you don't want to know how any of that stuff works: here's a graph, go to town. Mojo allows you to add some types to go 100x or 1000x faster, without even doing fancy accelerator stuff. That's a material speedup for not having to retrain all your engineers. And so that's cool, even if you don't use the full power of what Mojo can do. You might not want to write GPU-level code; you might just want your code to be fast. But when you want to go for performance, you can do that without someone telling you that you picked the wrong language.
How do you deliver high performance and portable code? Summarized from 26:34 into the video
Mojo believes in zero-cost abstractions, as seen in C++, Rust, and many other languages. The way we do it is pretty innovative, because we have this fancy MLIR compiler behind the scenes. This allows us to build up zero-cost abstractions for various hardware. Swift, in a way, was syntactic sugar for LLVM; at the very bottom of the stack it could talk directly to LLVM primitives. Mojo does basically that same trick, but supercharges it by moving to this MLIR world. MLIR, being a much more modern compiler stack, has way more powerful features.
We can expose things like bfloat16 and really weird accelerator features like tiled matmuls and the tensor cores on a GPU, and then wrap them in really nice libraries. So you get direct low-level access to crazy exotic hardware, but you also have syntactic sugar and built-in libraries, and you can extend the language without having to be a compiler nerd. I think this is very important and powerful. For example, complex numbers are hardware accelerated on certain chips. A complex multiply-accumulate is four multiplies and a couple of adds, and certain CPUs have operations that can do that in one shot. We have a compile-time metaprogramming system that can say: if you're on supported hardware, go do the optimized thing, else just do the generic thing.
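A sketch of what that choice looks like with Mojo's compile-time @parameter if, using a CPU-feature query from sys.info; the AVX-512 check here is purely illustrative, not the actual dispatch Modular uses:

```mojo
from math import fma
from sys.info import has_avx512f

fn mac(a: Float32, b: Float32, acc: Float32) -> Float32:
    # The branch is resolved at compile time; only one path is ever generated.
    @parameter
    if has_avx512f():
        # Hardware path: a fused multiply-add in one shot.
        return fma(a, b, acc)
    else:
        # Generic path: a separate multiply and add.
        return a * b + acc
```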
Mojo has superpowers because of the domain it's in. You can write one piece of source code and it runs partially on your CPU and partially on your GPU, even though those may have different pointer sizes and different numerics and capabilities. There are these very nerdy compiler things that enable this to just work, in a way that people aren't quite used to. Because Mojo was designed in 2022 instead of 1985, we have things like SIMD as core to the language, since all computers have SIMD instructions these days.
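For example, SIMD is an ordinary parametric type in the standard library rather than a bolted-on intrinsic; a minimal sketch:

```mojo
fn main():
    # Four float32 lanes held in one value; on most CPUs this maps directly
    # onto a single vector register.
    var a = SIMD[DType.float32, 4](1.0, 2.0, 3.0, 4.0)
    var b = a * a + 1.0  # element-wise: one vector multiply, one vector add
    print(b)             # [2.0, 5.0, 10.0, 17.0]
```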
We also have direct support for explicit vectorization, so you get full access to the hardware. It's been really fun seeing Mojo developers worldwide take something like the Game of Life and say: I'll start using this, I'll try this. Oh wow, it's a thousand times faster than the code I started with. With library-based extensibility, our vectorize function is just a library function. You can vectorize your code by using a combinator from the library, building it up one step at a time. I've seen tons of people go through this growth path where they say: wow, this is really cool, I'm having fun, I'm building something interesting, I'm learning something. People love going through that growth path.
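A hedged sketch of that combinator style (the pointer type and the load/store spellings have shifted across Mojo releases, so treat this as one 2024-era form rather than the definitive API):

```mojo
from algorithm import vectorize
from memory import DTypePointer
from sys.info import simdwidthof

fn scale_in_place(data: DTypePointer[DType.float32], size: Int, factor: Float32):
    alias width = simdwidthof[DType.float32]()

    @parameter
    fn step[w: Int](i: Int):
        # Load w lanes starting at i, scale them, and store them back.
        data.store[width=w](i, data.load[width=w](i) * factor)

    # The library combinator emits the vector body plus the scalar remainder.
    vectorize[step, width](size)
```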
How does Mojo work with GPUs and AI? Summarized from 32:07 into the video
In the AI space, I'm in love with AI, both for the user applications and for all the systems and technology that were built to support it. One of the things I found super inspiring is that today you can sign up for a Jupyter notebook on Google Cloud and get access to a TPU. And with a few lines of code, you can be programming an exaflop supercomputer. You're describing a novel computation that gets mapped, partitioned, and scaled out across thousands of chips at massive data center speed. High Performance Computing (HPC) people have been doing this for a long time, but now you have AI researchers doing it without having to write low-level high-performance code.
What made that possible was a shift from completely imperative programming to declarative programming. The way this works in AI is that you build a machine learning graph, and the AI researcher thinks at the level of ops like matrix multiplication, convolution, gather, reduction, etc. So they think about simple compositions of these highly parallel operators. Then you give this graph to a very fancy compiler stack that's actually doing things like loop fusion. You're taking this very complicated math, doing very high-tech compiler transformations, and then also dealing with distribution across clusters. You can do these things because it's a declarative specification; you're not trying to take a pile of C code and parallelize it. In the case of Mojo, what this whole stack evolves into is what's called the MAX engine. The MAX engine is a very fancy AI compiler stack.
It's like XLA, but built after learning a lot of lessons from being familiar with these technologies. It can run machine learning graphs, but we also want it to be able to talk to imperative code. You need to be able to write custom operators and invent new algorithms. Maybe Modular doesn't know what an FFT is, but you do, and it's really important to your signal processing domain. We want the stack to be completely extensible. And so Mojo enables people to write an algorithm in very simple and familiar code; you can understand it because you're just writing source code. In contrast, CUDA is its own little world, and it's very different from Python. So instead of writing a CUDA kernel, you can write some Mojo code, and because of MLIR we can reflect into it and see what the code is doing.
That allows us to do things like fancy compiler fusions and handle placement. That's something the world doesn't have, because in the AI space the state-of-the-art technologies are built around CUDA and math libraries like Intel's MKL, where the operators are all black boxes. There are these fancy graph compiler things, but they don't actually have the ability to see into the logic they're orchestrating, so they can't do these high-level transformations. It's just very janky in various ways. Part of our mission is to solve this and take all these systems a major step forward. What drives Mojo is the extreme high-performance needs of AI. We want to be state of the art on performance without relying on vendor libraries that do matrix multiplications, for example. It is also why we care about usability: many people are building graphs with Python and PyTorch, and we want to meet people where they are. It all flows together.
How are you solving the AI divide between research and production? Summarized from 37:19 into the video
The traditional TensorFlow and PyTorch libraries were designed 8 or 10 years ago, depending on how you count. They come from a research and training world, but today a huge amount of focus has shifted to deployment. When you get into deployment mode, you don't really want Python in production. It can be done, but there are some challenges with that. So we've entered a world with researchers who live and breathe Python, and it's great for their use case, but then you have production people who have to rewrite these models and the tokenization logic for LLMs in C or Rust to be able to ship something. A big part of Mojo is about solving that problem: having one language that can scale, which can heal the divide between all the personas building these systems. Whether they're high-performance numerics people, deployment engineers, or AI researchers, we can get everybody to be able to talk to each other, because today they're literally speaking different languages, and that is massively impacting AI getting into production. It's an evolution of a lot of very well-considered systems that were locally developed, aggregated, and then hill-climbed like crazy, because AI has changed a lot in the last five to eight years.
Nobody has had a chance to go back and rethink some of this technology from first principles; all this stuff grew really quickly. Some people might say that building a programming language is insane, but it's just a multi-year project. You have to be very practical about this and make sure you don't sign up for something you can't deliver on. It's a big bet that lots of other people aren't willing to make, for a wide variety of reasons. And so you have to be right, but if you're right, then it's actually a really good contribution to the world.
An analogy I've seen is that it's like writing C or Rust code, but with Python syntax. This is disarming, because people are taught that Python is slow and that you should never write a for loop. In Mojo that wisdom is invalid, because it's not the same implementation. Many things that people "knew" to be bad are actually totally fine. We have folks from HPC backgrounds, experts in low-level system architecture, saying it's so weird to be writing assembly code in Python. It does twist your brain, open your eyes, and shift your perspective, but otherwise it's familiar. It isn't driven by novelty for novelty's sake; it's about pragmatism.
How does metaprogramming work, and how does it compare to Zig? Summarized from 42:38 into the video
In the AI world, source code is effectively a metaprogram: it's a bunch of imperative logic that roughly describes a graph, which you then distribute and transform. Python has long been used for metaprogramming across a wide variety of domains; that's one of the reasons it's been so successful in the AI community. If you look at high-performance numerics, people often use C++ templates for metaprogramming, because you want an algorithm that works on float or double, float32 or float64. And of course, that turns into a massive cataclysm of templates, depending on how advanced you get.
More modern languages like Zig have said: let's not have a meta language that's different from the language itself. They use the same language for both normal programming and metaprogramming. In the case of Mojo, we said that's actually a really fantastic idea. Python is highly dynamic: you can overload operators and do all these things dynamically. But we can't pay that expense; we can't afford even a single extra clock cycle in our domain. We need bare-metal performance, but we want the benefit of the abstractions and the extensibility that Python provides. So we fused compile-time metaprogramming with the dynamic metaprogramming of Python.
This is one of the major ingredients that allows Mojo to be extremely expressive. With the same idea as Zig, you can use the standard runtime algorithms, allocate heap data structures, and do all this stuff at compile time. This gives you a composition that enables really expressive libraries. It allows you to build combinators, higher-level functions, and features, composing the benefits of the compiler world with the runtime world. You have values, objects, functions, classes, and types, and you can use them either at compile time or at run time. A simple example is a function that creates a lookup table. You can call it at runtime, passing dynamic values as the arguments and doing all kinds of math on them. But you can also run it at compile time: the compiler calculates the data structure, doing all the logic that would have run at runtime. The output is a list, that list is then burnt into the executable, and now you don't have to compute it at runtime.
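A minimal sketch of that lookup-table idea, assuming alias forces compile-time evaluation as in 2024-era Mojo (compile-time support for heap types like List has varied by release):

```mojo
fn squares(n: Int) -> List[Int]:
    var table = List[Int]()
    for i in range(n):
        table.append(i * i)
    return table

# Forced to run at compile time: the finished list is burnt into the binary.
alias TABLE = squares(16)

fn main():
    # A plain read at runtime; nothing is recomputed.
    print(TABLE[5])  # prints 25
```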
That's a simple example; there are many fancier ones. Types are just values, and so your types can be compile-time values, which lets you do much fancier, higher-level programming. There's a whole rabbit hole there, and the cool thing about it is that it comes back to enabling library developers to make domain-specific abstractions and build things that model their world very clearly. Zig has its own personality; it's a very low-level language, and it's very different from Mojo in certain ways. Mojo wants to enable libraries and abstractions, and that's its focus. But this idea of using the language at comptime is shared; we're very happy to learn from other people.
How does memory management work, and how does it compare to Rust? Summarized from 48:25 into the video
One of the ways we can embrace the entire Python ecosystem is by interoperating with the CPython object model. Everything is just compatible: if you import Python code, you get the traditional reference-counted, indirect object box. In Mojo native code, you get a very powerful type system. At the bottom you have types with move constructors, copy constructors, and destructors, so you can write code that manages resources directly. One of the foundational things is that you can call into C, so if you want to, you can call malloc and free through unsafe hooks. But again, we want people to be able to compose libraries together, and we want to do so in a safe way. What we have is references with lifetimes that work very similarly to the way they work in Rust. There are many implementation differences, but you can think of it that way.
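A rough sketch of that resource-management layer, a buffer that owns its allocation (import paths and pointer APIs here follow one 2024-era form; they have moved around between releases):

```mojo
from memory import DTypePointer

struct Buffer:
    var data: DTypePointer[DType.uint8]
    var size: Int

    fn __init__(inout self, size: Int):
        # Acquire the resource when the value is created.
        self.data = DTypePointer[DType.uint8].alloc(size)
        self.size = size

    fn __moveinit__(inout self, owned existing: Self):
        # Transfer ownership without copying the allocation.
        self.data = existing.data
        self.size = existing.size

    fn __del__(owned self):
        # Deterministic cleanup when the value's lifetime ends.
        self.data.free()
```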
In Mojo, it's way less in your face, and you don't have to micromanage the borrow checker quite as much, but it provides the same approach and the same ability to manage references. This is a really powerful and important thing, and we've learned a lot from Rust; they paved a lot of roads with really great work. But there are certain challenges with the borrow checker. One example is that the Rust parser has a bunch of pretty complicated rules and special cases for how it generates the IR. Then the borrow checker comes along and tells you if you did it wrong. The simple cases are easy to understand, but in the complicated cases you're dealing with the order in which the parser evaluated things, which causes all kinds of edge cases.
The Mojo equivalent is actually a very different thing. Our parser rules are very simple and predictable; we push a lot of complexity out of the language and into the library. Our borrow checker isn't just an enforcer: it decides what the lifetime of a value is. A very big difference from Rust is that in Rust, values are destroyed at the end of a scope. You can run into issues where you get exclusivity violations because something lives too long, although there are various solutions to improve this, like non-lexical lifetimes. In Mojo, a value is destroyed immediately after its last use. This makes for a much friendlier experience, because lifetimes end earlier and exclusivity violations get relaxed much sooner. It's better for memory use: if you're talking to a GPU, for example, a tensor could be holding on to four gigabytes of data, and you want to free that memory as early as possible. It's better for little things like tail calls and other core PL concepts; there's a pile of very low-level, obscure details here. Rust also has this thing called drop flags, where in the worst case it dynamically tracks whether or not a slot on the stack is live.
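A small sketch of the difference; do_more_work is a hypothetical placeholder, and the point is where the destructor runs:

```mojo
fn do_more_work():
    # Hypothetical stand-in for whatever the function does next.
    pass

fn demo():
    var tensor = List[Float32](capacity=1_000_000)
    tensor.append(1.0)
    print(tensor[0])
    # ^ Last use of `tensor`: Mojo runs its destructor right here,
    #   not at the end of the scope as C++ (or Rust, by default) would.
    do_more_work()  # the big allocation is already freed before this call
```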
I've known the Rust community for a long time and have a lot of respect for it. But it's around 14 years old, roughly the same age as Swift, and we've learned a lot from that journey. Mojo represents an opportunity to take that learning and do the next step; there are a bunch of ways to simplify things. Mojo also supports async/await natively, because that's obviously important for high-performance threaded applications. We don't need pinning, which turns out to be a really big deal, because all values have identity. And there are very low-level nerdy tweaks to the way the type system works: in Mojo you never get an implicit memcpy because of moves.
How does mutability and value semantics work in Mojo? Summarized from 55:47 into the video
One of the things that we pushed on, and that Swift has, is functional programming. Functional programmers say it's amazing because you never mutate data: you always get new values, and because you get new values, you get composition, predictability, control, all these different benefits of not having mutation. C programmers would flip that around and say that creating a new value every time you want to insert something into a list is very bad for performance, so nobody could ever build a real system on top of that. Swift says the thing you actually want is exclusive ownership of a value, which gives you value semantics while still allowing local mutation. Rust has its own take on the same idea: if you have exclusive access to a value, you can mutate it. In Swift, the Array, Dictionary, and String types are all immutable in the Java sense, where you know the value will never change underneath you. And so it looks very much like a functional programming idiom: if I have a String, it can't change unless I change it, and if I change it, it's not gonna break anybody else. And the implementation never does deep copies implicitly. There's a bunch of stuff that was developed and works really well in the Swift ecosystem that I think will come over naturally into the Mojo ecosystem.
The goal is to bring forward the wonderful things of functional programming: composition and locality of reference, so you don't have the spooky action at a distance that reference-based languages have. That's what I love about the functional programming model, but then we can also bring in in-place mutation, so you also get the efficiency. Swift has some problems; it sometimes implicitly copies things a million times, and so we've learned from that and fixed some of those problems in Mojo.
This is all opt-in; if you want to use fully dynamic stuff, that's totally fine. It's a system that can scale, because we're not trying to change the existing world; what we're doing is filling in the missing world. One way to look at modern Python is that it's only half the language: underneath it is C. If you're building a large-scale application in Python, you end up having C or Rust or something else that goes with Python. So what we're doing is keeping the Python syntax but replacing the C, giving you one system that can do both. Instead of having to switch from Python to C, with FFI and bindings and all that nonsense, you can still have the __add__ you're familiar with, and everything just works.
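For instance, a Mojo struct can define the same __add__ dunder that Python programmers already use; a minimal sketch (@value synthesizes the boilerplate constructors):

```mojo
@value
struct Meters:
    var value: Float64

    fn __add__(self, other: Meters) -> Meters:
        # The same dunder protocol as Python's `+`, compiled statically.
        return Meters(self.value + other.value)

fn main():
    var total = Meters(1.5) + Meters(2.5)
    print(total.value)  # 4.0
```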
How does parallelization work in Mojo? Summarized from 01:00:34 into the video
We push it into libraries: we have things like a parallel for loop, and they're just library functions. You can pass a nested function in, and that's the easiest way. We have a very high-performance, low-level threading library. Today's systems are not just four or eight cores; they're servers with 256 cores, and it's only going to get crazier in the next few years. This is the world Mojo is designed for. We haven't built out an actor system for Mojo yet. Swift has a full actor system which is type safe; it's very good for large-scale, loosely coupled distributed agents, and it even supports distributed actors. It builds right on top of async/await in a very nice way, and so we may do that. Right now we're very focused on structured compute, more supercomputer-style, and the numerics side of things, so we haven't prioritized an actor system yet. I would prefer it to be in the library if we can, and to make sure that it's memory safe; other threading libraries are not memory safe, and that leads to certain challenges. There may be a benefit to putting some logic in the compiler to mediate accesses across actors, and then putting the bulk of it in the library with type system support.
Most programmers just want to say: here's a parallel for loop, go nuts. At the systems level, you want to be able to support structured, nested parallelism. You need thread libraries that compose, and you need async/await, so you don't get bogged down with hundreds of thousands of threads that then kill your machine.
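That parallel for loop really is just a library call; a minimal sketch using parallelize from the algorithm module (2024-era signature):

```mojo
from algorithm import parallelize

fn main():
    @parameter
    fn work(i: Int):
        # Each index is dispatched onto the runtime's worker threads.
        print("processing chunk", i)

    # Run 8 independent work items across the available cores.
    parallelize[work](8)
```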
How does CPU and GPU Composability work? Summarized from 01:03:27 into the video
CPUs may have 256 cores, but GPUs have thousands of cores, or thousands of threads, and the programming model around a GPU is extremely different from the traditional CPU programming model. One of our goals is to make it so people can write much more portable algorithms and applications. It's easy to understand how you make a graph portable: you implement the graph for one type of hardware, and another implementation for another type of hardware. The power of being declarative is that you're separating out a lot of the implementation concerns. But then if you get down into writing for loops, that's imperative code at the bottom of the stack. And so we've carved out the ability for people to define their own abstractions in Mojo.
When you start talking about accelerators, what ends up mattering a lot is parallelism, but also memory; how you use the memory hierarchy is the most important thing these days, particularly for GPUs and LLMs in this world we inhabit. Modern GPUs and CPUs have many-level memory hierarchies. In a CPU, you've got a big vector register file, which is effectively your L0 cache; an L1 cache, which is really fast and close to the CPU; an L2 cache, which is sometimes shared between a couple of cores; an L3 cache, which is shared by all of the cores; and then main memory. The GPU has roughly the same idea, although the details are very different.
Inherent to getting high performance with something like matrix multiplication is not just doing a dot product: you have to process the workload in tiles. We've seen the emergence of various tile-based programming models. Instead of writing a for loop doing a load, a store, an add, and a multiply, you're thinking about processing a tile at a time. You write the algorithm for a tile, and then you use higher-level orchestration logic that says: on this device I'll traverse this way, or I'll prefetch the data two steps ahead, or I'll get better reuse if I go vertically instead of horizontally, etc. There are all these tricks the world has developed. In AI you get a generalization called a tensor: you take a two-dimensional grid of numbers and make it an n-dimensional grid of numbers. What really happens is that it gets linearized in memory. Stepping along a row is efficient, but going down to the next row means jumping a whole row's worth of data, which is a lot less efficient. When we get into multidimensional tensors, that becomes even more pronounced.
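That linearization is simple index arithmetic; for a row-major matrix, the flat offset works out like this, which is why a column step is cheap and a row step is a big jump:

```mojo
fn flat_index(row: Int, col: Int, num_cols: Int) -> Int:
    # Row-major layout: the next column is 1 element away,
    # while the next row is a whole num_cols elements away.
    return row * num_cols + col

fn main():
    print(flat_index(0, 1, 1024))  # 1    (neighboring column: cheap)
    print(flat_index(1, 0, 1024))  # 1024 (next row: a big jump)
```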
In matrix multiplication, you're typically going horizontally through a row of one matrix and vertically through a column of the other to compute an output element. If you arrange those two matrices differently, by transposing one ahead of time, things get a lot more efficient. Instead of processing one row and one column at a time, you can process two rows and one column; then when you're processing a column, you're accessing two elements next to each other, for example. As you generalize this, you get the idea of a tile: a logical concept of processing a two-dimensional block of memory that composes with other things. Not only is modern hardware complicated with vectors and threads, etc., but it's now adding full-on matrix operations to the silicon. You can literally do a matrix multiplication of a very small matrix, say 4x4, or 16x16 for some accelerators, up to 128x128. The intuition is that AI is important to the world and silicon is fundamentally two-dimensional: if you use the two-dimensional nature of silicon to lay down a matrix multiplication, you can get a lot of performance and other benefits. A lot of the challenge is how you use these accelerators, map tiles onto the devices, and use the memory hierarchy efficiently; this becomes as important as the numerics, because the performance difference can be 10x or 100x.
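The orchestration half of that idea reduces to looping over blocks; a generic sketch of a tile traversal (the per-element kernel is left as a placeholder):

```mojo
alias TILE = 16

fn process_tiles(rows: Int, cols: Int):
    # Orchestration: pick which TILE x TILE block to visit next.
    for tile_row in range(0, rows, TILE):
        for tile_col in range(0, cols, TILE):
            # Kernel: work within one cache-friendly block.
            for r in range(tile_row, min(tile_row + TILE, rows)):
                for c in range(tile_col, min(tile_col + TILE, cols)):
                    pass  # per-element loads and multiplies would go here
```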
We have the Fortran world and various C++ template libraries that were built to try to combat some of these problems. They came up with sometimes very powerful but niche solutions, and the usability was never great. This is where you want to pull together Mojo's ability to talk to all these crazy hardware features, like the matrix multiplication operations, along with higher-order combinators, so you can build libraries that handle things like tiling. You don't want to know how all of the orchestration logic works; you only want to know how your part of it works. The compile-time metaprogramming then enables you to write really reusable and portable code. I recently gave a talk at the NVIDIA GTC conference about how this all composes together so that you can write the same high-performance numeric algorithm and have it work on both GPUs and CPUs.
If you're making a major investment in building software, you want it to last for 10 or 20 years, and you want to be able to adapt to the needs of new hardware. And hardware will continue to evolve; it's moving faster than ever. Mojo is helping break through some of the boundaries that have prevented people from building portable software, while still letting them utilize the high-performance, super fancy features that vendors keep coming out with. NVIDIA is a great citizen in the software world because every time they come out with a new chip, they provide LLVM access. We can talk directly into that stack; hardware makers are very LLVM friendly these days, and Mojo can talk to MLIR and LLVM and get direct access to all these hardware optimizations.
Community and open source Summarized from 01:17:04 into the video
Mojo is still a relatively young language; it launched last May, so it's been public for less than a year. But it's doing really well: over 175,000 people have used Mojo, and we have a nice Discord community with over 22,000 people hanging out, where all the Mojicians like talking to each other and building cool stuff. Actually, as we record today, we're open sourcing a big chunk of Mojo. The entire standard library, which is the heart and soul of the language, is being open sourced. We've been on a quest to open source more of the stack over time, so that's a really big deal. We've been public about this and telling people about it for a while; people have been waiting for it for a long time. We're seeing continued growth of the community and continued passion projects, and I think this will be a huge step.
Open source is very important to me. I built the LLVM community from scratch, starting from my research project at university, I helped build the Swift open source community, and I've worked in many other communities. What I've seen is that open source isn't just about having code on GitHub. Open source is about having an open community, an inclusive way of developing code together, and working together toward a common goal. And so we put a lot of energy into not just providing source code, but also setting up a contribution model, picking the Apache 2 license, and following best practices.
I'm really excited about that. I think people are going to have a lot of fun, and I look forward to being much more open with our development of Mojo. Please join our Discord; that's a great place to go. There's everything from folks interested in type theory nerdery, to AI researchers, to people who just want a better Python. And again, the cool thing about Mojo is that it's being built with the state of the art to solve these hardcore problems at the frontier of computer architecture and programming languages. We're building it in a way that's completely general, although we're focused on AI, as there's a lot of pain and suffering in that world. But it turns out that a lot of people write web servers and other things, and it's fantastic to see people building in that space, even though we personally don't have the expertise to invest in it.
The thing I love about Swift is that I still get people who stop me in the street and say: thank you for helping drive this thing and make it happen; because of you, I learned how to get into programming, and Objective-C was always too scary. These things take a couple of years to play out, but what I hope happens with Mojo is that we get all these people who know Python and can continue to grow, because they're not faced with the scary threshold of learning C or Rust. And if we can get more people involved and be more inclusive to good ideas, I think we'll find that these technologies can go even further and have an even bigger impact.
Until next time! 🔥