ML needs a new programming language – Interview with Chris Lattner
120 comments · September 5, 2025
MontyCarloHall
The reason why Python dominates is that modern ML applications don't exist in a vacuum. They aren't the standalone C/FORTRAN/MATLAB scripts of yore that load in some simple, homogeneous data, crunch some numbers, and spit out a single result. Rather, they are complex applications with functionality extending far beyond the number crunching, which requires a robust preexisting software ecosystem.
For example, a modern ML application might need an ETL pipeline to load and harmonize data of various types (text, images, video, etc., all in different formats) from various sources (local filesystem, cloud storage, HTTP, etc.). The actual computation then must leverage many different high-level functionalities, e.g. signal/image processing, optimization, statistics, etc. All of this computation might be too big for one machine, and so the application must dispatch jobs to a compute cluster or cloud. Finally, the end results might require sophisticated visualization and organization, with a GUI and database.
There is no single language with a rich enough ecosystem that can provide literally all of the aforementioned functionality besides Python. Python's numerical computing libraries (NumPy/PyTorch/JAX etc.) all call out to C/C++/FORTRAN under the hood and are thus extremely high-performance. For functionality they don't implement, Python's C/C++ FFIs (e.g. Python.h, NumPy's C API, PyTorch/Boost C++ integration) are not perfect, but they are good enough that implementing the performance-critical portions of code in C/C++ is much easier than re-implementing entire ecosystems of packages in another language like Julia.
benzible
Python's ecosystem is hard to beat, but Elixir/Nx already does a lot of what Mojo promises. EXLA gives you GPU/TPU compilation through XLA with similar performance to Mojo's demos, Explorer handles dataframes via Polars, and now Pythonx lets you embed Python when you need those specialized libraries.
The real difference is that Elixir was built for distributed systems from day one. OTP/BEAM gives you the ability to handle millions of concurrent requests as well as coordinate across GPU nodes. If you're building actual ML services (not just optimizing kernels), having everything from Phoenix / LiveView to Nx in one stack built for extreme fault-tolerance might matter more than getting the last bit of performance out of your hardware.
melodyogonna
Who uses this Exla in production?
Hizonner
This guy is worried about GPU kernels, which are never, ever written in Python. As you point out, Python is a glue language for ML.
> There is no single language with a rich enough ecosystem that can provide literally all of the aforementioned functionality besides Python.
That may be true, but some of us are still bitter that all that grew up around an at-least-averagely-annoying language rather than something nicer.
MontyCarloHall
>This guy is worried about GPU kernels
Then the title should be "why GPU kernel programming needs a new programming language." I can get behind that; I've written CUDA C and it was not fun (though this was over a decade ago and things may have since improved, not to mention that the code I wrote then could today be replaced by a couple lines of PyTorch). That said, GPU kernel programming is fairly niche: for the vast majority of ML applications, the high-level API functions in PyTorch/TensorFlow/JAX/etc. provide optimal GPU performance. It's pretty rare that one would need to implement custom kernels.
>which are never, ever written in Python.
Not true! Triton is a Python API for writing kernels, which are JIT compiled.
catgary
I agree with you that writing kernels isn’t necessarily the most important thing for most ML devs. I think an MLIR-first workflow with robust support for the StableHLO and LinAlg dialects is the best path forward for ML/array programming, so on one hand I do applaud what Mojo is doing.
But I’m much more interested in how MLIR opens the door to “JAX in <x>”. I think Julia is moving in that direction with Reactant.jl, and I think there’s a Rust project doing something similar (I think burn.dev may be using ONNX as an even higher-level IR). In my ideal world, I would be able to write an ML model and training loop in some highly verified language and call it from Python/Rust for training.
jimbokun
> That may be true, but some of us are still bitter that all that grew up around an at-least-averagely-annoying language rather than something nicer.
Don't worry. If you stick around this industry long enough you'll see this happen several more times.
Hizonner
I'm basically retired. But I'm still bitter about each of the times...
ModernMech
> This guy is worried about GPU kernels, which are never, ever written in Python. As you point out, Python is a glue language for ML.
That's kind of the point of Mojo: they're trying to solve the so-called "two language problem" in this space. Why should you need two languages to write your glue code and kernel code? Why can't there be a language that is as easy to write as Python but can still express GPU kernels for ML applications? That's what Mojo is trying to be, through clever use of LLVM's MLIR.
nostrademons
It's interesting, people have been trying to solve the "two language problem" since before I started professionally programming 25 years ago, and in that time period two-language solutions have just gotten even more common. Back in the 90s they were usually spoken about only in reference to games and shell programming; now the pattern of "scripting language calls out to highly-optimized C or CUDA for compute-intensive tasks" is common for webapps, ML, cryptocurrency, drones, embedded, robotics, etc.
I think this is because many, many problem domains have a structure that lends themselves well to two-language solutions. They have a small homogenous computation structure on lots of data that needs to run extremely fast. And they also have a lot of configuration and data-munging that is basically quick one-time setup but has to be specified somewhere, and the more concisely you can specify it, the less human time development takes. The requirements on a language designed to run extremely fast are going to be very different from one that is designed to be as flexible and easy to write as possible. You usually achieve quick execution by eschewing flexibility and picking a programming model that is fairly close to the machine model, but you achieve flexibility by having lots of convenience features built into the language, most of which will have some cost in memory or indirections.
There've been a number of attempts at "one language to rule them all", notably PL/1, C++, Julia (in the mathematical programming subdomain), and Common Lisp, but it often feels like the "flexible" subset is shoehorned in to fit the need for zero-cost abstractions, and/or the "compute-optimized" subset is almost a whole separate language that is bolted on with similar but more verbose syntax.
goatlover
> There is no single language with a rich enough ecosystem that can provide literally all of the aforementioned functionality besides Python.
Have a hard time believing C++ and Java don't have rich enough ecosystems. Not saying they make for good glue languages, but everything was being written in those languages before Python became this popular.
j2kun
Yeah, the OP here listed a bunch of Python stuff that all ends up shelling out to C++. C++ is rich enough, period, but people find it unpleasant to work in (which I agree with).
It's not about "richness," it's about giving people who don't really want to do the messy, low-level parts of software a language ecosystem that can encapsulate the performance-critical parts behind easy glue.
nromiun
Weird that there has been no significant adoption of Mojo. It has been quite some time since it got released and everyone is still using PyTorch. Maybe the license issue is a much bigger deal than people realize.
poly2it
I definitely think the license is a major holdback for the language. Very few individuals, or organisations for that matter, would want to invest in a new closed stack. CUDA is accepted simply because it has been around for such a long time. GPGPU needs a Linux moment.
pjmlp
I personally think they overshot themselves.
First of all, some people really like Julia; regardless of how it gets discussed on HN, its commercial use has been steadily growing, and it has GPGPU support.
On the other hand, regardless of the sorry state of JIT compilers for Python on the CPU side, at least Nvidia and Intel are quite serious about Python DSLs for GPGPU programming on CUDA and oneAPI, so one gets close enough to C++ performance while staying in Python.
So Mojo isn't that appealing in the end.
dsharlet
The problem I've seen is this: in order to get good performance, no matter what language you use, you need to understand the hardware and how to use the instructions you want to use. It's not enough to know that you want to use tensor cores or whatever, you also need to understand the myriad low level requirements they have.
Most people that know this kind of thing don't get much value out of using a high level language to do it, and it's a huge risk because if the language fails to generate something that you want, you're stuck until a compiler team fixes and ships a patch which could take weeks or months. Even extremely fast bug fixes are still extremely slow on the timescales people want to work on.
I've spent a lot of my career trying to make high level languages for performance work well, and I've basically decided that the sweet spot for me is C++ templates: I can get the compiler to generate a lot of good code concisely, and when it fails the escape hatch of just writing some architecture specific intrinsics is right there whenever it is needed.
adgjlsfhk1
The counterpoint to this is that having a language that has a graceful slide between python like flexibility and hand optimized assembly is really useful. The thing I like most about Julia is it is very easy to both write fast somewhat sloppy code (e.g. for exploring new algorithms), but then you can go through and tune it easily for maximal performance and get as fast as anything out there.
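For illustration, a rough sketch of what that slide can look like in Julia (the function and type choices here are purely illustrative, not from any particular package):

    # Quick, sloppy version: generic, allocates temporaries, fine for exploring ideas.
    rmsd_naive(a, b) = sqrt(sum((a .- b) .^ 2) / length(a))

    # Tuned version of the same function: concrete argument types, no temporaries,
    # bounds checks elided and SIMD hinted -- same language, same call site.
    function rmsd_fast(a::Vector{Float64}, b::Vector{Float64})
        acc = 0.0
        @inbounds @simd for i in eachindex(a, b)
            d = a[i] - b[i]
            acc += d * d
        end
        return sqrt(acc / length(a))
    end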
mvieira38
> First of all some people really like Julia, regardless of how it gets discussed on HN, its commercial use has been steadily growing
Got any sources on that? I've been interested in learning Julia for a while but don't because it feels useless compared to Python, especially now with 3.13
nickpsecurity
Here's some benefits it might try to offer as differentiators:
1. Easy packaging into one executable. Then, making sure that can be reproducible across versions. Getting code from prior AI papers to run can be hard.
2. Predictability vs Python runtime. Think concurrent, low-latency GC's or low/zero-overhead abstractions.
3. Metaprogramming. There have been macro proposals for Python. Mojo could borrow from D or Rust here.
4. Extensibility in a way where extensions don't get too tied into the internal state of Mojo like they do Python. I've considered Python to C++, Rust, or parallelized Python schemes many times. The extension interplay is harder to deal with than either Python or C++ itself.
5. Write once, run anywhere, to effortlessly move code across different accelerators. Several frameworks are doing this.
6. Heterogeneous, hot-swappable, vendor-neutral acceleration. That's what I'm calling it when you can use the same code in a cluster with a combination of Nvidia GPUs, AMD GPUs, Gaudi 3s, NPUs, SIMD chips, etc.
pjmlp
Agree on most points; however, I still can't use it today on Windows, and it needs that unavoidable framework.
Languages on their own have a very hard time gaining adoption.
raggi
I'm on the systems side, and I find some of what Chris and team are doing with Mojo pretty interesting; it could be useful for eradicating a bunch of polyglot FFI mess across the board. I can't invest in it or even start discussions around using it until it's actually open.
melodyogonna
It is not ready for general-purpose programming. Modular itself tried offering a Mojo API for their MAX engine, but had to give up because the language was still evolving too rapidly for such an investment.
As per the roadmap[1], I expect to start seeing more adoption once phase 1 is completed.
jb1991
It says at the top:
> write state of the art kernels
Mojo seems to be competing with C++ for writing kernels. PyTorch and Julia are high-level languages where you don't write the kernels.
Alexander-Barth
Actually, in Julia you can write kernels with a subset of the Julia language:
https://cuda.juliagpu.org/stable/tutorials/introduction/#Wri...
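For a flavor of it, the kind of kernel that tutorial builds looks roughly like this (a sketch assuming CUDA.jl and a CUDA-capable GPU are available):

    using CUDA

    # A plain Julia function, compiled for the GPU by the @cuda macro.
    function gpu_add!(y, x)
        i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
        if i <= length(y)
            @inbounds y[i] += x[i]
        end
        return nothing
    end

    x = CUDA.fill(1.0f0, 2^20)
    y = CUDA.fill(2.0f0, 2^20)
    @cuda threads=256 blocks=cld(length(y), 256) gpu_add!(y, x)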
With KernelAbstractions.jl you can actually target CUDA and ROCm:
https://juliagpu.github.io/KernelAbstractions.jl/stable/kern...
For Python (or rather Python-like), there is also Triton (and probably others):
jakobnissen
I think Julia aspires to be performant enough that you can write the kernels in Julia, so Julia is more like Mojo + Python together.
Although I have my doubts that Julia is actually willing to make the compromises which would allow Julia to go that low level. I.e. semantic guarantees about allocations and inference, guarantees about certain optimizations, and more.
pjmlp
You can write kernels with Python using the CUDA and oneAPI SDKs in 2025; that is one of the adoption problems regarding Mojo.
singularity2001
Is it really released? Last time I checked it was not open sourced. I don't want to rely on some proprietary vaporware stack.
melodyogonna
It is released but not open-source. Modular was aiming to open-source the compiler by Q4 2026; however, Chris now says they may be able to do that considerably faster, perhaps early 2026[1].
If you're interested, they think the language will be ready for open source after completing phase 1 of the roadmap[2].
pansa2
Sounds to me like it's very incomplete:
> maybe a year, 18 months from now [...] we’ll add classes
ModernMech
They’re not going to see serious adoption before they open source. It’s just a rule of programming languages at this point if you don’t have the clout to force it, and Modular doesn’t. People have been burned too many times by closed source languages.
Cynddl
Anyone know what Mojo is doing that Julia cannot do? I appreciate that Julia is currently limited by its ecosystem (although it does interface nicely with Python), but I don't see how Mojo is any better, then.
thetwentyone
Especially because Julia has pretty user friendly and robust GPU capabilities such as JuliaGPU and Reactant[2] among other generic-Julia-code to GPU options.
2: https://enzymead.github.io/Reactant.jl/dev/
jb1991
I get the impression that most of the comments in this thread don't understand what a GPU kernel is. High-level languages like Python and Julia are not what the kernels run in; they call into kernels usually written in C++. The goal is different with Mojo; it says at the top of the article:
> write state of the art kernels
You don't write kernels in Julia.
arbitrandomuser
>You don't write kernels in Julia.
The package https://github.com/JuliaGPU/KernelAbstractions.jl was specifically designed so that Julia can be compiled down to kernels.
Julia is high level, yes, but its semantics allow it to be compiled down to machine code without a "runtime interpreter". This is a core differentiating feature from Python. Julia can be used to write GPU kernels.
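A rough sketch of that style (the backend choice below is illustrative; CUDABackend() or ROCBackend() would come from CUDA.jl or AMDGPU.jl):

    using KernelAbstractions

    # One kernel definition, portable across CPU and GPU backends.
    @kernel function saxpy!(y, a, x)
        i = @index(Global)
        @inbounds y[i] = a * x[i] + y[i]
    end

    x = rand(Float32, 1024); y = rand(Float32, 1024)
    backend = CPU()   # swap for CUDABackend() or ROCBackend() to run on a GPU
    saxpy!(backend, 64)(y, 2.0f0, x; ndrange = length(y))
    KernelAbstractions.synchronize(backend)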
ssfrr
It doesn’t make sense to lump Python and Julia together in this high-level/low-level split. Julia is like Python if Numba were built in: your code gets JIT compiled to native code, so you can (for example) write for loops to process an array without the interpreter overhead you get with Python.
People have used the same infrastructure to allow you to compile Julia code (with restrictions) into GPU kernels
jakobnissen
I'm pretty sure Julia does JIT compilation of pure Julia to the GPU: https://github.com/JuliaGPU/GPUCompiler.jl
adgjlsfhk1
Julia's GPU stack doesn't compile to C++. It compiles Julia straight to GPU assembly.
pjmlp
See the new cuTile architecture in CUDA, designed from the ground up with Python in mind.
Alexander-Barth
I guess the interoperability with Python is a bit better. But on the other hand, PythonCall.jl (which allows calling Python from Julia) is quite good and stable. In Julia, you have quite good ML frameworks (Lux.jl and Flux.jl). I am not sure that you have Mojo-native ML frameworks which are similarly usable.
jakobnissen
Mojo to me looks significantly lower level, with a much higher degree of control.
Also, it appears to be more robust. Julia is notoriously fickle in both semantics and performance, making it unsuitable for the kind of foundational software Mojo strives to be.
ubj
> Anyone knows what Mojo is doing that Julia cannot do?
First-class support for AoT compilation.
https://docs.modular.com/mojo/cli/build
Yes, Julia has a few options for making executables but they feel like an afterthought.
jb1991
Isn't Mojo designed for writing kernels? That's what it says at the top of the article:
> write state of the art kernels
Julia and Python are high-level languages that call other languages where the kernels exist.
Sukera
No, you can write the kernels directly in Julia using KernelAbstractions.jl [1].
[1] https://juliagpu.github.io/KernelAbstractions.jl/stable/
MohamedMabrouk
* Compiling arbitrary Julia code into a native standalone binary (à la Rust/C++) with all its consequences.
_aavaa_
Yeah, except Mojo’s license is a non-starter.
auggierose
Wow, just checked it out, and they distinguish (for commercial purposes) between CPU & Nvidia on one hand, and other "accelerators" (like TPU or AMD) on the other hand. For other accelerators you need to contact them for a license.
https://www.modular.com/blog/a-new-simpler-license-for-max-a...
_aavaa_
Yes; in particular see sections 2-4 of [0].
They say they'll open source in 2026 [1]. But until that has happened I'm operating under the assumption that it won't happen.
[0]: https://www.modular.com/legal/community
[1]: https://docs.modular.com/mojo/faq/#will-mojo-be-open-sourced
mdaniel
> I'm operating under the assumption that it won't happen.
Or, arguably worse: my expectation is that they'll open source it, wait for it to get a lot of adoption, possibly some contribution, certainly a lot of mindshare, and then change the license to some text no one has ever heard of that forbids use on nvidia hardware without paying the piper or whatever
If it ships with a CLA, I hope we never stop talking about that risk
actionfromafar
Same
rs186
To my naive mind, any language that is controlled by a single company instead of a non-profit is a non-starter. Just look at how many companies reacted when the Java license change happened. You must be either an idiot or way too smart for me to understand to base your business on a language like Mojo instead of Python.
frou_dh
Listening to this episode, I was quite surprised to hear that even now in Sept 2025, support for classes at all is considered a medium-term goal. The "superset of Python" angle was thrown around a lot in earlier discussions of Mojo 1-2 years ago, but at this rate of progress it seems a bit of a pie-in-the-sky aspiration.
adgjlsfhk1
Superset of Python was never a goal. It was a talking point to try and build momentum that was quietly dropped once it served its purpose of getting Mojo some early attention.
fwip
I tend to agree, which is why I can't recommend Mojo, despite thinking their tech is pretty innovative. If they're willing to lie about something that basic, I can't trust any of their other claims.
ModernMech
I hope that’s not what it was, that makes them seem very manipulative and dishonest. I was under the impression it was a goal, but they dropped it when it became apparent it was too hard. That’s much more reasonable to understand.
JonChesterfield
ML seems to be doing just fine with python and cuda.
poly2it
Python and CUDA are not very well adapted for embedded ML.
postflopclarity
Julia could be a great language for ML. It needs more mindshare and developer attention though
singularity2001
What's the current state of time-to-first-plot and executable size? Last time it was several seconds to get a 200 MB hello world. I'm sure they are moving in the right direction; the only question is, are they there yet?
moelf
With juliac.jl and --trim, hello world is now 1 MB and compiles in a second.
more realistic examples of compiling a Julia package into .so: https://indico.cern.ch/event/1515852/contributions/6599313/a...
adgjlsfhk1
    julia> @time begin
        using Plots
        display(plot(rand(8)))
    end
    1.074321 seconds
On Julia 1.12 (currently at release candidate stage), a <1 MB hello world is possible with juliac (although juliac in 1.12 is still marked experimental).
postflopclarity
improving, slowly. 5 steps forward 3 steps back.
1.9 and 1.10 made huge gains in package precompilation and native code caching. Then attention shifted and there were some regressions in compile times due to unrelated things in 1.11 and the upcoming 1.12. But at the same time, 1.12 will contain an experimental new feature `--trim`, as well as some further standardization around entry points to run packages as programs, which is a big step towards generating self-contained small binaries. Also, nearly all efforts in improving tooling are focused on providing static analysis and helping developers make their programs more easily compilable.
It's also important to distinguish between a few similar but related needs. Most of what I just described applies to generating binaries for arbitrary programs. But for the example you stated, "time to first plot" of existing packages, this is already much improved in 1.10, and users (aka non-package-developers) should see sub-second TTFP, and TTFX for most packages they use that have been updated to use the precompilation goodies in recent versions.
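(For context, the "precompilation goodies" usually means a package running a representative workload at precompile time, e.g. via PrecompileTools.jl; a minimal sketch with a made-up package name:)

    module TinyPlots

    using PrecompileTools

    render(xs) = sum(abs2, xs)   # stand-in for real plotting work

    # Run a representative workload once at precompile time so the compiled
    # native code is cached into the package image rather than on first use.
    @setup_workload begin
        xs = rand(10)
        @compile_workload begin
            render(xs)
        end
    end

    end # module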
ModernMech
I recently looked into making Julia binaries, and it's not at all a good process. They say it's supported, but it's not exactly as easy as "cargo build" to get a Julia binary out. And the build process involves creating this minimal version of Julia you're expected to ship with your binary, so build times were terrible. I don't know if that gets amortized though.
As far as the executable size, it was only 85kb in my test, a bouncing balls simulation. However, it required 300MB of Julia libraries to be shipped with it. About 2/3 of that is in libjulia-codegen.dll, libLLVM-16jl.dll. So you're shipping this chunky runtime and their LLVM backend. If you're willing to pay for that, you can ship a Julia executable. It's a better story than what Python offers, but it's not great if you want small, self-contained executables.
postflopclarity
note that as a few other commenters have pointed out, this situation will improve greatly in 1.12 (although still many rough edges)
rvz
> It needs more mindshare and developer attention though
That is the problem. Julia could not compete against Python's mindshare.
A competitor to Python needs to be 100% compatible with its ecosystem.
numbers_guy
What makes Julia "great" for ML?
bobbylarrybobby
It's a low level language with a high level interface. In theory, GC aside, you should be able to write code as performant as C++ without having to actually write C++. It's also homoiconic, and the compiler is part of the language’s API, so you can do neat things with macros that more or less let you temporarily turn it into a different language.
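A tiny sketch of the macro point (standard Julia metaprogramming, nothing exotic; the helper name is made up):

    # A macro receives the expression itself, not its value, so it can rewrite
    # code before compilation -- here, a small @show-style debugging helper.
    macro inspect(ex)
        return quote
            val = $(esc(ex))
            println($(string(ex)), " = ", val)
            val
        end
    end

    x = 3
    @inspect x + 4 * 2   # prints "x + 4 * 2 = 11"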
In practice, the Julia package ecosystem is weak and generally correctness is not a high priority. But the language is great, if you're willing to do a lot of the work yourself.
macawfish
Built-in autodifferentiation and amazing libraries built around it, plus tons of cutting edge applied math libraries that interoperate automatically, thanks to Julia's well conceived approach to the expression problem (multiple dispatch). Aside from that, the language itself is like a refined python so it should be pretty friendly off the bat to ML devs.
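For instance, taking a gradient of plain Julia code via ForwardDiff.jl (one AD package among several; the choice here is just illustrative):

    using ForwardDiff

    # An ordinary generic Julia function -- no framework-specific tensor types.
    loss(w) = sum(abs2, w) + sin(w[1])

    w = [0.5, -1.2, 3.0]
    g = ForwardDiff.gradient(loss, w)   # dual numbers flow through via multiple dispatch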
What Julia needs though: wayyyy more thorough tooling to support auto generated docs, well integrated with package management tooling and into the web package management ecosystem. Julia attracts really cutting edge research and researchers writing code. They often don't have time to write docs and that shouldn't really matter.
Julia could definitely use some work in the areas discussed in this podcast, not so much the high level interfaces but the low level ones. That's really hard though!
postflopclarity
I would use the term "potentially great" rather than plain "great"
but all the normal marketing words: in my opinion it is fast, expressive, and has particularly good APIs for array manipulation
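As a small illustration of the array-manipulation point (purely illustrative):

    using Statistics

    X = rand(Float32, 1000, 3)

    # Column-wise standardization; the elementwise ops fuse into a single broadcast loop.
    Z = (X .- mean(X; dims=1)) ./ std(X; dims=1)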
numbers_guy
Interesting. I am experimenting with different ML ecosystems and wasn't really considering Julia at all but I put it on the list now.
mdaniel
I don't understand why in the world someone would go from one dynamically typed language to another. Even the kernels example cited below is "eh, the types are whatever you want them to be" https://cuda.juliagpu.org/stable/tutorials/introduction/#Wri...
Then again, I am also open to the fact that I'm jammed up by the production use of dynamically typed languages, and maybe the "for ML" part means "I code in Jupyter notebooks" and thus give no shits about whether person #2 can understand what's happening
postflopclarity
It's very important that readers, writers, maintainers, etc. of code are able to easily understand what that code is doing.
Explicit and strict types on arguments to functions are one way, but certainly not the only way, nor probably the best way, to effect that.
mdaniel
I would actually be curious to hear your perspective on the "best way" that isn't typechecking. I literally cannot comprehend why someone would write such a thing
I readily admit that I am biased in that I believe that having a computer check that every reference to every relationship does what it promises, all the time, is worth it.
torginus
I think Mojo's cool, and there's definitely a place for a modern applications programming language with C++-class(ish) performance, aka what Swift wanted to be but got trapped in the Apple ecosystem (designed by the same person as Mojo).
The strong AI focus seems to be a sign of the times, and not actually something that makes sense imo.
tomovo
While I appreciate all his work on LLVM, Chris Lattner's Swift didn't work out so well for me, so I'm cautious about this.
Swift has some nice features. However, the super slow compilation times and cryptic error messages really erase any gains in productivity for me.
- "The compiler is unable to type-check this expression in reasonable time?" On an M3 Pro? What the hell!?
- To find an error in SwiftUI code I sometimes need to comment everything out block by block to narrow it down and find the culprit. We're getting laughs from Kotlin devs.
melodyogonna
I think Swift is really successful in that there are so many new Apple developers who would use Swift now but wouldn't have used ObjC.
elpakal
To be fair to Chris, I’ve only seen the message about the compiler not being able to type-check the expression in SwiftUI closure hell. I think he left (maybe partly) because of the SwiftUI influence on Swift.
fnands
> The strong AI focus seems to be a sign of the times, and not actually something that makes sense imo.
It has been Mojo's explicit goal from the start. It has its roots in the time that Chris Lattner spent at Google working on the compiler stack for TPUs.
It was explicitly designed to be Python-like because that is where (almost) all the ML/AI is happening.
ModernMech
It makes a lot of sense when you look at how much money they have raised:
https://techcrunch.com/2023/08/24/modular-raises-100m-for-ai...
You don’t raise $130M at a $600M valuation to make boring old dev infrastructure that is sorely needed but won’t generate any revenue because no one is willing to pay for general purpose programming languages in 2025.
You raise $130M to be the programming foundation of next Gen AI. VCs wrote some big friggen checks for that pitch.
diggan
> The strong AI focus seems to be a sign of the times, and not actually something that makes sense imo.
Are you sure about that? I think Mojo was always talked about as "The language for ML/AI", but I'm unsure if Mojo was announced before the current hype-cycle, must be 2-3 years at this point right?
torginus
According to wikipedia it was announced in May 2023
atbpaca
Mojo looks like the perfect balance between readability (python-like syntax) and efficiency (rust-like performance).
dboreham
ML is a programming language.
a3w
Meta Language is shortened to ML. Great language, and even fathered further ML dialects.
Machine Learning is shortened to ML, too.
This posting is about "Why ML needed Mojo", but does not tell us why the license of Mojo is garbage.
Machine Learning, as an example of compute-intensive tasks, could have been the "Rails moment for Ruby" here, but it seems like Mojo is dead on arrival — it was trending here on Hacker News when announced, but no one seems to talk about it now.
---------------
(I like em-dashes, but this was not written with any AI, except for a language tool's spellchecker)
blizdiddy
Mojo is the enshittification of programming. Learning a language is too much cognitive investment for VC rugpulls. You make the entire compiler and runtime GPL or you pound sand; that has been the bar for decades. If the new cohort of programmers can’t hold the line, we’ll all suffer.
j2kun
What are you ranting about? Lattner has a strong track record of producing valuable, open source software artifacts (LLVM, Swift, MLIR) used across the industry.
pjmlp
For decades, paying for compiler tools was a thing.
analog31
True, but aren't we in a better place now? I think the move to free tools was motivated by programmers, and not by their employers. I've read that it became hard to hire people if you used proprietary tools. Even the great Microsoft open-sourced their flagship C# language. And it's ironic but telling that the developers of proprietary software don't trust proprietary tools. And every developer looks at the state of the art in proprietary engineering tooling, such as CAD, and retches a little bit. I've seen many comments on HN along those lines.
And "correlation is not causality," but the occupation with the most vibrant job market until recently was also the one that used free tools. Non-developers like myself looked to that trend and jumped on the bandwagon when we could. I'm doing things with Python that I can't do with Matlab because Python is free.
Interestingly, we may be going back to proprietary tools, if our IDE's become a "terminal" for the AI coding agents, paid for by our employers.
pjmlp
Not really, as many devs rediscover public domain, shareware, demos and open core, because it turns out there are bills to pay.
If you want the full C# experience, you will still be getting Windows, Visual Studio, or Rider.
VSCode C# support is under the same license as Visual Studio Community, and lacks several tools, like the advanced graphical debugging for parallel code and code profiling.
The great Microsoft has not open-sourced that debugger, nor many other tools in the .NET ecosystem; also, they can afford to subsidise C# development as a gateway into Azure, being valued at 4 trillion, the 2nd biggest company in the world.
blizdiddy
I’d prefer not to touch a hot stove twice. Telling me what processors I can use is Oracle-level rent seeking, and it should be mocked just like Oracle.
pjmlp
I am quite sure Larry thinks very fondly of such folks when vacationing on his yacht or paying the bills to land his private jet outside airport opening hours.
kuschkufan
And it sucked so hard that GNU and LLVM were born.
pjmlp
LLVM was a research project embraced by Apple to avoid GCC and anything GPL.
Apple and Google have purged most GPL stuff out of their systems, after making clang shine.
Thank you for all the great interest in the podcast and in Mojo. If you're interested in learning more, Mojo has a FAQ that covers many topics (including "why not make Julia better" :-) here: https://docs.modular.com/mojo/faq/
Mojo also has a bunch of documentation https://docs.modular.com/mojo/ as well as hundreds of thousands of lines of open source code you can check out: https://github.com/modular/modular
The Mojo community is really great; please consider joining either our Discourse forum: https://forum.modular.com/ or Discord chat: https://discord.com/invite/modular
-Chris Lattner