I've been writing a metaverse client in Rust for almost five years now, which is too long.[1]
Someone else set out to do something similar in C#/Unity and had something going in less than two years.
This is discouraging.
Ecosystem problems:
The Rust 3D game dev user base is tiny.
Nobody ever wrote an AAA title in Rust. Nobody has really pushed the performance issues.
I find myself having to break too much new ground, trying to get things to work that others doing first-person shooters should have solved years ago.
The lower levels are buggy and have a lot of churn
The stack I use is Rend3/Egui/Winit/Wgpu/Vulkan. Except for Vulkan, they've all had hard to find bugs.
There just aren't enough users to wring out the bugs.
Also, too many different crates want to own the event loop.
These crates also get "refactored" every few months, with breaking API changes, which breaks the stack for months at a time until everyone gets back in sync.
Language problems:
Back-references are difficult
A owns B, and B can find A, is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.
There are three common workarounds:
- Architect the data structures so that you don't need back-references. This is a clean solution but is hard. Sometimes it won't work at all.
- Put everything in a Vec and use indices as references. This has most of the problems of raw pointers, except that you can't get memory corruption outside the Vec. You lose most of Rust's safety. When I've had to chase down difficult bugs in crates written by others, three times it's been due to errors in this workaround.
- Use "unsafe". Usually bad. On the two occasions I've had to use a debugger on Rust code, it's been because someone used "unsafe" and botched it.
Rust needs a coherent way to do single owner with back references. I've made some proposals on this, but they require much more checking machinery at compile time and better design. Basic concept: works like "Rc::Weak" and "upgrade", with compile time checking for overlapping upgrade scopes to insure no "upgrade" ever fails.
"Is-a" relationships are difficult
Rust traits are not objects. Traits cannot have associated data. Nor are they a good mechanism for constructing object hierarchies. People keep trying to do that, though, and the results are ugly.
I caveat my remarks with although I've have studed the Rust specification, I have not written a line of Rust code.
I was quite intrigued with the borrow checker, and set about learning about it. While D cannot be retrofitted with a borrow checker, it can be enhanced with it. A borrow checker has nothing tying it to the Rust syntax, so it should work.
So I implemented a borrow checker for D, and it is enabled by adding the `@live` annotation for a function, which turns on the borrow checker for that function. There are no syntax or semantic changes to the language, other than laying on a borrow checker.
Yes, it does data flow analysis, has semantic scopes, yup. It issues errors in the right places, although the error messages are rather basic.
In my personal coding style, I have gravitated towards following the borrow checker rules. I like it. But it doesn't work for everything.
It reminds me of OOP. OOP was sold as the answer to every programming problem. Many OOP languages appeared. But, eventually, things died down and OOP became just another tool in the toolbox. D and C++ support OOP, too.
I predict that over time the borrow checker will become just another tool in the toolbox, and it'll be used for algorithms and data structures where it makes sense, and other methods will be used where it doesn't.
I've been around to see a lot of fashions in programming, which is most likely why D is a bit of a polyglot language :-/
I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).
The language can nail that down for you (D does). What's left are memory allocation errors. Garbage collection fixes that.
As discussed multiple times, I see automatic resouce management (written this way on purpose), coupled with effects/linear/affine/dependent types for lowlevel coding as the way to go.
At least until we get AI driven systems good enough to generate straight binaries.
Rust is to be celebrated for bringing affine types into mainstream, but it doesn't need to be the only way, productivity and performance can be made into the same language.
The way Ada, D, Swift, Chapel, Linear Haskell, OCaml effects and modes, are being improved, already show the way forward.
There there is the whole formal verification and dependent type languages, but that goes even beyond Rust, in what most mainstream developers are willing to learn, the development experience is still quite ruff.
So in D, is it now natural to mix borrow checking and garbage collection? I think some kind of "gradual memory management" is the holy grail, but like gradual typing, there are technical problems
The issue is the boundary between the 2 styles/idioms -- e.g. between typed code and untyped code, you have either expensive runtime checks, or you have unsoundness
---
So I wonder if these styles of D are more like separate languages for different programs? Or are they integrated somehow?
Compared with GC, borrow checking affects every function signature
Compared with manual memory management, GC also affects every function signature.
IIRC the boundary between the standard library and programs was an issue -- i.e. does your stdlib use GC, and does your program use GC? There are 4 different combinations there
The problem is that GC is a global algorithm, i.e. heap integrity is a global property of a program, not a local one.
Likewise, type safety is a global property of a program
So natural is a stretch at the moment, but you can use all kinds of different techniques, what is needed is more community and library standardization around some solutions.
For me Rust was amazing for writing things like concurrency code. But it slowed me down significantly in tasks I would do in, say, C# or even C++. It feels like the perfect language for game engines, compilers, low-level libraries... but I wasn't too happy writing more complex game code in it using Bevy.
And you make a good point, it's the same for OOP, which is amazing for e.g. writing plugins but when shoehorned into things it's not good at, it also kills my joy.
Hey, thank you for spreading the joy of the borrow checker beyond Rust; awesome stuff, sounds very interesting, challenging, and useful!
One question that came to mind as a single-track-Rust-mind kind of person: in D generally or in your experience specifically, when you find that the borrow checker doesn't work for a data structure, what is the alternative memory management strategy that you choose usually? Is it garbage collection, or manual memory management without a borrow checker?
> I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).
#4 safer union/enum, I do hope D gets tagged-union/pattern-matching sometimes in the future, I know about std.sumtype, but that's nowhere close to what Rust offer
> I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).
I think these are generally considered table stake in a modern programming language? That's why people are/were excited by the borrow checker, as data races are the next prominent source of memory corruption, and one that is especially annoying to debug.
Not a game dev, but based on what I do know of it, some of this sounds to me like it's just a severe mismatch between Rust's memory model and the needs of games.
Individually managing the lifetime of every single item you allocate on the heap and fine-grained tracking of ownership of everything on both the heap and the stack makes a lot of sense to me for more typical "line of business" tools that have kind of random and unpredictable workloads that may or may not involve generating arbitrarily complex reference graphs.
But everything I've seen & read of best practices for game development, going all the way back to when I kept a heavily dogeared copy of Michael Abrash's Black Book close at hand while I made games for fun back in the days when you basically had to write your own 3D engine, tells me that's not what a game engine wants. What a game engine wants, if anything, is something more like an arena allocator. Because fine-grained per-item lifetime management is not where you want to be spending your innovation tokens when the reality is that you're juggling 500 megabyte lumps of data that all have functionally the same lifetime.
> > The lower levels are buggy and have a lot of churn
>
> The stack I use is Rend3/Egui/Winit/Wgpu/Vulkan
The same is true if you try to make GUI applications in Rust. All the toolkits have lots of quirky bugs and broken features.
The barrier to contributing to toolkits is usually also pretty high too: most of them focus on supporting a variety of open source and proprietary platforms. If you want to improve on something which requires some API change, you need to understand the details of all the other platforms — you can't just make a change for a single one.
Ultimately, cross-platform toolkits always offer a lowest common denominator (or "the worst of all worlds"), so I think that this common focus in the Rust ecosystem of "make everything run everywhere" ends up being a burden for the ecosystem.
> > Back-references are difficult
>
> A owns B, and B can find A, is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.
When I code Rust, I'm always hesitant to use an Arc because it adds an overhead. But if I then go and code in Python, Java or C#, pretty much all objects have the overhead of an Arc. It's just implicit so we forget about it.
We really need to be more liberal in our usage of Arc and stop seeing it as "it has overhead". Any higher level language has the same overhead, it's just not declared explicitly.
Arc is a very slow and primitive tool compared to a GC. If you are writing Arc everywhere, you would probably have better performance switching to a JVM language, C#, or Go.
This is incorrect if you are using Rc exclusively for back references. Since the back reference is weak, the reference count is only incremented once when you are creating the datatype. The problem isn't that it's slow, it's that it consumes extra memory for book keeping.
Objects are cheaper than Arc<T>. Otherwise using GC would suck a lot more than it does today (for certain types of data structures like trees accessed concurrently it is also a massive optimization).
Python also has incomparably worse performance than Java or C#, both of which can do many object-based optimizations and optimize away their allocation.
One thing that struck me was the lavish praise heaped on the ECS of the game engine being migrated away from; this is extremely common.
I think when it comes to game dev, people fixate on the engine having an ECS and maybe don't pay enough attention to the other aspects of it being good for gamedev, like... being a very high level language that lets you express all the game logic (C# with coroutines is great at this, and remains a core strength of Unity; Lua is great at this; Rust is ... a low level systems language, lol).
People need to realise that having ECS architecture isn't the only thing you need to build games effectively. It's a nice way to work with your data but it's not the be-all and end-all.
I saw a good talk, though I don't remember the name, that went over the array-index approach. It correctly pointed out that by then, you're basically recreating your own pointers without any of the guarantees rust, or even C++ smart pointers, provide.
> It correctly pointed out that by then, you're basically recreating your own pointers without any of the guarantees rust, or even C++ smart pointers, provide.
I've gone back and forth on this, myself.
I wrote a custom b-tree implementation in rust for a project I've been working on. I use my own implementation because I need it to be an order-statistic tree, and I need internal run length encoding. The original version of my b-tree works just like how you'd implement it in C. Each internal node / leaf is a raw allocations on the heap.
Because leaves need to point back up the tree, there's unsafe everywhere, and a lot of raw pointers. I ended up with separate Cursor and CursorMut structs which held different kinds of references to the tree itself. Trying to avoid duplicating code for those two cursor types added a lot of complex types and trait magic. The implementation works, and its fast. But its horrible to work with, and it never passed MIRI's strict checks. Also, rust has really bad syntax for interacting with raw pointers.
Recently I rewrote the b-tree to simply use a vec of internal nodes, and a vec of leaves. References became array indexes (integers). The resulting code is completely safe rust. Its significantly simpler to read and work with - there's way less abstraction going on. I think its about 40% less code. Benchmarks show its about 25% faster than the raw pointer version. (I don't know why - but I suspect the reason is due to better cache locality.)
I think this is indeed peak rust.
It doesn't feel like it, but using an array-index style still preserves many of rust's memory safety guarantees because all array lookups are bounds checked. What it doesn't protect you from is use-after-free bugs.
Interestingly, I think this style would also be significantly more performant in GC languages like javascript and C#, because a single array-of-objects is much simpler for the garbage collector to keep track of than a graph of nodes & leaves which all reference one another. Food for thought!
Having gone full-in on this approach before, with some good success, it still feels wrong to me today. Contiguous storage may work for reasonable numbers of elements, but it's potentially blocking a huge contiguous chunk of address space especially for large numbers of elements.
I probably say this because I still have to main 32-bit binaries (only 2G of address space), but it can potentially be problematic even on 64-bit machines (typically 256 TB of address space), especially if the data structure should be a reusable container with unknown number of instances. If you don't know a reasonable upper bound of elements beforehand, you have to reallocate later, or drastically over-reserve from the start. The former removes a pointer stability guarantee, the later is uneconomical, it may even be uneconomical on 64-bit depending on how many instances of the data structures you plan to have. And having to reallocate when overflowing the preallocated space makes operations less deterministic with regards to execution time.
> Recently I rewrote the b-tree to simply use a vec of internal nodes
Doesn't this also require you to correctly and efficiently implement (equivalents of C's) malloc() and free()? IIUC your requirements are more constrained, in that malloc() will only ever be called with a single block size, meaning you could just maintain a stack of free indices -- though if tree nodes are comparable in size to integers this increases memory usage by a significant fraction.
(I just checked and Rust has unions, but they require unsafe. So, on pain of unsafe, you could implement a "traditional" freelist-based allocator that stores the index of the next free block in-place inside the node.)
Weak is very helpful in preventing ownership loops which prevent deallocation.
Weak plus RefCell lets you do back pointers cleanly. You call ".borrow()" to get access to the data protected by a RefCell. The run-time borrow panics if someone else is using the data item. This prevents two mutable pointers to the same data, which Rust requires.
Static analysis could potentially check for those potential panics at compile time. If that was implemented, the run time check, and the potential for a panic, would go away. It's not hard to check, provided that all borrows have limited scope. You just have to determine, conservatively, that no two borrow scopes for the same thing overlap.
If you had that check, it would be possible to have something that behaves like RefCell, but is checked entirely at compile time. Then you know you're free of potential double-borrow panics.
I started a discussion on this on a Rust forum. A problem is that you have to perform that check after template expansion, and the Rust compiler is not set up to do global analysis after template expansion. This idea needs further development.
This check belongs to the same set of checks which prevent deadlocking a mutex against itself.
There's been some work on Rust static deadlock analysis, but it's still a research topic.
I didn't consider that. Looking at how weak references work, that might work. It would reduce the need for raw pointers and unsafe code. But in exchange, it would add 16 bytes of overhead to every node in my data structure. That's pure overhead - since the reference count of all nodes should always be exactly 1.
However, I'm not sure what the implications are around mutability. I use a Cursor struct which stores a reference to a specific leaf node in the tree. Cursors can walk forward in the tree (cursor.next_entry()). The tree can also be modified at the cursor location (cursor.insert(item)). Modifying the tree via the cursor also updates some metadata all the way up from the leaf to the root.
If the cursor stored a Rc<Leaf> or Weak<Leaf>, I couldn't mutate the leaf item because rc.get_mut() returns None if there are other strong or weak pointers pointing to the node. (And that will always be the case!). Maybe I could use a Rc<Cell<Leaf>>? But then my pointers down the tree would need the same, and pointers up would be Weak<Cell<Leaf>> I guess? I have a headache just thinking about it.
Using Rc + Weak would mean less unsafe code, worse performance and code thats even harder to read and reason about. I don't have an intuitive sense of what the performance hit would be. And it might not be possible to implement this at all, because of mutability rules.
Switching to an array improved performance, removed all unsafe code and reduced complexity across the board. Cursors got significantly simpler - because they just store an array index. (And inserting becomes cursor.insert(item, &mut tree) - which is simple and easy to reason about.)
I really think the Vec<Node> / Vec<Leaf> approach is the best choice here. If I were writing this again, this is how I'd approach it from the start.
> What it doesn't protect you from is use-after-free bugs.
How about using hash maps/hash tables/dictionaries/however it's called in Rust? You could generate unique IDs for the elements rather than using vector indices.
But Unity game objects are the same way: you allocate them when they spawn into the scene, and you deallocate them when they despawn. Accessing them after you destroyed them throws an exception. This is exactly the same as entity IDs! The GC doesn't buy you much, other than memory safety, which you can get in other ways (e.g. generational indices, like Bevy does).
But in rust you have to fight the borrow checker a lot, and sometimes concede, with complex referential stuff. I say this as someone who writes a good bit of rust and enjoys doing so.
I just don't, and even less often with game logic which tends to be rather simple in terms of the data structures needed. In my experience, the ownership and borrowing rules are in no way an impediment to game development. That doesn't invalidate your experience, of course, but it doesn't match mine.
The difference is that I'm writing a metaverse client, not a game. A metaverse client is a rare beast about halfway between an MMO client and a web browser.
It has to do most of the the graphical things a 3D MMO client does. But it gets all its assets and gameplay instructions from a server.
From a dev perspective, this means you're not making changes to gameplay by recompiling the client. You make changes to objects in the live world while you're connected to the server. So client compile times (I'm currently at about 1 minute 20 seconds for a recompile in release mode) aren't a big issue.
Most of the level and content building machinery of Bevy or Unity or Unreal Engine is thus irrelevant. The important parts needed for performance are down at the graphics level. Those all exist for Rust, but they're all at the My First Renderer level. They don't utilize the concurrency of Vulkan or multiple CPUs. When you get to a non-trivial world, you need that. Tiny Glade is nice, but it works because it's tiny.
What does matter is high performance and reliability while content is coming in at a high rate and changing. Anything can change at any time, but usually doesn't. So cache type optimizations are important, as is multithreading to handle the content flood.
Content is constantly coming in, being displayed, and then discarded as the user moves around the big world.
All that dynamism requires more complex data structures than a game that loads everything at startup.
Rust's "fearless multiprogramming" is a huge win for performance. I have about 20 threads running, and many are doing quite different things. That would be a horror to debug in C++. In Rust, it's not hard.
(There's a school of thought that says that fast, general purpose renderers are impossible. Each game should have its own renderer. Or you go all the way to a full game engine and integrate gameplay control and the scene graph with the renderer. Once the scene graph gets big enough that (lights x objects) becomes too large to do by brute force, the renderer level needs to cull based on position and size, which means at least a minimal scene graph with a spatial data structure. So now there's an abstraction layering problem - the rendering level needs to see the scene graph. No one in Rust land has solved this problem efficiently. Thus, none of the four available low-level renderers scale well.
I don't think it's impossible, just moderately difficult. I'm currently looking at how to do this efficiently, with some combination of lambdas which access the scene graph passed into the renderer, and caches. I really wish someone else had solved this generic problem, though. I'm a user of renderers, not a rendering expert.)
Meta blew $40 billion dollars on this problem and produced a dud virtual world, but some nice headsets. Improbable blew upwards of $400 million and produced a limited, expensive to run system. Metaverses are hard, but not that hard. If you blow some of the basic architectural decisions, though, you never recover.
The dependency injection framework provided by Bevy also particularly elides a lot of the problems with borrow checking that users might run into and encourages writing data oriented code that generally is favorable to borrow checking anyway.
This is a valid point. I've played a little with Bevy and liked it. I have also not written a triple-A game in Rust, with any engine, but I'm extrapolating the mess that might show up once you have to start using lots of other libraries; Bevy isn't really a batteries-included engine so this probably becomes necessary. Doubly so if e.g. you generate bindings to the C++ physics library you've already licensed and work with.
These are all solvable problems, but in reality, it's very hard to write a good business case for being the one to solve them. Most of the cost accrues to you and most of the benefit to the commons. Unless a corporate actor decides to write a major new engine in Rust or use Bevy as the base for the same, or unless a whole lot of indie devs and part-time hackers arduously work all this out, it's not worth the trouble if you're approaching it from the perspective of a studio with severe limitations on both funding and time.
Thankfully my studio has given me time to be able to submit a lot of upstream code to Bevy. I do agree that there's a bootstrapping problem here and I'm glad that I'm in a situation where I can help out. I'm not the only one; there are a handful of startups and small studios that are doing the same.
Given my experience with Bevy this doesn't happen very often, if ever.
The only challenge is not having an ecosystem with ready made everything like you do in "batteries included" frameworks.
You are basically building a game engine and a game at the same time.
We need a commercial engine in Rust or a decade of OSS work. But what features will be considered standard in Unreal Engine 2035?
I see this and I am reminded when I had to fight the 0 indexing, when I was cutting my teeth in C, for class.
I wonder why no one complains about 0 indexing anymore. Isn't it weird how you have to go 0 to length - 1, and implement algorithm differently than in a math book?
And others like Pascal linage (Pascal, Object Pascal, Extended Pascal, Modula-2, Ada, Oberon,...), that have flexible bounds, they can be whatever numeric subranges we feel like using, or enumeration values.
Maths books aren't being weird. They are counting in a way most people learn to count. One apple, two apples, three apples. You don't start zeroth apple, one apple, two apples, then respond the set of apple contains three apples.
But computers are not actually counting array elements, it's more accurate to compare array indexing with distance measurement. The pointer (memory address) puts you at the start of the array, so the first element is right there under your feet (i.e. index 0). The other elements are found by measuring how far away from the start they are:
I find indices starting from zero much easier. Especially when index/pointer arithmetic is involved like converting between pixel or voxel indices and coordinates, or indexing in ring buffers. 1-based indexing is one of the reasons I eventuallz abandoned Mathematica, because it got way too cumbersome.
So the reason why you don't see many people fighting 0-indexing is because they actually prefer it.
I started out with BASIC and Fortran, which use 1 based indices. Going to C was a small bump in the road getting used to that, and then it's Fortran which is the oddball.
For languages with 0-based array element numbering, say what the numbers are: they're offsets. 0-based arrays have offsets, 1-based arrays have indices.
I don't think so. One based numbering is barring few particular (spoken) languages the default. You have to had to change your counting strategies when going from regular world to 0 based indices.
Maybe you had the luck of learning 0 based language first. Then most of them were a smooth ride.
My point is you forgot how hard it is because it's now muscle memory (if you need a recap of the difficulty learn a program with arbitrary array indexing and set you first array index to something exciting like 5 or -6). It also means if you are "fighting the borrow checker" you are still at pre-"muscle memory" stage of learning Rust.
> Maybe you had the luck of learning 0 based language first. Then most of them were a smooth ride.
Given most languages since at least C have 0-based indexing... I would think most engineers picked it up early? I recall reading The C Programming Language 20 years ago, reading the reason and just following what it says. I don't think it's as complex as the descriptions people put forward of "fighting the borrow checker." One is "mentally add/subtract 1" and another is "gain a deep understanding of how memory management works in Rust." I know which one I'm going to find more challenging when I get round to trying to learn Rust...
> Given most languages since at least C have 0-based indexing.
As I mentioned I started Basic on C64, and schools curriculum was in Pascal. I didn't learn about C until I got to college.
> One is "mentally add/subtract 1" and another is "gain a deep understanding of how memory management works in Rust."
In practice they are, you start writing code. At first you trip on your feet, read stuff carefully, then try again until you succeed.
Then one day, you wake up and realize I know 0 indices and/or borrow checker. You don't know how you know, you just know you don't make those mistakes anymore.
I sometimes work on creating my own programming language (because there aren't enough of those already) and one of the things I want to do in it is 1-based indexing. Just so I can do:
You can't do possibly-erroneous pointer math on a C# object reference. You don't need to deal with the game life cycle AND the memory life cycle with a GC. In Unity they free the native memory when a game object calls Destroy() but the C# data is handled by the GC. Same with any plain C# objects.
To say it's the same as using array indices is just not true.
> You can't do possibly-erroneous pointer math on a C# object reference.
Bevy entity IDs are opaque and you have to try really hard to do arithmetic on them. You can technically do math on instance IDs in Unity too; you might say "well, nobody does that", which is my point exactly.
> You don't need to deal with the game life cycle AND the memory life cycle with a GC.
I don't know what this means. The memory for a `GameObject` is freed once you call `Destroy`, which is also how you despawn an object. That's managing the memory lifecycle.
> In Unity they free the native memory when a game object calls Destroy() but the C# data is handled by the GC. Same with any plain C# objects.
Is there a use for storing data on a dead `GameObject`? I've never had any reason to do so. In any case, if you really wanted to do that in Bevy you could always use an `EntityHashMap`.
More than the trying to find another object kind of math, I was mostly thinking about address aliasing ie cleared handles pointing to re-used space and now live but different objects. You could just say "don't screw up your handle/alloc code" but it's just something you don't have to worry about when you don't roll your own.
The live C# but dead Unity object trick is mostly only useful for dangling handles and IDs and such. It's more that memory won't be ripped out from under you for none Unity data and the usual GC rules apply.
And again the difference between using the GC and rolling your own implementation is pretty big. In your hash map example you still have to solve the issue of how long you keep entries in that map. The GC answers that question.
While we don't need, we can, that is the beauty of languages like C#, that offer the productivity of automatic memory management, and the tools to go low level if desired/needed.
At least in terms of doing math on indices, I have to imagine you could just wrap the type to make indices opaque. The other concerns seem valid though.
Yes but regarding use of uninitialized/freed memory, neither GC nor memory safety really help. Both "only" help with totally incidental and unintentional and small scale violations.
> These crates also get "refactored" every few months, with breaking API changes
I am dealing with similar issues in npm now, as someone who is touching Node dev again. The number of deprecations drives me nuts. Seems like I’m on a treadmill of updating APIs just to have the same functionality as before.
I’ve found the key to the JS ecosystem is to be very picky about what dependencies you use. I’ve got a number of vanilla Bun projects that only depend on TypeScript (and that is only a dev dependency).
It’s not always possible to be so minimal, but I view every dependency as lugging around a huge lurking liability, so the benefit it brings had better far outweigh that big liability.
So far, I’ve only had one painful dependency upgrade in 5 years, and that was Tailwind 3-4. It wasn’t too painful, but it was painful enough to make me glad it’s not a regular occurrence.
I'm finding most of the modern React ecosystem to be made of liabilities.
The constant update cycles of some libraries (hello Router) is problematic in itself, but there's too many fashionable things that sound very good in theory but end up being a huge problem when used in fast-moving projects, like headless UI libraries.
Yeah, not only is the structure of business workflows often resistant to mature software dev workflows, developers themselves increasingly lack the discipline, skills or interest in backwards compatibility or good initial designs anyway. Add to this the trend that fast changing software is actually a decent strategy to keep LLMs befuddled, and it’s probably going to become an unofficial standard to maintain support contracts.
On that subject, ironically code gen by ai for ai related work is often least reliable due to fast churn. Langchain is a good example of this and also kind of funny, they suggest / integrate gritql for deterministic code transforms rather than using AI directly: https://python.langchain.com/docs/versions/v0_3/.
Overall.. mastering things like gritql, ast grep, and CST tools for code transforms still pays off. For large code bases, No matter how good AI gets, it is probably better to get them to use formal/deterministic tools like these rather than trust them with code transformations more directly and just hope for the best..
Modelica, which is a DSL for modelling DAE systems, has a facility of automated conversions. You can provide a script that automatically modifies user's code then they upgrade to newer version of your lib, or prints the message if automatic migration is not possible.
It is very strange that more mainstream languages do not have such features (and I am not talking about 3rd party tools; in Modelica conversions are part of the language spec).
I’ve found such changes can actually be a draw at first. “Hey look, progress and activity!”. Doubly so as a primarily C++ dev frustrated with legacy choices in stl. But as you and others point out, living with these changes is a huge pain.
And some critical rust issues for games are not dealt with: on tiny glade with the devs did hit a libgcc issue on the native elf/linux build, and we did discovered that the rust toolchain for elf/linux targets does not support the static linking of libgcc (which is mandatory for games, any closed source binary). The issue is opened on rust github since 2015...
But the real issue is the game devs do not know the gnu toolchain (and llvm based) does default to open source software building for elf/linux targets, and that there is more work, ABIs related, to do for game binaries on those platforms.
Great write-up. I do the array indexing, and get runtime errors by misindexing these more often than I'd like to admit!
I also hear you on the winit/wgpu/egui breaking changes. I appreciate that the ecosystem is evolving, but keeping up is a pain. Especially when making them work together across versions.
I've always thought about this. In my mind there are two ways a language can guarantee memory safety:
* Simply check all array accesses and pointer de references and panic if we are out of bounds and panic/throw an exception/etc. if we are doing something wrong.
* Guarantee at compile-time that we are always accessing valid memory, to prevent even those panics.
Rust makes a lot of effort to reach the second goal, but, since it gives you integers and arrays, it makes the problem fundamentally insoluble.
The memory it wants so hard to regulate access to is just an array, and a pointer is just an index.
Rust has plenty of constructs that do runtime checks in part to get around the fact that not everything can be expressed in a manner that the borrow checker can understand at compile time. IMO Rust should treat the array/index case in the same manner as these and provide a standard interface that prevents "use after free" and so on.
For a while now Unity has an incremental garbage collector where you pay a small amount of time per frame instead of introducing large pauses every time the GC kicks in.
Even without the incremental GC it's manageable and it's just part of optimising the game. It depends on the game but you can often get down to 0 allocations per frame by making using of pooling and no alloc APIs in the engine.
You also have the tools to pause GC so if you're down to a low amount of allocation you can just disable the GC during latency sensitive gameplay and re-enable and collect on loading/pause or other blocking screens.
Obviously its more work than not having to deal with these issues but for game developers its probably a more familiar topic than working with the borrow checker and critically allows for quicker iteration and prototyping.
Finding the fun and time to market are top priority for games development.
If it’s a really logic-intensive game like Factorio (C++), or RollerCoaster Tycoon (Assembly), then I don’t think you can get away with something like Unity.
For simpler things that have a lot of content, I don’t think you can get away with Rust, until its ecosystem grows to match the usual game engines of today.
We've got another one on our end. It's much more to do with Bevy than Rust, though. And I wonder if we would have felt the same if we had chosen Fyrox.
> Migration - Bevy is young and changes quickly.
We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice. And the issues we had to deal with were runtime failures, not build time failures. It broke the large libraries we were using, like space_editor, until point releases and bug fixes could land. We ultimately decided to migrate to Three.js.
> The team decided to invest in an experiment. I would pick three core features and see how difficult they would be to implement in Unity.
This is exactly what we did! We feared a total migration, but we decided to see if we could implement the features in Javascript within three weeks. Turns out Three.js got us significantly farther than Bevy, much more rapidly.
> We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice.
I definitely sympathize with the frustration around the churn--I feel it too and regularly complain upstream--but I should mention that Bevy didn't really have anything production-quality for animation until I landed the animation graph in Bevy 0.15. So sticking with a compatible API wasn't really an option: if you don't have arbitrary blending between animations and opt-in additive blending then you can't really ship most 3D games.
> Nobody has really pushed the performance issues.
This is clearly false. The Bevy performance improvements that I and the rest of the team landed in 0.16 speak for themselves [1]: 3x faster rendering on our test scenes and excellent performance compared to other popular engines. It may be true that little work is being done on rend3, but please don't claim that there isn't work being done in other parts of the ecosystem.
I read the original post as saying that no one has pushed the engine to the extent a completed AAA game would in order to uncover performance issues, not that performance is bad or that Bevy devs haven’t worked hard on it.
Most game engines other than the latest in-house AAA engines are leaving comparable levels of performance on the table on scenes that really benefit from GPU-driven rendering (that's not to say all scenes, of course). A Google search for [Unity drawcall optimization] will show how important it is. GPU-driven rendering allows developers to avoid having to do all that optimization manually, which is a huge benefit.
More than anything else, this sounds like a good lesson in why commercial game engines have taken over most of game dev. There are so many things you have to do to make a game, but they're mostly quite common and have lots of off-the-shelf solutions.
That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
> That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
For the 4 people on HN not aware of it, this is a riff on Greenspun's tenth rule:
> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
> More than anything else, this sounds like a good lesson in why commercial game engines have taken over most of game dev. There are so many things you have to do to make a game, but they're mostly quite common and have lots of off-the-shelf solutions.
> That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
But using Bevy isn't writing your own game engine. Bevy is 400k lines of code that does quite a lot. Using Bevy right now is more like taking a game engine and filling in some missing bits. While this is significantly more effort than using Unity, it's an order of magnitude less work than writing your own game engine from scratch.
But it also doesn't have even 10% of Unity features. Bevy docs themselves warn you that you are probably better off with something like Godot, at least while Bevy is still in early development.
Over the past year I've been working at my studio to add enough features to Bevy to ship real apps, and Bevy is at the point where one can reasonably do that, depending on your needs.
I think this has less to do with Rust and commercial game engines being better and more of a fetish that game programmers seem to have for entity component systems. One does not have to look far to see similar projects repeated in C++ years prior.
And yet, if making your own game engine makes it intellectually stimulating enough to actually make and ship a game, usually for near free, going 10x slower is still better than going at a speed of zero.
If anything, making your own game engine makes process more frustrating, time consuming and leads to burnout quicker than ever, especially when your initial goal was just to make a game but instead you stuck figuring out your own render pipeline or inventing some other wheel. I have a headache just from thinking that at some point in engine development person would have to spend literal weeks figuring out export to Android with proper signage and all, when, again, all they wanted is to just make a game.
This seems entirely subjective, most importantly hinging on this part here: "all they wanted is to just make a game".
If you just want to make a game, yes, absolutely just go for Unity, for the same reason why if you just want to ship a CRUD app you should just use an established batteries-included web framework. But indie game developers come in all shapes and some of them don't just want to make a game, some of them actually do enjoy owning every part of the stack. People write their own OSes for fun, is it so hard to believe that people (who aren't you) might enjoy the process of building a game engine?
Speaking as someone who has made their own game engine for their indie game: it really depends on the game, and on the developer's personality and goals. I think you're probably right for the majority of cases, since the majority of games people want to make are reasonably well-served by general-purpose game engines.
But part of the thing that attracted me to the game I'm making is that it would be hard to make in a standard cookie-cutter way. The novelty of the systems involved is part of the appeal, both to me and (ideally) to my customers. If/when I get some of those (:
I would bet that if you want to build a game engine and not the game, the game itself is probably not that compelling. Could still break out, like Minecraft, but if someone has an amazing game idea I would think they would want to ship it as fast as possible.
It is orders of magnitude easier to write an game engine for yourself than it is to create a monster like unity or unreal that needs to appeal to everyone and support every kind of game.
If we are talking 2d, it can be months to hack together a basic engine. 3d can be a bit harder but far from decades.
Thing is, if you designed your engine well and implemented great tooling, it should make it faster to implement the actual content of the game.
So upfront cost to be faster later. At least in theory. Obviously you might end up with subpar tooling that is worse than what a commercial could offers. But if you do something like an RPG with a lot of contend, every bit of extra efficiency in creating that content can help a lot.
Now, obviously from a purely commercial standpoint, not using an established engine makes nearly never sense. Super risky. Hard to hire outside talent. You are only justified when you have very, very specific needs that are hard to implement in a generic engine.
Also for us with an ADHD brain, hard things tend to be easier and easy things very hard, so yes the extra mental stimulation of writing an engine can help.
This is correct. If you want to build a game engine, you better know what kind of game it is by making at least a playable prototype in a conventional engine.
Making an actual indie game can take from 6 months (tiny) to 4-5years. If you multiply that by 10x, the upper bound would be 40-50 years. Of course, that's not how it would be but one has to consider whether their goal is to build a game engine OR a game, doing both at the same is almost guaranteed failure (statistically speaking).
> And yet, if making your own game engine makes it intellectually stimulating enough to actually make and ship a game, usually for near free, going 10x slower is still better than going at a speed of zero.
Generally, I've seen the exact opposite. People who code their own engines tend to get sucked into the engine and forget that they're supposed to be shipping a game. (I say this as someone who has coded their own engine, multiple times, and ended up not shipping a game--though I had a lot of fun working on the engine.)
The problem is that the fun, cool parts about building your own game engine are vastly outnumbered by the boring parts: supporting level and save data loading/storage, content pipelines, supporting multiple input devices and things like someone plugging in an XBox controller while the game is running and switching all the input symbols to the new input device in real time, supporting various display resolutions and supporting people plugging in new displays while the game is running, and writing something that works on PC/mobile/Switch(2)/XBox/Playstation... all solved problems, none of which are particularly intellectually stimulating to solve correctly.
If someone's finances depend on shipping a game that makes money, there's really no question that you should use Unity or Unreal. Maybe Godot but even that's a stretch. There's a small handful of indie custom game engine success stories, including some of my favorites like The Witness and Axiom Verge, but those are exceptions rather than the rule. And Axiom Verge notably had to be deeply reworked to get a Switch release, because it's built on MonoGame.
Indeed there are people who want to make games, and there are people who think they want to make games, but want to make game engines (I'm speaking from experience, having both shipped games and keeping a junk drawer of unreleased game engines).
Shipping a playable game involves so so many things beyond enjoyable programming bits that it's an entirely different challenge.
I think it's telling that there are more Rust game engines than games written in Rust.
I'm in that camp. After shifting from commercial gamedev I've been itching to build something. I kept thinking "I wanna build a game" but couldn't really think what that came is. Then realised "Actually it's because I want to build an engine" haha
After 30 years participating in Gamedev communities I feel like the "don't build an engine" was always an empty strawman aimed at nobody in reality.
The Venn diagram between the people interested in technical aspects of an engine and in also shipping a game is probably composed of a few hundred individuals, most of them working for studios.
The "kid that wants to make an engine to make an MMO" is gonna do neither. It's just a meme.
I shouldn't really care about it myself, but I do because Unity sucked the air out of every gamedev discussion and now there are almost no spaces to discuss anything advanced (even if it's applicable to Unity/Unreal/Godot).
My experience is the opposite. Plenty of intellectual stimulation comes from actually making the game. Designing and refining gameplay mechanics, level design, writing shaders, etc.
What really drags you down in games is iteration speed. It can be fun making your own game engine at first but after awhile you just want the damn thing to work so you can try out new ideas.
I really like Rust as a replacement for C++, especially given that C++ seems to become crazier every year. When reasonable, nowadays I always use Rust instead of C++.
But for the vast majority of projects, I believe that C++ is not the right language, meaning that Rust isn't, either.
I feel like many people choose Rust because is sounds like it's more efficient, a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not) or for C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project).
It's a bit like choosing Gentoo "because it's faster" (or worse, because it "sounds cool"). If that's the only reason, it's probably a bad choice (disclaimer: I use and love Gentoo).
I have a personal-use app that has a hot loop that (after extensive optimization) runs for about a minute on a low-powered VPS to compute a result. I started in Java and then optimized the heck out of it with the JVM's (and IntelliJ's) excellent profiling tools. It took one day to eliminate all excess allocations. When I was confident I couldn't optimize the algorithm any further on the JVM I realized that what I'd boiled it down to looked an awful lot like Rust code, so I thought why not, let's rewrite it in Rust. I took another day to rewrite it all.
The result was not statistically different in performance than my Java implementation. Each took the same amount of time to complete. This surprised me, so I made triply sure that I was using the right optimization settings.
Lesson learned: Java is easy to get started with out of the box, memory safe, battle tested, and the powerful JIT means that if warmup times are a negligible factor in your usage patterns your Java code can later be optimized to be equivalent in performance to a Rust implementation.
I wrote a few benchmarks a few years ago comparing JS vs C++ compiled to WASM vs C++ compiled to x64 with -O3.
I was surprised that the heaviest one (a lot of float math) run about the same speed in JS vs C++ -> x64. The code was several nested for loops manipulating a buffer and using only local-scoped variables and built-in Math library functions (like sqrt) with no JS objects/arrays besides the buffer. So the code of both implementations was actually very similar.
The C++ -> WASM version of that one benchmark was actually significantly slower than both the JS and C++ -> x64 version (again, a few years ago, I imagine it got better now).
Most compilers are really good at optimizing code if you don't use the weird "productivity features" of your higher level languages. The main difference of using lower level languages is that not being allowed to use those productivity features prevents you from accidentally tanking performance without noticing.
I still hope to see the day where a language could have multiple "running modes" where you can make an individual module/function compile with a different feature-set for guaranteeing higher performance. The closest thing we have to this today is Zig using custom allocators (where opting out of receiving an allocator means no heap allocations are guaranteed for the rest of the stack call) and @setRuntimeSafety(false) which disables runtime safety checks (when using ReleseSafe compilation target) for a single scope.
If I have all the time in the world, sure. When I'm racing against a deadline, I don't want to wrestle with the borrow checker too. Sure, it's objections help with the long term quality of the code and reduce bugs but that's hard to justify to a manager/process driven by Agile and Sprints. Quite possible that an experienced Rust dev can be very productive but there aren't tons of those going around.
Java has the stigma of ClassFactoryGeneratorFactory sticking to it like a nasty smell but that's not how the language makes you write things. I write Java professionally and it is as readable as any other language. You can write clean, straightforward and easy to reason code without much friction. It's a great general purpose language.
Java is incredibly productive - it's fast and has the best tooling out there IMO.
Unfortunately it's not a good gaming language. GC pauses aren't really acceptable (which C# also suffers from) and GPU support is limited.
Miguel de Icaza probably has more experience than anyone building game engines on GC platforms and is very vocally moving toward reference counted languages [1]
Probably much better, given the improvements on the Swift optimizer, but just goes to show "tracing GC" bad, "reference counting GC" good isn't as straighforward as people make it to be, even if they are renowned developers.
It's a cherry picked, out-of-date counter-example. Swift isn't designed for building drivers.
In reality, a lot of Swift apps are delegating to C code. My own app (in development) does a lot of processing, almost none of which happens in Swift, despite the fact I spend the vast majority of my time writing Swift.
Swift an excellent C glue language, which Java isn't. This is why Swift will probably become an excellent game language eventually.
It surely is, according to Apple's own documentation.
> Swift is a successor to the C, C++, and Objective-C languages. It includes low-level primitives such as types, flow control, and operators. It also provides object-oriented features such as classes, protocols, and generics.
If developers have such a big problem glueing C libraries into Java JNI, or Panama, then maybe game industry is not where they are supposed to be, when even Assembly comes to play.
People have 240hz monitors these days, you have a bit over 4ms to render a frame. If that 1ms can be eliminated or amortised over a few frames it's still a big deal, and that's assuming 1ms is the worst case scenario and not the best.
I don’t think you need to work in absolutes here. There are plenty of games that do not need to render at 240hz and are capable of handling pauses up to 1ms. There’s tons of games that are currently written in languages that have larger GC pauses than that.
That has not been my experience. Sure, you don't have any control over the third-party stuff but I haven't seen this issue being widespread in the mainstream third-party libraries I've used e.g. logback, jackson, junit, jedis, pgJDBC etc which are very well known/widely used. The only place I've actually seen proliferation of this was by a contractor, who I suspect, was trying to ensure job security behind impenetrability.
On Objective-C, due to the way the language works, besides ClassFactoryGeneratorFactories, you would need to add all parameter names to the identifier.
I'd have said the same thing 10 years ago (or, I would have if I were comparing 10-year-old Java with modern Rust), but Java these days is actually pretty ergonomic. Rust's borrow checker balances out the ML-style niceties to bring it down to about Java's level for me, depending on the application.
Kotlin is nice indeed. Most of the issues I had with it were in interop with Java code (those pesky platform types, that behave like non-nullable but are nullable: and you are back in the NPE swamp!)
PascalCase has been my favourite since MS-DOS days, I have been through most Borland products, and Microsoft ones, alongside many Pascal influenced languages, thus it feels like home. :)
But yeah it is subjective, also don't have much qualms with other alternatives.
>I realized that what I'd boiled it down to looked an awful lot like Rust code
you're no longer writing idiomatic java at this point - probably with zero object oriented programming. so might as well write it in Rust from the get-go.
If I'd started in Rust I likely wouldn't have finished it at all. Java allowed me to start out just focused on the algorithm with very little regard for memory usage patterns and then refactor towards zero garbage collection. Rust can sort of allow the same thing by just sprinkling everything with clone and/or Rc/Arc, but it's much more in the way than just having a garage collector there automatically.
> "I really like Rust as a replacement for C++, especially given that C++ seems to become crazier every year."
I don't understand this argument, which I've also seen it used against C#, quite frequently. When a language offers new features, you're not forced to use them. You generally don't even need to learn them if you don't want. I do think some restrictions in languages can be highly beneficial, like strong typing, but the difference is that in a weakly typed language that 'feature' is forced upon you, whereas random new feature in C++ or C# is near to always backwards compatible and opt-in only.
For instance, to take a dated example - consider move semantics in C++. If you never used it anywhere at all, you'd have 0 problems. But once you do, you get lots of neat things for free. And for these sort of features, I see no reason to ever oppose their endless introduction unless such starts to imperil the integrity/performance of the compiler, but that clearly is not happening.
You can't avoid a lot of this stuff, once libraries start using it or colleagues add it to your codebase then you need to know it. I'd argue you need to know it well before you decide to exclude it.
Then better be quite picky of what libraries one choses, because that is the thing, while we may not use them, the libraries migth impose them on us.
Same applies having to deal with old features, replaced by modern ways, old codebases don't get magically rewritten, and someone has to understand modern and old ways.
Likewise I am not a big fan of C and Go, as visible by my comment history, yet I know them well enough, because in theory I am not forced to use them, in practice, there are business contexts where I do have to use them.
My experience with C++ is that it fundamentally "looks worse" and has worse tooling than more modern languages. And it feels like they keep adding new features that make it all even worse every year.
Sure, you don't have to use them, but you have to understand them when used in libraries you depend on. And in my experience in an environment of C++ developers, many times you end up having some colleagues who are very vocal about how you should love the language and use all the new features. Not that this wouldn't happen in Java or Kotlin, but the fact is that new features in those languages actually improve the experience with the language.
>> a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not)
The OP is doing game development. It’s possible to write a performant game in Java but you end up fighting the garbage collector the whole way and can’t use much library code because it’s just not written for predictable performance.
I think the choice of C++ vs JVM depends on your project. If you're not using the benefits of "unsafe" languages then it probably doesn't matter.
But if you are after performance how do do the following in Java? - Build an AOS so that memory access is linear re cache. Prefetch. Use things like _mm_stream_ps() to tell the CPU the cache line you're writing to doesn't need to be fetched. Share a buffer of memory between processes by atomic incrementing a head pointer.
I'm pretty sure you could build an indie game without low-level C++, but there is a reason that commercial gamedev is typically C++.
While there are many technical reasons to use C++ over Java in game development, many commercial games could be easily done in Java, as they are A or AA level at most.
Had Notch thought too much about which language to use, maybe he would still be trying to launch a game today.
Many people dream to make it as indie, most don't even achieve that.
No it isn't, there are now two versions of Minecraft, the classical one, and Minecraft Bedrock, that is the one written in C++.
Minecraft Bedrock doesn't have half of the community that Minecraft classical enjoys, hence why Microsoft is trying to use JavaScript based extensions to bring the mod community into Minecraft Bedrock.
Finally without Minecraft classical market success, there wouldn't exist Minecraft Bedrock at all, so Java did serve well enough to Notch's fortunes.
I'm not knocking indie development, the scene is very very vibrant. But indies don't typically push the hardware to its limits the same way.
And Java was a perfectly good choice of language for Notch for the same reasons.
I don't play Minecraft so I guess I'm outta touch. I knew about Bedrock and I've heard kids call Java the "old one". I didn't realise there's still an active community. Thanks for the correction :)
> but there is a reason that commercial gamedev is typically C++.
Sure, and that's kind of my point. There are a few use-cases where C++ is actually needed, and for those cases, Rust (the language) is a good alternative if it's possible to use it.
But even for gamedev, the article here says that they moved to Unity. The core of Unity is apparently C++, but users of Unity code in C#. Which kind of proves my point: outside of that core that actually needs C++, it doesn't matter much. And the vast majority of software development is done outside of those core use-cases, meaning that the vast majority of developers do not need Rust.
We were using a modified Luajit, in assembly, with a bit of other assembly dotted around the place. That assembly takes a long time to write (to beat a modern C++ compiler).
Then we had C++ for all our low level code and Lua for gameplay.
We were floating a middle layer of Rust for Lua bindings and the glue code for our transformation pipeline, but there was always a little too much friction to introduce. What we were particularly interested in was memory allocation bugs (use after free and leaks) and speeding up development of the engine. So I could see it having a place.
VPS/Cloud providers skimp on RAM. The JVM sucks for any low RAM workload, where you want the smallest possible single server instance. The startup times of JVM based applications are also horrendous. How many gigabytes of RAM does Digital Ocean give you with your smallest instance? They don't. They give you 512MiB. Suddenly using Java is no longer an option, because you will be wasting your day carefully tuning literally everything to fit in that amount.
You can get decent startup times if you have fewer dependencies. The JVM itself starts fairly quickly (<200 ms), the problem is all the class loading. If your "app" is a bloated multi gigabyte monstrosity... good luck!
I write a lot of Rust, but as you say, it's basically a vastly improved version of C++. C++ is not always the right move!
For all my personal projects, I use a mix of Haskell and Rust, which I find covers 99% of the product domains I work in.
Ultra-low level (FPGA gateware): Haskell. The Clash compiler backend lets you compile (non-recursive) Haskell code directly to FPGA. I use this for audio codecs, IO expanders, and other gateware stuff.
Very low-level (MMUless microcontroller hard-realtime) to medium-level (graphics code, audio code): Rust dominates here
High-level (have an MMU, OS, and desktop levels of RAM, not sensitive to ~0.1ms GC pauses): Haskell becomes a lot easier to productively crank out "business logic" without worrying about memory management. If you need to specify high-level logic, implement a web server, etc. it's more productive than Rust for that type of thing.
Both languages have a lot of conceptual overlap (ADTs, constrained parametric types, etc.), so being familiar with one provides some degree of cross-training for the other.
What do you mean by 'a mix of Haskell and Rust'? Is that a per-project choice or do you use both in a single project? I'm interested in the latter. If so, could you point me to an example?
Another question is about Clash. Your description sounds like the HLS (high level synthesis) approach. But I thought that Clash used a Haskell -based DSL, making it a true HDL. Could you clarify this? Thanks!
> C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project)
If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.
What is your basis for this claim? C and C++ are both built on essentially the same memory and execution model. There is a significant set of programs that are valid C and C++ both -- surely you're not suggesting that merely compiling them as C++ will make them faster?
There's basically no performance technique available in C++ that is not also available in C. I don't think it's meaningful to call one faster than the other.
This is really an “in theory” versus “in practice” argument.
Yes, you can write most things in modern C++ in roughly equivalent C with enough code, complexity, and effort. However, the disparate economics are so lopsided that almost no one ever writes the equivalent C in complex systems. At some point, the development cost is too high due to the limitations of the expressiveness and abstractions. Everyone has a finite budget.
I’ve written the same kinds of systems I write now in both C and modern C++. The C equivalent versions require several times the code of C++, are less safe, and are more difficult to maintain. I like C and wrote it for a long time but the demands of modern systems software are a beyond what it can efficiently express. Trying to make it work requires cutting a lot of corners in the implementation in practice. It is still suited to more classically simple systems software, though I really like what Zig is doing in that space.
I used to have a lot of nostalgia for working in C99 but C++ improved so rapidly that around C++17 I kind of lost interest in it.
None of this really supports your claim that "C++ has been faster than C for a long time."
You can argue that C takes more effort to write, but if you write equivalent programs in both (ie. that use comparable data structures and algorithms) they are going to have comparable performance.
In practice, many best-in-class projects are written in C (Lua, LuaJIT, SQLite, LMDB). To be fair, most of these projects inhabit a design space where it's worth spending years or decades refining the implementation, but the combination of performance and code size you can get from these C projects is something that few C++ projects I have seen can match.
For code size in particular, the use of templates makes typical C++ code many times larger than equivalent C. While a careful C++ programmer could avoid this (ie. by making templated types fall back to type-generic algorithms to save on code size), few programmers actually do this, and in practice you end up with N copies of std::vector, std::map, etc. in your program (even the slow fallback paths that get little benefit from type specialization).
Having written a great deal of C code, I made a discovery about it. The first algorithm and data structure selected for a C program, stayed there. It survives all the optimizations, refactorings and improvements. But everyone knows that finding a better algorithm and data structure is where the big wins are.
Why doesn't that happen with C code?
C code is not plastic. It is brittle. It does not bend, it breaks.
This is because C is a low level language that lacks higher level constructs and metaprogramming. (Yes, you can metaprogram with the C preprocessor, a technique right out of hell.) The implementation details of the algorithm and data structure are distributed throughout the code, and restructuring that is just too hard. So it doesn't happen.
A simple example:
Change a value to a pointer to a value. Now you have to go through your entire program changing dots to arrows, and sprinkle stars everywhere. Ick.
Or let's change a linked list to an array. Aarrgghh again.
Higher level features, like what C++ and D have, make this sort of thing vastly simpler. (D does it better than C++, as a dot serves both value and pointer uses.) And so algorithms and data structures can be quickly modified and tried out, resulting in faster code. A traversal of an array can be changed to a traversal of a linked list, a hash table, a binary tree, all without changing the traversal code at all.
The performance gain comes not from eliminating the function overhead, but enabling conditional move instructions to be used in the comparator, which eliminates a pipeline hazard on each loop iteration. There is some gain from eliminating the function overhead, but it is tiny in comparison to eliminating the pipeline hazard.
That said, C++ has its weaknesses too, particularly in its typical data structures, its excessive use of dynamic memory allocation and its exception handling. I gave an example here:
Nice catch. I had goofed by omitting optimization when checking this from an iPad.
That said, this brings me to my original reason for checking this, which is to say that it did not use a cmov instruction to eliminate unnecessary branching from the loop, so it is probably slower than a binary search that does:
It should be possible to adapt this to benchmark both the inlined bsearch() against an implementation designed to encourage the compiler to emit a conditional move to skip a branch to see which is faster:
My guess is the cmov version will win. I assume merits a bug report, although I suspect improving this is a low priority much like my last report in this area:
C and C++ do have very different memory models, C essentially follows the "types are a way to decode memory" model while C++ has an actual object model where accessing memory using the wrong type is UB and objects have actual lifetimes. Not that this would necessarily lead to performance differences.
When people claim C++ to be faster than C, that is usually understood as C++ provides tools that makes writing fast code easier than C, not that the fastest possible implementation in C++ is faster than the fastest possible implementation in C, which is trivially false as in both cases the fastest possible implementation is the same unmaintainable soup of inline assembly.
The typical example used to claim C++ is faster than C is sorting, where C due to its lack of templates and overloading needs `qsort` to work with void pointers and a pointer to function, making it very hard on the optimiser, when C++'s `std::sort` gets the actual types it works on and can directly inline the comparator, making the optimiser work easier.
Try putting objects into two linked lists in C using sys/queue.h and in C++ using the STL. Try sorting the linked lists. You will find C outperforms C++. That is because C’s data structures are intrusive, such that you do not have external nodes pointing to the objects to cause an extra random memory access. The C++ STL requires an externally allocated node that points to the object in at least one of the data structures, since only 1 container can manage the object lifetimes to be able to concatenate its node with the object as part of the allocation. If you wish to avoid having object lifetimes managed by containers, things will become even slower, because now both data structures will have an extra random memory access for every object. This is not even considering the extra allocations and deallocations needed for the external nodes.
That said, external comparators are a weakness of generic C library functions. I once manually inlined them in some performance critical code using the C preprocessor:
It seems like your argument is predicated on using the C++ STL. Most people don’t for anything that matters and it is trivial to write alternative implementations that have none of the weaknesses you are arguing. You have created a bit of a strawman.
One of the strengths of C++ is that it is well-suited to compile-time codegen of hyper-optimized data structures. In fact, that is one of the features that makes it much better than C for performance engineering work.
Most C++ code I have seen uses the STL. As for “hyper optimized” data structures, you already have those in C. See the B-Tree code those binary search routine I patched to run faster. Nothing C++ adds improves upon what you can do performance wise in C.
You have other sources of slow downs in C++, since the abstractions have a tendency to hide bloat, such as excessive dynamic memory usage, use of exceptions and code just outright compiling inefficiently compared to similar code in C. Too much inlining can also be a problem, since it puts pressure on CPU instruction caches.
C and C++ can be made to generate pretty much the same assembly, sure. I find it much easier to maintain a template function than a macro that expands to a function as you did in the B-Tree code, but reasonable people can disagree on that.
Abstractions can hide bloat for sure, but the lack of abstraction can also push coders towards suboptimal solutions. For example C code tends to use linked lists just because its easy to implement when a dynamic array such as std::vector would have been more performant.
Too much inlining can of course be a problem, the optimizer has loads of heuristics to decide if inlinining is worth it or not, and the programmer can always mark the function as `[[gnu::noinline]]` if necessary. It is not because C++ makes it possible for the sort comparator to be inlined that it will.
In my experience, exceptions have a slightly positive impact on codegen (compared to code that actually checks error return values, not code that ignores them) because there is no error checking on the happy path at all. The sad path is greatly slowed down though.
Having worked in highly performance sensitive code all of my career (video game engines and trading software), I would miss a lot of my toolbox if I limited myself to plain C and would expect to need much more effort to achieve the same result.
Having worked on performance sensitive code (OpenZFS), I have found less to be more.
While C code makes more heavy use of linked lists than C++ code, most of the C code I have helped maintain made even heavier use of balanced binary search trees and B-trees than linked lists. It also used SLAB allocation to amortize allocation costs. In the case of OpenZFS, most of the code operated in the kernel where external memory fragmentation makes dynamic arrays (and “large” arrays in general) unusable.
I think you have not seen the C libraries available to make C even better. libuutil and libumem from OpenSolaris make doing these things extremely nice. Some of the first code I wrote professionally (and still maintain) was written in C++. There really is nothing from C++ that I miss in C when I have such libraries. In fact, I have long wanted to rewrite that C++ code in C since I find it easier to maintain due to the reduced abstractions.
This is not a convincing argument for C. None of this matches my experience across many companies. In particular, the specific things you cite — excessive dynamic memory usage, exceptions, bloat — are typically only raised by people who don’t actually use C++ in the kinds of serious applications where C++ is the tool of choice. Sure, you could write C++ the way you describe but that is just poor code. You can do that in any language.
For example, exceptions have been explicitly disabled on every C++ code base I’ve ever worked on, whether FAANG or a smaller industrial company. It isn’t compatible with some idiomatic high-performance software architectures so it would be weird to even turn it on. C++ allows you to strip all bloat at compile-time and provides tools to make it easy in a way that C could only dream of, a standard metaprogramming optimization. Excessive dynamic allocation isn’t a thing in real code bases unless you are naive. It is idiomatic for many C++ code bases to never do any dynamic allocation at runtime, never mind “excessive”.
C++ has many weaknesses. You are failing to identify any that a serious C++ practitioner would recognize as valid. In all of this you also failed to make an argument for why anyone should use C. It isn’t like C++ can’t use C code.
This risks becoming a no true Scotsman, but it is indeed true that there is really no common idiomatic C++. Even the same code base can use vastly different styles in different areas.
Even regarding exceptions, I would not touch them anywhere close to the critical path, but, for example, during application setup I have no problem with them. And yet I know of people writing very high performance applications that are happy to throw on the critical path as long as it is a rare occurence.
> Sure, you could write C++ the way you describe but that is just poor code.
That is a problem with C++. C++ puts people into a sea of complexity and blames them when they do not get a good result. The purpose of high level programming languages is to make things easier for people, not make it even more likely to fail to write good code and then blame them when they do not.
If you try to follow the advice by the creators of C++, you often get further away from good code, and then when you complain, people say it is your fault. People who have actual success using C++ ignore the advice by the guys who made C++, which is an incredibly backward situation. This is a very different situation than you have with C where advice on good development practices does not conflict with reality.
> For example, exceptions have been explicitly disabled on every C++ code base I’ve ever worked on, whether FAANG or a smaller industrial company.
Unfortunately, C++ does not make exceptions optional and even if you use a compiler flag to disable it, libraries can still throw them. Do you use the “non-throwing allocation functions” introduced in C++11 and avoid any library functions that can throw exceptions in your code to truly avoid exceptions? Given most people have been writing C++ code since before C++11, there is a good chance you do not. If you write code for Linux systems, you might be unaware that Linux can and will refuse to do allocations, even if most of the time it is willing to overcommit. This means that your C++ allocations can throw exceptions, even if you have used a compiler flag to “turn exception handling off”.
> It isn’t compatible with some idiomatic high-performance software architectures so it would be weird to even turn it on. C++ allows you to strip all bloat at compile-time and provides tools to make it easy in a way that C could only dream of, a standard metaprogramming optimization. Excessive dynamic allocation isn’t a thing in real code bases unless you are naive. It is idiomatic for many C++ code bases to never do any dynamic allocation at runtime, never mind “excessive”.
I have seen plenty of C++ software throw exceptions in wine, since it prints information about it to the console. It is amazing how often exceptions are used in the normal operation of such software. Of course, this goes unseen on the original platform, so the developers likely have no idea about all of the exceptions that their code throws.
I take it that you have never met Bjarne Stroustrup, who does not view exceptions as optional and will likely always tell you that you should not turn off exceptions, even if the compiler lets you.
> C++ has many weaknesses. You are failing to identify any that a serious C++ practitioner would recognize as valid.
Whenever anyone tries to point out C++’s flaws, someone else claims that they are doing it wrong. It is fallacious.
> In all of this you also failed to make an argument for why anyone should use C.
I was not trying to do that, but I will flip this on you and say that I do not see why you should use C++ over any other high level language given a choice. It is so bloated that it drowns people in choice, and when they inevitably make bad choices by trying to follow others’ advice (particularly Bjorne Stroustrup‘s) on how to make good choices, they are blamed for the mistake of doing that in the first place. I used to think so well of C++ based on its reputation, but these days, I think that the C++ language exists for masochists. It has no end of prescriptionists who will give bad advice on how to write “good code” and when following their advice turns out to produce bad code and you complain, there is no end of people telling you that the problems are your fault. The situation is the quintessence of masochism.
Just the other day, a guy on hacker news said that there was no point to using C structures of pointers over C++ classes, and that all C code should be compiled as C++. I replied with an explanation of why this is wrong:
Of course, C++ does not support that in member functions. You need to do such things via member function pointers if you want them, but advocates for C++ are largely prescriptionists who try to dissuade people from doing anything the way C does it and instead suggest whatever the latest C++ reinvention of things is instead, even though there was nothing wrong with doing it the C way.
> It isn’t like C++ can’t use C code.
It increasingly cannot. If C headers use variably modified types and do not have a guard macro an alternative for C++ that turns them into regular pointers, C++ cannot use the header. Here is an example of code using it that a C++ compiler cannot compile:
Unfortunately Stepanov and the STL are widely misunderstood. Stepanov core contributions is the set of concepts underlying the STL and the iterator model for generic programming. The set of algorithms and datastructures in the STL was only supposed to be a beginning, was never supposed to be a finished collection. Unfortunately many, if not most treat it that way.
But if you look beyond, you can find a whole world that extend the stl. If you are not happy, say, with unordered_map, you can find more or less drop in replacements that use the same iterator based interface, preserve value semantics and use the a common language to describe iterator and reference invalidation.
Regarding your specific use case, if you want intrusive lists you can use boost.intrusive which provides containers with STL semantics except it leaves ownership of the nodes to the user. The containers do not even need to be lists: you can put the same node in multiple lists linked list, binary trees (multiple flavors), and hash maps (although this is not fully intrusive) at the same time.
These days I don't generally need boost much, but I still reach for boost.intrusive quite often.
Except, nothing forbids me to use two linked lists in C++ using sys/queue.h, that is exactly one of the reason why Bjarne built C++ on top of C, and also unfortunely a reason why we have security pain points in C++.
Yet the C++ community is continually trying to get people to stay away from anything involving C. That said, newer C headers using _Generic for example are not usable from C++.
Because C++ was "TypeScript for C", plenty of room to improvement that WG 14 refuses to act on for the last 50 years.
Yes, most language features past the C89 subset are not supported, besides the C standard library, because C++ has much better alternatives, like why _Generic when templates are a much saner approach, than type dispatching with the pre-processor.
However that is besides the point, 99% of C89 code minus a few differences, is valid C++ code, and if the situation so requires, C++ code can be exactly the same way.
And lets not forget most FOSS projects have never moved beyond C89/C99 anyway, so stuff like _Generic is of relative importance.
In my experience, templates usually cause a lot of bloat that slows things down. Sure, in microbenchmarks it always looks good to specialize everything at compile time, whether this is what you want in a larger project is a different question. And then, also a C compiler can specialize a sort routine for your types just fine. It just needs to be able too look into it, i.e. it does not work for qsort from the libc. I agree to your point that C++ comes with fast implementations of algorithms out-of-the-box. In C you need to assemble a toolbox yourself. But once you have done this, I see no downside.
> If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time.
In certain cases, sure - inlining potential is far greater in C++ than in C.
For idiomatic C++ code that doesn't do any special inlining, probably not.
IOW, you can rework fairly readable C++ code to be much faster by making an unreadable mess of it. You can do that for any language (C included).
But what we are usually talking about when comparing runtime performance in production code is the idiomatic code, because that's how we wrote it. We didn't write our code to resemble the programs from the language benchmark game.
I doubt that because C++ encourages heavy use of dynamic memory allocations and data structures with external nodes. C encourages intrusive data structures, which eliminates many of the dynamic memory allocations done in C++. You can do intrusive data structures in C++ too, but it clashes with object oriented idea of encapsulation, since an intrusive data structure touches fields of the objects inside it. I have never heard of someone modifying a class definition just to add objects of that class to a linked list for example, yet that is what is needed if you want to use intrusive data structures.
While I do not doubt some C++ code uses intrusive data structures, I doubt very much of it does. Meanwhile, C code using <sys/queue.h> uses intrusive lists as if they were second nature. C code using <sys/tree.h> from libbsd uses intrusive trees as if they were second nature. There is also the intrusive AVL trees from libuutil on systems that use ZFS and there are plenty of other options for such trees, as they are the default way of doing things in C. In any case, you see these intrusive data structures used all over C code and every time one is used, it is a performance win over the idiomatic C++ way of doing things, since it skips an allocation that C++ would otherwise do.
The use of intrusive data structures also can speed up operations on data structures in ways that are simply not possible with idiomatic C++. If you place the node and key in the same cache line, you can get two memory fetches for the price of one when sorting and searching. You might even see decent performance even if they are not in the same cache line, since the hardware prefetcher can predict the second memory access when the key and node are in the same object, while the extra memory access to access a key in a C++ STL data structure is unpredictable because it goes to an entirely different place in memory.
You could say if you have the C++ STL allocate the objects, you can avoid this, but you can only do that for 1 data structure. If you want the object to be in multiple data structures (which is extremely common in C code that I have seen), you are back to inefficient search/traversal. Your object lifetime also becomes tied to that data structure, so you must be certain in advance that you will never want to use it outside of that data structure or else you must do at a minimum, another memory allocation and some copies, that are completely unnecessary in C.
Exception handling in C++ also can silently kill performance if you have many exceptions thrown and the code handles it without saying a thing. By not having exception handling, C code avoids this pitfall.
Ahh yes, now we are getting somewhere. "C++ is faster because it has all these features, no not those features nobody uses those. The STL, no, you rewrite that"
The poster you are responding to is correct. Modern C++ has established idiomatic code practices that are widely used in industry. Imagining how someone could use legacy language features in the most naive possible way, contrary to industry practice, is not a good faith argument. You can do that with any programming language.
You are arguing against what the language was 30-40 years ago. The language has undergone two pretty fundamental revisions since then.
> If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.
My point is that there are situations where C++ (or Rust) is required because the JVM wouldn't work, but those are niche.
In my experience, most people who don't want a JVM language "because it is slow" tend to take this as a principle, and when you ask why their first answer is "because it's interpreted". I would say they are stuck in the 90s, but probably they just don't know and repeat something they have heard.
Similar to someone who would say "I use Gentoo because Ubuntu sucks: it is super slow". I have many reasons to like Gentoo better than Ubuntu as my main distro, but speed isn't one in almost all cases.
The JVM is excellent for throughput, once the program has warmed up, but it always has much more jitter than a more systemsy language like C++ or Rust. There are definitely use cases where you need to consistently react fast, where Java is not a good choice.
It also struggles with numeric work involving large matrices, because there isn't good support for that built into the language or standard library, and there isn't a well-developed library like NumPy to reach for.
I was a Gentoo user (daily driver) for around 15 years but the endless compilation cycles finally got to me. It is such a shame because as I started to depart, Gentoo really got its arse in gear with things like user patching etc and no doubt is even better.
It has literally (lol) just occurred to me that some sort of dual partition thing could sort out my main issue with Gentoo.
@system could have two partitions - the running one and the next one that is compiled for and then switched over to on a reboot. @world probably ought to be split up into bits that can survive their libs being overwritten with new ones and those that can't.
Errrm, sorry, I seem to have subverted this thread.
Rust is very easy when you want to do easy things. You can actually just completely avoid the borrow-checker altogether if you want to. Just .clone(), or Arc/Mutex. It's what all the other languages (like Go or Java) are doing anyway.
But if you want to do a difficult and complicated thing, then Rust is going to raise the guard rails. Your program won't even compile if it's unsafe. It won't let you make a buggy app. So now you need to back up and decide if you want it to be easy, or you want it to be correct.
Yes, Rust is hard. But it doesn't have to be if you don't want.
This argument goes only so far. Would you consider querying a database hard? Most developers would say no. But it’s actually a pretty hard problem, if you want to do it safely. In rust, that difficultly leaks into the crates. I have a project that uses diesel and to make even a single composable query is a tangle of uppercase Type soup.
This just isn’t a problem in other languages I’ve used, which granted aren’t as safe.
I love Rust. But saying it’s only hard if you are doing hard things is an oversimplification.
Sqlx is completely lacking in the query composability department, and leads to a very large amount of boilerplate.
You can derive FromRow for your structs to cut down the boilerplate, but if you need to join two tables that happen to have a column with the same name it stops working, unless you remember to _always_ alias one of the columns to the same name, every time you query that table from anywhere (even when the duplicate column names would not be present). If a table gets added later that happens to share a column name with another table? Hope you don't ever have to join those two together.
Doing something CRUD-y like "change ordering based on a parameter" is not supported, and you have to fall back to sprintf("%s ORDER BY %s %s") style concatenation.
Gets even worse if you have to parameterize WHERE clauses.
I'm not going to deny your experience. But is Rust really that hard? It's a very smooth experience for me - sometimes enough for me to choose it instead of Python.
I know that the compiler complains a lot. But I code with the help of realtime feedback from tools like the language server (rust-analyzer) and bacon. It feels like 'debug as you code'. And I really love the hand holding it does.
> This just isn’t a problem in other languages I’ve used, which granted aren’t as safe.
Most languages used with DBs are just as safe. This idea about Rust being more safe than languages with GC needs a rather big [Citation Needed] sign for the fans.
If you use Rust with `.clone()` and Arc/Mutex, why not just using one of the myriad of other modern and memory safe languages like Go, Scala/Kotlin/Java, C#, Swift?
The whole point of Rust is to bring memory safety with zero cost abstraction. It's essentially bringing memory safety to the use-cases that require C/C++. If you don't require that, then a whole world of modern languages becomes available :-).
For me personally, doing the clone-everything style of Rust for a first pass means I still have a graceful incremental path to go pursue the harder optimizations that are possible with more thoughtful memory management. The distinction is that I can do this optimization pass continuing to work in Rust rather than considering, and probably discarding, a potential rewrite to a net-new language if I had started in something like Ruby/Python/Elixir. FFI to optimize just the hot paths in a multi-language project has significant downsides and tradeoffs.
Plus in the meantime, even if I'm doing the "easy mode" approach I get to use all of the features I enjoy about writing in Rust - generics, macros, sum types, pattern matching, Result/Option types. Many of these can't be found all together in a single managed/GC'd languages, and the list of those that I would consider viable for my personal or professional use is quite sparse.
I don't find the single-vendor governance / commercial origins of those two languages very reassuring, but that's not something that will trouble everyone equally if at all.
None of those are to my personal taste and I think Kotlin is the only one with unambiguously strong adoption in industry. I'm trying not to make value-judgment statements about others that do like them.
Rust is actually quite suitable for a number of domains where it was never intended to excel.
Writing web service backends is one domain where Rust absolutely kicks ass. I would choose Rust/(Actix or Axum) over Go or Flask any day. The database story is a little rough around the edges, but it's getting better and SQLx is good enough for me.
To me, web dev really sounds like the one place where everything works and it's more a question of what is in fashion. Java, Ruby, Python, PHP, C, C++, Go, Rust, Scala, Kotlin, probably even Swift? And of course NodeJS was made for that, right?
I am absolutely convinced I can find success story of web backends built with all those languages.
There are 3 cases. The first is that you are comfortable with Rust and you just choose it for that. The second is that you're not comfortable with Rust and you choose something else that works for you.
The third is the interesting one. When your service has a lot of traffic and every bit of inefficiency costs you money (node rents) and energy. Rust is an obvious improvement over the interpreted languages. There are also a few rare cases where Rust has enough advantages over Go to choose the former. In general though, I feel that a lot of energy consumption and emissions can be avoided by choosing an appropriate language like Rust and Go.
This would be a strong argument in favor of these languages in the current environmental conditions, if it weren't for 'AI'. Whether it be to train them or run them, they guzzle energy even for problems that could be solved with a search engine. I agree that LLMs can do much more. But I don't think they do enough for the energy they consume.
> Rust is an obvious improvement over the interpreted languages.
Do we agree that most of the languages I mentioned above are not interpreted languages? You seem to only consider Go as a non-interpreted alternative...
Yeah, "web services backend" really means "code exercising APIs pioneered by SunOS in 1988". It's easy to be rock solid if your only dependency is the bedrock.
Perhaps. But a comparable Rust backend stack produces a single binary deployable that can absorb 50,000 QPS with no latency caused by garbage collection. You get all of that for free.
The type system and package manager are a delight, and writing with sum types results in code that is measurably more defect free than languages with nulls.
Yep, that's precisely it! When dealing with other languages I miss the "match" keyword and being able to open a block anywhere. Sure, sometimes Rust allows you to write terse abominations if you don't exercise a dose of caution and empathy for future maintainers (you included).
Other than the great developer experience in tooling and language ergonomics (as in coherent features not necessarily ease of use) the reason I continue to put up with the difficulties of Rust's borrow checker is because I feel I can work towards mastering one language and then write code across multiple domains AND at the end I'll have an easy way to share it, no Docker and friends needed.
But I don't shy away from the downsides. Rust loads the cognitive burden at the ends. Hard as hell in the beginning when learning it and most people (me included) bounce from it for the first few times unless they have C++ experience (from what I can tell). At the middle it's a joy even when writing "throwaway" code with .expect("Lol oops!") and friends. But when you get to the complex stuff it becomes incredibly hard again because Rust forces you to either rethink your design to fit the borrow checker rules or deal with unsafe code blocks which seem to have their own flavor of C++ like eldritch horrors.
Anyway, would *I* recommend Rust to everyone? Nah, Go is a better proposition for a most bang for your buck language, tooling and ecosystem UNLESS you're the kind that likes to deal with complexity for the fulfilled promise of one language for almost anything. In even simpler terms Go is good for most things, Rust can be used for everything.
Also stuff like Maud and Minijinja for Rust are delights on the backend when making old fashioned MPA.
For me it's a question of whether I can get away with garbage collection. If I can then pretty much everything else is going to be twice as productive but if I can't then the options are quite limited and Rust is a good choice.
What language are you using that doesn’t have match? Even Java has the equivalent. The only ones I can think of that don’t are the scripting languages.. Python and JS.
public abstract sealed class Vehicle permits Car, Truck {
public Vehicle() {}
}
public final class Truck extends Vehicle implements Service {
public final int loadCapacity;
public Truck(int loadCapacity) {
this.loadCapacity = loadCapacity;
}
}
public non-sealed class Car extends Vehicle implements Service {
public final int numberOfSeats;
public final String brandName;
public Car(int numberOfSeats, String brandName) {
this.numberOfSeats = numberOfSeats;
this.brandName = brandName;
}
}
In Kotlin it's a bit better, but nothing beats the ML-like langs (and Rust/ReScript/etc):
type Truck = { loadCapacity: int }
type Car = { numberOfSeats: int, brandName: string }
type Vehicle = Truck | Car
Ah my mistake. It’s been at least 5 years since I’ve written it. I’m honestly surprised that JS has moved no where on it considering all of the fancy things they’ve been adding.
It has been proposed, but since there is all the process on how features get added into the standard, someone needs to champion it, and then there is the "at least two implementations" factor.
Yeah, anything with nulls ends up with Option<this> and Option<that> which means unwraps or matches. There is a comment above about good bedrock and Rust works OK with nulls but it works really well with unsparse databases (avoiding joins).
The bar for web services is low, so pretty much anything works as long as it's easy. I wouldn't call them a success story.
When things get complex, you start missing Rust's type system and bugs creep in.
In node.js there was a notable improvement when TS became the de-facto standard and API development improved significantly (if you ignore the poor tooling, transpiling, building, TS being too slow). It's still far from perfect because TS has too many escape hatches and you can't trust TS code; with Rust, if it compiles and there are no unsafe (which is rarely a problem in web services) you get a lot of compile time guarantees for free.
The fact that people love the language is an unexpected downside. In my experience the rust ecosystem has an insanely high churn rate. Crates are often abandoned seemingly for no reason, often before even hitting 1.0. My theory is this is because people want to use rust primarily, the domain problem is just a challenge, like a level in a game. Once all the fun parts are solved, they leave it for dead.
Conversely and ironically, this is why I love Go. The language itself is so boring and often ugly, but it just gets out of the way and has the best in class tooling. The worst part is having seen the promised land of eg Rust enums, and not having them in other langs.
> My theory is this is because people want to use rust primarily, the domain problem is just a challenge, like a level in a game.
So you mean, Rust is more of an intellectual playground, than an actual workbench? I'm curious how high the churn rate of packages in other languages is, like python or ruby (let's not talk about javascript). Could this be the result of rust being still rather young and moving fast?
> Conversely and ironically, this is why I love Go.
Is Go still forcing hard wired paths in $HOME for compiling, or what was it again?
The official `go` command does dep management, (cross) compilation, testing (including benchmarks and coverage reports), race detection, profiling reports, code generation (metaprogramming alternative), doc generation etc. Build times are insanely fast too.
The only tooling I use personally outside of the main CLI is building iOS/Android static libraries (gomobile). It’s still first party, but not in the go command.
I haven't tried Go in a while, but 8 years ago, I felt the tooling was a disaster. The V1 ways of doing things were really janky, and the improved versions didn't seem to be universally adopted yet. It's nice to hear that seems to have changed.
Yes, it used to be horrible with GOPATH hell, because Google didn’t care much about deps since they had their own monorepo. They got their shit together years ago. IMO today it’s better tooling than Rust (and Rust is pretty great already). Give it a try.
I find it interesting how the software industry has done everything it can to ignore F#. This is me just lamenting how I always come back to it as the best general purpose language.
Probably the intersection of people who (a) want an advanced ML-style language and (b) are interested in a CLR-based language is very small. But also, doesn't it do some weird thing where it matters in what order the files are included in the compilation? I remember being interested in F# but being turned off by that, and maybe some other weird details.
I don’t want to use a language with unknown ecosystem. If I need a library to do X, I’m confident I can find it for Go, Java, Python etc. But I don’t know about F#.
I also don’t want to use a language with questionable hireability.
Haven't used F# too much myself but one of the strong points is because it shares the CLR with C# you can use any of the many packages meant for C# and it'll work because of the shared runtime.
Huh? Usually languages that are ”ignored” turns out to be for reasons such as poor or proprietary tooling. As an ignorant bystander, how are things like
Cross compilation, package manager and associated infrastructure, async io (epoll, io_uring etc), platform support, runtime requirements, FFI support, language server, etc.
Are a majority of these things available with first party (or best in class) integrated tooling that are trivial to set up on all big three desktop platforms?
For instance, can I compile an F# lib to an iOS framework, ideally with automatically generated bindings for C, C++ or Objective C? Can I use private repo (ie github) urls with automatic overrides while pulling deps?
Generally, the answer to these questions for – let’s call it ”niche” asterisk – languages, are ”there is a GitHub project with 15 stars last updated 3 years ago that maybe solves that problem”.
There are tons of amazing languages (or at the very least, underappreciated language features) that didn’t ”make it” because of these boring reasons.
My entire point is that the older and grumpier I get, the less the language itself matters. Sure, I hate it when my favorite elegant feature is missing, but at the end of the day it’s easy to work around. IMO the navel gazing and bikeshedding around languages is vastly overhyped in software engineering.
It's been around for a long time and sponsored by Microsoft. I don't know its exact status, but the only reason for it to lack in any of those areas is lack of will.
I think this is a problem of using the right abstractions.
Rust gamedev is the Wild West, and frontier development incurs the frontier tax. You have to put a lot of work into making an abstraction, even before you know if it’s the right fit.
Other “platforms” have the benefit of decades more work sunk into finding and maintaining the right abstractions. Add to that the fact that Rust is an ML in sheep’s clothing, and that games and UI in FP has never been a solved problem (or had much investment even), it’s no wonder Rust isn’t ready. We haven’t even agreed on the best solutions to many of these problems in FP, let alone Rust specifically!
Anyway, long story short, it takes a very special person to work on that frontier, and shipping isn’t their main concern.
I love Rust, but this lines up with my experience roughly. Especially the rapid iteration. Tried things out with Bevy, but I went back to Godot.
There are so many QoL things which would make Rust better for gamedev without revamping the language. Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev. But that's a really hard sell (and might be harder to implement than I imagine.)
I wish more languages would lean into having a really permissive compiler that emits a lot of warnings. I have CI so I'm never going to actually merge anything that makes warnings. But when testing, just let me do whatever I want!
GHC has an -fdefer-type-errors option that lets you compile and run this code:
a :: Int
a = 'a'
main = print "b"
Which obviously doesn't typecheck since 'a' is not an Int, but will run just fine since the value of `a` is not observed by this program. (If it were observed, -fdefer-type-errors guarantees that you get a runtime panic when it happens.) This basically gives you the no-types Python experience when iterating, then you clean it all up when you're done.
This would be even better in cases where it can be automatically fixed. Just like how `cargo clippy --fix` will automatically fix lint errors whenever it can, there's no reason it couldn't also add explicit coercions of numeric types for you.
> I wish more languages would lean into having a really permissive compiler that emits a lot of warnings. I have CI so I'm never going to actually merge anything that makes warnings. But when testing, just let me do whatever I want!
I’d go even further and say I wish my whole development stack had a switch I can use to say “I’m not done iterating on this idea yet, cool it with the warnings.”
Unused imports, I’m looking at you… stop bitching that I’m not using this import line simply because I commented out the line that uses it in order to test something.
Stop complaining about dead code just because I haven’t finished wiring it up yet, I just want to unit test it before I go that far.
Stop complaining about unreachable code because I put a quick early return line in this function so that I could mock it to chase down this other bug. I’ll get around to fixing it later, I’m trying to think!
In rust I can go to lib.rs somewhere and #![allow(unused_imports,dead_code,etc)] and then remember to drop it by the time I get the branch ready for review, but that’s more cumbersome than it ought to be. My whole IDE/build/other tooling should have a universal understanding of “this is a work in progress please let me express my thoughts with minimal obstructions” mode.
Yeah this is my absolute dream language. Something that lets you prototype as easily as Python but then compile as efficiently and safely as Rust. I thought Rust might actually fit the bill here and it is quite good but it's still far from easy to prototype in - lots of sharp edges with say modifying arrays while iterating, complex types, concurrency. Maybe Rust can be something like this with enough unsafe but I haven't tried. I've also been meaning to try more Typescript for this kind of thing.
Some Common Lisp implementations like SBCL have supported this style of development for many years. Everything is dynamically typed by default but as you specify more and more types the compiler uses them to make the generated code more efficient.
I quite like common lisp but I don't believe any existing implementation gets you anywhere near the same level of compile time safety. Maybe something like typed racket but that's still only doing a fraction of what rust does.
You should give Julia a shot.
That’s basically that. You can start with super dynamic code in a REPL and gradually hammer it into stricter and hyper efficient code. It doesn’t have a borrow checker, but it’s expressive enough that you can write something similar as a package (see BorrowChecker.jl).
Yeh, I've been tinkering around a year with a Bevy-competitor, Amethyst until that project shut down. By now, I just don't think Rust is good for client-side or desktop game development.
In my book, Rust is good at moving runtime-risk to compile-time pain and effort. For the space of C-Code running nuclear reactors, robots and missiles, that's a good tradeoff.
For the space of making an enemy move the other direction of the player in 80% of the cases, except for that story choice, and also inverted and spawning impossible enemies a dozen times if you killed that cute enemy over yonder, and.... and the worst case is a crash of a game and a revert to a save at level start.... less so.
And these are very regular requirements in a game, tbh.
And a lot of _very_silly_physics_exploits_ are safely typed float interactions going entirely nuts, btw. Type safety doesn't help there.
> Yeh, I've been tinkering around a year with a Bevy-competitor, Amethyst until that project shut down. By now, I just don't think Rust is good for client-side or desktop game development.
I don't think your experience with Amethyst merits your conclusion of the state of gamedev in rust, especially given Amethysts own take on Bevy [1, 2].
> Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev.
C# is stricter about float vs. double for literals than Rust is, and the default in C# (double) is the opposite of the one you want for gamedev. That hasn't stopped Unity from gaining enormous market share. I don't think this is remotely near the top issue.
It is indeed great for creating a prototype. After that, one can gradually migrate to Rust go benefit from faster execution times. The Rust bindings are in a pretty decent shape by now
Nowadays we have the luxury of LLMs to help migrate projects/code from one language to another. I would imagine a pipeline with Rust as an intermediate “compiled” step might be possible. LLM accuracy isn’t there yet, but I can dream.
It is not that complicated or time-consuming to do the transformation manually. On the contrary, it's even fun and a good practice (but admittedly, I do have a rather conservative view on the matter)
I like it better than python now, but it's still got some quirks. The lack of structs and typed callables are the biggest holes right now imo but you can work around those
This could be different in game dev, but in the last years of writing rust (outside of learning the language) I very rarely need to index any collection.
There is a very certain way rust is supposed to be used, which is a negative on it's own, but it will lead to a fulfilling and productive programming experience. (My opinion) If you need to regularly index something, then you're using the language wrong.
I'm no game dev but I have had friends who do it professionally.
Long story short, yes, it's very different in game dev. It's very common to pre-allocate space for all your working data as large statically sized arrays because dynamic allocation is bad for performance. Oftentimes the data gets organized in parallel arrays (https://en.wikipedia.org/wiki/Parallel_array) instead of in collections of structs. This can save a lot of memory (because the data gets packed more densely) be more cache-friendly, and makes it much easier to make efficient use of SIMD instructions.
This is also fairly common in scientific computing (which is more my wheelhouse), and for the same reason: it's good for performance.
> Oftentimes the data gets organized in parallel arrays (https://en.wikipedia.org/wiki/Parallel_array) instead of in collections of structs. This can save a lot of memory (because the data gets packed more densely) be more cache-friendly, and makes it much easier to make efficient use of SIMD instructions.
That seems like something that could very easily be turned into a compiler optimisation and enabled with something like an annotation. Would have some issue when calling across library boundaries ( a lot like the handling of gradual types), but within the codebase that'd be easy.
The underlying issue with game engine coding is that the problem is shaped in this way:
* Everything should be random access(because you want to have novel rulesets and interactions)
* It should also be fast to iterate over per-frame(since it's real-time)
* It should have some degree of late-binding so that you can reuse behaviors and assets and plug them together in various ways
* There are no ideal data structures to fulfill all of this across all types of scene, so you start hacking away at something good enough with what you have
* Pretty soon you have some notion of queries and optional caching and memory layouts to make specific iterations easier. Also it all changes when the hardware does.
* Congratulations, you are now the maintainer of a bespoken database engine
You can succeed at automating parts of it, but note that parent said "oftentimes", not "always". It's a treadmill of whack-a-mole engineering, just like every other optimizing compiler; the problem never fully generalizes into a right answer for all scenarios. And realistically, gamedevs probably haven't come close to maxing out what is possible in a systems-level sense of things since the 90's. Instead we have a few key algorithms that go really fast and then a muddle of glue for the rest of it.
It's not at all easy to implement as an optimisation, because it changes a lot of semantics, especially around references and pointers. It is something that you can e.g. implement using rust procedural macros, but it's far from transparent to switch between the two representations.
(It's also not always a win: it can work really well if you primarily operate on the 'columns', and on each column more or less once per update loop, but otherwise you can run into memory bandwidth limitations. For example, games with a lot of heavily interacting systems and an entity list that doesn't fit in cache will probably be better off with trying to load and update each entity exactly once per loop. Factorio is a good example of a game which is limited by this, though it is a bit of an outlier in terms of simulation size.)
Meh. I've tried "SIMD magic wand" tools before, and found them to be verschlimmbessern.
At least on the scientific computing side of things, having the way the code says the data is organized match the way the data is actually organized ends up being a lot easier in the long run than organizing it in a way that gives frontend developers warm fuzzies and then doing constant mental gymnastics to keep track of what the program is actually doing under the hood.
I think it's probably like sock knitting. People who do a lot of sock knitting tend to use double-pointed needles. They take some getting used to and look intimidating, though. So people who are just learning to knit socks tend to jump through all sorts of hoops and use clever tricks to allow them to continue using the same kind of knitting needles they're already used to. From there it can go two ways: either they get frustrated, decide sock knitting is not for them, and go back to knitting other things; or they get frustrated, decide magic loop is not for them, and learn how to use double-pointed needles.
I'm not a game dev, but what's a straightforward way of adjusting some channel of a pixel at coordinate X,Y without indexing the underlying raster array? Iterators are fine when you want to perform some operation on every item in a collection but that is far from the only thing you ever might want to do with a collection.
Game dev here. If you’re concerned about performance the only answer to this is a pixel shader, as anything else involves either cpu based rendering or a texture copy back and forth.
A compute shader could update some subset of pixels in a texture. It's on the programmer to prevent race conditions though. However that would again involve explicit indexing.
In general I think GP is correct. There is some subset of problems that absolutely requires indexing to express efficiently.
You can manipulate texture coordinate derivatives in order to just sample a subset of the whole texture on a pixel shader and only shade those pixels (basically the same as mipmapping, but you can have the "window" wherever you want really).
This is something you can't do on a compute shader, given you don't have access to the built-in derivative methods (building your own won't be cheaper either).
Still, if you want those changes to persist, a compute shader would be the way to go. You _can_ do it using a pixel shader but it really is less clean and more hacky.
That is true. Hadn't occurred to me because I'd had in mind pixel sorting stuff I did in the past where the fetches and stores aren't contiguous.
Interestingly enough the derivative functions are available to compute shaders as of SM 6.6. [0] Oddly SPIR-V only makes the associated opcodes [1] available to the fragment execution model for some reason. I'm not sure how something like DXVK handles that.
I'm not clear if the associated DXIL or SPIR-V opcodes are actually implemented in hardware. I couldn't immediately find anything relevant in the particular ISA I checked and I'm nowhere near motivated enough to go digging through the Mesa source code to see how the magic happens. Relevant because since you mentioned it I'm curious how much of a perf hit rolling your own is.
You're right - I should have just said "shader" and left it at that.
> There is some subset of problems that absolutely requires indexing to express efficiently.
Sure. But it's almost certainly quicker to run a shader over them, and ignore the values you don't want to operate on than it is to copy the data back, modify it in a safe bounds checked array in rust, and then copy it again.
> run a shader over them, and ignore the values you don't want to operate on
Use a compute shader. Run only as many invocations as you care about. Use explicit indexing in the shader to fetch and store.
Obviously that doesn't make sense if you're targeting 90% of the slots in the array. But if you're only targeting 10% or if the offsets aren't a monotonic sequence it will probably be more efficient - and it involves explicit indexing.
This is getting downvoted but it's kind of true. Indexing collections all the time usually means you're not using iterators enough. (Although iterators become very annoying for fallible code that you want to return a Result, so sometimes it's cleaner not to use them.)
However this problem does still come up in iterator contexts. For example Iterator::take takes a usize.
An iterator works if you're sequentially visiting every item in the collection, in the order they're stored. It's terrible if you need random access, though.
Concrete example: pulling a single item out of a zip file, which supports random access, is O(1). Pulling a single item out of a *.tar.gz file, which can only be accessed by iterating it, is O(N).
Compressed tars are terrible for random access because the compression occurs after the concatenation and so knows nothing about inner file metadata, but it's good for streaming and backups. Uncompressed tars are much better for random access. (Tar was a used as a backup mechanism to tape (tape archive).)
Zips are terrible for streaming because their metadata is stored at the end, but are better for 1-pass creation and on-disk random access. (Remember that zip files and programs were created in an era of multiple floppy disk-based backups.)
When fast tar enumeration is desired, at the cost of compatibility and compression potential, it might be worth compressing files and then taring them when and if zipping alone isn't achieving enough compression and/or decompression performance. FUSE compressed tar mounting gets to be really expensive with terabyte archives.
While you maybe "shouldn't" be indexing collections often (which I also don't agree with, there is a reason that we have more collections then linked lists, lookup is important) even just getting the size of a collection which is often very related to business logic can be quite annoying.
For data that needs to be looked up mostly I want a hashtable. Not always, but mostly. It's rare that I want to look up something but its position in a list.
Please correct me if I'm wrong, but I don't think this would let me, say, pass an i32 returned from one method directly as an f64 argument in another method.
One of the smartest devs I know built his game from scratch in C. Pretty complex game too - 3D open-world management game. It's now successful on steam.
Thing is, he didn't make the game in C. He built his game engine in C, and the game itself in Lua. The game engine is specific to this game, but there's a very clear separation where the engine ends and the game starts. This has also enabled amazing modding capabilities, since mods can do everything the game itself can do. Yes they need to use an embedded scripting language, but the whole game is built with that embedded scripting language so it has APIs to do anything you need.
I agree that the game is amazing from a technical point of view, but look at the reviews and the pace of development. The updates are sparse and slow, and if there's an update, it's barely an improvement. This is one the of disadvantages of creating a game engine from scratch: more time is spent on the engine than the game itself, which may or may not be bad depending on which perspective you look at it from.
Most likely because they don't use Linux. Or because it's kind of a mine field to support with bugs that occur on different distros. Even Unity has their own struggles with Linux support.
They're distributing their game on Steam too so Linux support is next to free via Proton.
> it's kind of a mine field to support with bugs that occur on different distros
Non-issue. Pick a single blessed distro. Clearly state that it's the only configuration that you officially support. Let the community sort the rest out.
Why is it terrible? It gives a concrete target that the build is tested on. If someone cares they can most likely create an environment on their system that matches it. I don't see how that's any different from providing (for example) a flatpak.
I worked on games for 20 years and was always interested in alternative languages to C and C++ for the purpose.
Java was my first hope. It was a bit safer than C++ but ultimately too verbose and the GC meant too much memory is wasted. Most games were very sensitive to memory use because consoles always had limited memory to keep costs down.
Next I spent years of side projects on Common Lisp based on Andy Gavin’s success there with Crash Bandicoot and more, showing it was possible to do. However, reports from the company were that it was hard to scale to more people and eventually a rewrite of the engine in C++ came.
I have explored Rust and Bevy. Bevy is bleeding edge and that’s okay, but Rust is not the right language. The focus on safety makes coding slow when you want it to be fast. The borrow checker frowns when you want to mutate things for speed.
In my opinion Zig is the most promising language for triple A game dev. If you are mid level stick to Godot and Unity, but if you want to build a fast, safe game engine then look at Zig first.
I did the same for my project and moved to Go from Rust. My iteration is much faster, but the code a bit more brittle, esp. for concurrency. Tests have become more important.
Still, given the nature of what my project is (APIs and basic financial stuff), I think it was the right choice. I still plan to write about 5% of the project in Rust and call it from Go, if required, as there is a piece of code that simply cannot be fast enough, but I estimate for 95% of the project Go will be more than fast enough.
> but the code a bit more brittle, esp. for concurrency
Obligatory ”remember to `go run -race`”, that thing is a life saver. I never run into difficult data races or deadlocks and I’m regularly doing things like starting multiple threads to race with cancelation signals, extending timeouts etc. It’s by far my favorite concurrency model.
Yep, I do use that, but after getting used to Rust's Send/Sync traits it feels wild and crazy there are no guardrails now on memory access between threads. More a feel thing than reality, but I just find I need to be a bit more careful.
No, it is not all that fast after the CGo call marshaling (Rust would need to compile to the C ABI). I would essentially call in to Rust to start the code, run it in its own thread pool and then call into Rust again to stop it. The time to start and stop don't really matter as this is code that runs from minutes to hours and is embarrassingly parallel.
I have no experience with FFI between C and Go, could anyone shed some light on this? They are both natively compiled languages – why would calls between them be much slower than any old function call?
• Go uses its own custom ABI and resizeable stacks, so there's some overhead to switching where the "Go context" must be saved and some things locked.
• Go's goroutines are a kind of preemptive green thread where multiple goroutines share the same OS thread. When calling C, the goroutine scheduler must jump through some hoops to ensure that this caller doesn't stall other goroutines on the same thread.
Calling C code from Go used to be slow, but over the last 10 years much of this overhead has been eliminated. In Go 1.21 (which came with major optimizations), a C call was down to about 40ns [1]. There are now some annotations you can use to further help speed up C calls.
This seems like the right call. When it comes to projects like these, efficiency is almost everything. Speaking about my own experiences, when I hit a snag in productivity in a project like this, it's almost always a death-knell.
I too have a hobby-level interest in Rust, but doing things in Rust is, in my experience, almost always just harder. I mean no slight to the language, but this has universally been my experience.
The advantages of correctness, memory safety, and a rich type system are worth something, but I expect it's a lot less when you're up against the value of a whole game design ecosystem with tools, assets, modules, examples, documentation, and ChatGPT right there to tell you how it all fits together.
Perhaps someday there will be a comparable game engine written in Rust, but it would probably take a major commercial sponsor to make it happen.
One of the challenges I never quite got over completely, was that I was always fighting rust fundamentals, which tells me I never fully assimilated into thinking like a rustacean.
This was more of a me-problem, but I was constantly having to change my strategy to avoid fighting the borrow-checker, manage references, etc. In any case, it was a productivity sink.
I bet, and that's particularly difficult when so much of modern game dev is just repeating extremely well-worn patterns— moving entities around and providing for scripted and emergent interactions between those entities and the player(s).
That's not to say that games aren't a very cool space to be in, but the challenges have moved beyond the code. Particularly in the indie space, for 10+ years it's been all about story, characters, writing, artwork, visual identity, sound and music design, pacing, unique gameplay mechanics, etc. If you're making a game in 2025 and the hard part is the code, then you're almost certainly doing it wrong.
Personally, I don’t think of it as fighting, more like “compiler assistance” —
you want to make some change, so you adjust a struct or a function signature, and then your IDE highlights all the places where changes are necessary with red squigglies.
Once you’re done playing whack-a-mole with the red squigglies, and tests pass, you know there’s no weird random crash hiding somewhere
It is a question of tradeoffs. Indie studios should be happy to trade off some performance in exchange for more developer productivity (since performance is usually good enough anyway in an indie game, which usually don't have millions of entities, meanwhile developer productivity is a common failure point).
I love Bevy, but Unity is a weapon when it comes to quickly iterating and making a game. I think the Bevy developers understand that they have a long way to go before they get there. The benefits of Bevy (code-first, Rust, open source) still make me prefer it over Unity, but Unity is ridiculously batteries-included.
Many of the negatives in the post are positives to me.
> Each update brought with it incredible features, but also a substantial amount of API thrash.
This is highly annoying, no doubt, but the API now is just so much better than it used to be. Keeping backwards compatibility is valuable once a product is mature, but like how you need to be able to iterate on your game, game engine developers need to be able to iterate on their engine. I admit that this is a debuff to the experience of using Bevy, but it also means that the API can actually get better (unlike Unity which is filled with historical baggage, like the Text component).
Not a game dev, but thought I'd mess around with Bevy and Rust to learn a bit more about both. I was surprised that my code crashed at runtime due to basics I expected the type system to catch. The fancy ECS system may be great for AAA games, but it breaks the basic connections between data and use that type systems rely on. I felt that Bevy was, unfortunately, the worst of both worlds: slow iteration without safety.
I've always liked the concept of ECS, but I agree with this, although I have very limited experience with Bevy. If I were to write a game in Rust, I would most likely not choose ECS and Bevy because of two reasons: 1. Bevy will have lots of breaking changes as pointed in the post, and 2. ECS is almost always not required -- you can make performant games without ECS, and if with your own engine then you retain full control over breaking changes and API design compromises.
I think all posts I have seen regarding migrating away from writing a game in Rust were using Bevy, which is interesting. I do think Bevy is awesome and great, but it's a complex project.
A friend of mine wrote an article 25+ years ago about using C++ based scripting (compiles to C++). My friend is super smart engineer, but I don't think he was thinking of those poor scripters that would have to wait on iteration times. Granted 25 years ago the teams were small, but nowadays the amount of scripters you would have on AAA game is probably dozen if not two or three dozen and even more!
Imagine all of them waiting on compile... Or trying to deal with correctness, etc.
This is a personal project that had the specific goal of the person's brother, who was not a coder, being able to contribute to the project. On top of that, they felt the need to continuously upgrade to the latest version of the underlying game engine instead of locking to a version.
I have worked as a professional dev at game studios many would recognize. Those studios which used Unity didn't even upgrade Unity versions often unless a specific breaking bug got fixed. Same for those studios which used DirectX. Often a game shipped with a version of the underlying tech that was hard locked to something several years old.
The other points in the article are all valid, but the two factors above held the greatest weight as to why the project needed to switch (and the article says so -- it was an API change in Bevy that was "the straw that broke the camel's back").
From a dev perspective, I think, Rust and Bevy are the right direction, but after reading this account, Bevy probably isn't there yet.
For a long time, Unity games felt sluggish and bloated, but somehow they got that fixed. I played some games lately that run pretty smoothly on decade old hardware.
Love to have this comparison analysis. Huge LOC difference between Rust and C# (64k -> 17k!!!) though I am sure that is mostly access to additional external libraries that did things they wrote by hand in Rust.
> I am sure that is mostly access to additional external libraries that did things they wrote by hand in Rust
This is the biggest reason I push for C#/.NET in "serious business" where concerns like auditing and compliance are non-negotiable aspects of the software engineering process. Virtually all of the batteries are included already.
For example, which 3rd party vendors we use to build products is something that customers in sectors like banking care deeply about. No one is going to install your SaaS product inside their sacred walled garden if it depends on parties they don't already trust or can't easily vet themselves. Microsoft is a party that virtually everyone can get on board with in these contexts. No one has to jump through a bunch of hoops to explain why the bank should trust System or Microsoft namespaces. Having ~everything you need already included makes it an obvious choice if you are serious about approaching highly sensitive customers.
I worked in a regulated space at one time, and my understanding is that this is a big reason they chose .NET over Java. Java relies a lot more on third-party libraries, which makes getting things certified harder.
Log4shell was a good example of a relative strength of .NET in this area. If a comparable bug had happened in .NET's standard logging tooling, we likely would have seen all of the first-party .NET framework patched fairly shortly after, in a single coordinated release that we could upgrade to with minimal fuss. Meanwhile, at my current job we've still got standing exceptions allowing vulnerable version of log4j in certain services because they depend on some package that still has a hard dependency on a vulnerable version, which they in turn say they can't fix yet because they're waiting on one of their transitive dependencies to fix it, and so on. We can (and do) run periodic audits to confirm that the vulnerable parts of log4j aren't being used, but being able to put the whole thing in the past within a week or two would be vastly preferable to still having to actively worry about it 5 years later.
The relative conciseness of C# code that the parent poster mentioned was also a factor. Just shooting from the hip, I'd guess that I can get the same job done in about 2/3 as much code when I'm using C# instead of Java. Assuming that's accurate, that means that with Java we'd have had 50% more code to certify, 50% more code to maintain, 50% more code to re-certify as part of maintenance...
None of this makes any sense. There is no waiting. You just do it. In no universe can you justify using a vulnerable log4j version. You force gradle to use the patched log4j and be done with it.
Five years has nothing to do with Java. It means nobody cares about security in the first place. Outsourcing such a trivial security problem to Microsoft is just another nail in the coffin. "I have no capacity to develop secure software, better make myself dependent on someone who can".
In sectors that are critical here in the EU, nobody allows c# and microsoft due to licensing woes longterm. It's java and foss all the way down. SaaS also is not a thing unless it runs on prem.
What kind of nonsense is this? EU is perfectly happy to use .NET-based languages as all of them, and the platform itself, are MIT (in fact, it's pretty popular out here).
C# is a very highly underrated (and oft misunderstood) language that has become more terse as it has aged -- in a very good way. C#'s terseness has not come at the cost of its legibility and in fact, I feel like enhances it in many cases.
> The maturity and vast amount of stable historical data for C# and the Unity API mean that tools like Gemini consistently provide highly relevant guidance.
This is also a highly underrated aspect of C# in that its surface area has largely remained stable from v1 (few breaking changes (though there are some valid complaints that surface from this with regards to keyword bloat!)). So the historical volume of extremely well-written documentation is a boon for LLMs. While you may get out-dated patterns (e.g. not using latest language features for terseness), you will not likely get non-working code because of the large and stable set of first party dependencies (whereas outdated 3rd party dependencies in Node often leads to breaking incompatibilities with the latest packages on NPM).
> It was also a huge boost to his confidence and contributed to a new feeling of momentum. I should point out that Blake had never written C# before.
Often overlooked with C# is its killer feature: productivity. Yes, when you get a "batteries included" framework and those "batteries" are quite good, you can be productive. Having a centralized repository for first party documentation is also a huge boon for productivity. When you have an extremely broad, well-written, well-organized standard library and first party libraries, it's very easy to ramp up productivity versus finding different 3rd party packages to fill gaps. Entity Framework, for example, feels miles better to me than Prisma, TypeORM, Drizzle, or any option on Node.js. Having first party rate limiting libraries OOB for web APIs is great for productivity. Same for having first party OpenAPI schema generators.
Less time wasted sifting through half-baked solutions.
> Code size shrank substantially, massively improving maintainability. As far as I can tell, most of this savings was just in the elimination of ECS boilerplate.
C# has three "super powers" to reduce code bloat which is its really rich runtime reflection, first-class expression trees, and Roslyn source generators to generate code on the fly. Used correctly, this can remove a lot of boilerplate and "templatey" code.
---
I make the case that many teams that outgrow JS/TS on Node.js should look to C# because of its congruence to TS[0] before Go, Java, Kotlin, and certainly not Rust.
> C# is a very highly underrated (and oft misunderstood) language that has become more terse as it has aged -- in a very good way
One negative aspect is that if you haven't kept up, that terseness can be a bit of a brick wall. Many of the newer features, especially things where the .Net framework just takes over and solves your problem for you in a "convention over configuration" kinda way, are extremely terse. Modern C# can have a bit of a learning curve.
C# is an underrate language for sure and once you get going it is an absolute joy to work in. The .Net platform also gives you all the cross-platform and ease of deployment features of languages like Go. Ignoring C#/.Net because it's Microsoft is a bit of a mistake.
C# has aged better but I feel like Java 8 approaching ANSI C level solid tools. If only Swing wasn't so ugly. They should poach Raymond Chen to make Java 8 Remastered I like his blog posts. There's probably a DOS joke in there. Also they should just use the JavaFX namespace so I don't have to change my code and I want the lawyer here to laugh too.
C# is a great language, but it's been hampered by slow transition towards AOT.
My understanding (not having used it much, precisely because of this) is that AOT is still quite lacking; not very performant and not so seamless when it comes to cross-platform targeting. Do you know if things have gotten better recently?
I think fhat Microsoft had dropped the old .NET platform (CLR and so on) sooner and really nailed the AOT experience, they may have had a chance at competing with Go and even Rust and C++ for some things, but I suspect that ship has sailed, as it has for languages like D and Nim.
C# (well, .NET, because that's what does JIT/AOT compilation of the bytecode) is not transitioning to AOT. NativeAOT is just one of the ways to publish .NET applications for scenarios where it is desirable. Having JIT is a huge boon to a number of scenarios too, for example it is basically impossible to implement a competitive Regex engine with JIT compilation for the patterns in Go (aside from other limitations like not having SIMD primitives).
> C# is a very highly underrated (and oft misunderstood) language that has become more terse as it has aged -- in a very good way. C#'s terseness has not come at the cost of its legibility and in fact, I feel like enhances it in many cases.
C# and .net are one of the most mature platform for development of all kind. It's just that online, it carries some sort of anti Microsoft stigma...
But a lot of AA or indie games are written in C# and they do fine. It's not just C++ or Rust in that industry.
People tend to be influenced by opinions online but often the real world is completely different. Been using C# for a decade now and it's one of the most productive language I have ever used, easy to set up, powerful toolchains... and yes a lot of closed source libs in the .net ecosystem but the open source community is large too by now.
> People tend to be influenced by opinions online but often the real world is completely different.
Unfortunately, my experience has been that C#'s lack of popularity online translates into a lot of misunderstandings about the language and thus many teams simply do not consider it.
Some folks still think it's Windows-only. Some folks think you need to use Visual Studio. Some think it's too hard to learn. Lots of misconceptions lead to teams overlooking it for more "hyped" languages like Rust and Go.
You don't need to use Visual Studio, but it really makes a difference in the overall experience.
I think there may also be some misunderstandings regarding the purchase models around these tools. Visual Studio 2022 Professional is possible to outright purchase for $500 [0] and use perpetually. You do NOT need a subscription. I've got a license key printed on paper that I can use to activate my copy each time.
Imagine a plumber or electrician spending time worrying about the ideological consequences of purchasing critical tools that cost a few hundred dollars.
> Imagine a plumber or electrician spending time worrying about the ideological consequences of purchasing critical tools that cost a few hundred dollars.
That's just the way it is, especially with startups whom I think would benefit the most from C# because -- believe it or not -- I actually think that most startups would be able to move faster with C# on the backend than TypeScript.
I still think Visual Studio is better, but you can easily work on small to mid-size projects in VSCode. Could you use Vim? I probably wouldn't, but you can say the same for Java.
I started using Visual Studio Code exclusively around 2020 for C# work and it's been great. Lightweight and fast. I did try Rider and 100% it is better if you are open to paying for a license and if you need more powerful refactoring, but I find VSC to be perfectly usable and I prefer its "lighter" feel.
I love Rust and wanted to use it for gamedev but I just had to admit to myself that it wasn't a good fit. Rust is a very good choice for user space systems level programming (ie. compilers, proxies, databases etc.). For gamedev, all of the explicitness that Rust requires around ownership/borrowing and types tends to just get in the way and not provide a lot of value. Games should be built to be fast, but the programmer should be able to focus almost completely on game logic rather than low-level details.
Bevy solves the ownership/borrowing issues entirely with its ECS design though.
I had two groups students (complete Rust beginners) ship a basic FPS and Tower Defense as learning project using Bevy and their feedback was that they didn't fight the language at all.
The problem that remains is that as soon a you go from a toy game to an actual one, you'd realize that Bevy still has tons of work to do before it can be considered productive.
Unity is still probably the best game engine for smaller games with Unreal being better for AAA.
The problem is you make a deal with the devil. You end up shipping a binary full of phone home spyware, if you don't use Unity in the exact way the general license intends they can and will try to force you into the more expensive industrial license.
However, the ease of actually shipping a game can't be matched.
Godot has a bunch of issues all over the place, a community more intent on self praise than actually building games. It's free and cool though.
I don't really enjoy Godot like I enjoy Unity , but I've been using Unity for over a decade. I might just need to get over it.
> I failed to fairly evaluate my options at the start of the project.
The more projects I do, the more time I find that I dedicate to just planning things up front. Sometimes it's fun to just open a game engine and start playing with it (I too have an unfair bias in this area, but towards Godot [https://godotengine.org/]), but if I ever want to build something to release, I start with a spreadsheet.
Do you think you needed to have those times to play around in the engine? Can a beginner possibly even know what to plan for if they don't fully understand the game engine itself? I am older so I know the benefits of planning, but I sometimes find that I need to persuade myself to plan a little less, just to get myself more in tune with the idioms and behaviors of the tool I am working in.
I think even if you don't have much experience with tools, you can still plan effectively, especially now with LLMs that can give you an idea of what you're in for.
But if you're doing something for fun, then you definitely don't need much planning, if any - the project will probably be abandoned halfway through anyways :)
GC isn't a big problem for many types of apps/games, and most games don't care about memory safety. Rust's advantages aren't so important in this domain, while its complexity remains. No surprise he prefers C# for this.
Disagree on both points. Anyone who has shipped a game in unity has dealt with object pooling, flipping to structs instead of classes, string interpolation, and replacing idiomatic APIs with out parameters of reused collections.
Similarly, anyone who has shipped a game in unreal will know that memory issues are absolutely rampant during development.
But, the cure rust presents to solve these for games is worse than the disease it seems. I don’t have a magic bullet either..
This is a mostly Unity-specific issue. Unity unfortunately has a potato for a GC. This is not even an exaggeration - it uses Boehm GC. Unity does not support Mono's better GC (SGen). .NET has an even better GC (and JIT) that Unity can't take advantage of because they are built on Mono still.
Other game engines exist which use C# with .NET or at least Mono's better GC. When using these engines a few allocations won't turn your game into a stuttery mess.
Just wanted to make it clear that C# is not the issue - just the engine most people use, including the topic of this thread, is the main issue.
GC isn't something to be afraid of, it's a tool like any other tool. It can be used well or poorly. The defaults are just that - defaults. If I was going to write a rhythm game in Unity, I would use some of the options to control when GC happens [0], and play around with the idea of running a GC before and after a song but having it disabled during the actual interactive part (as an example).
Not just GC -- performance in general is a total non-issue for a 2d tile-based game. You just don't need the low-level control that Rust or C++ gives you.
I wouldn't say it's a non-issue. I've played 2D tile-based, pixel art games where the framerate dropped noticeably with too many sprites on screen, even though it felt like a 3DS should have been able to run it, and my computer isn't super low-end, either. You have more leeway, but it's possible to badly make optimized 2D games to the point where performance becomes an issue again.
It sounds to me that it may have been better to limit performance-critical parts to Rust and write the actual game in something like Lua (embedded in Rust)?
That's the approach I've been taking with a side project game for the very reason alone that the other contributors are not system programmers. I.e. a similar situation as the author had with his brother.
Rust was simply not an option -- or I would be the only one writing code. :]
And yeah, as others mentioned: Fyrox over Bevy if you have few (or one) Rust dev(s). It just seems Fyrox is not on the radar of many Rust people even. Maybe because Bevy just gets a lot more (press) coverage/enthusiasm/has more contributors?
Using Rust in a project felt less like implementing ideas and more like committing to learning the language in depth.
Most projects involve messy iteration and frequent failure. Doing that in Rust is painful.
Starting a greenfield project in it feels more like a struggle with the language than progress on the actual idea unless you're a Rust enthusiast.
I love Rust, but I would not try to make a full fledged game with it without patience. This post is not so much a moving away from Rust as much as Bevy is not enjoyable in its current form.
Bevy is in its early stages. I'm sure more Rust Game Engines will come up and make it easier. That said, Godot was great experience for me but doesn't run on mobile well for what I was making. I enjoy using Flutter Flame now (honestly different game engines for different genres or preference), but as Godot continues to get better, I personally would use Godot. Try Unity or Unreal as well if I just want to focus on making a game and less on engine quirks and bugs.
That's an excellent article - it's great when people share not only their victories, but mistakes, and what they learned from them.
That said regarding both rapid gameplay mechanic iteration and modding - would that not generally be solved via a scripting language on top of the core engine? Or is Rust + Bevy not supposed to be engine-level development, and actually supposed to solve the gameplay development use-case too? This is very much not my area of expertise, I'm just genuinely curious.
It does solve the gameplay development use case too. Bevy encourages using lots of small 'systems' to build out logic. These are functions that can spawn entities or query for entities in the game world and modify them and there's also a way to schedule when these systems should run.
I don't think Bevy has a built-in way to integrate with other languages like Godot does, it's probably too early in the project's life for that to be on the roadmap.
>I wanted UI to be easy to build, fast to iterate, and moddable. This was an area where we learned a lot in Rust and again had a good mental model for comparison.
I feel like this harkens to the general principle of being a software developer and not an "<insert-language-here>" developer.
Choose tools that expose you to more patterns and help to further develop your taste. Don't fixate on a particular syntax.
To which extent was the implementation in C# benefitting off both the clarified requirements (so the Rust experience could be seen more as prototyping mixed with production)?
Was it actually in major parts just a major refactor in a different language (admittedly with much more proven elements)?
Aren't there some scripting languages designed around seamless interop with Rust that could be used here for scripting/prototyping? Not that it would fix all the issues in that blog post, but maybe some of them.
I completely understand, and it's not the first time I've heard of people switching from Bevy to Unity. btw Bevy 0.16 just came out in case you missed the discussion:
In my personal opinion, a paradox of truly open-source projects (meaning community projects, not pseudo-open-source from commercial companies) is that development seems to show a tendency of diversity. While this leads to more and more cool things appearing, there always needs to be a balance with sustainable development.
Commercial projects, at least, always have a clear goal: to sell. For this goal, they can hold off on doing really cool things. Or they think about differentiated competition. Perhaps if the purpose is commercial, an editor would be the primary goal (let me know if this is alreay on the roadmap).
---
I don't think the language itself is the problem. The situation where you have to use mature solutions for efficiency is more common in games and apps.
For example, I've seen many people who have had to give up Bevy, Dioxus, and Tauri.
But I believe for servers, audio, CLI tools, and even agent systems, Rust is absolutely my first choice.
I've recently been rewriting Glicol (https://glicol.org) after 2 years. I start from embedded devices, switching to crates like Chumsky, and I feel the ecosystem has improved a lot compared to before.
> Bevy is young and changes quickly. Each update brought with it incredible features, but also a substantial amount of API thrash
> Bevy is still in the early stages of development. Important features are missing. Documentation is sparse. A new version of Bevy containing breaking changes to the API is released approximately once every 3 months.
I would choose Bevy if and only if I would like to be heavily involved in the development of Bevy itself.
And never for anything that requires a steady foundation.
Programming language does not matter. Choose the right tool for job and be pragmatic.
For my going on 5 year side game project, this is why I can only write in vanilla tools (java, typescript) and with small libraries that are easy to replace. I would loose all motivation if I had to refactor my game and update the engine every time I come back to it. But also, I don't have the pressure of ever finishing the game...
Unity is predatorial. I work in a small studio which is part of a larger company (only 5 of us use Unity) and they have suddenly decided to hold our accounts hostage until we upgrade to Industry license because of the revenue our parent company makes even though that's completely separate cash flow versus what our studio actually works with. Industry license is $5000 PER SEAT PER YEAR. Absolute batshit crazy expensive for a single piece of software. We will never be able to afford that. So we are switching over to Unreal. It's really sad what Unity has become.
Definitely not cheap, but I assume developer cost and migrating to unreal is probably not cheap either. I'm not too familiar with either engine, are they similar enough that it's "cheaper" to migrate? I imagine that sets back release dates as well.
Yeah, I actually recently tried making a game in Lua using LOVE2D, and then making the same one in C with Raylib, and I didn't feel like Lua itself gave me all that much. I don't think Lua is best for game logic so much as it's the easiest language to embed in a game written in C or C++. That said, maybe some of its unique features, like its coroutines, or stuff relating to metatables, could be useful in defining game logic. I was writing very boring, procedural, occasionally somewhat object-oriented code either way.
Lua would definitely help with iteration times vs. C/C++/Rust but C# compiles very quickly. Especially in Unity where you have an editor that keeps assets cached and can hot reload code changes (with a plugin).
Coroutines can definitely be very useful for games and they're also available in C#.
This can be summarized in a simple way: UI is totally, another world.
There is not chance for any language, not matter how good is it, to match the most horrendous (web!) but full-featured ui toolkit.
I bet, 1000%, that is easier to do a OS, a database engine, etc that try to match QT, Delphi, Unity, etc.
---
I made a decision that has become the most productive and problem-less approach of make UIs in my 30 years doing this:
1- Use the de-facto UI toolkit as-is (html, swift ui, jetpack compose). Ignore any tool that promise cross-platform UI (so that is html, but I mean: I don't try to do html in swift, ok?).
2- Use the same idea of html: Send plain data with the full fidelity of what you wanna render: Label(text=.., size=..).
3- Render it directly from the native UI toolkit.
Yes, this is more or less htmx/tailwindcss (I get the inspiration from them).
This mean my logic is full Rust, I pass serializable structs to the UI front-end and render directly from it. Critically, the UI toolkit is nearly devoid of any logic more complex that what you see in a mustache template language.. Not do the localization, formatting, etc. Only UI composition.
I don't care that I need to code in different ways, different apis, different flows, and visually divergent UIs.
IS GREAT.
After the pain of boilerplate, doing the next screen/component/wwhatever is so ridiculous simple that is like cheating.
So, the problem is not Rust. Is not F#, or Lisp. Is that UI is a kind of beast that is imperious to be improved by language alone.
I disagree. The issue, which the article mentions, is iteration time. They were having issues iterating on gameplay, not UI.
My own experiences with game dev and Rust (which are separate experiences, I should add) resonate with what the article is expressing. Iterating systems is common in gamedev and Rust is slow to iterate because its precision ossifies systems. This is GREAT for safety, it's crap for momentum and fluidity
This is why game engines embedded scripting languages. Who gives a crap if the engine takes 12 hours to compile if 80% of the team are writing lua in a hot reload loop.
Would you happen to have (sample) or open-source Rust code out there demonstrating this approach? I'm very curious to learn more.
For example; if you have a progressbar that needs to be updated continuously, you do what? Upon every `tick` of your Rust engine you send a new struct with `ProgressBar(percentage=x)`? Or do the structs have unique identifiers so that the UI code can just update that one element and its properties instead of re-rendering the entire screen?
> I bet, 1000%, that is easier to do a OS, a database engine, etc that try to match QT, Delphi, Unity, etc.
I 100% agree. A modern mature UI toolkit is at least equivalent to a modern game engine in difficulty. GitHub is strewn with the corpses of abandoned FOSS UI toolkits that got 80% of the way there only to discover that the other 20% of the problem is actually 20000% of the work.
The only way you have a chance developing a UI toolkit is to start in full self awareness of just how hard this is going to be. Saying "I am going to develop a modern UI toolkit" is like saying "I am going to develop a complete operating system."
Even worse: a lot of the work that goes into a good UI toolkit is the kind of work programmers hate: endless fixing of nit-picky edge case bugs, implementation of standards, and catering to user needs that do not overlap with one's own preferences.
Are scripting languages not a thing in gamedev anymore?
I feel most of the things mentioned (rapid prototyping, ease of use for new programmers, modability) would be more easily accomplished by embedding a Lua interpreter in the rust project.
Glad C# is working out for them though, but if anyone else finds themselves in this situaton in Rust, or C, C++, Zig, whatever - embedding lua might be something else to consider, that requires less re-writing.
On the topic of rapid prototyping: most successful game engines I'm aware of hit this issue eventually. They eventually solve it by dividing into infrastructure (implemented in your low-level lanuage) and game-logic / application logic / scripting (implemented in something far more flexible and, usually, interpreted; I've seen Lua used for this, Python, JavaScript, and I think Unity's C# also fits this category?).
For any engine that would have used C++ instead, I can't think of a good reason to not use Rust, but most games with an engine aren't written in 100% C++.
Professional high-performance C++ game engine dev here. At a glance, their game looks great. But, to be frank, it also looks like it could have been made in the DOS era with sufficient effort.
Going hard with Rust ECS was not the appropriate choice here. Even a 1000x speed hit would be preferable if it gained speed of development. C# and Unity is a much smarter path for this particular game.
But, that’s not a knock on Rust. It’s just “Right tool for the job.”
> We wrote extensive pros and cons, emphasizing how each option fared by the criteria above: Collaboration, Abstraction, Migration, Learning, and Modding.
Would you really expect Godot to win out over Unity given those priorities? Godot is pretty awesome these days, but it's still going to be behind for those priorities vs. Unity or Unreal.
I also would have liked to have seen the pro/con lists for each of the potential choices.
I've been toying with the idea of making a 2d game that I've had on my mind for awhile, but have no game development experience, and am having trouble deciding where to start (obviously wanting to avoid the author's predicament of choosing something and having to switch down the line).
The key is, you gotta be pretty cold in the analysis. It's probably more important to avoid what you hate than to lean in too hard to what you love, unless your terminal goal is to work in $FAVE_LANG. Too many people claim they want to make a game, but their actions show that their terminal goal was actually to work in their favorite language. I don't care if your goal is just to work in your favorite language, I just think you need to be brutally honest with yourself on that front.
Probably the best thing in your case is, look at the top three engines you could consider, spend maybe four hours gather what look like pros and cons, then just pick one and go. Don't overestimate your attachment to your first choice. You'll learn more just in finishing a tutorial for any of them then you can possibly learn with analysis in advance.
Thanks, I appreciate the comment! I'm certain that my goal is not to work in a specific language, but to bring a long-time idea to life, and ideally minimize the amount of avoidable headaches along the way.
You're probably right that it'd be best to just jump in and get going with a few of them rather than analyze the choice to death (as I am prone to do when starting anything).
This is goes for a lot of things in tech unfortunately. For example, being stuck in a SRE/devops amusement park can be incredibly frustrating and surprisingly resource intense.
Sometimes it feels like we could use some kind of a temperance movement, because if one can just manage to walk the line one can often reap great rewards. But the incentives seem to be pointing in the opposite direction.
I'm beginning to develop a heuristic around the concept of "amount of the library you use". It's intrinsically fuzzy and still something I'm working on, but in general, it's bad to use only a tiny fraction of a library or framework, and really bad to have a code base in which a large number of things are pulled in that you only use small fractions of.
There are some exceptions, e.g., pulling in your languages best-of-breed image library to load some JPGs even though it supports literally a dozen other formats is less disastrous to a code base than pulling in an industrial-strength web framework just to provide two API calls with some basic auth of some sort. But there's something to the concept in general, I think.
One of the complaints in the article was using a framework early in it's dev cycle. I imagine they were just picking what is safe at that point and didn't want to get burned again.
Related: just tried to switch to Rust when starting a new project. The main motivation was the combination of fearless concurrency and exhaustive error handling - things that were very painful in the more mature endeavor.
Gave up after 3 days for 3 reasons:
1. Refactoring and IDE tooling in general are still lightyears away from JetBrains tooling and a few astronomical units away from Visual Studio. Extract function barely works.
2. Crates with non-Rust dependencies are nearly impossible to debug as debuggers don't evaluate expressions. So, if you have a Rust wrapper for Ogg reader, you can't look at ogg_file.duration() in the debugger because that requires function evaluation.
3. In contrast to .NET and NuGet ecosystem, non-Rust dependencies typically don't ship with precompiled binaries, meaning you basically have to have fun getting the right C++ compilers, CMake, sometimes even external SDKs and manually setting up your environment variables to get them to build.
With these roadblocks I would never have gotten the "mature" project to the point, where dealing with hard to debug concurrency issues and funky unforeseen errors became necessary.
Refactoring and IDE tooling in general are still lightyears away from JetBrains tooling
How long ago was this and did you try JetBrains RustRover? While not quite as mature as some other JetBrains tools, I've found the latest version really quite good.
About 15 hours ago. I was switching between RustRover and VS Code + Rust Analyzer. Not quite mature is an understatement. All said above applies to RustRover.
No, the new project that I tried Rust for is a voice API (VAD, Whisper, etc). Got disappointed because, for example, the codec is just a wrapper around libopus. So it doesn't provide safety guarantees, and finding a crate that would build without issues was a challenge.
> 3. In contrast to .NET and NuGet ecosystem, non-Rust dependencies typically don't ship with precompiled binaries, meaning you basically have to have fun getting the right C++ compilers, CMake, sometimes even external SDKs and manually setting up your environment variables to get them to build.
Depending on your scenario, you may want either one or another. Shipping pre-compiled binaries carries its own risks and you are at the mercy of the library author making sure to include the one for your platform. I found wiring up MSBuild to be more painful than the way it is done in Rust with cc crate, often I would prefer for the package to also build its other-language components for my specific platform, with extra optimization flags I passed in.
But yes, in .NET it creates sort of an impedance mismatch since all the managed code assemblies you get from your dependencies are portable and debuggable, and if you want to publish an application for a specific new target, with those it just works, be it FreeBSD or WASM. At the same time, when it works - it's nicer than having to build everything from scratch.
I think the worst issue was the lack of ready-made solution. Those 67k lines in Rust contains a good chunk of a game engine.
The second worst issue was that you targeted an unstable framework - I would have focused on a single version and shipped the entire game with it, no matter how good the goodies in the new version.
I know it's likely the last thing you want to do, but you might be in a great position to improve Bevy. I understand open sourcing it comes with IP challenges, but it would be good to find a champion with read access within Bevy to parse your code and come up with OSS packages (cleaned up with any specific game logic) based on the countless problems you must have solved in those extra 50k lines.
Using poor quality AI suggestions as a reason not to use Rust is a super weird argument. Something is very wrong with such idea. What's going to be next, avoiding everything where AI performs poorly?
Scripting being flexible is a proper idea, but that's not an argument against Rust either. Rather it's an argument for more separation between scripting machinery and the core engine.
For example Godot allows using Rust for game logic if you don't want to use GDScript, and it's not really messing up the design of their core engine. It's just more work to allow such flexibility of course.
The rest of the arguments are more in the familiarity / learning curve group, so nothing new in that sense (Rust is not the easiest language).
Yes, a lot of people are reasonably going to decide to work in environments that are more legible to LLMs. Why would that surprise you?
The rest of your comment boils down to "skills issue". I mean, OK. But you can say that about any programming environment, including writing in raw assembly.
Because it's a discouragement of learning based on mediocrity of AI. I find such idea perpetuating the mediocrity (not just of AI itself but of whatever it's used for).
It's like imagine saying, I don't want to learn how write a good story because AI always suggests me writing a bad one anyway. May be that delivers the idea better.
It's not at all clear to me what this has to do with the practical delivery of software. In languages that LLMs handle well, with a careful user (ie, not a vibe coder; someone reading every line of output and subjecting most of it to multiple cycles of prompting) the code you end up with is basically indistinguishable from the replacement-level code of an expert in the language. It won't hit that human expert's peaks, but it won't generally sink below their median. That's a huge accelerator for actually delivering projects, because, for most projects, most of the code need only be replacement-grade.
Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing? Like, the same reason I'd use only hand tools when doing joinery in Japanese-style woodworking? There's a place for that! But most woodworkers... use table saws and routers.
> Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing?
The strongest reason I can think of to discard this kind of automation, and do so proudly, is that it's effectively plagiarizing from all of the experts whose code was used in the training data set without their permission.
No plausible advance in nanotechnology could produce a violin small enough to capture how badly I feel about out professional being "plagiarized" after decades of rationalizing about the importance of Star Wars to the culture justifying movie piracy.
Artists can come at me with this concern all they want, and I feel bad for them. No software developer can.
I disagree with you about the "plagiaristic" aspect of LLM code generation. But I also don't think our field has a moral leg to stand on here, even if I didn't disagree with you.
I'm not making an argument from grievance about my own code being plagiarized. I actually don't care if my own code is used without even the attribution required by the permissive licenses it's released under; I just want it to be used. I do also write proprietary code, but that's not in the training datasets, as far as I know. But the training datasets do include code under a variety of open-source licenses, both permissive and copyleft, and some of those developers do care how their code is used. We should respect that.
As for our tendency to disrespect the copyrights of art, clearly we've always been in the wrong about this, and we should respect the rights of artists. The fact that we've been in the wrong about this doesn't mean we should redouble the offense by also plagiarizing from other programmers.
And there is evidence that LLMs do plagiarize when generating code. I'll just list the most relevant citations from Baldur Bjarnason's book _The Intelligence Illusion_ (https://illusion.baldurbjarnason.com/), without quoting from that copyrighted work.
It's not about delivery of software, it's about avoidance of learning based on mediocrity of AI. I.e. original post literally brings LLMs being poor at suggestions for Rust as a reason to avoid it.
That implies that proponents of such approach don't want to pursue learning which requires them to do something that exceeds the mediocrity level set by the AI they rely on.
For me it's obvious that it has a major negative impact on many things.
Your premise here being that any software not written in Rust must be mediocre? Wouldn't it be more productive to just figure out how to evolve LLM tooling to work well with Rust? Most people do not write Rust, so this is not a very compelling argument.
Rust is just an example in this case, not essential to the point. If someone will evolve LLM to work with Rust better, it will still be mediocre at something else, and using this as an excuse to avoid it is problematic in itself, that's what I'm saying.
Basically, learn Rust based on whether it's helping solve your issues better, not on whether some LLM is useless or not useless in this case.
It's a weird idea now, but it won't be weird soon. As devs and organizations further buy into AI-first coding, anything not well-served by AI will be treated as second-class. Another thread here brought up the risk that AI will limit innovation by not being well-trained on new things.
Developers often pick languages and libraries based on the strength of their developer tools. Having great dev tools was a major reason Ruby on Rails took off, for example.
Why exclude AI dev tools from this decision making? If you don’t find such tools useful, then great, don’t use them. But not everybody feels the same way.
it could be a weird argument, but as a rust newcomer, i have to say it's really something that jumps to your face. LLMs are practically useless for anything non-basic, and rust contains a lot non-basic things.
So, what are the chances that the pendulum swings to lower-level programming via LLM-generated C/C++ if LLM-generated Rust doesn't emerge? Note that this question is a context switch from gaming to something larger. For gaming, it could easily be that the engine and culture around it (frequent regressions, etc) are the bigger problems than the language.
I haven't coded in C/C++ in years but friends who do and worked on non-trivial codebase in those languages had a really crappy experience with LLMs too.
A friend of mine only understood why i was so impressed by LLMs once he had to start coding a website for his new project.
My feeling is that low-level / system programming is currently at the edge of what LLMs can do. So i'd say that languages that manage to provide nice abstractions around those types of problems will thrive. The others will have a hard time gaining support among young developers.
Rust is fine as a low-level systems programming language. It's a huge improvement over C and (because memory safety) a decent improvement over C++. However, most applications don't need a low-level systems programming language, and trying to shoehorn one where it doesn't belong just leads to sadness without commensurate benefit. Rust does not
* automatically make your program fast;
* eliminate memory leaks;
* eliminate deadlocks; or
* enforce your logical invariants for you.
Sometimes people mention that independent of performance and safety, Rust's pattern-matching and its traits system allow them to express logic in a clean way at least partially checked at compile time. And that's true! But other languages also have powerful type systems and expressive syntax, and these other languages don't pay the complexity penalty inherent in combining safety and manual memory management because they use automatic memory management instead --- and for the better, since the vast majority of programs out there don't need manual memory management.
I mean, sure, you can Arc<Box<Whatever>> many of your problems away, but that point, your global reference counting just becomes a crude form of manual garbage collection. You'd be better off with a finely-tuned garbage collector instead --- one like Unity (via the CLR and Mono) has.
And you're not really giving anything up this way either. If you have some compute kernel that's a bottleneck, thanks to easy FFIs these high-level languages have, you can just write that one bit of code in a lower-level language without bringing systems consideration to your whole program.
I completely agree with you—Rust is not well-suited for application development. Application development requires rapid iteration, acceptable performance, and most importantly, a large developer community and a rich software ecosystem.
Languages like Go , JavaScript, C# or Java are much better choices for this purpose. Rust is still best suited for scenarios where traditional system languages excel, such as embedded systems or infrastructure software that needs to run for extended periods.
C# actually has fairly good null-checking now. Older projects would have to migrate some code to take advantage of it, but new projects are pretty much using it by default.
I'm not sure what the situation is with Unity though - aren't they usually a few versions behind the latest?
Sorry but this engine had(s) problems rendenring a simple rectangle with alpha channel texture, not longer than 3 months ago (I'm assuming it was fixed).
Is it normal for Rust ecosystem to suggest software with this level of maturity?
Rust is not good for video game gameplay logic. The ownership model of Rust can not represent the vast majority of allocations.
I love Rust. It’s not for shipping video games. No Tiny Glade doesn’t count.
Edit: don’t know why you’re downvoting. I love Rust. I use it at my job and look for ways to use it more. I’ve also shipped a lot of games. And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.
Also you’re strictly forbidden from shipping Rust code on PlayStation. So if you have a breakout indie hit on Steam in Rust (which has never happened) you can’t ship it on PS5. And maybe not Switch although I’m less certain.
> And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.
Well, sure, if you arbitrarily exclude the popular game written in Rust, then of course there are no popular games written in Rust :)
> And maybe not Switch although I’m less certain.
I have talked to Nintendo SDK engineers about this and been told Rust is fine. It's not an official part of their toolchain, but if you can make Rust work they don't care.
Gameplay code is a big bag of mutable data that lives for relatively unknown amounts of time. This is the antithesis of Rust.
The Unity GameObject/Component model is pretty good. It’s very simple. And clearly very successful. This architecture can not be represented in Rust. There are a dozen ECS crates but no one has replicated the worlds most popular gameplay system architecture. Because they can’t.
Which part of that architecture is impossible in Rust? Actually an honest question, I'm wondering if I'm missing something.
From what I remember from my Unity days (which granted, were a long time ago), GameObjects had their own lifecycle system separate from the C# runtime and had to be created and deleted using Destroy and Create calls in the Unity API. Similarly, components and references to them had to be created and retrieved using the GetComponent calls, which internally used handles, rather than being raw GC pointers. Runtime allocation of objects frequently caused GC issues, so you were practically required to pre-allocate them in an object pool anyway.
I don't see how any of those things would be impossible or even difficult to implement in Rust. In fact, this model is almost exactly what I used to see evangelized all the time for C++ engines (using safe handles and allocator pools) in GDC presentations back then.
In my view, as someone who has not really interacted or explored Rust gamedev much, the issue is more that Bevy has been attempting to present an overtly ambitious API, as opposed to focusing on a simpler, less idealistic one, and since it is the poster child for Rust game engines, people keep tripping over those problems.
The headline is a bit sensational here and shall have been rather called "Migrating away from Bevy" .. That's not (really) comparing C# to Rust (and Luna but that one is missing), but rather comparing game engine where the language is secondary. Obviously Unity is the leader here (with Unreal) - despite all its flaws.
No. No one plays Veloren. It’s a toy project for programmers.
No offense to the project. It’s cool and I’m glad it exists. But if you were to plot the top 2000 games on Steam by time played there are, I believe, precisely zero written in Rust.
Tiny Glade is also the buggiest Steam game I've ever encountered (bugs from disappearing cursor to not launching at all). Incredibly poor performance as well for a low poly game, even if it has fancy lighting...
> Also you’re strictly forbidden from shipping Rust code on PlayStation. So if you have a breakout indie hit on Steam in Rust (which has never happened) you can’t ship it on PS5. And maybe not Switch although I’m less certain.
What evidence do you have for this statement? It kind of doesn't make any sense on its face. Binaries are binaries, no matter what tools are used to compile them. Sure, you might need to use whatever platform-specific SDK stuff to sign the binary or whatever, but why would Rust in particular be singled out as being forbidden?
Despite not being yet released publicly, Jai can compile code for PlayStation, Xbox, and Switch platforms (with platform-specific modules not included in the beta release, available upon request provided proof of platform SDK access).
Sony mandates you use their toolchain. You don’t get to ship whatever you want on their console. They have a very thorough TRC check you must pass before you get to ship.
> Rust can not represent the vast majority of allocations
Do you mean cyclic types?
Rust being low-level, nobody prevents one from implementing garbage-collected types, and I've been looking into this myself: https://github.com/Manishearth/rust-gc
It's "Simple tracing (mark and sweep) garbage collector for Rust", which allows cyclic allocations with simple `Gc<Foo>` syntax. Can't vouch for that implementation, but something like this would be good for many cases.
The "Learning" point drives home a concern my brother-in-law and I were talking about recently. As LLMs become more entrenched as a tool, they may inevitably become the crutch that actually holds back innovation. Individuals and teams may be hesitant to explore or adopt bleeding edge technologies specifically because LLMs don't know about them or don't know enough about them yet.
I see this quite a bit with Rust. I honestly cringe when people get up in arms about someone taking their project out of the rust community.
The same can be said of books as of programming languages:
"Not every ___ deserves to be read/used"
If the documentation or learning curve is so high and/or convoluted that it's disparaging to newcomers then perhaps it's just not a language that's fit for widespread adoption. That's actually fine.
"Thanks for your work on the language, but this one just isn't for me"
"Thanks for writing that awfully long book, but this one just isn't for me"
There's no harm in saying either of those statements. You shouldn't be disparaged for saying that rust just didn't work out for your case. More power to the author.
Rust attracts a religious fervour that you'll almost never see associated with any other language. That's why posts like this make the front page and receive over 200 comments.
If you switched from Java to C# or vice versa, nobody would care.
How is that different from choosing not to adopt a technology because it’s not widely used therefore not widely documented? It’s the timeless mantra of “use boring tech” that seems to resurface every once in a while. It’s all about the goal: do you want to build a viable product, quickly, or do you want to learn and contribute to a specific tech stack? That’s the trade off most of the time.
It's a lot worse. A high quality project can have great documentation and guides that make it easy to use for a human, but an LLM won't until there's a lot of code and documents out there using it.
And if it's not already popular, that won't happen.
No, this doesn't ring true: long before there were LLMs, people were selecting languages and stacks because of the quality and depth of their community.
But also: there is a lot of Rust code out there! And a cubic fuckload of high-quality written material about the language, its idioms, and its libraries, many of which are pretty famous. I don't think this issue is as simple as it's being out to be.
Isn't this article an example of that. There might be a lot of rust code but if the apis are changing frequently it's all outdated and leads to unusable outputs.
I was actually meaning to post this as an Ask HN question, but never found the time to word it well. Basically, what happens to new frameworks and technologies in the age of widespread LLM-assisted coding? Will users be reluctants to adopt bleeding-edge tools because the LLMs can't assist as well? Will companies behind the big frameworks put more resources towards documenting them in a way that makes it easy for LLMs to learn from?
Actually, here in my corner of EU, only the prominent big tech backed well documented and battle tested tools are most marketable skills. So, React, 50 new jobs, but you worked with Svelte/Solidjs, what is that? Java/PHP/Python/Ruby/JS, adequate jobs. Go/Rust/Zig/Crystal/Nim, what are these? While Go has some popularity in recent years and I can spot Rust once in a blue moon. Anything involving requiring near metal work is always C/C++.
Availability of documentation and tooling, widespread adaptation and access to already-trained-at-someone-else's-dime possibility is deemed safe for hiring decision. Sometimes, the narrow tech is spotted in the wild, but it was mostly some senior/staff engineer wanted to experiment something which became part of production because management saw no issue, will sometimes open some doors for practitioners of those stack but the probability is akin to getting hit by lightning strike.
This is just reality outside of the early stage startup. The US tech industry and its social networks are very dominated by trendy startup ideas, but the reality is still the major tried-and-true platforms.
Another way to look at it: working bleeding edge will become a competitive advantage and a signal to how competent the team is. „Do they consume it” vs „do they own it”.
Constantly chasing the latest tech trends has probably done more harm than good, because more often than not, it turns out that the latest hype technology actually does not deliver what the marketing had promised. Look at NoSQL and MongoDB especially as recent examples. Most people who blindly jumped on the MDB bandwagon would have probably been better off just using Postgres, and they later had to spend a lot of resources migrating away from Mongo.
To me constantly chasing the latest trends means lack of experience in a team and absence of focus on what is actually important, which is delivering the product.
This already happens. Is your new framework popular on GitHub and on Stack Overflow is a metric people use. LLMs are currently mostly capable of just adapting documentation, blog posts, and answers on SO. So they add a thin veneer on top of those resources.
I expect it will wind up like search engines where you either submit urls for indexing/inclusion or wait for a crawl to pick your information up.
Until the tech catches up it will have a stifling effect on progress toward and adoption of new things (which imo is pretty common of new/immature tech, eg how culture has more generally kind of stagnated since the early 2000s)
Hopefully, tools can adapt to integrate documentation better. I've already run into this with GitHub Copilot, trying to use Svelte 5 with it is a battle despite it being released most of a year ago.
There’s another future where reasoning models get better with larger context windows, and you can throw a new programming language or framework at it and it will do a pretty good job.
We already have quite a lot of that effect with tooling. A language can't really get much traction until its got a build, packaging and all the IDE support we expect or however productive the language is it looses out in practice because its hard to work with and doesn't just fit into our CI/CD systems.
Doesn't this mean that new tech will have to demonstrate material advantages, such that outweigh the LLM inertia, in order to be adopted? This sounds good to me; so much framework churn seems to be code fashion rather than function. Now if someone releases a new framework, they need to demonstrate real value first. People that are smart enough to read the docs and absorb the material of a new, better, framework will now have a competitive advantage; this all seems good.
I think it's a good point and I experienced the same thing when playing with SDL3 the other day. So even established languages with new API's can be problematic.
However, I had a different takeaway when playing with Rust+AI. Having a language that has strict compile-time checks gave me more confidence in the code the AI was producing.
I did see Cursor get in an infinite loop where it couldn't solve a borrow checker problem and it eventually asked me for help. I prefer that to burying a bug.
I had the same issue a few months ago when I was trying to ask LLMs about Box2D 3.0. I kept getting answers that were either for Box2D 2.x, or some horrific mashup of 2.x and 3.0.
Now Box2D 3.1 has been released and there's zero chance any of the LLMs are going to emit any useful answers that integrate the newly introduced features and changes.
Almost every time I've run into similar problems with LLMs I've mostly managed to solve them by uploading the documentation to the version of the library I'm using to the LLM and instructing it do use that documentation when answering questions about the library.
I have that worry as well, but it may not be as bad as I feared. I am currently developing a Python serialization/deserialization library based on advanced multiple dispatch, so it is fairly different from how existing libraries work. Nonetheless, if I ask LLMs (using Cursor) to write new functionality or plugins within my framework, they are surprisingly adept at it, even with limited guidance. I expect it'll only get better in the next few years. Perhaps a set of AI directives and examples for new technologies would suffice.
In any case, there has always been a strong bias towards established technologies that have a lot of available help online. LLMs will remain better at using them, but as long as they are not completely useless on new technologies, they will also help enthusiasts and early adopters work with them and fill in the gaps.
I don’t think we will have a lack of people who explore and know beyond others how to things.
LLMs will make people productive. But it will at the same time elevate those with real skill and passion to create good software. In the meantime there will be some maker confusion, and some engineers who are mediocre might find them selfs in demand like top end engineers. But over the time companies and markets will realize and top dollar will go to those select engineers who know how to do things with and without LLMs.
Lots of people are afraid of LLMs and think it is the end of the software engineer. It is and it is not. It’s the end of the “CLI engineer” or the “Front end engineer” and all those specializations that were attempt to require less skill to pay less. But the systems engineers who know how computers work, can take all week long describing what happens when you press enter on a keyboard at google.com will only be pressed into higher demand. This is because the single skill “engineer” wont really be a thing.
tldr; LLMs wont kill software engineering its a reset, it will cull those who chose such a path on a rubric only because it paid well.
What innovation? Languages with curly braces versus BEGIN/END? There is no innovation going on in computer languages. Rust is C with better ergonomics and rigorous memory management. This was made possible with better processors which made more elaborate compilers practical. It all gets compiled by LLVM down to the same object code. I think we are moving to an era of "read-only" languages. Languages that have horrible writing ergonomics yet are easy to understand when read. Humans won't write code. They will review code.
I've noticed this effect even with well established tech but just in degrees of popularity. I've recently been working on a Swift/SwiftUI project and the experience with LLM's compared to something like web dev stuff with React, etc is noticeably different/worse which I mostly attribute to there probably being at least 20 times less Swift specific content on the web in comparison.
There are a ton of Swift /SwiftUI tutorials out there for every new technology.
The problem is, they’re all blogspam rehashes of the same few WWDC talks. So they all have the same blindspots and limitations, usually very surface level.
Is that different from what is happening already? A lot of people won't adopt a language/technology unless it has a huge repository of answers on StackOverflow, mature tooling, and a decent hiring pool.
I'm not saying you're definitely wrong, but if you think that LLMs are going to bring qualitative change rather than just another thing to consider, then I'm interested in why.
New languages / packages / frameworks may need to collaborate with LLM providers to provide good training material. LLM-able training material may be the next important documentation thing.
Another potentially interesting avenue of research would be to explore allowing LLMs to use "self-play" to explore new things.
How can it compete with vast amount of trained codebases on Github? For LLMs, more data equals better results, so people will naturally be driven to better completion with already established frameworks and languages.
It would be hard to produce organic data on all ways your technology can be (ab)used.
Allegedly one of the ways they've been training LLMs to get better at logic and reasoning, as well as factual accuracy, is to use LLMs themselves to generate synthetic training data. The idea here would be similar: generate synthetic training data. Generating this could be aided by LLMs, perhaps with a "playground" of some sort where LLMs could compile / run / render various things, to help select out things that work and things that don't work (as well as if you see error X, what the problem might be).
It’s the same now. I’ve spent arguably too much time trying to avoid Python and it has cost me a whole lot of time. You keep running into bugs and have to implement much more yourself if you go off the beaten path (see also [1]). I don’t regret it since I learned a lot, but it’s definitively not always the easiest path. To this day I wonder whether maybe I should have taken the simple route.
A showerthought I had recently was that newly-written software may have a perverse incentive to be intentionally buggy such that there will be more public complaints/solutions for said software, which gives LLMs more training data to work with.
Its not even innovation. I had a new Laravel project that i was chopping around to play with some new library and I couldn't the the dumbest stuff to work. Of course I went back to read the docs and - ah Laravel 19 or whatever is using config/boostrap.php again and no matter what chatgpt, or myself had figured, could understand why it wasnt working.
unfortunately, a lot of libraries and services - well I don't think chatGPT understands the differences or it would be hard to. At least I have found that with writing scriplets for RT, PHP tooling, etc. The web world seems to move fast enough (and RT moves hella slow) that its confusing libraries and interfaces through the versions.
It'd really need a wider project context where it can go look at how those includes, or functions, or whatever work instead of relying on 'built in' knowledge.
"Assume you know nothing, go look at this tool, api endpoint or, whatever, read the code, and tell me how to use it"
The article title is half-true. It wasn't so much they migrated away from Rust, but that they migrated away from Bevy, which is an alpha quality game engine.
I wouldn't have read the article if it'd been labeled that, so kudos to the blog writer, I guess.
Not OP, but it seems that there is still a huge sentiment that Unity is not a "safe" platform to migrate to because of their relatively antagonistic approach to monetization guidelines compared to other open source game engines. I do think it makes sense to also consider Godot given his coworker is his brother who is stated to be new to game development, it has a scripting language even simpler than C#, more like python. Additionally, one might expect that someone more into Rust might prefer the C++ integration that Unreal offers. I think the timeline had an effect here too, as it's not been until recently that people have been taking Godot more seriously.
People forget that Unity and Unreal are industry darlings for a reason.
The amount of platforms they support, the amount of features they support, many of which could be a PhD thesis in graphics programming, the tooling, the store,....
Personally, literally anything except Unity. The fact that they tried to retroactively change terms on developers means that it will be a long time before I feel comfortable trusting they won't try it again.
I mean, we already have a sort-of answer, because the "Bedrock Edition" of Minecraft is written in C++, and it is indeed less popular on PC (on console, it's the only option, so _overall_ it might win out) and does lack any real modding scene
Indeed. Java is sufficiently dynamic/decompilable a game written in it can be heavily modded without adding specific support. C++ is much harder (depending on the game engine), though not impossible. If you do add modding support then everything is much better regardless of language, though (see Factorio, written in C++ and with a huge modding scene, because it was basically written with modding in mind. Lua is certainly helping with that, of course).
I actually disagree with that. Decompilation based mods can completely change anything and everything about the game. Scripting based mods can only change things within the boundaries allowed by the devs of the original game.
True, a limited modding API can be a problem. But in something like minecraft it's not a a free-for-all with mods either, it's just that the community writes their own modding API, but has to deal with breakage whenever the game updates.
The problem with Rust is that almost everything is still at an alpha stage. The vast majority of crates are at version 0.x and are eventually abandoned, replaced, or subject to constant breaking changes
While the language itself is great and stable, the ecosystem is not, and reverting to more conservative options is often the most reasonable choice, especially for long-term projects.
I really don’t think Rust is a good match for game dev. Both because of the borrow checker which requires a lot of handles instead of pointers and because compile times are just not great.
But outside of games the situation looks very different. “Almost everything” is just not at all accurate. There are tons of very stable and productive ecosystems in Rust.
> I really don’t think Rust is a good match for game dev. Both because of the borrow checker which requires a lot of handles instead of pointers and because compile times are just not great.
I completely disagree, having been doing game dev in Rust for well over a year at this point. I've been extremely productive in Bevy, because of the ECS. And Unity compile times are pretty much just as bad (it's true, if you actually measure how long that dreaded "Reloading Domain" screen takes).
Borrow checker is mostly a strawman for this discussion, the post is about using Bevy as an engine and Bevy uses an ECS than manages the lifetime of objects for you automatically. You will never have an issue with the borrow checker when using Bevy, not even once.
Everything in every ECS system is done with handles, but the parent comment is correct that many games use hairballs of pointers all over the place (and they are handles with ECS). There is never a borrow checker issue with handles since they divorce the concept of a pointer from the concept of ownership.
I wouldn't say 'almost everything', but there are some areas which require a huge amount of time and effort to build a mature solution for, UI and game engines being one, where there are still big gaps.
I don't even look at crate versions but the stuff works, very well. The resulting code is stable, robust and the crates save an inordinate amount of development time. It's like lego for high end, high performance code.
With Rust and the crates you can build actual, useful stuff very quickly. Hit a bug in a crate or have missing functionality? contribute.
Software is something that is almost always a work in progress and almost never perfect, and done. It's something you live with. Try any of this in C or C++.
Well, on the flip side with C++ some of it hasn't been updated beyond very basic maintenance and you can't even understand the code if you are just familiar with more modern C++…
> The problem with Rust is that almost everything is still at an alpha stage.
Replace Rust with Bevy and language with framework, you might have a point. Bevy is still in alpha, it's lacking plenty of things, mainly UI and an easy way to have mods.
As for almost everything is at an alpha stage, yeah. Welcome to OSS + SemVer. Moving to 1.x makes a critical statement. It's ready for wider use, and now we take backwards compatibility seriously.
But hurray! Commercial interest won again, and now you have to change engines again, once the Unity Overlords decide to go full Shittification on your poorly paying ass.
Unfortunately, it is a failing of many projects in the Rust sphere that they spend quite a lot longer in 0.x than other projects. Rust language and library features themselves often spend years in nightly before making it to a release build.
You can also always go from 1.0 to 2.0 if you want to make breaking changes.
> Unfortunately, it is a failing of many projects in the Rust sphere that they spend quite a lot longer in 0.x than other projects
Yes. Because it makes a promise about backwards compatibility.
> Rust language and library features themselves often spend years in nightly before making it to a release build.
So did Java's. And I Rust probably has a fraction of its budget.
In defense of long nightly feature more than once, stabilizing some feature like negative impl and never types early would have caused huge backwards breaking changes.
> You can also always go from 1.0 to 2.0 if you want to make breaking changes.
Yeah, just like Python!
And split the community and double your maintenance burden. Or just pretend 2.0 is 1.1 and have the downstream enjoy the pain of migration.
> And split the community and double your maintenance burden.
If you choose to support 1.0 sure. But you don't have to. Overall I find that the Rust community is way too leery of going to 1.0. It doesn't have to be as big a burden as they make it out to be, that is something that comes down to how you handle it.
Godot launched 0.1 in February 2014 and got to 1.0 in December 2014.
The distance in time between the launches of Unreal Engine 4 and Unreal Engine 5 was 8 years (April 2014 to April 2022). Unreal Engine 5 development started in May 2020 and had an early access release in May 2021.
Bevy launched 0.1 in 2020 and is at 0.16 now in 2025. 5 years later and no 1.0 in sight.
If you want people to use your OSS projects (maybe you don't), you have to accept that perfect is the enemy of good.
At this point, regulators and legislators are trying to force people to use the Rust ecosystem - if you want a non-GC language that is "memory safe," it's pretty much the de facto choice. It is long past time for the ecosystem to grow up.
> Godot has been an in-house engine for a long time and the priority of new features were always linked to what was needed for each game and the priorities of our clients.
I checked the history and it was known by another name Larvita.
> If you want people to use your OSS project
Seeing how currently I have about 0.1 parts of me working on it, no I don't want to give people false sense of security.
> At this point, regulators and legislators are trying to force people to use the Rust ecosystem
Not ecosystem. Language. Ecosystem is a plus.
Further more the issue Bevy has is more of there aren't any good mature GUI libraries for Rust. Because cross OS GUIs were, are and will be a shit show.
Granted it's a shit show that can be directed with enough money.
In absolute terms yes, but relative to the CPU speed memory is ridiculously slow.
Quake struggled with the number of objects even in its days. What you've got in the game was already close to the maximum it could handle. Explosions spawning giblets could make it slow down to a crawl, and hit limits of the client<>server protocol.
The hardware got faster, but users' expectations have increased too. Quake 1 updated the world state at 10 ticks per second.
This comment might not be liked by the usual commenters in these threads, but I think it is worth stressing:
First: I have experience with Bevy and other game engine frameworks; including Unreal. And I consider myself a seasoned Rust, C etc developer.
I could sympathize with what was stated by the author.
I think the issue here is (mainly) Bevy. It is just not even close to the standard yet (if ever). It is hard for any generic game engine to compete with Unity/GoDot. Nevermind, the de facto standard of Unreal.
But if you are a C# developer and using Unity already, and not C++ in Unreal, going to a bloated framework that is missing features that is Bevy makes little sense. [And here is also the minor issue, that if you are a C# developer, honestly you don't care about low level code, or not having a garbage collector.]
Now if you are a C++ developer and use Unreal, they only point to move to Rust (which I would argue for the usual reasons) is if Unreal supports Rust. Otherwise, there is nothing that even compares to Unreal. (That is not custom made game engine.)
The way I read about Bevy in online discussions obfuscates this. Someone who is new to game development could be confused into thinking Bevy is a fair competitor with the other engines you mentioned. And equate Bevy with Rust, or Bevy with Rust in game dev. I think stomping this out is critical to expectation management, and perhaps rust's future in game dev.
As someone who has used Bevy in the past, that was my reading as well. It is an incredible tool, but some of the things mentioned in the article like the gnarly function signature and constant migrations are known issues that stop a lot of people from using it. That's not even to mention the strict ECS requirement if your game doesn't work well around it. Here is a good reddit thread I remember reading about some more difficulties other people had with Bevy:
Structs in C# or F# are not low-level per se, they simply are a choice and used frequently in gamedev. So is stackalloc because using it is just 'var things = (stackalloc Thing[5])' where the type of `things` is Span<Thing>. The keyword is a bit niche but it's very normal to see it in code that cares about avoiding allocations.
Note that going more hands-on with these is not the same as violating memory safety - C# even has ref and byreflike struct lifetime analysis specifically to ensure this not an issue (https://em-tg.github.io/csborrow/).
Right, it depends on how far one wants to go to avoid allocations. structs and spans are safe. But one can go even deeper and pin pointers and do Unsafe.AsPointer and get a de-facto (unsafe) union out of it....
Imo the place for rust in game dev isnt in games at all, but base libraries and tools. Writing your proc generation library in rust that is an isolated package you can call in isolation, or similar is where its useful.
I agree. [Unless fully adopted by a serious game engine, of course.]
Rust's "superpower" is substituting critical C++ code in-place, with the goal of ensuring correctness and soundness. And increasing the development velocity as a result.
Sounds like "Migrating away from Bevy towards Unity"; the Rust to C# transition is mostly a technical consequence.
Bevy: unstable, constantly regressing, with weird APIs here and there, in flux, so LLMs can't handle it well.
Unity: rock-solid, stable, well-known, featureful, LLMs know it well. You ought to choose it if you want to build the game, not hack on the engine, be its internal language C#, Haskell, or PHP. The language is downstream from the need to ship.
Anyone else get an empty page on mobile Firefox when they try to go the article? All that renders for me is a comment entry box. If I go back to news I can see the article list just fine.
I experienced the same, I had to disable my adblocker to view it, it seems the content is inside a tag `<article class="social-sharing">` but I am unsure whether this triggered my adblocker.
Another failed game project in Rust. This is sad.
I've been writing a metaverse client in Rust for almost five years now, which is too long.[1] Someone else set out to do something similar in C#/Unity and had something going in less than two years. This is discouraging.
Ecosystem problems:
The Rust 3D game dev user base is tiny.
Nobody ever wrote an AAA title in Rust. Nobody has really pushed the performance issues. I find myself having to break too much new ground, trying to get things to work that others doing first-person shooters should have solved years ago.
The lower levels are buggy and have a lot of churn
The stack I use is Rend3/Egui/Winit/Wgpu/Vulkan. Except for Vulkan, they've all had hard to find bugs. There just aren't enough users to wring out the bugs.
Also, too many different crates want to own the event loop.
These crates also get "refactored" every few months, with breaking API changes, which breaks the stack for months at a time until everyone gets back in sync.
Language problems:
Back-references are difficult
A owns B, and B can find A, is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.
There are three common workarounds:
- Architect the data structures so that you don't need back-references. This is a clean solution but is hard. Sometimes it won't work at all.
- Put everything in a Vec and use indices as references. This has most of the problems of raw pointers, except that you can't get memory corruption outside the Vec. You lose most of Rust's safety. When I've had to chase down difficult bugs in crates written by others, three times it's been due to errors in this workaround.
- Use "unsafe". Usually bad. On the two occasions I've had to use a debugger on Rust code, it's been because someone used "unsafe" and botched it.
Rust needs a coherent way to do single owner with back references. I've made some proposals on this, but they require much more checking machinery at compile time and better design. Basic concept: works like "Rc::Weak" and "upgrade", with compile time checking for overlapping upgrade scopes to insure no "upgrade" ever fails.
"Is-a" relationships are difficult
Rust traits are not objects. Traits cannot have associated data. Nor are they a good mechanism for constructing object hierarchies. People keep trying to do that, though, and the results are ugly.
[1] https://www.animats.com/sharpview/index.html
I caveat my remarks with although I've have studed the Rust specification, I have not written a line of Rust code.
I was quite intrigued with the borrow checker, and set about learning about it. While D cannot be retrofitted with a borrow checker, it can be enhanced with it. A borrow checker has nothing tying it to the Rust syntax, so it should work.
So I implemented a borrow checker for D, and it is enabled by adding the `@live` annotation for a function, which turns on the borrow checker for that function. There are no syntax or semantic changes to the language, other than laying on a borrow checker.
Yes, it does data flow analysis, has semantic scopes, yup. It issues errors in the right places, although the error messages are rather basic.
In my personal coding style, I have gravitated towards following the borrow checker rules. I like it. But it doesn't work for everything.
It reminds me of OOP. OOP was sold as the answer to every programming problem. Many OOP languages appeared. But, eventually, things died down and OOP became just another tool in the toolbox. D and C++ support OOP, too.
I predict that over time the borrow checker will become just another tool in the toolbox, and it'll be used for algorithms and data structures where it makes sense, and other methods will be used where it doesn't.
I've been around to see a lot of fashions in programming, which is most likely why D is a bit of a polyglot language :-/
I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).
The language can nail that down for you (D does). What's left are memory allocation errors. Garbage collection fixes that.
As discussed multiple times, I see automatic resouce management (written this way on purpose), coupled with effects/linear/affine/dependent types for lowlevel coding as the way to go.
At least until we get AI driven systems good enough to generate straight binaries.
Rust is to be celebrated for bringing affine types into mainstream, but it doesn't need to be the only way, productivity and performance can be made into the same language.
The way Ada, D, Swift, Chapel, Linear Haskell, OCaml effects and modes, are being improved, already show the way forward.
There there is the whole formal verification and dependent type languages, but that goes even beyond Rust, in what most mainstream developers are willing to learn, the development experience is still quite ruff.
So in D, is it now natural to mix borrow checking and garbage collection? I think some kind of "gradual memory management" is the holy grail, but like gradual typing, there are technical problems
The issue is the boundary between the 2 styles/idioms -- e.g. between typed code and untyped code, you have either expensive runtime checks, or you have unsoundness
---
So I wonder if these styles of D are more like separate languages for different programs? Or are they integrated somehow?
Compared with GC, borrow checking affects every function signature
Compared with manual memory management, GC also affects every function signature.
IIRC the boundary between the standard library and programs was an issue -- i.e. does your stdlib use GC, and does your program use GC? There are 4 different combinations there
The problem is that GC is a global algorithm, i.e. heap integrity is a global property of a program, not a local one.
Likewise, type safety is a global property of a program
---
(good discussion of what programs are good for the borrow checking style -- stateless straight-line code seems to benefit most -- https://news.ycombinator.com/item?id=34410187)
> So in D, is it now natural to mix borrow checking and garbage collection?
I think "natural" is a bit loaded, there is native support in the frontend for doing both. You have to go out of your way to annotate functions with @live and it is still experimental(https://dlang.org/spec/ob.html). The garbage collection is natural and happens if you do nothing, but you can turn it off with proper annotations like @nogc(https://dlang.org/spec/function.html#nogc-functions) or using betterC(https://dlang.org/spec/betterc.html). There is also @safe, @system and @trusted(https://dlang.org/spec/memory-safe-d.html).
So natural is a stretch at the moment, but you can use all kinds of different techniques, what is needed is more community and library standardization around some solutions.
I agree with you.
For me Rust was amazing for writing things like concurrency code. But it slowed me down significantly in tasks I would do in, say, C# or even C++. It feels like the perfect language for game engines, compilers, low-level libraries... but I wasn't too happy writing more complex game code in it using Bevy.
And you make a good point, it's the same for OOP, which is amazing for e.g. writing plugins but when shoehorned into things it's not good at, it also kills my joy.
Hey, thank you for spreading the joy of the borrow checker beyond Rust; awesome stuff, sounds very interesting, challenging, and useful!
One question that came to mind as a single-track-Rust-mind kind of person: in D generally or in your experience specifically, when you find that the borrow checker doesn't work for a data structure, what is the alternative memory management strategy that you choose usually? Is it garbage collection, or manual memory management without a borrow checker?
Cheers!
> I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).
#4 safer union/enum, I do hope D gets tagged-union/pattern-matching sometimes in the future, I know about std.sumtype, but that's nowhere close to what Rust offer
> I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).
I think these are generally considered table stake in a modern programming language? That's why people are/were excited by the borrow checker, as data races are the next prominent source of memory corruption, and one that is especially annoying to debug.
Not a game dev, but based on what I do know of it, some of this sounds to me like it's just a severe mismatch between Rust's memory model and the needs of games.
Individually managing the lifetime of every single item you allocate on the heap and fine-grained tracking of ownership of everything on both the heap and the stack makes a lot of sense to me for more typical "line of business" tools that have kind of random and unpredictable workloads that may or may not involve generating arbitrarily complex reference graphs.
But everything I've seen & read of best practices for game development, going all the way back to when I kept a heavily dogeared copy of Michael Abrash's Black Book close at hand while I made games for fun back in the days when you basically had to write your own 3D engine, tells me that's not what a game engine wants. What a game engine wants, if anything, is something more like an arena allocator. Because fine-grained per-item lifetime management is not where you want to be spending your innovation tokens when the reality is that you're juggling 500 megabyte lumps of data that all have functionally the same lifetime.
> > The lower levels are buggy and have a lot of churn > > The stack I use is Rend3/Egui/Winit/Wgpu/Vulkan
The same is true if you try to make GUI applications in Rust. All the toolkits have lots of quirky bugs and broken features.
The barrier to contributing to toolkits is usually also pretty high too: most of them focus on supporting a variety of open source and proprietary platforms. If you want to improve on something which requires some API change, you need to understand the details of all the other platforms — you can't just make a change for a single one.
Ultimately, cross-platform toolkits always offer a lowest common denominator (or "the worst of all worlds"), so I think that this common focus in the Rust ecosystem of "make everything run everywhere" ends up being a burden for the ecosystem.
> > Back-references are difficult > > A owns B, and B can find A, is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.
When I code Rust, I'm always hesitant to use an Arc because it adds an overhead. But if I then go and code in Python, Java or C#, pretty much all objects have the overhead of an Arc. It's just implicit so we forget about it.
We really need to be more liberal in our usage of Arc and stop seeing it as "it has overhead". Any higher level language has the same overhead, it's just not declared explicitly.
Arc is a very slow and primitive tool compared to a GC. If you are writing Arc everywhere, you would probably have better performance switching to a JVM language, C#, or Go.
This is incorrect if you are using Rc exclusively for back references. Since the back reference is weak, the reference count is only incremented once when you are creating the datatype. The problem isn't that it's slow, it's that it consumes extra memory for book keeping.
Objects are cheaper than Arc<T>. Otherwise using GC would suck a lot more than it does today (for certain types of data structures like trees accessed concurrently it is also a massive optimization).
Python also has incomparably worse performance than Java or C#, both of which can do many object-based optimizations and optimize away their allocation.
One thing that struck me was the lavish praise heaped on the ECS of the game engine being migrated away from; this is extremely common.
I think when it comes to game dev, people fixate on the engine having an ECS and maybe don't pay enough attention to the other aspects of it being good for gamedev, like... being a very high level language that lets you express all the game logic (C# with coroutines is great at this, and remains a core strength of Unity; Lua is great at this; Rust is ... a low level systems language, lol).
People need to realise that having ECS architecture isn't the only thing you need to build games effectively. It's a nice way to work with your data but it's not the be-all and end-all.
I saw a good talk, though I don't remember the name, that went over the array-index approach. It correctly pointed out that by then, you're basically recreating your own pointers without any of the guarantees rust, or even C++ smart pointers, provide.
> It correctly pointed out that by then, you're basically recreating your own pointers without any of the guarantees rust, or even C++ smart pointers, provide.
I've gone back and forth on this, myself.
I wrote a custom b-tree implementation in rust for a project I've been working on. I use my own implementation because I need it to be an order-statistic tree, and I need internal run length encoding. The original version of my b-tree works just like how you'd implement it in C. Each internal node / leaf is a raw allocations on the heap.
Because leaves need to point back up the tree, there's unsafe everywhere, and a lot of raw pointers. I ended up with separate Cursor and CursorMut structs which held different kinds of references to the tree itself. Trying to avoid duplicating code for those two cursor types added a lot of complex types and trait magic. The implementation works, and its fast. But its horrible to work with, and it never passed MIRI's strict checks. Also, rust has really bad syntax for interacting with raw pointers.
Recently I rewrote the b-tree to simply use a vec of internal nodes, and a vec of leaves. References became array indexes (integers). The resulting code is completely safe rust. Its significantly simpler to read and work with - there's way less abstraction going on. I think its about 40% less code. Benchmarks show its about 25% faster than the raw pointer version. (I don't know why - but I suspect the reason is due to better cache locality.)
I think this is indeed peak rust.
It doesn't feel like it, but using an array-index style still preserves many of rust's memory safety guarantees because all array lookups are bounds checked. What it doesn't protect you from is use-after-free bugs.
Interestingly, I think this style would also be significantly more performant in GC languages like javascript and C#, because a single array-of-objects is much simpler for the garbage collector to keep track of than a graph of nodes & leaves which all reference one another. Food for thought!
>Benchmarks show its about 25% faster than the raw pointer version. (I don't know why - but I suspect the reason is due to better cache locality.)
Cache locality matters, but so does having less allocator pressure. Use 32-bit unsigned ints as indices, and you get improvements on that as well.
>The original version of my b-tree works just like how you'd implement it in C. Each internal node / leaf is a raw allocations on the heap.
I'd always try to avoid that type of allocation pattern in C++, FWIW :-).
Having gone full-in on this approach before, with some good success, it still feels wrong to me today. Contiguous storage may work for reasonable numbers of elements, but it's potentially blocking a huge contiguous chunk of address space especially for large numbers of elements.
I probably say this because I still have to main 32-bit binaries (only 2G of address space), but it can potentially be problematic even on 64-bit machines (typically 256 TB of address space), especially if the data structure should be a reusable container with unknown number of instances. If you don't know a reasonable upper bound of elements beforehand, you have to reallocate later, or drastically over-reserve from the start. The former removes a pointer stability guarantee, the later is uneconomical, it may even be uneconomical on 64-bit depending on how many instances of the data structures you plan to have. And having to reallocate when overflowing the preallocated space makes operations less deterministic with regards to execution time.
GC languages like C# don't need these tricks, because it is feature rich enough to do C++ style low level programming, and has value types.
One can also use this array-index approach in C++, utilize the `at` methods and have "memory safety guarantees", no ?
> Recently I rewrote the b-tree to simply use a vec of internal nodes
Doesn't this also require you to correctly and efficiently implement (equivalents of C's) malloc() and free()? IIUC your requirements are more constrained, in that malloc() will only ever be called with a single block size, meaning you could just maintain a stack of free indices -- though if tree nodes are comparable in size to integers this increases memory usage by a significant fraction.
(I just checked and Rust has unions, but they require unsafe. So, on pain of unsafe, you could implement a "traditional" freelist-based allocator that stores the index of the next free block in-place inside the node.)
Could std::rc::Weak solve the backreference problem?
Weak is very helpful in preventing ownership loops which prevent deallocation. Weak plus RefCell lets you do back pointers cleanly. You call ".borrow()" to get access to the data protected by a RefCell. The run-time borrow panics if someone else is using the data item. This prevents two mutable pointers to the same data, which Rust requires.
Static analysis could potentially check for those potential panics at compile time. If that was implemented, the run time check, and the potential for a panic, would go away. It's not hard to check, provided that all borrows have limited scope. You just have to determine, conservatively, that no two borrow scopes for the same thing overlap.
If you had that check, it would be possible to have something that behaves like RefCell, but is checked entirely at compile time. Then you know you're free of potential double-borrow panics.
I started a discussion on this on a Rust forum. A problem is that you have to perform that check after template expansion, and the Rust compiler is not set up to do global analysis after template expansion. This idea needs further development.
This check belongs to the same set of checks which prevent deadlocking a mutex against itself. There's been some work on Rust static deadlock analysis, but it's still a research topic.
I didn't consider that. Looking at how weak references work, that might work. It would reduce the need for raw pointers and unsafe code. But in exchange, it would add 16 bytes of overhead to every node in my data structure. That's pure overhead - since the reference count of all nodes should always be exactly 1.
However, I'm not sure what the implications are around mutability. I use a Cursor struct which stores a reference to a specific leaf node in the tree. Cursors can walk forward in the tree (cursor.next_entry()). The tree can also be modified at the cursor location (cursor.insert(item)). Modifying the tree via the cursor also updates some metadata all the way up from the leaf to the root.
If the cursor stored a Rc<Leaf> or Weak<Leaf>, I couldn't mutate the leaf item because rc.get_mut() returns None if there are other strong or weak pointers pointing to the node. (And that will always be the case!). Maybe I could use a Rc<Cell<Leaf>>? But then my pointers down the tree would need the same, and pointers up would be Weak<Cell<Leaf>> I guess? I have a headache just thinking about it.
Using Rc + Weak would mean less unsafe code, worse performance and code thats even harder to read and reason about. I don't have an intuitive sense of what the performance hit would be. And it might not be possible to implement this at all, because of mutability rules.
Switching to an array improved performance, removed all unsafe code and reduced complexity across the board. Cursors got significantly simpler - because they just store an array index. (And inserting becomes cursor.insert(item, &mut tree) - which is simple and easy to reason about.)
I really think the Vec<Node> / Vec<Leaf> approach is the best choice here. If I were writing this again, this is how I'd approach it from the start.
> What it doesn't protect you from is use-after-free bugs.
How about using hash maps/hash tables/dictionaries/however it's called in Rust? You could generate unique IDs for the elements rather than using vector indices.
But Unity game objects are the same way: you allocate them when they spawn into the scene, and you deallocate them when they despawn. Accessing them after you destroyed them throws an exception. This is exactly the same as entity IDs! The GC doesn't buy you much, other than memory safety, which you can get in other ways (e.g. generational indices, like Bevy does).
But in rust you have to fight the borrow checker a lot, and sometimes concede, with complex referential stuff. I say this as someone who writes a good bit of rust and enjoys doing so.
I just don't, and even less often with game logic which tends to be rather simple in terms of the data structures needed. In my experience, the ownership and borrowing rules are in no way an impediment to game development. That doesn't invalidate your experience, of course, but it doesn't match mine.
That's a good comment.
The difference is that I'm writing a metaverse client, not a game. A metaverse client is a rare beast about halfway between an MMO client and a web browser. It has to do most of the the graphical things a 3D MMO client does. But it gets all its assets and gameplay instructions from a server.
From a dev perspective, this means you're not making changes to gameplay by recompiling the client. You make changes to objects in the live world while you're connected to the server. So client compile times (I'm currently at about 1 minute 20 seconds for a recompile in release mode) aren't a big issue.
Most of the level and content building machinery of Bevy or Unity or Unreal Engine is thus irrelevant. The important parts needed for performance are down at the graphics level. Those all exist for Rust, but they're all at the My First Renderer level. They don't utilize the concurrency of Vulkan or multiple CPUs. When you get to a non-trivial world, you need that. Tiny Glade is nice, but it works because it's tiny.
What does matter is high performance and reliability while content is coming in at a high rate and changing. Anything can change at any time, but usually doesn't. So cache type optimizations are important, as is multithreading to handle the content flood. Content is constantly coming in, being displayed, and then discarded as the user moves around the big world. All that dynamism requires more complex data structures than a game that loads everything at startup.
Rust's "fearless multiprogramming" is a huge win for performance. I have about 20 threads running, and many are doing quite different things. That would be a horror to debug in C++. In Rust, it's not hard.
(There's a school of thought that says that fast, general purpose renderers are impossible. Each game should have its own renderer. Or you go all the way to a full game engine and integrate gameplay control and the scene graph with the renderer. Once the scene graph gets big enough that (lights x objects) becomes too large to do by brute force, the renderer level needs to cull based on position and size, which means at least a minimal scene graph with a spatial data structure. So now there's an abstraction layering problem - the rendering level needs to see the scene graph. No one in Rust land has solved this problem efficiently. Thus, none of the four available low-level renderers scale well.
I don't think it's impossible, just moderately difficult. I'm currently looking at how to do this efficiently, with some combination of lambdas which access the scene graph passed into the renderer, and caches. I really wish someone else had solved this generic problem, though. I'm a user of renderers, not a rendering expert.)
Meta blew $40 billion dollars on this problem and produced a dud virtual world, but some nice headsets. Improbable blew upwards of $400 million and produced a limited, expensive to run system. Metaverses are hard, but not that hard. If you blow some of the basic architectural decisions, though, you never recover.
The dependency injection framework provided by Bevy also particularly elides a lot of the problems with borrow checking that users might run into and encourages writing data oriented code that generally is favorable to borrow checking anyway.
This is a valid point. I've played a little with Bevy and liked it. I have also not written a triple-A game in Rust, with any engine, but I'm extrapolating the mess that might show up once you have to start using lots of other libraries; Bevy isn't really a batteries-included engine so this probably becomes necessary. Doubly so if e.g. you generate bindings to the C++ physics library you've already licensed and work with.
These are all solvable problems, but in reality, it's very hard to write a good business case for being the one to solve them. Most of the cost accrues to you and most of the benefit to the commons. Unless a corporate actor decides to write a major new engine in Rust or use Bevy as the base for the same, or unless a whole lot of indie devs and part-time hackers arduously work all this out, it's not worth the trouble if you're approaching it from the perspective of a studio with severe limitations on both funding and time.
Thankfully my studio has given me time to be able to submit a lot of upstream code to Bevy. I do agree that there's a bootstrapping problem here and I'm glad that I'm in a situation where I can help out. I'm not the only one; there are a handful of startups and small studios that are doing the same.
Given my experience with Bevy this doesn't happen very often, if ever.
The only challenge is not having an ecosystem with ready made everything like you do in "batteries included" frameworks. You are basically building a game engine and a game at the same time.
We need a commercial engine in Rust or a decade of OSS work. But what features will be considered standard in Unreal Engine 2035?
Nobody is going to be writing code in 2035
> fight the borrow checker
I see this and I am reminded when I had to fight the 0 indexing, when I was cutting my teeth in C, for class.
I wonder why no one complains about 0 indexing anymore. Isn't it weird how you have to go 0 to length - 1, and implement algorithm differently than in a math book?
The ground floor in lifts isn't "1", it is "G". Same thing.
Country dependent. Like there are 1-based indexing languages (Lua, Matlab, et al)
And others like Pascal linage (Pascal, Object Pascal, Extended Pascal, Modula-2, Ada, Oberon,...), that have flexible bounds, they can be whatever numeric subranges we feel like using, or enumeration values.
Not in Lua.
Most languages have abstractions for iterating over an array so that you don’t need to use 0 or length-1 these days
Because the math books are the ones being weird. https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD831...
Maths books aren't being weird. They are counting in a way most people learn to count. One apple, two apples, three apples. You don't start zeroth apple, one apple, two apples, then respond the set of apple contains three apples.
But computers are not actually counting array elements, it's more accurate to compare array indexing with distance measurement. The pointer (memory address) puts you at the start of the array, so the first element is right there under your feet (i.e. index 0). The other elements are found by measuring how far away from the start they are:
I believe it’s a practicality to simplify pointer arithmetic
Yes but why does no one talk here about fighting the 0 indices. Or how they are switching to Lua, because 0 indices are hard?
Am I the only person that remembers how hard it was to wrap your head around numbers starting at 0, rather than 1?
I find indices starting from zero much easier. Especially when index/pointer arithmetic is involved like converting between pixel or voxel indices and coordinates, or indexing in ring buffers. 1-based indexing is one of the reasons I eventuallz abandoned Mathematica, because it got way too cumbersome.
So the reason why you don't see many people fighting 0-indexing is because they actually prefer it.
> 0 indices are hard?
I started out with BASIC and Fortran, which use 1 based indices. Going to C was a small bump in the road getting used to that, and then it's Fortran which is the oddball.
Interesting path. I went Basic, and Pascal and then C in college. Honestly it was such a mind twist.
For languages with 0-based array element numbering, say what the numbers are: they're offsets. 0-based arrays have offsets, 1-based arrays have indices.
Yes, I think you are. The challenges people describe with Rust look more difficult than remembering to start from 0 instead of 1…
I don't think so. One based numbering is barring few particular (spoken) languages the default. You have to had to change your counting strategies when going from regular world to 0 based indices.
Maybe you had the luck of learning 0 based language first. Then most of them were a smooth ride.
My point is you forgot how hard it is because it's now muscle memory (if you need a recap of the difficulty learn a program with arbitrary array indexing and set you first array index to something exciting like 5 or -6). It also means if you are "fighting the borrow checker" you are still at pre-"muscle memory" stage of learning Rust.
> Maybe you had the luck of learning 0 based language first. Then most of them were a smooth ride.
Given most languages since at least C have 0-based indexing... I would think most engineers picked it up early? I recall reading The C Programming Language 20 years ago, reading the reason and just following what it says. I don't think it's as complex as the descriptions people put forward of "fighting the borrow checker." One is "mentally add/subtract 1" and another is "gain a deep understanding of how memory management works in Rust." I know which one I'm going to find more challenging when I get round to trying to learn Rust...
> Given most languages since at least C have 0-based indexing.
As I mentioned I started Basic on C64, and schools curriculum was in Pascal. I didn't learn about C until I got to college.
> One is "mentally add/subtract 1" and another is "gain a deep understanding of how memory management works in Rust."
In practice they are, you start writing code. At first you trip on your feet, read stuff carefully, then try again until you succeed.
Then one day, you wake up and realize I know 0 indices and/or borrow checker. You don't know how you know, you just know you don't make those mistakes anymore.
I sometimes work on creating my own programming language (because there aren't enough of those already) and one of the things I want to do in it is 1-based indexing. Just so I can do:
...and get "Alex" instead of "Kim".Or take a lesson from languages where this isn't a religious question and do
or if being flexible, Bonus points for adding a macro or function, to make the second form available as first as well.You can't do possibly-erroneous pointer math on a C# object reference. You don't need to deal with the game life cycle AND the memory life cycle with a GC. In Unity they free the native memory when a game object calls Destroy() but the C# data is handled by the GC. Same with any plain C# objects.
To say it's the same as using array indices is just not true.
> You can't do possibly-erroneous pointer math on a C# object reference.
Bevy entity IDs are opaque and you have to try really hard to do arithmetic on them. You can technically do math on instance IDs in Unity too; you might say "well, nobody does that", which is my point exactly.
> You don't need to deal with the game life cycle AND the memory life cycle with a GC.
I don't know what this means. The memory for a `GameObject` is freed once you call `Destroy`, which is also how you despawn an object. That's managing the memory lifecycle.
> In Unity they free the native memory when a game object calls Destroy() but the C# data is handled by the GC. Same with any plain C# objects.
Is there a use for storing data on a dead `GameObject`? I've never had any reason to do so. In any case, if you really wanted to do that in Bevy you could always use an `EntityHashMap`.
More than the trying to find another object kind of math, I was mostly thinking about address aliasing ie cleared handles pointing to re-used space and now live but different objects. You could just say "don't screw up your handle/alloc code" but it's just something you don't have to worry about when you don't roll your own.
The live C# but dead Unity object trick is mostly only useful for dangling handles and IDs and such. It's more that memory won't be ripped out from under you for none Unity data and the usual GC rules apply.
And again the difference between using the GC and rolling your own implementation is pretty big. In your hash map example you still have to solve the issue of how long you keep entries in that map. The GC answers that question.
While we don't need, we can, that is the beauty of languages like C#, that offer the productivity of automatic memory management, and the tools to go low level if desired/needed.
At least in terms of doing math on indices, I have to imagine you could just wrap the type to make indices opaque. The other concerns seem valid though.
Yes but regarding use of uninitialized/freed memory, neither GC nor memory safety really help. Both "only" help with totally incidental and unintentional and small scale violations.
pointers sure are useful
That sounds like Jonathan blow's "rant" on the subject. You can watch it on YouTube https://youtu.be/4t1K66dMhWk
> These crates also get "refactored" every few months, with breaking API changes
I am dealing with similar issues in npm now, as someone who is touching Node dev again. The number of deprecations drives me nuts. Seems like I’m on a treadmill of updating APIs just to have the same functionality as before.
I’ve found the key to the JS ecosystem is to be very picky about what dependencies you use. I’ve got a number of vanilla Bun projects that only depend on TypeScript (and that is only a dev dependency).
It’s not always possible to be so minimal, but I view every dependency as lugging around a huge lurking liability, so the benefit it brings had better far outweigh that big liability.
So far, I’ve only had one painful dependency upgrade in 5 years, and that was Tailwind 3-4. It wasn’t too painful, but it was painful enough to make me glad it’s not a regular occurrence.
"I’ve found the key to the JS ecosystem is to be very picky about what dependencies you use"
Well, I always thought it is the key in every kind of development, JS or else.
I'm finding most of the modern React ecosystem to be made of liabilities.
The constant update cycles of some libraries (hello Router) is problematic in itself, but there's too many fashionable things that sound very good in theory but end up being a huge problem when used in fast-moving projects, like headless UI libraries.
I wish for ecosystems that would let maintainers ship deprecations with auto-fixing lint rules.
Yeah, not only is the structure of business workflows often resistant to mature software dev workflows, developers themselves increasingly lack the discipline, skills or interest in backwards compatibility or good initial designs anyway. Add to this the trend that fast changing software is actually a decent strategy to keep LLMs befuddled, and it’s probably going to become an unofficial standard to maintain support contracts.
On that subject, ironically code gen by ai for ai related work is often least reliable due to fast churn. Langchain is a good example of this and also kind of funny, they suggest / integrate gritql for deterministic code transforms rather than using AI directly: https://python.langchain.com/docs/versions/v0_3/.
Overall.. mastering things like gritql, ast grep, and CST tools for code transforms still pays off. For large code bases, No matter how good AI gets, it is probably better to get them to use formal/deterministic tools like these rather than trust them with code transformations more directly and just hope for the best..
In the Java universe, there is OpenRewrite for this: https://github.com/openrewrite/rewrite
eg: https://docs.openrewrite.org/recipes/java/migrate/joda/jodat...
I occasionally notice libraries or frameworks including OpenRewrite rules in their releases. I've never tried it, though!
Modelica, which is a DSL for modelling DAE systems, has a facility of automated conversions. You can provide a script that automatically modifies user's code then they upgrade to newer version of your lib, or prints the message if automatic migration is not possible.
It is very strange that more mainstream languages do not have such features (and I am not talking about 3rd party tools; in Modelica conversions are part of the language spec).
Kotlin has some limited support for that:
Only works for simple cases but better than nothing. For more, there's OpenRewrite.They do. They're called stable, versioned interfaces and work in any language
the Js ecosystem is by far the worst offender in this area
I’ve found such changes can actually be a draw at first. “Hey look, progress and activity!”. Doubly so as a primarily C++ dev frustrated with legacy choices in stl. But as you and others point out, living with these changes is a huge pain.
Hmmm.. strange. Don’t have issues like that. Can you show us your package json?
And some critical rust issues for games are not dealt with: on tiny glade with the devs did hit a libgcc issue on the native elf/linux build, and we did discovered that the rust toolchain for elf/linux targets does not support the static linking of libgcc (which is mandatory for games, any closed source binary). The issue is opened on rust github since 2015...
But the real issue is the game devs do not know the gnu toolchain (and llvm based) does default to open source software building for elf/linux targets, and that there is more work, ABIs related, to do for game binaries on those platforms.
Great write-up. I do the array indexing, and get runtime errors by misindexing these more often than I'd like to admit!
I also hear you on the winit/wgpu/egui breaking changes. I appreciate that the ecosystem is evolving, but keeping up is a pain. Especially when making them work together across versions.
I've always thought about this. In my mind there are two ways a language can guarantee memory safety:
* Simply check all array accesses and pointer de references and panic if we are out of bounds and panic/throw an exception/etc. if we are doing something wrong.
* Guarantee at compile-time that we are always accessing valid memory, to prevent even those panics.
Rust makes a lot of effort to reach the second goal, but, since it gives you integers and arrays, it makes the problem fundamentally insoluble.
The memory it wants so hard to regulate access to is just an array, and a pointer is just an index.
Rust has plenty of constructs that do runtime checks in part to get around the fact that not everything can be expressed in a manner that the borrow checker can understand at compile time. IMO Rust should treat the array/index case in the same manner as these and provide a standard interface that prevents "use after free" and so on.
If you're looking for a stable GUI toolkit, there is Slint
Unless you would like to have a tree view widget.
You can have a tree view Slint. There is one example in https://github.com/slint-ui/cargo-ui And also in https://github.com/LibrePCB/LibrePCB/pull/1504
Was not familiar with this - looks great!
> Someone else set out to do something similar in C#/Unity and had something going in less than two years.
But in that case doesn't the garbage collector ruin the experience for the user? Because that's the argument I always hear in favor of Rust.
For a while now Unity has an incremental garbage collector where you pay a small amount of time per frame instead of introducing large pauses every time the GC kicks in.
Even without the incremental GC it's manageable and it's just part of optimising the game. It depends on the game but you can often get down to 0 allocations per frame by making using of pooling and no alloc APIs in the engine.
You also have the tools to pause GC so if you're down to a low amount of allocation you can just disable the GC during latency sensitive gameplay and re-enable and collect on loading/pause or other blocking screens.
Obviously its more work than not having to deal with these issues but for game developers its probably a more familiar topic than working with the borrow checker and critically allows for quicker iteration and prototyping.
Finding the fun and time to market are top priority for games development.
At this point I really wonder why anyone would use Rust for anything other than low-level system tools/libraries or kernel development ...
Anything with a graphical shell is probably better written in a GC'd language, but I'd love to hear some counter-arguments.
It depends on the kind of game you’re making.
If it’s a really logic-intensive game like Factorio (C++), or RollerCoaster Tycoon (Assembly), then I don’t think you can get away with something like Unity.
For simpler things that have a lot of content, I don’t think you can get away with Rust, until its ecosystem grows to match the usual game engines of today.
I mean, you could write your logic engine in Rust (as a library), and do all the rest in a more ergonomic language, with a GC.
We've got another one on our end. It's much more to do with Bevy than Rust, though. And I wonder if we would have felt the same if we had chosen Fyrox.
> Migration - Bevy is young and changes quickly.
We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice. And the issues we had to deal with were runtime failures, not build time failures. It broke the large libraries we were using, like space_editor, until point releases and bug fixes could land. We ultimately decided to migrate to Three.js.
> The team decided to invest in an experiment. I would pick three core features and see how difficult they would be to implement in Unity.
This is exactly what we did! We feared a total migration, but we decided to see if we could implement the features in Javascript within three weeks. Turns out Three.js got us significantly farther than Bevy, much more rapidly.
> We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice.
I definitely sympathize with the frustration around the churn--I feel it too and regularly complain upstream--but I should mention that Bevy didn't really have anything production-quality for animation until I landed the animation graph in Bevy 0.15. So sticking with a compatible API wasn't really an option: if you don't have arbitrary blending between animations and opt-in additive blending then you can't really ship most 3D games.
[dead]
> Nobody has really pushed the performance issues.
This is clearly false. The Bevy performance improvements that I and the rest of the team landed in 0.16 speak for themselves [1]: 3x faster rendering on our test scenes and excellent performance compared to other popular engines. It may be true that little work is being done on rend3, but please don't claim that there isn't work being done in other parts of the ecosystem.
[1]: https://bevyengine.org/news/bevy-0-16/
I read the original post as saying that no one has pushed the engine to the extent a completed AAA game would in order to uncover performance issues, not that performance is bad or that Bevy devs haven’t worked hard on it.
Wonderful work!
...although the fact that a 3x speed improvement was available kind of proves their point, even if it may be slightly out of date.
Most game engines other than the latest in-house AAA engines are leaving comparable levels of performance on the table on scenes that really benefit from GPU-driven rendering (that's not to say all scenes, of course). A Google search for [Unity drawcall optimization] will show how important it is. GPU-driven rendering allows developers to avoid having to do all that optimization manually, which is a huge benefit.
[dead]
[dead]
A owns B, and B can find A
I think you should think less like Java/C# and more like database.
If you have a Comment object that has parent object, you need to store the parent as a 'reference', because you can't put the entire parent.
So I'll probably use Box here to refer to the parent
?? the whole point of Box<T> is to be an owning reference, you can’t have multiple children refer to the same parent object if you use a Box
Why is this sad? He's realized that the best language is C# and the best platform for games is Unity! This is progress, and that's good.
Pin and unpin handle circular references, sort of.
std::rc::Weak?
The GP does mention Rc/Arc.
Rc & Arc don't have the same behavior as Weak
Can you use Weak without either Rc or Arc?
not relevant but yes, you can.
Not really, they are just tools to expose circular references (or self-references) that are *already managed by unsafe code*.
More than anything else, this sounds like a good lesson in why commercial game engines have taken over most of game dev. There are so many things you have to do to make a game, but they're mostly quite common and have lots of off-the-shelf solutions.
That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
> That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
For the 4 people on HN not aware of it, this is a riff on Greenspun's tenth rule:
> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
> More than anything else, this sounds like a good lesson in why commercial game engines have taken over most of game dev. There are so many things you have to do to make a game, but they're mostly quite common and have lots of off-the-shelf solutions.
> That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
But using Bevy isn't writing your own game engine. Bevy is 400k lines of code that does quite a lot. Using Bevy right now is more like taking a game engine and filling in some missing bits. While this is significantly more effort than using Unity, it's an order of magnitude less work than writing your own game engine from scratch.
But it also doesn't have even 10% of Unity features. Bevy docs themselves warn you that you are probably better off with something like Godot, at least while Bevy is still in early development.
Over the past year I've been working at my studio to add enough features to Bevy to ship real apps, and Bevy is at the point where one can reasonably do that, depending on your needs.
[flagged]
I think this has less to do with Rust and commercial game engines being better and more of a fetish that game programmers seem to have for entity component systems. One does not have to look far to see similar projects repeated in C++ years prior.
And yet, if making your own game engine makes it intellectually stimulating enough to actually make and ship a game, usually for near free, going 10x slower is still better than going at a speed of zero.
If anything, making your own game engine makes process more frustrating, time consuming and leads to burnout quicker than ever, especially when your initial goal was just to make a game but instead you stuck figuring out your own render pipeline or inventing some other wheel. I have a headache just from thinking that at some point in engine development person would have to spend literal weeks figuring out export to Android with proper signage and all, when, again, all they wanted is to just make a game.
This seems entirely subjective, most importantly hinging on this part here: "all they wanted is to just make a game".
If you just want to make a game, yes, absolutely just go for Unity, for the same reason why if you just want to ship a CRUD app you should just use an established batteries-included web framework. But indie game developers come in all shapes and some of them don't just want to make a game, some of them actually do enjoy owning every part of the stack. People write their own OSes for fun, is it so hard to believe that people (who aren't you) might enjoy the process of building a game engine?
Speaking as someone who has made their own game engine for their indie game: it really depends on the game, and on the developer's personality and goals. I think you're probably right for the majority of cases, since the majority of games people want to make are reasonably well-served by general-purpose game engines.
But part of the thing that attracted me to the game I'm making is that it would be hard to make in a standard cookie-cutter way. The novelty of the systems involved is part of the appeal, both to me and (ideally) to my customers. If/when I get some of those (:
I would bet that if you want to build a game engine and not the game, the game itself is probably not that compelling. Could still break out, like Minecraft, but if someone has an amazing game idea I would think they would want to ship it as fast as possible.
It is orders of magnitude easier to write an game engine for yourself than it is to create a monster like unity or unreal that needs to appeal to everyone and support every kind of game.
If we are talking 2d, it can be months to hack together a basic engine. 3d can be a bit harder but far from decades.
Thing is, if you designed your engine well and implemented great tooling, it should make it faster to implement the actual content of the game.
So upfront cost to be faster later. At least in theory. Obviously you might end up with subpar tooling that is worse than what a commercial could offers. But if you do something like an RPG with a lot of contend, every bit of extra efficiency in creating that content can help a lot.
Now, obviously from a purely commercial standpoint, not using an established engine makes nearly never sense. Super risky. Hard to hire outside talent. You are only justified when you have very, very specific needs that are hard to implement in a generic engine.
Also for us with an ADHD brain, hard things tend to be easier and easy things very hard, so yes the extra mental stimulation of writing an engine can help.
This is correct. If you want to build a game engine, you better know what kind of game it is by making at least a playable prototype in a conventional engine.
Making an actual indie game can take from 6 months (tiny) to 4-5years. If you multiply that by 10x, the upper bound would be 40-50 years. Of course, that's not how it would be but one has to consider whether their goal is to build a game engine OR a game, doing both at the same is almost guaranteed failure (statistically speaking).
Being intellectually stimulating doesn't translate into sales, gameplay might.
> And yet, if making your own game engine makes it intellectually stimulating enough to actually make and ship a game, usually for near free, going 10x slower is still better than going at a speed of zero.
Generally, I've seen the exact opposite. People who code their own engines tend to get sucked into the engine and forget that they're supposed to be shipping a game. (I say this as someone who has coded their own engine, multiple times, and ended up not shipping a game--though I had a lot of fun working on the engine.)
The problem is that the fun, cool parts about building your own game engine are vastly outnumbered by the boring parts: supporting level and save data loading/storage, content pipelines, supporting multiple input devices and things like someone plugging in an XBox controller while the game is running and switching all the input symbols to the new input device in real time, supporting various display resolutions and supporting people plugging in new displays while the game is running, and writing something that works on PC/mobile/Switch(2)/XBox/Playstation... all solved problems, none of which are particularly intellectually stimulating to solve correctly.
If someone's finances depend on shipping a game that makes money, there's really no question that you should use Unity or Unreal. Maybe Godot but even that's a stretch. There's a small handful of indie custom game engine success stories, including some of my favorites like The Witness and Axiom Verge, but those are exceptions rather than the rule. And Axiom Verge notably had to be deeply reworked to get a Switch release, because it's built on MonoGame.
Indeed there are people who want to make games, and there are people who think they want to make games, but want to make game engines (I'm speaking from experience, having both shipped games and keeping a junk drawer of unreleased game engines).
Shipping a playable game involves so so many things beyond enjoyable programming bits that it's an entirely different challenge.
I think it's telling that there are more Rust game engines than games written in Rust.
This does not apply just to games, but to most any application designed to be used by human beings, particularly complete strangers.
Typically the “itch is scratched” long before the application is done.
I'm in that camp. After shifting from commercial gamedev I've been itching to build something. I kept thinking "I wanna build a game" but couldn't really think what that came is. Then realised "Actually it's because I want to build an engine" haha
I disagree with this assessment.
After 30 years participating in Gamedev communities I feel like the "don't build an engine" was always an empty strawman aimed at nobody in reality.
The Venn diagram between the people interested in technical aspects of an engine and in also shipping a game is probably composed of a few hundred individuals, most of them working for studios.
The "kid that wants to make an engine to make an MMO" is gonna do neither. It's just a meme.
I shouldn't really care about it myself, but I do because Unity sucked the air out of every gamedev discussion and now there are almost no spaces to discuss anything advanced (even if it's applicable to Unity/Unreal/Godot).
This person develops
My experience is the opposite. Plenty of intellectual stimulation comes from actually making the game. Designing and refining gameplay mechanics, level design, writing shaders, etc.
What really drags you down in games is iteration speed. It can be fun making your own game engine at first but after awhile you just want the damn thing to work so you can try out new ideas.
I really like Rust as a replacement for C++, especially given that C++ seems to become crazier every year. When reasonable, nowadays I always use Rust instead of C++.
But for the vast majority of projects, I believe that C++ is not the right language, meaning that Rust isn't, either.
I feel like many people choose Rust because is sounds like it's more efficient, a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not) or for C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project).
It's a bit like choosing Gentoo "because it's faster" (or worse, because it "sounds cool"). If that's the only reason, it's probably a bad choice (disclaimer: I use and love Gentoo).
I have a personal-use app that has a hot loop that (after extensive optimization) runs for about a minute on a low-powered VPS to compute a result. I started in Java and then optimized the heck out of it with the JVM's (and IntelliJ's) excellent profiling tools. It took one day to eliminate all excess allocations. When I was confident I couldn't optimize the algorithm any further on the JVM I realized that what I'd boiled it down to looked an awful lot like Rust code, so I thought why not, let's rewrite it in Rust. I took another day to rewrite it all.
The result was not statistically different in performance than my Java implementation. Each took the same amount of time to complete. This surprised me, so I made triply sure that I was using the right optimization settings.
Lesson learned: Java is easy to get started with out of the box, memory safe, battle tested, and the powerful JIT means that if warmup times are a negligible factor in your usage patterns your Java code can later be optimized to be equivalent in performance to a Rust implementation.
I wrote a few benchmarks a few years ago comparing JS vs C++ compiled to WASM vs C++ compiled to x64 with -O3.
I was surprised that the heaviest one (a lot of float math) run about the same speed in JS vs C++ -> x64. The code was several nested for loops manipulating a buffer and using only local-scoped variables and built-in Math library functions (like sqrt) with no JS objects/arrays besides the buffer. So the code of both implementations was actually very similar.
The C++ -> WASM version of that one benchmark was actually significantly slower than both the JS and C++ -> x64 version (again, a few years ago, I imagine it got better now).
Most compilers are really good at optimizing code if you don't use the weird "productivity features" of your higher level languages. The main difference of using lower level languages is that not being allowed to use those productivity features prevents you from accidentally tanking performance without noticing.
I still hope to see the day where a language could have multiple "running modes" where you can make an individual module/function compile with a different feature-set for guaranteeing higher performance. The closest thing we have to this today is Zig using custom allocators (where opting out of receiving an allocator means no heap allocations are guaranteed for the rest of the stack call) and @setRuntimeSafety(false) which disables runtime safety checks (when using ReleseSafe compilation target) for a single scope.
I've also seen Cython used to this effect for hotspots or entire applications in scientific Python code.
I'd rather write rust than java, personally
If I have all the time in the world, sure. When I'm racing against a deadline, I don't want to wrestle with the borrow checker too. Sure, it's objections help with the long term quality of the code and reduce bugs but that's hard to justify to a manager/process driven by Agile and Sprints. Quite possible that an experienced Rust dev can be very productive but there aren't tons of those going around.
Java has the stigma of ClassFactoryGeneratorFactory sticking to it like a nasty smell but that's not how the language makes you write things. I write Java professionally and it is as readable as any other language. You can write clean, straightforward and easy to reason code without much friction. It's a great general purpose language.
Java is incredibly productive - it's fast and has the best tooling out there IMO.
Unfortunately it's not a good gaming language. GC pauses aren't really acceptable (which C# also suffers from) and GPU support is limited.
Miguel de Icaza probably has more experience than anyone building game engines on GC platforms and is very vocally moving toward reference counted languages [1]
[1] https://www.youtube.com/watch?v=tzt36EGKEZo
He would vouch as great as he might be, he has a bias, and Mono GC was never a great implementation.
Also here is how great his new Swift love performs in reality against modern tracing GCs,
https://github.com/ixy-languages/ixy-languages
Interesting link, but that's a nearly 7 year old version of Swift (4.2) running on Linux.
I wonder how the performance would be with Swift 6.1 which has improved support for Linux.
Probably much better, given the improvements on the Swift optimizer, but just goes to show "tracing GC" bad, "reference counting GC" good isn't as straighforward as people make it to be, even if they are renowned developers.
It's a cherry picked, out-of-date counter-example. Swift isn't designed for building drivers.
In reality, a lot of Swift apps are delegating to C code. My own app (in development) does a lot of processing, almost none of which happens in Swift, despite the fact I spend the vast majority of my time writing Swift.
Swift an excellent C glue language, which Java isn't. This is why Swift will probably become an excellent game language eventually.
It surely is, according to Apple's own documentation.
> Swift is a successor to the C, C++, and Objective-C languages. It includes low-level primitives such as types, flow control, and operators. It also provides object-oriented features such as classes, protocols, and generics.
-- https://developer.apple.com/swift/
If developers have such a big problem glueing C libraries into Java JNI, or Panama, then maybe game industry is not where they are supposed to be, when even Assembly comes to play.
> GC pauses aren't really acceptable
Java has made great progress with low-pause (~1 ms) garbage collectors like ZGC and Shenandoah since ~5 years ago.
People have 240hz monitors these days, you have a bit over 4ms to render a frame. If that 1ms can be eliminated or amortised over a few frames it's still a big deal, and that's assuming 1ms is the worst case scenario and not the best.
I don’t think you need to work in absolutes here. There are plenty of games that do not need to render at 240hz and are capable of handling pauses up to 1ms. There’s tons of games that are currently written in languages that have larger GC pauses than that.
What about the C# garbage collector? Is it much better? Because Unity is in C#, right?
Unity uses aginging Mono runtime, because of politics with Xamarin, before its acquisition by Microsoft, migration to .NET Core is still in process.
Additionally they have HPC#, which is a C# subset for high performance code, used by the DOTS subsystem.
Many people mistake their C# experience in Unity, with what the state of the art in .NET world is.
Read the great deep dive blog posts from Stephen Toub on Microsoft DevBlogs on each .NET Core release since version 5.0.
Yes and it's impressive.
For the competitive Minecraft player, I suspect starting their VM with XX:+UnlockExperimentalVMOptions is normal.
A casual gamer is however not going to enjoy that.
Are you sure that enabling ZGC or Shenandoah requires UnlockExperimentalVMOptions ?
I have found that the ClassFactoryGeneratorFactories sneak up on you. Even if you don't want to the ecosystem slowly but surely nudges you that way.
That has not been my experience. Sure, you don't have any control over the third-party stuff but I haven't seen this issue being widespread in the mainstream third-party libraries I've used e.g. logback, jackson, junit, jedis, pgJDBC etc which are very well known/widely used. The only place I've actually seen proliferation of this was by a contractor, who I suspect, was trying to ensure job security behind impenetrability.
It is ironic how Java got that stigma and other systems that are just as bad, or worse, like Objective-C, have not.
Well I have never used Objective-C so I can't comment on it.
On Objective-C, due to the way the language works, besides ClassFactoryGeneratorFactories, you would need to add all parameter names to the identifier.
Here, enjoy https://github.com/Quotation/LongestCocoa
There is even a style guide on it,
https://developer.apple.com/library/archive/documentation/Co...
I'd have said the same thing 10 years ago (or, I would have if I were comparing 10-year-old Java with modern Rust), but Java these days is actually pretty ergonomic. Rust's borrow checker balances out the ML-style niceties to bring it down to about Java's level for me, depending on the application.
Note that I mentioned JVM languages. There is Scala, Kotlin and others. Kotlin is the default for Android, and it is really nice.
Kotlin is nice indeed. Most of the issues I had with it were in interop with Java code (those pesky platform types, that behave like non-nullable but are nullable: and you are back in the NPE swamp!)
I’d rather write Java than Rust, personally
Same here, and if I get bored with Java, there is also Scala, Kotlin and Clojure to chose from.
However, I would still prefer C# or F#.
Hence why I enjoy both stacks, lots of goodies to chose from, with great tooling.
I would do C#, but I don’t want to be in async/await hell.
Also it’s subjective but PascalCase really irks me.
PascalCase has been my favourite since MS-DOS days, I have been through most Borland products, and Microsoft ones, alongside many Pascal influenced languages, thus it feels like home. :)
But yeah it is subjective, also don't have much qualms with other alternatives.
Wow, way to be un-hip.
>I realized that what I'd boiled it down to looked an awful lot like Rust code
you're no longer writing idiomatic java at this point - probably with zero object oriented programming. so might as well write it in Rust from the get-go.
If I'd started in Rust I likely wouldn't have finished it at all. Java allowed me to start out just focused on the algorithm with very little regard for memory usage patterns and then refactor towards zero garbage collection. Rust can sort of allow the same thing by just sprinkling everything with clone and/or Rc/Arc, but it's much more in the way than just having a garage collector there automatically.
Yes but it would just be the hot loop in this case; the rest of the app can still be in idiomatic Java, and you still get the GC.
Exactly. Write it in Java, optimize what you need to, leave the rest alone.
As polyglot dev, I never understood this religious approach that it has to be 100% pure unadulterated in language XYZ for performance.
Nope, embrace the productivity of managed languages, if really needed, package that rest in a native library, done.
> "I really like Rust as a replacement for C++, especially given that C++ seems to become crazier every year."
I don't understand this argument, which I've also seen it used against C#, quite frequently. When a language offers new features, you're not forced to use them. You generally don't even need to learn them if you don't want. I do think some restrictions in languages can be highly beneficial, like strong typing, but the difference is that in a weakly typed language that 'feature' is forced upon you, whereas random new feature in C++ or C# is near to always backwards compatible and opt-in only.
For instance, to take a dated example - consider move semantics in C++. If you never used it anywhere at all, you'd have 0 problems. But once you do, you get lots of neat things for free. And for these sort of features, I see no reason to ever oppose their endless introduction unless such starts to imperil the integrity/performance of the compiler, but that clearly is not happening.
You can't avoid a lot of this stuff, once libraries start using it or colleagues add it to your codebase then you need to know it. I'd argue you need to know it well before you decide to exclude it.
Then better be quite picky of what libraries one choses, because that is the thing, while we may not use them, the libraries migth impose them on us.
Same applies having to deal with old features, replaced by modern ways, old codebases don't get magically rewritten, and someone has to understand modern and old ways.
Likewise I am not a big fan of C and Go, as visible by my comment history, yet I know them well enough, because in theory I am not forced to use them, in practice, there are business contexts where I do have to use them.
My experience with C++ is that it fundamentally "looks worse" and has worse tooling than more modern languages. And it feels like they keep adding new features that make it all even worse every year.
Sure, you don't have to use them, but you have to understand them when used in libraries you depend on. And in my experience in an environment of C++ developers, many times you end up having some colleagues who are very vocal about how you should love the language and use all the new features. Not that this wouldn't happen in Java or Kotlin, but the fact is that new features in those languages actually improve the experience with the language.
I'm a C++ developer and it's always great when we move to a newer language version, with all the language improvements that come with that.
>> a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not)
The OP is doing game development. It’s possible to write a performant game in Java but you end up fighting the garbage collector the whole way and can’t use much library code because it’s just not written for predictable performance.
I didn't mean that the OP should use Java. BTW the OP does not use C++, but Rust.
This said, they moved to Unity, which is C#, which is garbage collected, right?
C# also has "Value Types" which can be stack allocated and passed by value. They're used extensively in game dev.
You can already get halfway there with Java, by making use of Panama, even if not exposed at language level.
And lets be real, how many devs manage to sell as many copies as Minecraft?
Too much discussion about what language to use, instead of what game to make.
Hopefully that changes once Java releases their value types.
The core unity game engine is c++ that you can't access, but all unity games are written in c#.
And you could do that with any garbage collected language, right? You could reuse that C++ core with a JVM language.
Unity games are C#, the engine itself is C++.
C#/.NET has huge feature area for low-level/hands-on memory manipulation, which is highly relevant to gamedev.
I think the choice of C++ vs JVM depends on your project. If you're not using the benefits of "unsafe" languages then it probably doesn't matter.
But if you are after performance how do do the following in Java? - Build an AOS so that memory access is linear re cache. Prefetch. Use things like _mm_stream_ps() to tell the CPU the cache line you're writing to doesn't need to be fetched. Share a buffer of memory between processes by atomic incrementing a head pointer.
I'm pretty sure you could build an indie game without low-level C++, but there is a reason that commercial gamedev is typically C++.
While there are many technical reasons to use C++ over Java in game development, many commercial games could be easily done in Java, as they are A or AA level at most.
Had Notch thought too much about which language to use, maybe he would still be trying to launch a game today.
Minecraft was Indie then. And anyway, it's now in C++.
Many people dream to make it as indie, most don't even achieve that.
No it isn't, there are now two versions of Minecraft, the classical one, and Minecraft Bedrock, that is the one written in C++.
Minecraft Bedrock doesn't have half of the community that Minecraft classical enjoys, hence why Microsoft is trying to use JavaScript based extensions to bring the mod community into Minecraft Bedrock.
Finally without Minecraft classical market success, there wouldn't exist Minecraft Bedrock at all, so Java did serve well enough to Notch's fortunes.
I'm not knocking indie development, the scene is very very vibrant. But indies don't typically push the hardware to its limits the same way.
And Java was a perfectly good choice of language for Notch for the same reasons.
I don't play Minecraft so I guess I'm outta touch. I knew about Bedrock and I've heard kids call Java the "old one". I didn't realise there's still an active community. Thanks for the correction :)
Literally no one who has access to the Java version cares even a little bit about Minecraft bedrock edition.
> but there is a reason that commercial gamedev is typically C++.
Sure, and that's kind of my point. There are a few use-cases where C++ is actually needed, and for those cases, Rust (the language) is a good alternative if it's possible to use it.
But even for gamedev, the article here says that they moved to Unity. The core of Unity is apparently C++, but users of Unity code in C#. Which kind of proves my point: outside of that core that actually needs C++, it doesn't matter much. And the vast majority of software development is done outside of those core use-cases, meaning that the vast majority of developers do not need Rust.
We were using a modified Luajit, in assembly, with a bit of other assembly dotted around the place. That assembly takes a long time to write (to beat a modern C++ compiler).
Then we had C++ for all our low level code and Lua for gameplay.
We were floating a middle layer of Rust for Lua bindings and the glue code for our transformation pipeline, but there was always a little too much friction to introduce. What we were particularly interested in was memory allocation bugs (use after free and leaks) and speeding up development of the engine. So I could see it having a place.
The advantage C has over C++ is it won't let you use templates.
this!
This couldn't be any more accurate even if you compiled with CFLAGS='-march native ' and RUSTFLAGS='-C can't remember insert here'
VPS/Cloud providers skimp on RAM. The JVM sucks for any low RAM workload, where you want the smallest possible single server instance. The startup times of JVM based applications are also horrendous. How many gigabytes of RAM does Digital Ocean give you with your smallest instance? They don't. They give you 512MiB. Suddenly using Java is no longer an option, because you will be wasting your day carefully tuning literally everything to fit in that amount.
You can get decent startup times if you have fewer dependencies. The JVM itself starts fairly quickly (<200 ms), the problem is all the class loading. If your "app" is a bloated multi gigabyte monstrosity... good luck!
I write a lot of Rust, but as you say, it's basically a vastly improved version of C++. C++ is not always the right move!
For all my personal projects, I use a mix of Haskell and Rust, which I find covers 99% of the product domains I work in.
Ultra-low level (FPGA gateware): Haskell. The Clash compiler backend lets you compile (non-recursive) Haskell code directly to FPGA. I use this for audio codecs, IO expanders, and other gateware stuff.
Very low-level (MMUless microcontroller hard-realtime) to medium-level (graphics code, audio code): Rust dominates here
High-level (have an MMU, OS, and desktop levels of RAM, not sensitive to ~0.1ms GC pauses): Haskell becomes a lot easier to productively crank out "business logic" without worrying about memory management. If you need to specify high-level logic, implement a web server, etc. it's more productive than Rust for that type of thing.
Both languages have a lot of conceptual overlap (ADTs, constrained parametric types, etc.), so being familiar with one provides some degree of cross-training for the other.
What do you mean by 'a mix of Haskell and Rust'? Is that a per-project choice or do you use both in a single project? I'm interested in the latter. If so, could you point me to an example?
Another question is about Clash. Your description sounds like the HLS (high level synthesis) approach. But I thought that Clash used a Haskell -based DSL, making it a true HDL. Could you clarify this? Thanks!
> C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project)
If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.
> C++ has been faster than C for a long time.
What is your basis for this claim? C and C++ are both built on essentially the same memory and execution model. There is a significant set of programs that are valid C and C++ both -- surely you're not suggesting that merely compiling them as C++ will make them faster?
There's basically no performance technique available in C++ that is not also available in C. I don't think it's meaningful to call one faster than the other.
This is really an “in theory” versus “in practice” argument.
Yes, you can write most things in modern C++ in roughly equivalent C with enough code, complexity, and effort. However, the disparate economics are so lopsided that almost no one ever writes the equivalent C in complex systems. At some point, the development cost is too high due to the limitations of the expressiveness and abstractions. Everyone has a finite budget.
I’ve written the same kinds of systems I write now in both C and modern C++. The C equivalent versions require several times the code of C++, are less safe, and are more difficult to maintain. I like C and wrote it for a long time but the demands of modern systems software are a beyond what it can efficiently express. Trying to make it work requires cutting a lot of corners in the implementation in practice. It is still suited to more classically simple systems software, though I really like what Zig is doing in that space.
I used to have a lot of nostalgia for working in C99 but C++ improved so rapidly that around C++17 I kind of lost interest in it.
None of this really supports your claim that "C++ has been faster than C for a long time."
You can argue that C takes more effort to write, but if you write equivalent programs in both (ie. that use comparable data structures and algorithms) they are going to have comparable performance.
In practice, many best-in-class projects are written in C (Lua, LuaJIT, SQLite, LMDB). To be fair, most of these projects inhabit a design space where it's worth spending years or decades refining the implementation, but the combination of performance and code size you can get from these C projects is something that few C++ projects I have seen can match.
For code size in particular, the use of templates makes typical C++ code many times larger than equivalent C. While a careful C++ programmer could avoid this (ie. by making templated types fall back to type-generic algorithms to save on code size), few programmers actually do this, and in practice you end up with N copies of std::vector, std::map, etc. in your program (even the slow fallback paths that get little benefit from type specialization).
> What is your basis for this claim?
Great question! Here's one answer:
Having written a great deal of C code, I made a discovery about it. The first algorithm and data structure selected for a C program, stayed there. It survives all the optimizations, refactorings and improvements. But everyone knows that finding a better algorithm and data structure is where the big wins are.
Why doesn't that happen with C code?
C code is not plastic. It is brittle. It does not bend, it breaks.
This is because C is a low level language that lacks higher level constructs and metaprogramming. (Yes, you can metaprogram with the C preprocessor, a technique right out of hell.) The implementation details of the algorithm and data structure are distributed throughout the code, and restructuring that is just too hard. So it doesn't happen.
A simple example:
Change a value to a pointer to a value. Now you have to go through your entire program changing dots to arrows, and sprinkle stars everywhere. Ick.
Or let's change a linked list to an array. Aarrgghh again.
Higher level features, like what C++ and D have, make this sort of thing vastly simpler. (D does it better than C++, as a dot serves both value and pointer uses.) And so algorithms and data structures can be quickly modified and tried out, resulting in faster code. A traversal of an array can be changed to a traversal of a linked list, a hash table, a binary tree, all without changing the traversal code at all.
At a certain point, C++ compile time computation becomes something you really can’t do in C. https://codegolf.stackexchange.com/a/269772
I know you're going to reply with "BUT MY PREPROCESSOR", but template specialization is a big win and improvement (see qsort vs std::sort).
I have used the preprocessor to avoid this sort of slowdown in the past in a binary search function:
https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...
The performance gain comes not from eliminating the function overhead, but enabling conditional move instructions to be used in the comparator, which eliminates a pipeline hazard on each loop iteration. There is some gain from eliminating the function overhead, but it is tiny in comparison to eliminating the pipeline hazard.
That said, C++ has its weaknesses too, particularly in its typical data structures, its excessive use of dynamic memory allocation and its exception handling. I gave an example here:
https://news.ycombinator.com/item?id=43827857
Honestly, I think these weaknesses are more severe than qsort being unable to inline the comparator.
A comparator can be inlined just fine in C. See here where the full example is folded to a constant: https://godbolt.org/z/bnsvGjrje
Does not work if the compiler can not look into the function, but the same is true in C++.
That does not show the comparator being inlined since everything was folded into a constant, although I suppose it was. Neat.
Edit: It sort of works for the bsearch() standard library function:
https://godbolt.org/z/3vEYrscof
However, it optimized the binary search into a linear search. I wanted to see it implement a binary search, so I tried with a bigger array:
https://godbolt.org/z/rjbev3xGM
Now it calls bsearch instead of inlining the comparator.
With optimization, it will really inline it with an unknown size array: https://godbolt.org/z/sK3nK34Y4
That's not the most general case, but it's better than I expected.
Nice catch. I had goofed by omitting optimization when checking this from an iPad.
That said, this brings me to my original reason for checking this, which is to say that it did not use a cmov instruction to eliminate unnecessary branching from the loop, so it is probably slower than a binary search that does:
https://en.algorithmica.org/hpc/data-structures/binary-searc...
That had been the entire motivation behind this commit to OpenZFS:
https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...
It should be possible to adapt this to benchmark both the inlined bsearch() against an implementation designed to encourage the compiler to emit a conditional move to skip a branch to see which is faster:
https://github.com/scandum/binary_search
My guess is the cmov version will win. I assume merits a bug report, although I suspect improving this is a low priority much like my last report in this area:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110001
C and C++ do have very different memory models, C essentially follows the "types are a way to decode memory" model while C++ has an actual object model where accessing memory using the wrong type is UB and objects have actual lifetimes. Not that this would necessarily lead to performance differences.
When people claim C++ to be faster than C, that is usually understood as C++ provides tools that makes writing fast code easier than C, not that the fastest possible implementation in C++ is faster than the fastest possible implementation in C, which is trivially false as in both cases the fastest possible implementation is the same unmaintainable soup of inline assembly.
The typical example used to claim C++ is faster than C is sorting, where C due to its lack of templates and overloading needs `qsort` to work with void pointers and a pointer to function, making it very hard on the optimiser, when C++'s `std::sort` gets the actual types it works on and can directly inline the comparator, making the optimiser work easier.
Try putting objects into two linked lists in C using sys/queue.h and in C++ using the STL. Try sorting the linked lists. You will find C outperforms C++. That is because C’s data structures are intrusive, such that you do not have external nodes pointing to the objects to cause an extra random memory access. The C++ STL requires an externally allocated node that points to the object in at least one of the data structures, since only 1 container can manage the object lifetimes to be able to concatenate its node with the object as part of the allocation. If you wish to avoid having object lifetimes managed by containers, things will become even slower, because now both data structures will have an extra random memory access for every object. This is not even considering the extra allocations and deallocations needed for the external nodes.
That said, external comparators are a weakness of generic C library functions. I once manually inlined them in some performance critical code using the C preprocessor:
https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...
It seems like your argument is predicated on using the C++ STL. Most people don’t for anything that matters and it is trivial to write alternative implementations that have none of the weaknesses you are arguing. You have created a bit of a strawman.
One of the strengths of C++ is that it is well-suited to compile-time codegen of hyper-optimized data structures. In fact, that is one of the features that makes it much better than C for performance engineering work.
Most C++ code I have seen uses the STL. As for “hyper optimized” data structures, you already have those in C. See the B-Tree code those binary search routine I patched to run faster. Nothing C++ adds improves upon what you can do performance wise in C.
You have other sources of slow downs in C++, since the abstractions have a tendency to hide bloat, such as excessive dynamic memory usage, use of exceptions and code just outright compiling inefficiently compared to similar code in C. Too much inlining can also be a problem, since it puts pressure on CPU instruction caches.
C and C++ can be made to generate pretty much the same assembly, sure. I find it much easier to maintain a template function than a macro that expands to a function as you did in the B-Tree code, but reasonable people can disagree on that.
Abstractions can hide bloat for sure, but the lack of abstraction can also push coders towards suboptimal solutions. For example C code tends to use linked lists just because its easy to implement when a dynamic array such as std::vector would have been more performant.
Too much inlining can of course be a problem, the optimizer has loads of heuristics to decide if inlinining is worth it or not, and the programmer can always mark the function as `[[gnu::noinline]]` if necessary. It is not because C++ makes it possible for the sort comparator to be inlined that it will.
In my experience, exceptions have a slightly positive impact on codegen (compared to code that actually checks error return values, not code that ignores them) because there is no error checking on the happy path at all. The sad path is greatly slowed down though.
Having worked in highly performance sensitive code all of my career (video game engines and trading software), I would miss a lot of my toolbox if I limited myself to plain C and would expect to need much more effort to achieve the same result.
Having worked on performance sensitive code (OpenZFS), I have found less to be more.
While C code makes more heavy use of linked lists than C++ code, most of the C code I have helped maintain made even heavier use of balanced binary search trees and B-trees than linked lists. It also used SLAB allocation to amortize allocation costs. In the case of OpenZFS, most of the code operated in the kernel where external memory fragmentation makes dynamic arrays (and “large” arrays in general) unusable.
I think you have not seen the C libraries available to make C even better. libuutil and libumem from OpenSolaris make doing these things extremely nice. Some of the first code I wrote professionally (and still maintain) was written in C++. There really is nothing from C++ that I miss in C when I have such libraries. In fact, I have long wanted to rewrite that C++ code in C since I find it easier to maintain due to the reduced abstractions.
> Nothing C++ adds improves upon what you can do performance wise in C
Implementations of both languages provide inline asm, so this is trivially true. Yet it is an uninteresting statement.
This is not a convincing argument for C. None of this matches my experience across many companies. In particular, the specific things you cite — excessive dynamic memory usage, exceptions, bloat — are typically only raised by people who don’t actually use C++ in the kinds of serious applications where C++ is the tool of choice. Sure, you could write C++ the way you describe but that is just poor code. You can do that in any language.
For example, exceptions have been explicitly disabled on every C++ code base I’ve ever worked on, whether FAANG or a smaller industrial company. It isn’t compatible with some idiomatic high-performance software architectures so it would be weird to even turn it on. C++ allows you to strip all bloat at compile-time and provides tools to make it easy in a way that C could only dream of, a standard metaprogramming optimization. Excessive dynamic allocation isn’t a thing in real code bases unless you are naive. It is idiomatic for many C++ code bases to never do any dynamic allocation at runtime, never mind “excessive”.
C++ has many weaknesses. You are failing to identify any that a serious C++ practitioner would recognize as valid. In all of this you also failed to make an argument for why anyone should use C. It isn’t like C++ can’t use C code.
This risks becoming a no true Scotsman, but it is indeed true that there is really no common idiomatic C++. Even the same code base can use vastly different styles in different areas.
Even regarding exceptions, I would not touch them anywhere close to the critical path, but, for example, during application setup I have no problem with them. And yet I know of people writing very high performance applications that are happy to throw on the critical path as long as it is a rare occurence.
> Sure, you could write C++ the way you describe but that is just poor code.
That is a problem with C++. C++ puts people into a sea of complexity and blames them when they do not get a good result. The purpose of high level programming languages is to make things easier for people, not make it even more likely to fail to write good code and then blame them when they do not.
If you try to follow the advice by the creators of C++, you often get further away from good code, and then when you complain, people say it is your fault. People who have actual success using C++ ignore the advice by the guys who made C++, which is an incredibly backward situation. This is a very different situation than you have with C where advice on good development practices does not conflict with reality.
> For example, exceptions have been explicitly disabled on every C++ code base I’ve ever worked on, whether FAANG or a smaller industrial company.
Unfortunately, C++ does not make exceptions optional and even if you use a compiler flag to disable it, libraries can still throw them. Do you use the “non-throwing allocation functions” introduced in C++11 and avoid any library functions that can throw exceptions in your code to truly avoid exceptions? Given most people have been writing C++ code since before C++11, there is a good chance you do not. If you write code for Linux systems, you might be unaware that Linux can and will refuse to do allocations, even if most of the time it is willing to overcommit. This means that your C++ allocations can throw exceptions, even if you have used a compiler flag to “turn exception handling off”.
> It isn’t compatible with some idiomatic high-performance software architectures so it would be weird to even turn it on. C++ allows you to strip all bloat at compile-time and provides tools to make it easy in a way that C could only dream of, a standard metaprogramming optimization. Excessive dynamic allocation isn’t a thing in real code bases unless you are naive. It is idiomatic for many C++ code bases to never do any dynamic allocation at runtime, never mind “excessive”.
I have seen plenty of C++ software throw exceptions in wine, since it prints information about it to the console. It is amazing how often exceptions are used in the normal operation of such software. Of course, this goes unseen on the original platform, so the developers likely have no idea about all of the exceptions that their code throws.
I take it that you have never met Bjarne Stroustrup, who does not view exceptions as optional and will likely always tell you that you should not turn off exceptions, even if the compiler lets you.
> C++ has many weaknesses. You are failing to identify any that a serious C++ practitioner would recognize as valid.
https://en.wikipedia.org/wiki/No_true_Scotsman
Whenever anyone tries to point out C++’s flaws, someone else claims that they are doing it wrong. It is fallacious.
> In all of this you also failed to make an argument for why anyone should use C.
I was not trying to do that, but I will flip this on you and say that I do not see why you should use C++ over any other high level language given a choice. It is so bloated that it drowns people in choice, and when they inevitably make bad choices by trying to follow others’ advice (particularly Bjorne Stroustrup‘s) on how to make good choices, they are blamed for the mistake of doing that in the first place. I used to think so well of C++ based on its reputation, but these days, I think that the C++ language exists for masochists. It has no end of prescriptionists who will give bad advice on how to write “good code” and when following their advice turns out to produce bad code and you complain, there is no end of people telling you that the problems are your fault. The situation is the quintessence of masochism.
Just the other day, a guy on hacker news said that there was no point to using C structures of pointers over C++ classes, and that all C code should be compiled as C++. I replied with an explanation of why this is wrong:
https://news.ycombinator.com/item?id=43701516
One person could not see why you would want to have member functions that are undefined in instantiated objects:
https://news.ycombinator.com/item?id=43701974
Of course, C++ does not support that in member functions. You need to do such things via member function pointers if you want them, but advocates for C++ are largely prescriptionists who try to dissuade people from doing anything the way C does it and instead suggest whatever the latest C++ reinvention of things is instead, even though there was nothing wrong with doing it the C way.
> It isn’t like C++ can’t use C code.
It increasingly cannot. If C headers use variably modified types and do not have a guard macro an alternative for C++ that turns them into regular pointers, C++ cannot use the header. Here is an example of code using it that a C++ compiler cannot compile:
https://godbolt.org/z/T5T4Y1n68
The C preprocessor also now has generics which are not supported by C++ either:
https://godbolt.org/z/cof14W7vM
Unfortunately Stepanov and the STL are widely misunderstood. Stepanov core contributions is the set of concepts underlying the STL and the iterator model for generic programming. The set of algorithms and datastructures in the STL was only supposed to be a beginning, was never supposed to be a finished collection. Unfortunately many, if not most treat it that way.
But if you look beyond, you can find a whole world that extend the stl. If you are not happy, say, with unordered_map, you can find more or less drop in replacements that use the same iterator based interface, preserve value semantics and use the a common language to describe iterator and reference invalidation.
Regarding your specific use case, if you want intrusive lists you can use boost.intrusive which provides containers with STL semantics except it leaves ownership of the nodes to the user. The containers do not even need to be lists: you can put the same node in multiple lists linked list, binary trees (multiple flavors), and hash maps (although this is not fully intrusive) at the same time.
These days I don't generally need boost much, but I still reach for boost.intrusive quite often.
Except, nothing forbids me to use two linked lists in C++ using sys/queue.h, that is exactly one of the reason why Bjarne built C++ on top of C, and also unfortunely a reason why we have security pain points in C++.
Yet the C++ community is continually trying to get people to stay away from anything involving C. That said, newer C headers using _Generic for example are not usable from C++.
Because C++ was "TypeScript for C", plenty of room to improvement that WG 14 refuses to act on for the last 50 years.
Yes, most language features past the C89 subset are not supported, besides the C standard library, because C++ has much better alternatives, like why _Generic when templates are a much saner approach, than type dispatching with the pre-processor.
However that is besides the point, 99% of C89 code minus a few differences, is valid C++ code, and if the situation so requires, C++ code can be exactly the same way.
And lets not forget most FOSS projects have never moved beyond C89/C99 anyway, so stuff like _Generic is of relative importance.
In my experience, templates usually cause a lot of bloat that slows things down. Sure, in microbenchmarks it always looks good to specialize everything at compile time, whether this is what you want in a larger project is a different question. And then, also a C compiler can specialize a sort routine for your types just fine. It just needs to be able too look into it, i.e. it does not work for qsort from the libc. I agree to your point that C++ comes with fast implementations of algorithms out-of-the-box. In C you need to assemble a toolbox yourself. But once you have done this, I see no downside.
> If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time.
In certain cases, sure - inlining potential is far greater in C++ than in C.
For idiomatic C++ code that doesn't do any special inlining, probably not.
IOW, you can rework fairly readable C++ code to be much faster by making an unreadable mess of it. You can do that for any language (C included).
But what we are usually talking about when comparing runtime performance in production code is the idiomatic code, because that's how we wrote it. We didn't write our code to resemble the programs from the language benchmark game.
I doubt that because C++ encourages heavy use of dynamic memory allocations and data structures with external nodes. C encourages intrusive data structures, which eliminates many of the dynamic memory allocations done in C++. You can do intrusive data structures in C++ too, but it clashes with object oriented idea of encapsulation, since an intrusive data structure touches fields of the objects inside it. I have never heard of someone modifying a class definition just to add objects of that class to a linked list for example, yet that is what is needed if you want to use intrusive data structures.
While I do not doubt some C++ code uses intrusive data structures, I doubt very much of it does. Meanwhile, C code using <sys/queue.h> uses intrusive lists as if they were second nature. C code using <sys/tree.h> from libbsd uses intrusive trees as if they were second nature. There is also the intrusive AVL trees from libuutil on systems that use ZFS and there are plenty of other options for such trees, as they are the default way of doing things in C. In any case, you see these intrusive data structures used all over C code and every time one is used, it is a performance win over the idiomatic C++ way of doing things, since it skips an allocation that C++ would otherwise do.
The use of intrusive data structures also can speed up operations on data structures in ways that are simply not possible with idiomatic C++. If you place the node and key in the same cache line, you can get two memory fetches for the price of one when sorting and searching. You might even see decent performance even if they are not in the same cache line, since the hardware prefetcher can predict the second memory access when the key and node are in the same object, while the extra memory access to access a key in a C++ STL data structure is unpredictable because it goes to an entirely different place in memory.
You could say if you have the C++ STL allocate the objects, you can avoid this, but you can only do that for 1 data structure. If you want the object to be in multiple data structures (which is extremely common in C code that I have seen), you are back to inefficient search/traversal. Your object lifetime also becomes tied to that data structure, so you must be certain in advance that you will never want to use it outside of that data structure or else you must do at a minimum, another memory allocation and some copies, that are completely unnecessary in C.
Exception handling in C++ also can silently kill performance if you have many exceptions thrown and the code handles it without saying a thing. By not having exception handling, C code avoids this pitfall.
OO (implementation inheritance) is frowned upon in modern C++. Also, all production code bases I’ve seen pass -fno-exceptions to the compiler.
Ahh yes, now we are getting somewhere. "C++ is faster because it has all these features, no not those features nobody uses those. The STL, no, you rewrite that"
The poster you are responding to is correct. Modern C++ has established idiomatic code practices that are widely used in industry. Imagining how someone could use legacy language features in the most naive possible way, contrary to industry practice, is not a good faith argument. You can do that with any programming language.
You are arguing against what the language was 30-40 years ago. The language has undergone two pretty fundamental revisions since then.
> C++ has been faster than C for a long time.
Citation needed.
> If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.
That's interesting, did ChatGPT tell you this?
I agree with you except for the JVM bit - but everyone's application varies
My point is that there are situations where C++ (or Rust) is required because the JVM wouldn't work, but those are niche.
In my experience, most people who don't want a JVM language "because it is slow" tend to take this as a principle, and when you ask why their first answer is "because it's interpreted". I would say they are stuck in the 90s, but probably they just don't know and repeat something they have heard.
Similar to someone who would say "I use Gentoo because Ubuntu sucks: it is super slow". I have many reasons to like Gentoo better than Ubuntu as my main distro, but speed isn't one in almost all cases.
The JVM is excellent for throughput, once the program has warmed up, but it always has much more jitter than a more systemsy language like C++ or Rust. There are definitely use cases where you need to consistently react fast, where Java is not a good choice.
It also struggles with numeric work involving large matrices, because there isn't good support for that built into the language or standard library, and there isn't a well-developed library like NumPy to reach for.
Yet it made Notch rich, because he had the right idea for a game, and compeling gameplay.
You think the JVM is slow?
IME large linear algebra algos run like molasses in a jvm compared to compiled solutions. You're always fighting the gc.
Do you have any benchmarks to show, out of curiosity?
Ok. But we have plenty of C libraries to bind to that for.
They're far slower in Python but that hasn't stopped anyone.
Depends. JVM is fast once hotspot figures things out - but that means the first level is slow and you lose your users.
You can always load JIT caches if you can’t wait for warm up.
What about AOT?
Install Gentoo
As I said, I use Gentoo already ;-).
Quite.
I was a Gentoo user (daily driver) for around 15 years but the endless compilation cycles finally got to me. It is such a shame because as I started to depart, Gentoo really got its arse in gear with things like user patching etc and no doubt is even better.
It has literally (lol) just occurred to me that some sort of dual partition thing could sort out my main issue with Gentoo.
@system could have two partitions - the running one and the next one that is compiled for and then switched over to on a reboot. @world probably ought to be split up into bits that can survive their libs being overwritten with new ones and those that can't.
Errrm, sorry, I seem to have subverted this thread.
What about the binary packages now supported in Gentoo?
You have approximately described guix.
Gentoo Silverblue?
Rust is very easy when you want to do easy things. You can actually just completely avoid the borrow-checker altogether if you want to. Just .clone(), or Arc/Mutex. It's what all the other languages (like Go or Java) are doing anyway.
But if you want to do a difficult and complicated thing, then Rust is going to raise the guard rails. Your program won't even compile if it's unsafe. It won't let you make a buggy app. So now you need to back up and decide if you want it to be easy, or you want it to be correct.
Yes, Rust is hard. But it doesn't have to be if you don't want.
This argument goes only so far. Would you consider querying a database hard? Most developers would say no. But it’s actually a pretty hard problem, if you want to do it safely. In rust, that difficultly leaks into the crates. I have a project that uses diesel and to make even a single composable query is a tangle of uppercase Type soup.
This just isn’t a problem in other languages I’ve used, which granted aren’t as safe.
I love Rust. But saying it’s only hard if you are doing hard things is an oversimplification.
Building a proper ORM is hard. Querying a database is not. See the postgres crate for an example.
Querying a database while ensuring type safety is harder, but you still don't need an OEM for that. See sqlx.
Sqlx is completely lacking in the query composability department, and leads to a very large amount of boilerplate.
You can derive FromRow for your structs to cut down the boilerplate, but if you need to join two tables that happen to have a column with the same name it stops working, unless you remember to _always_ alias one of the columns to the same name, every time you query that table from anywhere (even when the duplicate column names would not be present). If a table gets added later that happens to share a column name with another table? Hope you don't ever have to join those two together.
Doing something CRUD-y like "change ordering based on a parameter" is not supported, and you have to fall back to sprintf("%s ORDER BY %s %s") style concatenation.
Gets even worse if you have to parameterize WHERE clauses.
My feeling is that rust makes easy things hard and hard things work.
I'm not going to deny your experience. But is Rust really that hard? It's a very smooth experience for me - sometimes enough for me to choose it instead of Python.
I know that the compiler complains a lot. But I code with the help of realtime feedback from tools like the language server (rust-analyzer) and bacon. It feels like 'debug as you code'. And I really love the hand holding it does.
> This just isn’t a problem in other languages I’ve used, which granted aren’t as safe.
Most languages used with DBs are just as safe. This idea about Rust being more safe than languages with GC needs a rather big [Citation Needed] sign for the fans.
If you use Rust with `.clone()` and Arc/Mutex, why not just using one of the myriad of other modern and memory safe languages like Go, Scala/Kotlin/Java, C#, Swift?
The whole point of Rust is to bring memory safety with zero cost abstraction. It's essentially bringing memory safety to the use-cases that require C/C++. If you don't require that, then a whole world of modern languages becomes available :-).
For me personally, doing the clone-everything style of Rust for a first pass means I still have a graceful incremental path to go pursue the harder optimizations that are possible with more thoughtful memory management. The distinction is that I can do this optimization pass continuing to work in Rust rather than considering, and probably discarding, a potential rewrite to a net-new language if I had started in something like Ruby/Python/Elixir. FFI to optimize just the hot paths in a multi-language project has significant downsides and tradeoffs.
Plus in the meantime, even if I'm doing the "easy mode" approach I get to use all of the features I enjoy about writing in Rust - generics, macros, sum types, pattern matching, Result/Option types. Many of these can't be found all together in a single managed/GC'd languages, and the list of those that I would consider viable for my personal or professional use is quite sparse.
> generics, macros, sum types, pattern matching, Result/Option types. Many of these can't be found all together in a single managed/GC'd languages
What about e.g. Kotlin or Swift?
I don't find the single-vendor governance / commercial origins of those two languages very reassuring, but that's not something that will trouble everyone equally if at all.
Yeah only in Scala, Kotlin, F#, Standard ML, OCaml, Haskell, and all others that derive from them.
None of those are to my personal taste and I think Kotlin is the only one with unambiguously strong adoption in industry. I'm trying not to make value-judgment statements about others that do like them.
Agree in this, i enjoy Rust and use the same approach.
People are saying rust is harsh, i would day its not that much harder then other languages just more verbose and demanding.
Rust is actually quite suitable for a number of domains where it was never intended to excel.
Writing web service backends is one domain where Rust absolutely kicks ass. I would choose Rust/(Actix or Axum) over Go or Flask any day. The database story is a little rough around the edges, but it's getting better and SQLx is good enough for me.
edit: The downvoters are missing out.
To me, web dev really sounds like the one place where everything works and it's more a question of what is in fashion. Java, Ruby, Python, PHP, C, C++, Go, Rust, Scala, Kotlin, probably even Swift? And of course NodeJS was made for that, right?
I am absolutely convinced I can find success story of web backends built with all those languages.
There are 3 cases. The first is that you are comfortable with Rust and you just choose it for that. The second is that you're not comfortable with Rust and you choose something else that works for you.
The third is the interesting one. When your service has a lot of traffic and every bit of inefficiency costs you money (node rents) and energy. Rust is an obvious improvement over the interpreted languages. There are also a few rare cases where Rust has enough advantages over Go to choose the former. In general though, I feel that a lot of energy consumption and emissions can be avoided by choosing an appropriate language like Rust and Go.
This would be a strong argument in favor of these languages in the current environmental conditions, if it weren't for 'AI'. Whether it be to train them or run them, they guzzle energy even for problems that could be solved with a search engine. I agree that LLMs can do much more. But I don't think they do enough for the energy they consume.
> Rust is an obvious improvement over the interpreted languages.
Do we agree that most of the languages I mentioned above are not interpreted languages? You seem to only consider Go as a non-interpreted alternative...
Other than Go it's just C/C++ and Swift.
Yeah, "web services backend" really means "code exercising APIs pioneered by SunOS in 1988". It's easy to be rock solid if your only dependency is the bedrock.
Perhaps. But a comparable Rust backend stack produces a single binary deployable that can absorb 50,000 QPS with no latency caused by garbage collection. You get all of that for free.
The type system and package manager are a delight, and writing with sum types results in code that is measurably more defect free than languages with nulls.
Yep, that's precisely it! When dealing with other languages I miss the "match" keyword and being able to open a block anywhere. Sure, sometimes Rust allows you to write terse abominations if you don't exercise a dose of caution and empathy for future maintainers (you included).
Other than the great developer experience in tooling and language ergonomics (as in coherent features not necessarily ease of use) the reason I continue to put up with the difficulties of Rust's borrow checker is because I feel I can work towards mastering one language and then write code across multiple domains AND at the end I'll have an easy way to share it, no Docker and friends needed.
But I don't shy away from the downsides. Rust loads the cognitive burden at the ends. Hard as hell in the beginning when learning it and most people (me included) bounce from it for the first few times unless they have C++ experience (from what I can tell). At the middle it's a joy even when writing "throwaway" code with .expect("Lol oops!") and friends. But when you get to the complex stuff it becomes incredibly hard again because Rust forces you to either rethink your design to fit the borrow checker rules or deal with unsafe code blocks which seem to have their own flavor of C++ like eldritch horrors.
Anyway, would *I* recommend Rust to everyone? Nah, Go is a better proposition for a most bang for your buck language, tooling and ecosystem UNLESS you're the kind that likes to deal with complexity for the fulfilled promise of one language for almost anything. In even simpler terms Go is good for most things, Rust can be used for everything.
Also stuff like Maud and Minijinja for Rust are delights on the backend when making old fashioned MPA.
Thanks for coming to my TED talk.
>Anyway, would I recommend Rust to everyone?
For me it's a question of whether I can get away with garbage collection. If I can then pretty much everything else is going to be twice as productive but if I can't then the options are quite limited and Rust is a good choice.
What language are you using that doesn’t have match? Even Java has the equivalent. The only ones I can think of that don’t are the scripting languages.. Python and JS.
Does Java have sum types now?
Yes via sealed classes. It also has pattern matching.
So they are there, but ugly to define:
In Kotlin it's a bit better, but nothing beats the ML-like langs (and Rust/ReScript/etc):You could use Java records to make things more concise:
I think Java 21 does. Scala and Kotlin do as well.
Python has it as well.
Ah my mistake. It’s been at least 5 years since I’ve written it. I’m honestly surprised that JS has moved no where on it considering all of the fancy things they’ve been adding.
It has been proposed, but since there is all the process on how features get added into the standard, someone needs to champion it, and then there is the "at least two implementations" factor.
https://github.com/tc39/proposal-pattern-matching
[dead]
Yeah, anything with nulls ends up with Option<this> and Option<that> which means unwraps or matches. There is a comment above about good bedrock and Rust works OK with nulls but it works really well with unsparse databases (avoiding joins).
The bar for web services is low, so pretty much anything works as long as it's easy. I wouldn't call them a success story.
When things get complex, you start missing Rust's type system and bugs creep in.
In node.js there was a notable improvement when TS became the de-facto standard and API development improved significantly (if you ignore the poor tooling, transpiling, building, TS being too slow). It's still far from perfect because TS has too many escape hatches and you can't trust TS code; with Rust, if it compiles and there are no unsafe (which is rarely a problem in web services) you get a lot of compile time guarantees for free.
Tokio + Axum + SQLx has been a total game-changer for me for web dev. It's by far the most productive I've been with any backend web stack.
I prefer rusqlite over SQLx; the latter is too bloated.
People that haven't tried this are downvoting with prejudice, but they just don't know.
Rust is an absolute gem at web backend. An absolute fucking gem.
We know, it stil isn't at Spring/ASP.NET level, coupled with Scala/Kotlin/F#.
I hate Spring(Boot): too much magic due to overuse of annotations.
On the JVM I'd prefer Kotlin/http4k/SQLDelight any day over {Java,Kotlin}/Spring(Boot)/{Hibernate,sql-in-strings}.
Because macro magic, or compiler plugins, is so much better, I guess.
What do you mean? Where are the "macro magic or compiler plugins"?
Most Rust frameworks, which was the point of this thread,
> Rust is an absolute gem at web backend. An absolute fucking gem.
Nothing beats vertx on JVM!
Curious, do you mind going into more detail on why?
The fact that people love the language is an unexpected downside. In my experience the rust ecosystem has an insanely high churn rate. Crates are often abandoned seemingly for no reason, often before even hitting 1.0. My theory is this is because people want to use rust primarily, the domain problem is just a challenge, like a level in a game. Once all the fun parts are solved, they leave it for dead.
Conversely and ironically, this is why I love Go. The language itself is so boring and often ugly, but it just gets out of the way and has the best in class tooling. The worst part is having seen the promised land of eg Rust enums, and not having them in other langs.
This.
Feeling passionate about a programming language is generally bad for the products made with that language.
> My theory is this is because people want to use rust primarily, the domain problem is just a challenge, like a level in a game.
So you mean, Rust is more of an intellectual playground, than an actual workbench? I'm curious how high the churn rate of packages in other languages is, like python or ruby (let's not talk about javascript). Could this be the result of rust being still rather young and moving fast?
> Conversely and ironically, this is why I love Go.
Is Go still forcing hard wired paths in $HOME for compiling, or what was it again?
Agreed. For the same reason I unironically prefer Java, Go, C++, JS/TS to solve real problems.
Can you speak more of this best in class tooling?
The official `go` command does dep management, (cross) compilation, testing (including benchmarks and coverage reports), race detection, profiling reports, code generation (metaprogramming alternative), doc generation etc. Build times are insanely fast too.
The only tooling I use personally outside of the main CLI is building iOS/Android static libraries (gomobile). It’s still first party, but not in the go command.
I haven't tried Go in a while, but 8 years ago, I felt the tooling was a disaster. The V1 ways of doing things were really janky, and the improved versions didn't seem to be universally adopted yet. It's nice to hear that seems to have changed.
Yes, it used to be horrible with GOPATH hell, because Google didn’t care much about deps since they had their own monorepo. They got their shit together years ago. IMO today it’s better tooling than Rust (and Rust is pretty great already). Give it a try.
I find it interesting how the software industry has done everything it can to ignore F#. This is me just lamenting how I always come back to it as the best general purpose language.
Probably the intersection of people who (a) want an advanced ML-style language and (b) are interested in a CLR-based language is very small. But also, doesn't it do some weird thing where it matters in what order the files are included in the compilation? I remember being interested in F# but being turned off by that, and maybe some other weird details.
I don’t want to use a language with unknown ecosystem. If I need a library to do X, I’m confident I can find it for Go, Java, Python etc. But I don’t know about F#.
I also don’t want to use a language with questionable hireability.
Haven't used F# too much myself but one of the strong points is because it shares the CLR with C# you can use any of the many packages meant for C# and it'll work because of the shared runtime.
Huh? Usually languages that are ”ignored” turns out to be for reasons such as poor or proprietary tooling. As an ignorant bystander, how are things like
Cross compilation, package manager and associated infrastructure, async io (epoll, io_uring etc), platform support, runtime requirements, FFI support, language server, etc.
Are a majority of these things available with first party (or best in class) integrated tooling that are trivial to set up on all big three desktop platforms?
For instance, can I compile an F# lib to an iOS framework, ideally with automatically generated bindings for C, C++ or Objective C? Can I use private repo (ie github) urls with automatic overrides while pulling deps?
Generally, the answer to these questions for – let’s call it ”niche” asterisk – languages, are ”there is a GitHub project with 15 stars last updated 3 years ago that maybe solves that problem”.
There are tons of amazing languages (or at the very least, underappreciated language features) that didn’t ”make it” because of these boring reasons.
My entire point is that the older and grumpier I get, the less the language itself matters. Sure, I hate it when my favorite elegant feature is missing, but at the end of the day it’s easy to work around. IMO the navel gazing and bikeshedding around languages is vastly overhyped in software engineering.
It's been around for a long time and sponsored by Microsoft. I don't know its exact status, but the only reason for it to lack in any of those areas is lack of will.
F# compiler is cross os and allows cross compilation (dotnet build --runtime xxx), its packaged in most Linux distros as dotnet.
Ok that helps! So where does F# shine? Any particular domains?
I think this is a problem of using the right abstractions.
Rust gamedev is the Wild West, and frontier development incurs the frontier tax. You have to put a lot of work into making an abstraction, even before you know if it’s the right fit.
Other “platforms” have the benefit of decades more work sunk into finding and maintaining the right abstractions. Add to that the fact that Rust is an ML in sheep’s clothing, and that games and UI in FP has never been a solved problem (or had much investment even), it’s no wonder Rust isn’t ready. We haven’t even agreed on the best solutions to many of these problems in FP, let alone Rust specifically!
Anyway, long story short, it takes a very special person to work on that frontier, and shipping isn’t their main concern.
I love Rust, but this lines up with my experience roughly. Especially the rapid iteration. Tried things out with Bevy, but I went back to Godot.
There are so many QoL things which would make Rust better for gamedev without revamping the language. Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev. But that's a really hard sell (and might be harder to implement than I imagine.)
I wish more languages would lean into having a really permissive compiler that emits a lot of warnings. I have CI so I'm never going to actually merge anything that makes warnings. But when testing, just let me do whatever I want!
GHC has an -fdefer-type-errors option that lets you compile and run this code:
Which obviously doesn't typecheck since 'a' is not an Int, but will run just fine since the value of `a` is not observed by this program. (If it were observed, -fdefer-type-errors guarantees that you get a runtime panic when it happens.) This basically gives you the no-types Python experience when iterating, then you clean it all up when you're done.This would be even better in cases where it can be automatically fixed. Just like how `cargo clippy --fix` will automatically fix lint errors whenever it can, there's no reason it couldn't also add explicit coercions of numeric types for you.
> I wish more languages would lean into having a really permissive compiler that emits a lot of warnings. I have CI so I'm never going to actually merge anything that makes warnings. But when testing, just let me do whatever I want!
I’d go even further and say I wish my whole development stack had a switch I can use to say “I’m not done iterating on this idea yet, cool it with the warnings.”
Unused imports, I’m looking at you… stop bitching that I’m not using this import line simply because I commented out the line that uses it in order to test something.
Stop complaining about dead code just because I haven’t finished wiring it up yet, I just want to unit test it before I go that far.
Stop complaining about unreachable code because I put a quick early return line in this function so that I could mock it to chase down this other bug. I’ll get around to fixing it later, I’m trying to think!
In rust I can go to lib.rs somewhere and #![allow(unused_imports,dead_code,etc)] and then remember to drop it by the time I get the branch ready for review, but that’s more cumbersome than it ought to be. My whole IDE/build/other tooling should have a universal understanding of “this is a work in progress please let me express my thoughts with minimal obstructions” mode.
Yeah this is my absolute dream language. Something that lets you prototype as easily as Python but then compile as efficiently and safely as Rust. I thought Rust might actually fit the bill here and it is quite good but it's still far from easy to prototype in - lots of sharp edges with say modifying arrays while iterating, complex types, concurrency. Maybe Rust can be something like this with enough unsafe but I haven't tried. I've also been meaning to try more Typescript for this kind of thing.
Some Common Lisp implementations like SBCL have supported this style of development for many years. Everything is dynamically typed by default but as you specify more and more types the compiler uses them to make the generated code more efficient.
I quite like common lisp but I don't believe any existing implementation gets you anywhere near the same level of compile time safety. Maybe something like typed racket but that's still only doing a fraction of what rust does.
You should give Julia a shot. That’s basically that. You can start with super dynamic code in a REPL and gradually hammer it into stricter and hyper efficient code. It doesn’t have a borrow checker, but it’s expressive enough that you can write something similar as a package (see BorrowChecker.jl).
Unless you would like to AOT-deploy your code, then good luck with using this 3rd party package with scarce documentation.
Or even enums, which are a joke in Julia.
Julia had so much potential, and such poor implementation.
I think OCaml could be such a language personally. Its like rust-lite or a functional go.
Xen and Wall St. folks use it.
Yeh, I've been tinkering around a year with a Bevy-competitor, Amethyst until that project shut down. By now, I just don't think Rust is good for client-side or desktop game development.
In my book, Rust is good at moving runtime-risk to compile-time pain and effort. For the space of C-Code running nuclear reactors, robots and missiles, that's a good tradeoff.
For the space of making an enemy move the other direction of the player in 80% of the cases, except for that story choice, and also inverted and spawning impossible enemies a dozen times if you killed that cute enemy over yonder, and.... and the worst case is a crash of a game and a revert to a save at level start.... less so.
And these are very regular requirements in a game, tbh.
And a lot of _very_silly_physics_exploits_ are safely typed float interactions going entirely nuts, btw. Type safety doesn't help there.
> Yeh, I've been tinkering around a year with a Bevy-competitor, Amethyst until that project shut down. By now, I just don't think Rust is good for client-side or desktop game development.
I don't think your experience with Amethyst merits your conclusion of the state of gamedev in rust, especially given Amethysts own take on Bevy [1, 2].
1: https://web.archive.org/web/20220719130541mp_/https://commun...
2: https://web.archive.org/web/20240202140023/https://amethyst....
> Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev.
C# is stricter about float vs. double for literals than Rust is, and the default in C# (double) is the opposite of the one you want for gamedev. That hasn't stopped Unity from gaining enormous market share. I don't think this is remotely near the top issue.
I have written a lot of C# and I would very much not want to use it for gamedev either. I can only speak for my own personal preference.
I used to hate the language but statically typed GDscript feels like the perfect weight for indie development
It is indeed great for creating a prototype. After that, one can gradually migrate to Rust go benefit from faster execution times. The Rust bindings are in a pretty decent shape by now
https://godot-rust.github.io/
Nowadays we have the luxury of LLMs to help migrate projects/code from one language to another. I would imagine a pipeline with Rust as an intermediate “compiled” step might be possible. LLM accuracy isn’t there yet, but I can dream.
It is not that complicated or time-consuming to do the transformation manually. On the contrary, it's even fun and a good practice (but admittedly, I do have a rather conservative view on the matter)
Yeah I haven't really used it much but from what I've seen it's kind of what Python should have been. Looks way better than Lua too.
I like it better than python now, but it's still got some quirks. The lack of structs and typed callables are the biggest holes right now imo but you can work around those
What numeric types typically need conversions?
The fact you need a usize specifically to index an array (and most collections) is pretty annoying.
This could be different in game dev, but in the last years of writing rust (outside of learning the language) I very rarely need to index any collection.
There is a very certain way rust is supposed to be used, which is a negative on it's own, but it will lead to a fulfilling and productive programming experience. (My opinion) If you need to regularly index something, then you're using the language wrong.
I'm no game dev but I have had friends who do it professionally.
Long story short, yes, it's very different in game dev. It's very common to pre-allocate space for all your working data as large statically sized arrays because dynamic allocation is bad for performance. Oftentimes the data gets organized in parallel arrays (https://en.wikipedia.org/wiki/Parallel_array) instead of in collections of structs. This can save a lot of memory (because the data gets packed more densely) be more cache-friendly, and makes it much easier to make efficient use of SIMD instructions.
This is also fairly common in scientific computing (which is more my wheelhouse), and for the same reason: it's good for performance.
> Oftentimes the data gets organized in parallel arrays (https://en.wikipedia.org/wiki/Parallel_array) instead of in collections of structs. This can save a lot of memory (because the data gets packed more densely) be more cache-friendly, and makes it much easier to make efficient use of SIMD instructions.
That seems like something that could very easily be turned into a compiler optimisation and enabled with something like an annotation. Would have some issue when calling across library boundaries ( a lot like the handling of gradual types), but within the codebase that'd be easy.
The underlying issue with game engine coding is that the problem is shaped in this way:
* Everything should be random access(because you want to have novel rulesets and interactions)
* It should also be fast to iterate over per-frame(since it's real-time)
* It should have some degree of late-binding so that you can reuse behaviors and assets and plug them together in various ways
* There are no ideal data structures to fulfill all of this across all types of scene, so you start hacking away at something good enough with what you have
* Pretty soon you have some notion of queries and optional caching and memory layouts to make specific iterations easier. Also it all changes when the hardware does.
* Congratulations, you are now the maintainer of a bespoken database engine
You can succeed at automating parts of it, but note that parent said "oftentimes", not "always". It's a treadmill of whack-a-mole engineering, just like every other optimizing compiler; the problem never fully generalizes into a right answer for all scenarios. And realistically, gamedevs probably haven't come close to maxing out what is possible in a systems-level sense of things since the 90's. Instead we have a few key algorithms that go really fast and then a muddle of glue for the rest of it.
It's not at all easy to implement as an optimisation, because it changes a lot of semantics, especially around references and pointers. It is something that you can e.g. implement using rust procedural macros, but it's far from transparent to switch between the two representations.
(It's also not always a win: it can work really well if you primarily operate on the 'columns', and on each column more or less once per update loop, but otherwise you can run into memory bandwidth limitations. For example, games with a lot of heavily interacting systems and an entity list that doesn't fit in cache will probably be better off with trying to load and update each entity exactly once per loop. Factorio is a good example of a game which is limited by this, though it is a bit of an outlier in terms of simulation size.)
Meh. I've tried "SIMD magic wand" tools before, and found them to be verschlimmbessern.
At least on the scientific computing side of things, having the way the code says the data is organized match the way the data is actually organized ends up being a lot easier in the long run than organizing it in a way that gives frontend developers warm fuzzies and then doing constant mental gymnastics to keep track of what the program is actually doing under the hood.
I think it's probably like sock knitting. People who do a lot of sock knitting tend to use double-pointed needles. They take some getting used to and look intimidating, though. So people who are just learning to knit socks tend to jump through all sorts of hoops and use clever tricks to allow them to continue using the same kind of knitting needles they're already used to. From there it can go two ways: either they get frustrated, decide sock knitting is not for them, and go back to knitting other things; or they get frustrated, decide magic loop is not for them, and learn how to use double-pointed needles.
Very much agree and love your analogy but there is a third option - make a sock knitting machine.
I'm not a game dev, but what's a straightforward way of adjusting some channel of a pixel at coordinate X,Y without indexing the underlying raster array? Iterators are fine when you want to perform some operation on every item in a collection but that is far from the only thing you ever might want to do with a collection.
Game dev here. If you’re concerned about performance the only answer to this is a pixel shader, as anything else involves either cpu based rendering or a texture copy back and forth.
A compute shader could update some subset of pixels in a texture. It's on the programmer to prevent race conditions though. However that would again involve explicit indexing.
In general I think GP is correct. There is some subset of problems that absolutely requires indexing to express efficiently.
You can manipulate texture coordinate derivatives in order to just sample a subset of the whole texture on a pixel shader and only shade those pixels (basically the same as mipmapping, but you can have the "window" wherever you want really).
This is something you can't do on a compute shader, given you don't have access to the built-in derivative methods (building your own won't be cheaper either).
Still, if you want those changes to persist, a compute shader would be the way to go. You _can_ do it using a pixel shader but it really is less clean and more hacky.
That is true. Hadn't occurred to me because I'd had in mind pixel sorting stuff I did in the past where the fetches and stores aren't contiguous.
Interestingly enough the derivative functions are available to compute shaders as of SM 6.6. [0] Oddly SPIR-V only makes the associated opcodes [1] available to the fragment execution model for some reason. I'm not sure how something like DXVK handles that.
I'm not clear if the associated DXIL or SPIR-V opcodes are actually implemented in hardware. I couldn't immediately find anything relevant in the particular ISA I checked and I'm nowhere near motivated enough to go digging through the Mesa source code to see how the magic happens. Relevant because since you mentioned it I'm curious how much of a perf hit rolling your own is.
[0] https://microsoft.github.io/DirectX-Specs/d3d/HLSL_SM_6_6_De...
[1] https://registry.khronos.org/SPIR-V/specs/unified1/SPIRV.htm...
You're right - I should have just said "shader" and left it at that.
> There is some subset of problems that absolutely requires indexing to express efficiently.
Sure. But it's almost certainly quicker to run a shader over them, and ignore the values you don't want to operate on than it is to copy the data back, modify it in a safe bounds checked array in rust, and then copy it again.
> run a shader over them, and ignore the values you don't want to operate on
Use a compute shader. Run only as many invocations as you care about. Use explicit indexing in the shader to fetch and store.
Obviously that doesn't make sense if you're targeting 90% of the slots in the array. But if you're only targeting 10% or if the offsets aren't a monotonic sequence it will probably be more efficient - and it involves explicit indexing.
This is getting downvoted but it's kind of true. Indexing collections all the time usually means you're not using iterators enough. (Although iterators become very annoying for fallible code that you want to return a Result, so sometimes it's cleaner not to use them.)
However this problem does still come up in iterator contexts. For example Iterator::take takes a usize.
An iterator works if you're sequentially visiting every item in the collection, in the order they're stored. It's terrible if you need random access, though.
Concrete example: pulling a single item out of a zip file, which supports random access, is O(1). Pulling a single item out of a *.tar.gz file, which can only be accessed by iterating it, is O(N).
History lesson for the cheap seats in the back:
Compressed tars are terrible for random access because the compression occurs after the concatenation and so knows nothing about inner file metadata, but it's good for streaming and backups. Uncompressed tars are much better for random access. (Tar was a used as a backup mechanism to tape (tape archive).)
Zips are terrible for streaming because their metadata is stored at the end, but are better for 1-pass creation and on-disk random access. (Remember that zip files and programs were created in an era of multiple floppy disk-based backups.)
When fast tar enumeration is desired, at the cost of compatibility and compression potential, it might be worth compressing files and then taring them when and if zipping alone isn't achieving enough compression and/or decompression performance. FUSE compressed tar mounting gets to be really expensive with terabyte archives.
> compressing files and then taring them
Just use squashfs if that is the functionality that you need.
While you maybe "shouldn't" be indexing collections often (which I also don't agree with, there is a reason that we have more collections then linked lists, lookup is important) even just getting the size of a collection which is often very related to business logic can be quite annoying.
For data that needs to be looked up mostly I want a hashtable. Not always, but mostly. It's rare that I want to look up something but its position in a list.
The actual problem with this is how to add it without breaking type inference for literal numbers.
What I mean is, I want to be able to use i32/i64/u32/u64/f32/f64s interchangeably, including (and especially!) in libraries I don't own.
I'm usually working with positive values, and almost always with values within the range of integers f32 can safely represent (+- 16777216.0).
I want to be able to write `draw(x, y)` instead of `draw(x as u32, y as u32)`. I want to write "3" instead of "3.0". I want to stop writing "as".
It sounds silly, but it's enough to kill that gamedev flow loop. I'd love if the Rust compiler could (optionally) do that work for me.
Sounds like a good use of Num
[1] https://docs.rs/num-traits/latest/num_traits/trait.Num.html
Please correct me if I'm wrong, but I don't think this would let me, say, pass an i32 returned from one method directly as an f64 argument in another method.
String conversions too
One of the smartest devs I know built his game from scratch in C. Pretty complex game too - 3D open-world management game. It's now successful on steam.
Thing is, he didn't make the game in C. He built his game engine in C, and the game itself in Lua. The game engine is specific to this game, but there's a very clear separation where the engine ends and the game starts. This has also enabled amazing modding capabilities, since mods can do everything the game itself can do. Yes they need to use an embedded scripting language, but the whole game is built with that embedded scripting language so it has APIs to do anything you need.
For those who are curious - the game is 'Sapiens' on Steam: https://store.steampowered.com/app/1060230/Sapiens/
I agree that the game is amazing from a technical point of view, but look at the reviews and the pace of development. The updates are sparse and slow, and if there's an update, it's barely an improvement. This is one the of disadvantages of creating a game engine from scratch: more time is spent on the engine than the game itself, which may or may not be bad depending on which perspective you look at it from.
The cause could be an art bottleneck and less to do with the game's code.
Do you know why he supports MacOS, but not Linux?
Most likely because they don't use Linux. Or because it's kind of a mine field to support with bugs that occur on different distros. Even Unity has their own struggles with Linux support.
They're distributing their game on Steam too so Linux support is next to free via Proton.
> it's kind of a mine field to support with bugs that occur on different distros
Non-issue. Pick a single blessed distro. Clearly state that it's the only configuration that you officially support. Let the community sort the rest out.
This is a terrible solution, you're better off just making it Windows only and ensuring it can be run via Proton/Wine.
Why is it terrible? It gives a concrete target that the build is tested on. If someone cares they can most likely create an environment on their system that matches it. I don't see how that's any different from providing (for example) a flatpak.
It probably supports Linux via proton. Done. Official valve recommendation a few years ago not sure if still active.
Wine works really well on linux but not on macos.
This confused me as well. The scripting / engine divide is old and long standing.
I worked on games for 20 years and was always interested in alternative languages to C and C++ for the purpose.
Java was my first hope. It was a bit safer than C++ but ultimately too verbose and the GC meant too much memory is wasted. Most games were very sensitive to memory use because consoles always had limited memory to keep costs down.
Next I spent years of side projects on Common Lisp based on Andy Gavin’s success there with Crash Bandicoot and more, showing it was possible to do. However, reports from the company were that it was hard to scale to more people and eventually a rewrite of the engine in C++ came.
I have explored Rust and Bevy. Bevy is bleeding edge and that’s okay, but Rust is not the right language. The focus on safety makes coding slow when you want it to be fast. The borrow checker frowns when you want to mutate things for speed.
In my opinion Zig is the most promising language for triple A game dev. If you are mid level stick to Godot and Unity, but if you want to build a fast, safe game engine then look at Zig first.
I did the same for my project and moved to Go from Rust. My iteration is much faster, but the code a bit more brittle, esp. for concurrency. Tests have become more important.
Still, given the nature of what my project is (APIs and basic financial stuff), I think it was the right choice. I still plan to write about 5% of the project in Rust and call it from Go, if required, as there is a piece of code that simply cannot be fast enough, but I estimate for 95% of the project Go will be more than fast enough.
> but the code a bit more brittle, esp. for concurrency
Obligatory ”remember to `go run -race`”, that thing is a life saver. I never run into difficult data races or deadlocks and I’m regularly doing things like starting multiple threads to race with cancelation signals, extending timeouts etc. It’s by far my favorite concurrency model.
Yep, I do use that, but after getting used to Rust's Send/Sync traits it feels wild and crazy there are no guardrails now on memory access between threads. More a feel thing than reality, but I just find I need to be a bit more careful.
Is calling Rust from Go fast? Last time I checked the interface between C and Go is very slow
No, it is not all that fast after the CGo call marshaling (Rust would need to compile to the C ABI). I would essentially call in to Rust to start the code, run it in its own thread pool and then call into Rust again to stop it. The time to start and stop don't really matter as this is code that runs from minutes to hours and is embarrassingly parallel.
Rust is no different from C in that respect.
I have no experience with FFI between C and Go, could anyone shed some light on this? They are both natively compiled languages – why would calls between them be much slower than any old function call?
There are two reasons:
• Go uses its own custom ABI and resizeable stacks, so there's some overhead to switching where the "Go context" must be saved and some things locked.
• Go's goroutines are a kind of preemptive green thread where multiple goroutines share the same OS thread. When calling C, the goroutine scheduler must jump through some hoops to ensure that this caller doesn't stall other goroutines on the same thread.
Calling C code from Go used to be slow, but over the last 10 years much of this overhead has been eliminated. In Go 1.21 (which came with major optimizations), a C call was down to about 40ns [1]. There are now some annotations you can use to further help speed up C calls.
[1] https://shane.ai/posts/cgo-performance-in-go1.21/
And P/Invoke call can be as cheap as a direct C call, at 1-4ns
In Unity, Mono and/or IL2CPP's interop mechanism also ends up in the ballpark of direct call cost.
There's some type translation and the Go runtime needs to turn some things off before calling out to C
it's reasonably fast now
> I still plan to write about 5% of the project in Rust and call it from Go, if required
And chances are that it won't be required.
This seems like the right call. When it comes to projects like these, efficiency is almost everything. Speaking about my own experiences, when I hit a snag in productivity in a project like this, it's almost always a death-knell.
I too have a hobby-level interest in Rust, but doing things in Rust is, in my experience, almost always just harder. I mean no slight to the language, but this has universally been my experience.
The advantages of correctness, memory safety, and a rich type system are worth something, but I expect it's a lot less when you're up against the value of a whole game design ecosystem with tools, assets, modules, examples, documentation, and ChatGPT right there to tell you how it all fits together.
Perhaps someday there will be a comparable game engine written in Rust, but it would probably take a major commercial sponsor to make it happen.
One of the challenges I never quite got over completely, was that I was always fighting rust fundamentals, which tells me I never fully assimilated into thinking like a rustacean.
This was more of a me-problem, but I was constantly having to change my strategy to avoid fighting the borrow-checker, manage references, etc. In any case, it was a productivity sink.
I bet, and that's particularly difficult when so much of modern game dev is just repeating extremely well-worn patterns— moving entities around and providing for scripted and emergent interactions between those entities and the player(s).
That's not to say that games aren't a very cool space to be in, but the challenges have moved beyond the code. Particularly in the indie space, for 10+ years it's been all about story, characters, writing, artwork, visual identity, sound and music design, pacing, unique gameplay mechanics, etc. If you're making a game in 2025 and the hard part is the code, then you're almost certainly doing it wrong.
[dead]
This was my experience with Rust. I've bounced off it a few times and I think I've decided its just not for me.
Personally, I don’t think of it as fighting, more like “compiler assistance” —
you want to make some change, so you adjust a struct or a function signature, and then your IDE highlights all the places where changes are necessary with red squigglies.
Once you’re done playing whack-a-mole with the red squigglies, and tests pass, you know there’s no weird random crash hiding somewhere
It is a question of tradeoffs. Indie studios should be happy to trade off some performance in exchange for more developer productivity (since performance is usually good enough anyway in an indie game, which usually don't have millions of entities, meanwhile developer productivity is a common failure point).
I love Bevy, but Unity is a weapon when it comes to quickly iterating and making a game. I think the Bevy developers understand that they have a long way to go before they get there. The benefits of Bevy (code-first, Rust, open source) still make me prefer it over Unity, but Unity is ridiculously batteries-included.
Many of the negatives in the post are positives to me.
> Each update brought with it incredible features, but also a substantial amount of API thrash.
This is highly annoying, no doubt, but the API now is just so much better than it used to be. Keeping backwards compatibility is valuable once a product is mature, but like how you need to be able to iterate on your game, game engine developers need to be able to iterate on their engine. I admit that this is a debuff to the experience of using Bevy, but it also means that the API can actually get better (unlike Unity which is filled with historical baggage, like the Text component).
Not a game dev, but thought I'd mess around with Bevy and Rust to learn a bit more about both. I was surprised that my code crashed at runtime due to basics I expected the type system to catch. The fancy ECS system may be great for AAA games, but it breaks the basic connections between data and use that type systems rely on. I felt that Bevy was, unfortunately, the worst of both worlds: slow iteration without safety.
I've always liked the concept of ECS, but I agree with this, although I have very limited experience with Bevy. If I were to write a game in Rust, I would most likely not choose ECS and Bevy because of two reasons: 1. Bevy will have lots of breaking changes as pointed in the post, and 2. ECS is almost always not required -- you can make performant games without ECS, and if with your own engine then you retain full control over breaking changes and API design compromises.
I think all posts I have seen regarding migrating away from writing a game in Rust were using Bevy, which is interesting. I do think Bevy is awesome and great, but it's a complex project.
Related: https://news.ycombinator.com/item?id=40172033 - Leaving Rust gamedev after 3 years (982 comments) - 4/26/2024
https://loglog.games/blog/leaving-rust-gamedev/#hot-reloadin...
Hot reloading! Iteration!
A friend of mine wrote an article 25+ years ago about using C++ based scripting (compiles to C++). My friend is super smart engineer, but I don't think he was thinking of those poor scripters that would have to wait on iteration times. Granted 25 years ago the teams were small, but nowadays the amount of scripters you would have on AAA game is probably dozen if not two or three dozen and even more!
Imagine all of them waiting on compile... Or trying to deal with correctness, etc.
This is a personal project that had the specific goal of the person's brother, who was not a coder, being able to contribute to the project. On top of that, they felt the need to continuously upgrade to the latest version of the underlying game engine instead of locking to a version.
I have worked as a professional dev at game studios many would recognize. Those studios which used Unity didn't even upgrade Unity versions often unless a specific breaking bug got fixed. Same for those studios which used DirectX. Often a game shipped with a version of the underlying tech that was hard locked to something several years old.
The other points in the article are all valid, but the two factors above held the greatest weight as to why the project needed to switch (and the article says so -- it was an API change in Bevy that was "the straw that broke the camel's back").
Good for them.
From a dev perspective, I think, Rust and Bevy are the right direction, but after reading this account, Bevy probably isn't there yet.
For a long time, Unity games felt sluggish and bloated, but somehow they got that fixed. I played some games lately that run pretty smoothly on decade old hardware.
Love to have this comparison analysis. Huge LOC difference between Rust and C# (64k -> 17k!!!) though I am sure that is mostly access to additional external libraries that did things they wrote by hand in Rust.
> I am sure that is mostly access to additional external libraries that did things they wrote by hand in Rust
This is the biggest reason I push for C#/.NET in "serious business" where concerns like auditing and compliance are non-negotiable aspects of the software engineering process. Virtually all of the batteries are included already.
For example, which 3rd party vendors we use to build products is something that customers in sectors like banking care deeply about. No one is going to install your SaaS product inside their sacred walled garden if it depends on parties they don't already trust or can't easily vet themselves. Microsoft is a party that virtually everyone can get on board with in these contexts. No one has to jump through a bunch of hoops to explain why the bank should trust System or Microsoft namespaces. Having ~everything you need already included makes it an obvious choice if you are serious about approaching highly sensitive customers.
I worked in a regulated space at one time, and my understanding is that this is a big reason they chose .NET over Java. Java relies a lot more on third-party libraries, which makes getting things certified harder.
Log4shell was a good example of a relative strength of .NET in this area. If a comparable bug had happened in .NET's standard logging tooling, we likely would have seen all of the first-party .NET framework patched fairly shortly after, in a single coordinated release that we could upgrade to with minimal fuss. Meanwhile, at my current job we've still got standing exceptions allowing vulnerable version of log4j in certain services because they depend on some package that still has a hard dependency on a vulnerable version, which they in turn say they can't fix yet because they're waiting on one of their transitive dependencies to fix it, and so on. We can (and do) run periodic audits to confirm that the vulnerable parts of log4j aren't being used, but being able to put the whole thing in the past within a week or two would be vastly preferable to still having to actively worry about it 5 years later.
The relative conciseness of C# code that the parent poster mentioned was also a factor. Just shooting from the hip, I'd guess that I can get the same job done in about 2/3 as much code when I'm using C# instead of Java. Assuming that's accurate, that means that with Java we'd have had 50% more code to certify, 50% more code to maintain, 50% more code to re-certify as part of maintenance...
None of this makes any sense. There is no waiting. You just do it. In no universe can you justify using a vulnerable log4j version. You force gradle to use the patched log4j and be done with it.
Five years has nothing to do with Java. It means nobody cares about security in the first place. Outsourcing such a trivial security problem to Microsoft is just another nail in the coffin. "I have no capacity to develop secure software, better make myself dependent on someone who can".
Hugely underrated aspect of .NET. If a CVE surfaces, there's a team a Microsoft that owns the code and is going to patch and ship a fix.
In sectors that are critical here in the EU, nobody allows c# and microsoft due to licensing woes longterm. It's java and foss all the way down. SaaS also is not a thing unless it runs on prem.
C# and Microsoft are in all critical places in Europe. What are you talking about
As European, in a polyglot agency, I have no idea of what you're talking about.
What kind of nonsense is this? EU is perfectly happy to use .NET-based languages as all of them, and the platform itself, are MIT (in fact, it's pretty popular out here).
C# is a very highly underrated (and oft misunderstood) language that has become more terse as it has aged -- in a very good way. C#'s terseness has not come at the cost of its legibility and in fact, I feel like enhances it in many cases.
This is also a highly underrated aspect of C# in that its surface area has largely remained stable from v1 (few breaking changes (though there are some valid complaints that surface from this with regards to keyword bloat!)). So the historical volume of extremely well-written documentation is a boon for LLMs. While you may get out-dated patterns (e.g. not using latest language features for terseness), you will not likely get non-working code because of the large and stable set of first party dependencies (whereas outdated 3rd party dependencies in Node often leads to breaking incompatibilities with the latest packages on NPM). Often overlooked with C# is its killer feature: productivity. Yes, when you get a "batteries included" framework and those "batteries" are quite good, you can be productive. Having a centralized repository for first party documentation is also a huge boon for productivity. When you have an extremely broad, well-written, well-organized standard library and first party libraries, it's very easy to ramp up productivity versus finding different 3rd party packages to fill gaps. Entity Framework, for example, feels miles better to me than Prisma, TypeORM, Drizzle, or any option on Node.js. Having first party rate limiting libraries OOB for web APIs is great for productivity. Same for having first party OpenAPI schema generators.Less time wasted sifting through half-baked solutions.
C# has three "super powers" to reduce code bloat which is its really rich runtime reflection, first-class expression trees, and Roslyn source generators to generate code on the fly. Used correctly, this can remove a lot of boilerplate and "templatey" code.---
I make the case that many teams that outgrow JS/TS on Node.js should look to C# because of its congruence to TS[0] before Go, Java, Kotlin, and certainly not Rust.
[0] https://typescript-is-like-csharp.chrlschn.dev/
> C# is a very highly underrated (and oft misunderstood) language that has become more terse as it has aged -- in a very good way
One negative aspect is that if you haven't kept up, that terseness can be a bit of a brick wall. Many of the newer features, especially things where the .Net framework just takes over and solves your problem for you in a "convention over configuration" kinda way, are extremely terse. Modern C# can have a bit of a learning curve.
C# is an underrate language for sure and once you get going it is an absolute joy to work in. The .Net platform also gives you all the cross-platform and ease of deployment features of languages like Go. Ignoring C#/.Net because it's Microsoft is a bit of a mistake.
C# has aged better but I feel like Java 8 approaching ANSI C level solid tools. If only Swing wasn't so ugly. They should poach Raymond Chen to make Java 8 Remastered I like his blog posts. There's probably a DOS joke in there. Also they should just use the JavaFX namespace so I don't have to change my code and I want the lawyer here to laugh too.
Java current version is 24.
Java's current version is Kotlin /joke
Sure, where can I download the Kotlin Virtual Machine, with a standard library implemented in Kotlin?
If you feel like replying with ART, there are plenty of .java files in AOSP.
> Java 8
Why would you use Java 8?
C# is a great language, but it's been hampered by slow transition towards AOT.
My understanding (not having used it much, precisely because of this) is that AOT is still quite lacking; not very performant and not so seamless when it comes to cross-platform targeting. Do you know if things have gotten better recently?
I think fhat Microsoft had dropped the old .NET platform (CLR and so on) sooner and really nailed the AOT experience, they may have had a chance at competing with Go and even Rust and C++ for some things, but I suspect that ship has sailed, as it has for languages like D and Nim.
Transition to better AOT, .NET has had AOT support since version 1.0, even if using NGEN is a bit clunky.
C# (well, .NET, because that's what does JIT/AOT compilation of the bytecode) is not transitioning to AOT. NativeAOT is just one of the ways to publish .NET applications for scenarios where it is desirable. Having JIT is a huge boon to a number of scenarios too, for example it is basically impossible to implement a competitive Regex engine with JIT compilation for the patterns in Go (aside from other limitations like not having SIMD primitives).
> C# is a very highly underrated (and oft misunderstood) language that has become more terse as it has aged -- in a very good way. C#'s terseness has not come at the cost of its legibility and in fact, I feel like enhances it in many cases.
C# and .net are one of the most mature platform for development of all kind. It's just that online, it carries some sort of anti Microsoft stigma...
But a lot of AA or indie games are written in C# and they do fine. It's not just C++ or Rust in that industry.
People tend to be influenced by opinions online but often the real world is completely different. Been using C# for a decade now and it's one of the most productive language I have ever used, easy to set up, powerful toolchains... and yes a lot of closed source libs in the .net ecosystem but the open source community is large too by now.
Some folks still think it's Windows-only. Some folks think you need to use Visual Studio. Some think it's too hard to learn. Lots of misconceptions lead to teams overlooking it for more "hyped" languages like Rust and Go.
You don't need to use Visual Studio, but it really makes a difference in the overall experience.
I think there may also be some misunderstandings regarding the purchase models around these tools. Visual Studio 2022 Professional is possible to outright purchase for $500 [0] and use perpetually. You do NOT need a subscription. I've got a license key printed on paper that I can use to activate my copy each time.
Imagine a plumber or electrician spending time worrying about the ideological consequences of purchasing critical tools that cost a few hundred dollars.
[0] https://www.microsoft.com/en-us/d/visual-studio-professional...
> Some folks think you need to use Visual Studio
How's the LSP support nowadays? I remember reading a lot of complaints about how badly done the LSP is compared to Visual Studio.
I still think Visual Studio is better, but you can easily work on small to mid-size projects in VSCode. Could you use Vim? I probably wouldn't, but you can say the same for Java.
Pretty good.
I started using Visual Studio Code exclusively around 2020 for C# work and it's been great. Lightweight and fast. I did try Rider and 100% it is better if you are open to paying for a license and if you need more powerful refactoring, but I find VSC to be perfectly usable and I prefer its "lighter" feel.
The article says it's 64k -> 17k.
Updated, good catch haha
That's not unexpected they went from Bevy which is more of a game framework, than a proper GUI engine.
I mean, you could also write how we went from C# code 1mil code of our mostly custom engine to 10k in Unreal C++.
I love Rust and wanted to use it for gamedev but I just had to admit to myself that it wasn't a good fit. Rust is a very good choice for user space systems level programming (ie. compilers, proxies, databases etc.). For gamedev, all of the explicitness that Rust requires around ownership/borrowing and types tends to just get in the way and not provide a lot of value. Games should be built to be fast, but the programmer should be able to focus almost completely on game logic rather than low-level details.
Bevy solves the ownership/borrowing issues entirely with its ECS design though.
I had two groups students (complete Rust beginners) ship a basic FPS and Tower Defense as learning project using Bevy and their feedback was that they didn't fight the language at all.
The problem that remains is that as soon a you go from a toy game to an actual one, you'd realize that Bevy still has tons of work to do before it can be considered productive.
Unity is still probably the best game engine for smaller games with Unreal being better for AAA.
The problem is you make a deal with the devil. You end up shipping a binary full of phone home spyware, if you don't use Unity in the exact way the general license intends they can and will try to force you into the more expensive industrial license.
However, the ease of actually shipping a game can't be matched.
Godot has a bunch of issues all over the place, a community more intent on self praise than actually building games. It's free and cool though.
I don't really enjoy Godot like I enjoy Unity , but I've been using Unity for over a decade. I might just need to get over it.
> I failed to fairly evaluate my options at the start of the project.
The more projects I do, the more time I find that I dedicate to just planning things up front. Sometimes it's fun to just open a game engine and start playing with it (I too have an unfair bias in this area, but towards Godot [https://godotengine.org/]), but if I ever want to build something to release, I start with a spreadsheet.
Do you think you needed to have those times to play around in the engine? Can a beginner possibly even know what to plan for if they don't fully understand the game engine itself? I am older so I know the benefits of planning, but I sometimes find that I need to persuade myself to plan a little less, just to get myself more in tune with the idioms and behaviors of the tool I am working in.
I think even if you don't have much experience with tools, you can still plan effectively, especially now with LLMs that can give you an idea of what you're in for.
But if you're doing something for fun, then you definitely don't need much planning, if any - the project will probably be abandoned halfway through anyways :)
GC isn't a big problem for many types of apps/games, and most games don't care about memory safety. Rust's advantages aren't so important in this domain, while its complexity remains. No surprise he prefers C# for this.
Disagree on both points. Anyone who has shipped a game in unity has dealt with object pooling, flipping to structs instead of classes, string interpolation, and replacing idiomatic APIs with out parameters of reused collections.
Similarly, anyone who has shipped a game in unreal will know that memory issues are absolutely rampant during development.
But, the cure rust presents to solve these for games is worse than the disease it seems. I don’t have a magic bullet either..
This is a mostly Unity-specific issue. Unity unfortunately has a potato for a GC. This is not even an exaggeration - it uses Boehm GC. Unity does not support Mono's better GC (SGen). .NET has an even better GC (and JIT) that Unity can't take advantage of because they are built on Mono still.
Other game engines exist which use C# with .NET or at least Mono's better GC. When using these engines a few allocations won't turn your game into a stuttery mess.
Just wanted to make it clear that C# is not the issue - just the engine most people use, including the topic of this thread, is the main issue.
I'm shocked that Beat Saber is written in C# & Unity. That's probably the most timing sensitive game in the world, and they've somehow pulled it off.
GC isn't something to be afraid of, it's a tool like any other tool. It can be used well or poorly. The defaults are just that - defaults. If I was going to write a rhythm game in Unity, I would use some of the options to control when GC happens [0], and play around with the idea of running a GC before and after a song but having it disabled during the actual interactive part (as an example).
[0] https://docs.unity3d.com/6000.0/Documentation/Manual/perform...
Devil May Cry for the Playstation 5 is written in C#, but not Unity.
Capcom has their own fork of .NET.
"RE:2023 C# 8.0 / .NET Support for Game Code, and the Future"
https://www.youtube.com/watch?v=tDUY90yIC7U
There's another highly sensitive to timing game - Osu!, which is written in C# too (on top of custom engine).
Not just GC -- performance in general is a total non-issue for a 2d tile-based game. You just don't need the low-level control that Rust or C++ gives you.
I wouldn't say it's a non-issue. I've played 2D tile-based, pixel art games where the framerate dropped noticeably with too many sprites on screen, even though it felt like a 3DS should have been able to run it, and my computer isn't super low-end, either. You have more leeway, but it's possible to badly make optimized 2D games to the point where performance becomes an issue again.
These are gross, macro-level design problems; not the kind of thing where C# vs C++/Rust makes any difference.
Except that C# is memory safe.
great summary
[dead]
It sounds to me that it may have been better to limit performance-critical parts to Rust and write the actual game in something like Lua (embedded in Rust)?
That's the approach I've been taking with a side project game for the very reason alone that the other contributors are not system programmers. I.e. a similar situation as the author had with his brother.
Rust was simply not an option -- or I would be the only one writing code. :]
And yeah, as others mentioned: Fyrox over Bevy if you have few (or one) Rust dev(s). It just seems Fyrox is not on the radar of many Rust people even. Maybe because Bevy just gets a lot more (press) coverage/enthusiasm/has more contributors?
Man, they seems kinda cracked. He migrated each of the subsystem experiments in about one day each having never used Unity before?
I've ported code between engines, and that makes my productivity feel very... leisurely.
Also, it's endearing that he builds things with his brother including that TF2 map that he linked from years ago.
Using Rust in a project felt less like implementing ideas and more like committing to learning the language in depth. Most projects involve messy iteration and frequent failure. Doing that in Rust is painful. Starting a greenfield project in it feels more like a struggle with the language than progress on the actual idea unless you're a Rust enthusiast.
I love Rust, but I would not try to make a full fledged game with it without patience. This post is not so much a moving away from Rust as much as Bevy is not enjoyable in its current form.
Bevy is in its early stages. I'm sure more Rust Game Engines will come up and make it easier. That said, Godot was great experience for me but doesn't run on mobile well for what I was making. I enjoy using Flutter Flame now (honestly different game engines for different genres or preference), but as Godot continues to get better, I personally would use Godot. Try Unity or Unreal as well if I just want to focus on making a game and less on engine quirks and bugs.
That's an excellent article - it's great when people share not only their victories, but mistakes, and what they learned from them.
That said regarding both rapid gameplay mechanic iteration and modding - would that not generally be solved via a scripting language on top of the core engine? Or is Rust + Bevy not supposed to be engine-level development, and actually supposed to solve the gameplay development use-case too? This is very much not my area of expertise, I'm just genuinely curious.
It does solve the gameplay development use case too. Bevy encourages using lots of small 'systems' to build out logic. These are functions that can spawn entities or query for entities in the game world and modify them and there's also a way to schedule when these systems should run.
I don't think Bevy has a built-in way to integrate with other languages like Godot does, it's probably too early in the project's life for that to be on the roadmap.
Bevy warns about stability:
https://bevyengine.org/learn/quick-start/introduction/
>I wanted UI to be easy to build, fast to iterate, and moddable. This was an area where we learned a lot in Rust and again had a good mental model for comparison.
I feel like this harkens to the general principle of being a software developer and not an "<insert-language-here>" developer.
Choose tools that expose you to more patterns and help to further develop your taste. Don't fixate on a particular syntax.
Throwing someone who is new to coding straight into rust AND game dev is pretty ambitious
But yeah my first thought here was Lua too like others said
To which extent was the implementation in C# benefitting off both the clarified requirements (so the Rust experience could be seen more as prototyping mixed with production)? Was it actually in major parts just a major refactor in a different language (admittedly with much more proven elements)?
I bet a C# to C# rewrite would also have been quick and led to a cleaner codebase. Especially from C#-as-written-by-beginners...
Aren't there some scripting languages designed around seamless interop with Rust that could be used here for scripting/prototyping? Not that it would fix all the issues in that blog post, but maybe some of them.
I completely understand, and it's not the first time I've heard of people switching from Bevy to Unity. btw Bevy 0.16 just came out in case you missed the discussion:
https://news.ycombinator.com/item?id=43787012
In my personal opinion, a paradox of truly open-source projects (meaning community projects, not pseudo-open-source from commercial companies) is that development seems to show a tendency of diversity. While this leads to more and more cool things appearing, there always needs to be a balance with sustainable development.
Commercial projects, at least, always have a clear goal: to sell. For this goal, they can hold off on doing really cool things. Or they think about differentiated competition. Perhaps if the purpose is commercial, an editor would be the primary goal (let me know if this is alreay on the roadmap).
---
I don't think the language itself is the problem. The situation where you have to use mature solutions for efficiency is more common in games and apps.
For example, I've seen many people who have had to give up Bevy, Dioxus, and Tauri.
But I believe for servers, audio, CLI tools, and even agent systems, Rust is absolutely my first choice.
I've recently been rewriting Glicol (https://glicol.org) after 2 years. I start from embedded devices, switching to crates like Chumsky, and I feel the ecosystem has improved a lot compared to before.
So I still have 100% confidence in Rust.
> and even agent systems
Is there a Rust equivalent of openai-agents-sdk?
> Bevy is young and changes quickly. Each update brought with it incredible features, but also a substantial amount of API thrash
> Bevy is still in the early stages of development. Important features are missing. Documentation is sparse. A new version of Bevy containing breaking changes to the API is released approximately once every 3 months.
I would choose Bevy if and only if I would like to be heavily involved in the development of Bevy itself.
And never for anything that requires a steady foundation.
Programming language does not matter. Choose the right tool for job and be pragmatic.
I like not getting paged at night, so I like APIs written in Rust.
For my going on 5 year side game project, this is why I can only write in vanilla tools (java, typescript) and with small libraries that are easy to replace. I would loose all motivation if I had to refactor my game and update the engine every time I come back to it. But also, I don't have the pressure of ever finishing the game...
Unity is predatorial. I work in a small studio which is part of a larger company (only 5 of us use Unity) and they have suddenly decided to hold our accounts hostage until we upgrade to Industry license because of the revenue our parent company makes even though that's completely separate cash flow versus what our studio actually works with. Industry license is $5000 PER SEAT PER YEAR. Absolute batshit crazy expensive for a single piece of software. We will never be able to afford that. So we are switching over to Unreal. It's really sad what Unity has become.
That's BS, does your team of 5 work for free?
Imagine you all cost 100k/year to employ by the larger company (since you all apparently don't make money).
Then imagine you are all now cost 105k a year to the parent company.
It's no difference.
Definitely not cheap, but I assume developer cost and migrating to unreal is probably not cheap either. I'm not too familiar with either engine, are they similar enough that it's "cheaper" to migrate? I imagine that sets back release dates as well.
Such a crappy thing for a company to do.
The best language for game logic is lua, switching to C# probably isnt going to help any.... IMHO.
What makes Lua the best for game logic? You don't even have types to help you out with Lua.
Stuff that hooked me:
you integrate it tightly with the engine so it only does game logic, making files small and very quick and easy to read.
platform independent, no compiling, so can modify it in place on a release build of the game.
the "everything is a table" approach is very easy to concept mentally means even very inexperienced coders can get up and running quickly.
great exception handling, which makes most release time bugs very easy to diagnose and fix.
All of which means you spend far more time building game logic and much, much less time fighting the programming language.
Heres my example of a 744 flight data recorder (the rest of the 744 logic is in the containing folders)
https://github.com/mSparks43/747-400/blob/master/plugins/xtl...
All asynchronously multithreaded, 100% OS independent.
Yeah, I actually recently tried making a game in Lua using LOVE2D, and then making the same one in C with Raylib, and I didn't feel like Lua itself gave me all that much. I don't think Lua is best for game logic so much as it's the easiest language to embed in a game written in C or C++. That said, maybe some of its unique features, like its coroutines, or stuff relating to metatables, could be useful in defining game logic. I was writing very boring, procedural, occasionally somewhat object-oriented code either way.
Lua would definitely help with iteration times vs. C/C++/Rust but C# compiles very quickly. Especially in Unity where you have an editor that keeps assets cached and can hot reload code changes (with a plugin).
Coroutines can definitely be very useful for games and they're also available in C#.
This can be summarized in a simple way: UI is totally, another world.
There is not chance for any language, not matter how good is it, to match the most horrendous (web!) but full-featured ui toolkit.
I bet, 1000%, that is easier to do a OS, a database engine, etc that try to match QT, Delphi, Unity, etc.
---
I made a decision that has become the most productive and problem-less approach of make UIs in my 30 years doing this:
1- Use the de-facto UI toolkit as-is (html, swift ui, jetpack compose). Ignore any tool that promise cross-platform UI (so that is html, but I mean: I don't try to do html in swift, ok?).
2- Use the same idea of html: Send plain data with the full fidelity of what you wanna render: Label(text=.., size=..).
3- Render it directly from the native UI toolkit.
Yes, this is more or less htmx/tailwindcss (I get the inspiration from them).
This mean my logic is full Rust, I pass serializable structs to the UI front-end and render directly from it. Critically, the UI toolkit is nearly devoid of any logic more complex that what you see in a mustache template language.. Not do the localization, formatting, etc. Only UI composition.
I don't care that I need to code in different ways, different apis, different flows, and visually divergent UIs.
IS GREAT.
After the pain of boilerplate, doing the next screen/component/wwhatever is so ridiculous simple that is like cheating.
So, the problem is not Rust. Is not F#, or Lisp. Is that UI is a kind of beast that is imperious to be improved by language alone.
I disagree. The issue, which the article mentions, is iteration time. They were having issues iterating on gameplay, not UI. My own experiences with game dev and Rust (which are separate experiences, I should add) resonate with what the article is expressing. Iterating systems is common in gamedev and Rust is slow to iterate because its precision ossifies systems. This is GREAT for safety, it's crap for momentum and fluidity
This is why game engines embedded scripting languages. Who gives a crap if the engine takes 12 hours to compile if 80% of the team are writing lua in a hot reload loop.
Yeah but no-one is recompiling the engine. This is just about gameplay code
Which is why I said
> this is why game engines embedded scripting languages
Would you happen to have (sample) or open-source Rust code out there demonstrating this approach? I'm very curious to learn more.
For example; if you have a progressbar that needs to be updated continuously, you do what? Upon every `tick` of your Rust engine you send a new struct with `ProgressBar(percentage=x)`? Or do the structs have unique identifiers so that the UI code can just update that one element and its properties instead of re-rendering the entire screen?
> I bet, 1000%, that is easier to do a OS, a database engine, etc that try to match QT, Delphi, Unity, etc.
I 100% agree. A modern mature UI toolkit is at least equivalent to a modern game engine in difficulty. GitHub is strewn with the corpses of abandoned FOSS UI toolkits that got 80% of the way there only to discover that the other 20% of the problem is actually 20000% of the work.
The only way you have a chance developing a UI toolkit is to start in full self awareness of just how hard this is going to be. Saying "I am going to develop a modern UI toolkit" is like saying "I am going to develop a complete operating system."
Even worse: a lot of the work that goes into a good UI toolkit is the kind of work programmers hate: endless fixing of nit-picky edge case bugs, implementation of standards, and catering to user needs that do not overlap with one's own preferences.
Are scripting languages not a thing in gamedev anymore?
I feel most of the things mentioned (rapid prototyping, ease of use for new programmers, modability) would be more easily accomplished by embedding a Lua interpreter in the rust project.
Glad C# is working out for them though, but if anyone else finds themselves in this situaton in Rust, or C, C++, Zig, whatever - embedding lua might be something else to consider, that requires less re-writing.
Excellent write-up.
On the topic of rapid prototyping: most successful game engines I'm aware of hit this issue eventually. They eventually solve it by dividing into infrastructure (implemented in your low-level lanuage) and game-logic / application logic / scripting (implemented in something far more flexible and, usually, interpreted; I've seen Lua used for this, Python, JavaScript, and I think Unity's C# also fits this category?).
For any engine that would have used C++ instead, I can't think of a good reason to not use Rust, but most games with an engine aren't written in 100% C++.
https://archive.is/6gTdc
Professional high-performance C++ game engine dev here. At a glance, their game looks great. But, to be frank, it also looks like it could have been made in the DOS era with sufficient effort.
Going hard with Rust ECS was not the appropriate choice here. Even a 1000x speed hit would be preferable if it gained speed of development. C# and Unity is a much smarter path for this particular game.
But, that’s not a knock on Rust. It’s just “Right tool for the job.”
API churn is so expensive, largely unnecessary, and rarely value-add. It's an anti-pattern that makes things otherwise promising things unusable.
I wonder why Godot wasn't picked. Did I miss the points in the article?
Here is what I (not the article author) ran into when trying to use Godot to make a 2D game: https://forum.godotengine.org/t/shadows-go-over-the-sprite-w...
I rarely touch game dev but that made me think Godot wasn't very suitable
> We wrote extensive pros and cons, emphasizing how each option fared by the criteria above: Collaboration, Abstraction, Migration, Learning, and Modding.
Would you really expect Godot to win out over Unity given those priorities? Godot is pretty awesome these days, but it's still going to be behind for those priorities vs. Unity or Unreal.
I also would have liked to have seen the pro/con lists for each of the potential choices.
I've been toying with the idea of making a 2d game that I've had on my mind for awhile, but have no game development experience, and am having trouble deciding where to start (obviously wanting to avoid the author's predicament of choosing something and having to switch down the line).
The key is, you gotta be pretty cold in the analysis. It's probably more important to avoid what you hate than to lean in too hard to what you love, unless your terminal goal is to work in $FAVE_LANG. Too many people claim they want to make a game, but their actions show that their terminal goal was actually to work in their favorite language. I don't care if your goal is just to work in your favorite language, I just think you need to be brutally honest with yourself on that front.
Probably the best thing in your case is, look at the top three engines you could consider, spend maybe four hours gather what look like pros and cons, then just pick one and go. Don't overestimate your attachment to your first choice. You'll learn more just in finishing a tutorial for any of them then you can possibly learn with analysis in advance.
Thanks, I appreciate the comment! I'm certain that my goal is not to work in a specific language, but to bring a long-time idea to life, and ideally minimize the amount of avoidable headaches along the way.
You're probably right that it'd be best to just jump in and get going with a few of them rather than analyze the choice to death (as I am prone to do when starting anything).
This is goes for a lot of things in tech unfortunately. For example, being stuck in a SRE/devops amusement park can be incredibly frustrating and surprisingly resource intense.
Sometimes it feels like we could use some kind of a temperance movement, because if one can just manage to walk the line one can often reap great rewards. But the incentives seem to be pointing in the opposite direction.
I'm beginning to develop a heuristic around the concept of "amount of the library you use". It's intrinsically fuzzy and still something I'm working on, but in general, it's bad to use only a tiny fraction of a library or framework, and really bad to have a code base in which a large number of things are pulled in that you only use small fractions of.
There are some exceptions, e.g., pulling in your languages best-of-breed image library to load some JPGs even though it supports literally a dozen other formats is less disastrous to a code base than pulling in an industrial-strength web framework just to provide two API calls with some basic auth of some sort. But there's something to the concept in general, I think.
I wondered the same - the separate C# build might be a bit of a hassle still though.
But they also could have combined Rust parts and C# parts if they needed to keep some of what they had.
One of the complaints in the article was using a framework early in it's dev cycle. I imagine they were just picking what is safe at that point and didn't want to get burned again.
wow every rust topics have uncountable number of comments, it's indeed a successful language
Related: just tried to switch to Rust when starting a new project. The main motivation was the combination of fearless concurrency and exhaustive error handling - things that were very painful in the more mature endeavor.
Gave up after 3 days for 3 reasons:
1. Refactoring and IDE tooling in general are still lightyears away from JetBrains tooling and a few astronomical units away from Visual Studio. Extract function barely works.
2. Crates with non-Rust dependencies are nearly impossible to debug as debuggers don't evaluate expressions. So, if you have a Rust wrapper for Ogg reader, you can't look at ogg_file.duration() in the debugger because that requires function evaluation.
3. In contrast to .NET and NuGet ecosystem, non-Rust dependencies typically don't ship with precompiled binaries, meaning you basically have to have fun getting the right C++ compilers, CMake, sometimes even external SDKs and manually setting up your environment variables to get them to build.
With these roadblocks I would never have gotten the "mature" project to the point, where dealing with hard to debug concurrency issues and funky unforeseen errors became necessary.
Refactoring and IDE tooling in general are still lightyears away from JetBrains tooling
How long ago was this and did you try JetBrains RustRover? While not quite as mature as some other JetBrains tools, I've found the latest version really quite good.
About 15 hours ago. I was switching between RustRover and VS Code + Rust Analyzer. Not quite mature is an understatement. All said above applies to RustRover.
Curious what kind of project that was. Were you making a GUI by any chance?
No, the new project that I tried Rust for is a voice API (VAD, Whisper, etc). Got disappointed because, for example, the codec is just a wrapper around libopus. So it doesn't provide safety guarantees, and finding a crate that would build without issues was a challenge.
> 3. In contrast to .NET and NuGet ecosystem, non-Rust dependencies typically don't ship with precompiled binaries, meaning you basically have to have fun getting the right C++ compilers, CMake, sometimes even external SDKs and manually setting up your environment variables to get them to build.
Depending on your scenario, you may want either one or another. Shipping pre-compiled binaries carries its own risks and you are at the mercy of the library author making sure to include the one for your platform. I found wiring up MSBuild to be more painful than the way it is done in Rust with cc crate, often I would prefer for the package to also build its other-language components for my specific platform, with extra optimization flags I passed in.
But yes, in .NET it creates sort of an impedance mismatch since all the managed code assemblies you get from your dependencies are portable and debuggable, and if you want to publish an application for a specific new target, with those it just works, be it FreeBSD or WASM. At the same time, when it works - it's nicer than having to build everything from scratch.
The big advantage of precompiled is that hundreds of people who downloaded the package don't have to figure out building steps over and over again.
Risks are real though.
Why not the awesome Gamemaker engine?
Congrats on the rewrite!
I think the worst issue was the lack of ready-made solution. Those 67k lines in Rust contains a good chunk of a game engine.
The second worst issue was that you targeted an unstable framework - I would have focused on a single version and shipped the entire game with it, no matter how good the goodies in the new version.
I know it's likely the last thing you want to do, but you might be in a great position to improve Bevy. I understand open sourcing it comes with IP challenges, but it would be good to find a champion with read access within Bevy to parse your code and come up with OSS packages (cleaned up with any specific game logic) based on the countless problems you must have solved in those extra 50k lines.
Using poor quality AI suggestions as a reason not to use Rust is a super weird argument. Something is very wrong with such idea. What's going to be next, avoiding everything where AI performs poorly?
Scripting being flexible is a proper idea, but that's not an argument against Rust either. Rather it's an argument for more separation between scripting machinery and the core engine.
For example Godot allows using Rust for game logic if you don't want to use GDScript, and it's not really messing up the design of their core engine. It's just more work to allow such flexibility of course.
The rest of the arguments are more in the familiarity / learning curve group, so nothing new in that sense (Rust is not the easiest language).
Yes, a lot of people are reasonably going to decide to work in environments that are more legible to LLMs. Why would that surprise you?
The rest of your comment boils down to "skills issue". I mean, OK. But you can say that about any programming environment, including writing in raw assembly.
First argument sounds like a major fallacy to me. It doesn't surprise me, but it find it extremely wrong.
Why?
Because it's a discouragement of learning based on mediocrity of AI. I find such idea perpetuating the mediocrity (not just of AI itself but of whatever it's used for).
It's like imagine saying, I don't want to learn how write a good story because AI always suggests me writing a bad one anyway. May be that delivers the idea better.
It's not at all clear to me what this has to do with the practical delivery of software. In languages that LLMs handle well, with a careful user (ie, not a vibe coder; someone reading every line of output and subjecting most of it to multiple cycles of prompting) the code you end up with is basically indistinguishable from the replacement-level code of an expert in the language. It won't hit that human expert's peaks, but it won't generally sink below their median. That's a huge accelerator for actually delivering projects, because, for most projects, most of the code need only be replacement-grade.
Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing? Like, the same reason I'd use only hand tools when doing joinery in Japanese-style woodworking? There's a place for that! But most woodworkers... use table saws and routers.
> Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing?
The strongest reason I can think of to discard this kind of automation, and do so proudly, is that it's effectively plagiarizing from all of the experts whose code was used in the training data set without their permission.
No plausible advance in nanotechnology could produce a violin small enough to capture how badly I feel about out professional being "plagiarized" after decades of rationalizing about the importance of Star Wars to the culture justifying movie piracy.
Artists can come at me with this concern all they want, and I feel bad for them. No software developer can.
I disagree with you about the "plagiaristic" aspect of LLM code generation. But I also don't think our field has a moral leg to stand on here, even if I didn't disagree with you.
I'm not making an argument from grievance about my own code being plagiarized. I actually don't care if my own code is used without even the attribution required by the permissive licenses it's released under; I just want it to be used. I do also write proprietary code, but that's not in the training datasets, as far as I know. But the training datasets do include code under a variety of open-source licenses, both permissive and copyleft, and some of those developers do care how their code is used. We should respect that.
As for our tendency to disrespect the copyrights of art, clearly we've always been in the wrong about this, and we should respect the rights of artists. The fact that we've been in the wrong about this doesn't mean we should redouble the offense by also plagiarizing from other programmers.
And there is evidence that LLMs do plagiarize when generating code. I'll just list the most relevant citations from Baldur Bjarnason's book _The Intelligence Illusion_ (https://illusion.baldurbjarnason.com/), without quoting from that copyrighted work.
https://arxiv.org/abs/2202.07646
https://dl.acm.org/doi/10.1145/3447548.3467198
https://papers.nips.cc/paper/2020/hash/1e14bfe2714193e7af5ab...
It's not about delivery of software, it's about avoidance of learning based on mediocrity of AI. I.e. original post literally brings LLMs being poor at suggestions for Rust as a reason to avoid it.
That implies that proponents of such approach don't want to pursue learning which requires them to do something that exceeds the mediocrity level set by the AI they rely on.
For me it's obvious that it has a major negative impact on many things.
Your premise here being that any software not written in Rust must be mediocre? Wouldn't it be more productive to just figure out how to evolve LLM tooling to work well with Rust? Most people do not write Rust, so this is not a very compelling argument.
Rust is just an example in this case, not essential to the point. If someone will evolve LLM to work with Rust better, it will still be mediocre at something else, and using this as an excuse to avoid it is problematic in itself, that's what I'm saying.
Basically, learn Rust based on whether it's helping solve your issues better, not on whether some LLM is useless or not useless in this case.
It's a weird idea now, but it won't be weird soon. As devs and organizations further buy into AI-first coding, anything not well-served by AI will be treated as second-class. Another thread here brought up the risk that AI will limit innovation by not being well-trained on new things.
I agree that such trend exists, but it's extremely unhealthy and if anyone, developers should have more clue how bad it is.
Developers often pick languages and libraries based on the strength of their developer tools. Having great dev tools was a major reason Ruby on Rails took off, for example.
Why exclude AI dev tools from this decision making? If you don’t find such tools useful, then great, don’t use them. But not everybody feels the same way.
it could be a weird argument, but as a rust newcomer, i have to say it's really something that jumps to your face. LLMs are practically useless for anything non-basic, and rust contains a lot non-basic things.
So, what are the chances that the pendulum swings to lower-level programming via LLM-generated C/C++ if LLM-generated Rust doesn't emerge? Note that this question is a context switch from gaming to something larger. For gaming, it could easily be that the engine and culture around it (frequent regressions, etc) are the bigger problems than the language.
I haven't coded in C/C++ in years but friends who do and worked on non-trivial codebase in those languages had a really crappy experience with LLMs too.
A friend of mine only understood why i was so impressed by LLMs once he had to start coding a website for his new project.
My feeling is that low-level / system programming is currently at the edge of what LLMs can do. So i'd say that languages that manage to provide nice abstractions around those types of problems will thrive. The others will have a hard time gaining support among young developers.
Rust is fine as a low-level systems programming language. It's a huge improvement over C and (because memory safety) a decent improvement over C++. However, most applications don't need a low-level systems programming language, and trying to shoehorn one where it doesn't belong just leads to sadness without commensurate benefit. Rust does not
* automatically make your program fast;
* eliminate memory leaks;
* eliminate deadlocks; or
* enforce your logical invariants for you.
Sometimes people mention that independent of performance and safety, Rust's pattern-matching and its traits system allow them to express logic in a clean way at least partially checked at compile time. And that's true! But other languages also have powerful type systems and expressive syntax, and these other languages don't pay the complexity penalty inherent in combining safety and manual memory management because they use automatic memory management instead --- and for the better, since the vast majority of programs out there don't need manual memory management.
I mean, sure, you can Arc<Box<Whatever>> many of your problems away, but that point, your global reference counting just becomes a crude form of manual garbage collection. You'd be better off with a finely-tuned garbage collector instead --- one like Unity (via the CLR and Mono) has.
And you're not really giving anything up this way either. If you have some compute kernel that's a bottleneck, thanks to easy FFIs these high-level languages have, you can just write that one bit of code in a lower-level language without bringing systems consideration to your whole program.
I completely agree with you—Rust is not well-suited for application development. Application development requires rapid iteration, acceptable performance, and most importantly, a large developer community and a rich software ecosystem.
Languages like Go , JavaScript, C# or Java are much better choices for this purpose. Rust is still best suited for scenarios where traditional system languages excel, such as embedded systems or infrastructure software that needs to run for extended periods.
I signed up for the mailing list. The game looks interesting, I hope there is a Mac version in the future.
Expect many more commits like #12. ;)
Awww that's not fair.
C# actually has fairly good null-checking now. Older projects would have to migrate some code to take advantage of it, but new projects are pretty much using it by default.
I'm not sure what the situation is with Unity though - aren't they usually a few versions behind the latest?
For anyone considering Rust for gamedev check out the Fyrox engine
https://fyrox.rs/
here's a web demo
https://fyrox.rs/assets/demo/animation/index.html
Sorry but this engine had(s) problems rendenring a simple rectangle with alpha channel texture, not longer than 3 months ago (I'm assuming it was fixed).
Is it normal for Rust ecosystem to suggest software with this level of maturity?
https://github.com/FyroxEngine/Fyrox/discussions/725
[flagged]
Very useful writeup, thank you for taking the time to do it.
PS: I love the art style of the game.
You might like Don't Starve, then.
Somehow I can't read this with uBlock Origin on. Hm.
Strange, I had no such issue.
Me neither. Default uBlock Origin settings though, maybe the OP is more strict.
Migrating away from Bevy is the main thrust.
Rust is a niche language, there is no evidence it is going to do well in the game space.
Unity and C# sound like a much better business choice for this. Choosing a system/language....
> My love of Rust and Bevy meant that I would be willing to bear some pain
....that is not a good business case.
Maybe one day there will be a Rust game engine that can compete with Unity, probably already are, in niches.
Rust is not good for video game gameplay logic. The ownership model of Rust can not represent the vast majority of allocations.
I love Rust. It’s not for shipping video games. No Tiny Glade doesn’t count.
Edit: don’t know why you’re downvoting. I love Rust. I use it at my job and look for ways to use it more. I’ve also shipped a lot of games. And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.
Also you’re strictly forbidden from shipping Rust code on PlayStation. So if you have a breakout indie hit on Steam in Rust (which has never happened) you can’t ship it on PS5. And maybe not Switch although I’m less certain.
> No Tiny Glade doesn’t count.
> And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.
Well, sure, if you arbitrarily exclude the popular game written in Rust, then of course there are no popular games written in Rust :)
> And maybe not Switch although I’m less certain.
I have talked to Nintendo SDK engineers about this and been told Rust is fine. It's not an official part of their toolchain, but if you can make Rust work they don't care.
Yeah in my haste I mixed up my rants. The bane of typing at work inbetween things.
Tiny Glade is indeed a rust game. So there is one! I am not aware of a second. But it’s not really a Bevy game. It uses the ECS crate from Bevy.
Egg on my face. Regrets.
(the) Gnorp Apologue is written in Rust and did pretty well: https://store.steampowered.com/app/1473350/the_Gnorp_Apologu...
And for something like Gnorp, Rust is probably a decent choice.
> The ownership model of Rust can not represent the vast majority of allocations.
What allocations can you not do in Rust?
Gameplay code is a big bag of mutable data that lives for relatively unknown amounts of time. This is the antithesis of Rust.
The Unity GameObject/Component model is pretty good. It’s very simple. And clearly very successful. This architecture can not be represented in Rust. There are a dozen ECS crates but no one has replicated the worlds most popular gameplay system architecture. Because they can’t.
Which part of that architecture is impossible in Rust? Actually an honest question, I'm wondering if I'm missing something.
From what I remember from my Unity days (which granted, were a long time ago), GameObjects had their own lifecycle system separate from the C# runtime and had to be created and deleted using Destroy and Create calls in the Unity API. Similarly, components and references to them had to be created and retrieved using the GetComponent calls, which internally used handles, rather than being raw GC pointers. Runtime allocation of objects frequently caused GC issues, so you were practically required to pre-allocate them in an object pool anyway.
I don't see how any of those things would be impossible or even difficult to implement in Rust. In fact, this model is almost exactly what I used to see evangelized all the time for C++ engines (using safe handles and allocator pools) in GDC presentations back then.
In my view, as someone who has not really interacted or explored Rust gamedev much, the issue is more that Bevy has been attempting to present an overtly ambitious API, as opposed to focusing on a simpler, less idealistic one, and since it is the poster child for Rust game engines, people keep tripping over those problems.
> ... big bag of mutable data that lives for relatively unknown amounts of time. This is the antithesis of Rust.
I'm sorry, but I still don't understand. There are myriad heap collections and even fancy stuff like Rc<Box<T>> or RefCell<T>. What am I missing here?
Is it as simple as global void pointers in C? No, but it's way safer.
Somehow I doubt Unity uses global void pointers in C. Not that one would have to use global void pointers when using C.
You could probably write the core in Rust and use some sort of scripting for gameplay logic. Warframe's gameplay logic is written in Lua.
The headline is a bit sensational here and shall have been rather called "Migrating away from Bevy" .. That's not (really) comparing C# to Rust (and Luna but that one is missing), but rather comparing game engine where the language is secondary. Obviously Unity is the leader here (with Unreal) - despite all its flaws.
Isn't Veloren doing pretty good?
No. No one plays Veloren. It’s a toy project for programmers.
No offense to the project. It’s cool and I’m glad it exists. But if you were to plot the top 2000 games on Steam by time played there are, I believe, precisely zero written in Rust.
> No Tiny Glade doesn’t count.
Tiny Glade is also the buggiest Steam game I've ever encountered (bugs from disappearing cursor to not launching at all). Incredibly poor performance as well for a low poly game, even if it has fancy lighting...
> Also you’re strictly forbidden from shipping Rust code on PlayStation. So if you have a breakout indie hit on Steam in Rust (which has never happened) you can’t ship it on PS5. And maybe not Switch although I’m less certain.
What evidence do you have for this statement? It kind of doesn't make any sense on its face. Binaries are binaries, no matter what tools are used to compile them. Sure, you might need to use whatever platform-specific SDK stuff to sign the binary or whatever, but why would Rust in particular be singled out as being forbidden?
Despite not being yet released publicly, Jai can compile code for PlayStation, Xbox, and Switch platforms (with platform-specific modules not included in the beta release, available upon request provided proof of platform SDK access).
Sony mandates you use their toolchain. You don’t get to ship whatever you want on their console. They have a very thorough TRC check you must pass before you get to ship.
Rust being forbidden on a platform, and Rust being unsupported out-of-the-box with the SDK toolchain, seem to me like they're rather different things?
...why does Tiny Glade not count?
> Rust can not represent the vast majority of allocations
Do you mean cyclic types?
Rust being low-level, nobody prevents one from implementing garbage-collected types, and I've been looking into this myself: https://github.com/Manishearth/rust-gc
It's "Simple tracing (mark and sweep) garbage collector for Rust", which allows cyclic allocations with simple `Gc<Foo>` syntax. Can't vouch for that implementation, but something like this would be good for many cases.
The "Learning" point drives home a concern my brother-in-law and I were talking about recently. As LLMs become more entrenched as a tool, they may inevitably become the crutch that actually holds back innovation. Individuals and teams may be hesitant to explore or adopt bleeding edge technologies specifically because LLMs don't know about them or don't know enough about them yet.
It was all in science fiction in 1957: "Profession" by Isaac Asimov http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...
Excellent read
I see this quite a bit with Rust. I honestly cringe when people get up in arms about someone taking their project out of the rust community.
The same can be said of books as of programming languages:
"Not every ___ deserves to be read/used"
If the documentation or learning curve is so high and/or convoluted that it's disparaging to newcomers then perhaps it's just not a language that's fit for widespread adoption. That's actually fine.
"Thanks for your work on the language, but this one just isn't for me" "Thanks for writing that awfully long book, but this one just isn't for me"
There's no harm in saying either of those statements. You shouldn't be disparaged for saying that rust just didn't work out for your case. More power to the author.
Rust attracts a religious fervour that you'll almost never see associated with any other language. That's why posts like this make the front page and receive over 200 comments.
If you switched from Java to C# or vice versa, nobody would care.
A religious fervor against it: no one is in the comments telling the OP he’s wrong.
How is that different from choosing not to adopt a technology because it’s not widely used therefore not widely documented? It’s the timeless mantra of “use boring tech” that seems to resurface every once in a while. It’s all about the goal: do you want to build a viable product, quickly, or do you want to learn and contribute to a specific tech stack? That’s the trade off most of the time.
It's a lot worse. A high quality project can have great documentation and guides that make it easy to use for a human, but an LLM won't until there's a lot of code and documents out there using it.
And if it's not already popular, that won't happen.
No, this doesn't ring true: long before there were LLMs, people were selecting languages and stacks because of the quality and depth of their community.
But also: there is a lot of Rust code out there! And a cubic fuckload of high-quality written material about the language, its idioms, and its libraries, many of which are pretty famous. I don't think this issue is as simple as it's being out to be.
Isn't this article an example of that. There might be a lot of rust code but if the apis are changing frequently it's all outdated and leads to unusable outputs.
It's not Rust in particular, but Bevy the game engine which is much newer than Rust and still has many breaking changes between version.
It's a bit like Rust in 2014, you would never have had enough material for LLMs to train on.
I was actually meaning to post this as an Ask HN question, but never found the time to word it well. Basically, what happens to new frameworks and technologies in the age of widespread LLM-assisted coding? Will users be reluctants to adopt bleeding-edge tools because the LLMs can't assist as well? Will companies behind the big frameworks put more resources towards documenting them in a way that makes it easy for LLMs to learn from?
Actually, here in my corner of EU, only the prominent big tech backed well documented and battle tested tools are most marketable skills. So, React, 50 new jobs, but you worked with Svelte/Solidjs, what is that? Java/PHP/Python/Ruby/JS, adequate jobs. Go/Rust/Zig/Crystal/Nim, what are these? While Go has some popularity in recent years and I can spot Rust once in a blue moon. Anything involving requiring near metal work is always C/C++.
Availability of documentation and tooling, widespread adaptation and access to already-trained-at-someone-else's-dime possibility is deemed safe for hiring decision. Sometimes, the narrow tech is spotted in the wild, but it was mostly some senior/staff engineer wanted to experiment something which became part of production because management saw no issue, will sometimes open some doors for practitioners of those stack but the probability is akin to getting hit by lightning strike.
This is just reality outside of the early stage startup. The US tech industry and its social networks are very dominated by trendy startup ideas, but the reality is still the major tried-and-true platforms.
Maybe it is not the regulations what is holding EU back.
Another way to look at it: working bleeding edge will become a competitive advantage and a signal to how competent the team is. „Do they consume it” vs „do they own it”.
Or a signal that, someone did not think about the bus factor and future of the project when most of the teams jumped ship.
Constantly chasing the latest tech trends has probably done more harm than good, because more often than not, it turns out that the latest hype technology actually does not deliver what the marketing had promised. Look at NoSQL and MongoDB especially as recent examples. Most people who blindly jumped on the MDB bandwagon would have probably been better off just using Postgres, and they later had to spend a lot of resources migrating away from Mongo.
To me constantly chasing the latest trends means lack of experience in a team and absence of focus on what is actually important, which is delivering the product.
This already happens. Is your new framework popular on GitHub and on Stack Overflow is a metric people use. LLMs are currently mostly capable of just adapting documentation, blog posts, and answers on SO. So they add a thin veneer on top of those resources.
I expect it will wind up like search engines where you either submit urls for indexing/inclusion or wait for a crawl to pick your information up.
Until the tech catches up it will have a stifling effect on progress toward and adoption of new things (which imo is pretty common of new/immature tech, eg how culture has more generally kind of stagnated since the early 2000s)
Hopefully, tools can adapt to integrate documentation better. I've already run into this with GitHub Copilot, trying to use Svelte 5 with it is a battle despite it being released most of a year ago.
There’s another future where reasoning models get better with larger context windows, and you can throw a new programming language or framework at it and it will do a pretty good job.
We already have quite a lot of that effect with tooling. A language can't really get much traction until its got a build, packaging and all the IDE support we expect or however productive the language is it looses out in practice because its hard to work with and doesn't just fit into our CI/CD systems.
Doesn't this mean that new tech will have to demonstrate material advantages, such that outweigh the LLM inertia, in order to be adopted? This sounds good to me; so much framework churn seems to be code fashion rather than function. Now if someone releases a new framework, they need to demonstrate real value first. People that are smart enough to read the docs and absorb the material of a new, better, framework will now have a competitive advantage; this all seems good.
I think it's a good point and I experienced the same thing when playing with SDL3 the other day. So even established languages with new API's can be problematic.
However, I had a different takeaway when playing with Rust+AI. Having a language that has strict compile-time checks gave me more confidence in the code the AI was producing.
I did see Cursor get in an infinite loop where it couldn't solve a borrow checker problem and it eventually asked me for help. I prefer that to burying a bug.
I had the same issue a few months ago when I was trying to ask LLMs about Box2D 3.0. I kept getting answers that were either for Box2D 2.x, or some horrific mashup of 2.x and 3.0.
Now Box2D 3.1 has been released and there's zero chance any of the LLMs are going to emit any useful answers that integrate the newly introduced features and changes.
Almost every time I've run into similar problems with LLMs I've mostly managed to solve them by uploading the documentation to the version of the library I'm using to the LLM and instructing it do use that documentation when answering questions about the library.
I have that worry as well, but it may not be as bad as I feared. I am currently developing a Python serialization/deserialization library based on advanced multiple dispatch, so it is fairly different from how existing libraries work. Nonetheless, if I ask LLMs (using Cursor) to write new functionality or plugins within my framework, they are surprisingly adept at it, even with limited guidance. I expect it'll only get better in the next few years. Perhaps a set of AI directives and examples for new technologies would suffice.
In any case, there has always been a strong bias towards established technologies that have a lot of available help online. LLMs will remain better at using them, but as long as they are not completely useless on new technologies, they will also help enthusiasts and early adopters work with them and fill in the gaps.
I don’t think we will have a lack of people who explore and know beyond others how to things.
LLMs will make people productive. But it will at the same time elevate those with real skill and passion to create good software. In the meantime there will be some maker confusion, and some engineers who are mediocre might find them selfs in demand like top end engineers. But over the time companies and markets will realize and top dollar will go to those select engineers who know how to do things with and without LLMs.
Lots of people are afraid of LLMs and think it is the end of the software engineer. It is and it is not. It’s the end of the “CLI engineer” or the “Front end engineer” and all those specializations that were attempt to require less skill to pay less. But the systems engineers who know how computers work, can take all week long describing what happens when you press enter on a keyboard at google.com will only be pressed into higher demand. This is because the single skill “engineer” wont really be a thing.
tldr; LLMs wont kill software engineering its a reset, it will cull those who chose such a path on a rubric only because it paid well.
What innovation? Languages with curly braces versus BEGIN/END? There is no innovation going on in computer languages. Rust is C with better ergonomics and rigorous memory management. This was made possible with better processors which made more elaborate compilers practical. It all gets compiled by LLVM down to the same object code. I think we are moving to an era of "read-only" languages. Languages that have horrible writing ergonomics yet are easy to understand when read. Humans won't write code. They will review code.
I've noticed this effect even with well established tech but just in degrees of popularity. I've recently been working on a Swift/SwiftUI project and the experience with LLM's compared to something like web dev stuff with React, etc is noticeably different/worse which I mostly attribute to there probably being at least 20 times less Swift specific content on the web in comparison.
There are a ton of Swift /SwiftUI tutorials out there for every new technology.
The problem is, they’re all blogspam rehashes of the same few WWDC talks. So they all have the same blindspots and limitations, usually very surface level.
Is that different from what is happening already? A lot of people won't adopt a language/technology unless it has a huge repository of answers on StackOverflow, mature tooling, and a decent hiring pool.
I'm not saying you're definitely wrong, but if you think that LLMs are going to bring qualitative change rather than just another thing to consider, then I'm interested in why.
New languages / packages / frameworks may need to collaborate with LLM providers to provide good training material. LLM-able training material may be the next important documentation thing.
Another potentially interesting avenue of research would be to explore allowing LLMs to use "self-play" to explore new things.
How can it compete with vast amount of trained codebases on Github? For LLMs, more data equals better results, so people will naturally be driven to better completion with already established frameworks and languages. It would be hard to produce organic data on all ways your technology can be (ab)used.
Allegedly one of the ways they've been training LLMs to get better at logic and reasoning, as well as factual accuracy, is to use LLMs themselves to generate synthetic training data. The idea here would be similar: generate synthetic training data. Generating this could be aided by LLMs, perhaps with a "playground" of some sort where LLMs could compile / run / render various things, to help select out things that work and things that don't work (as well as if you see error X, what the problem might be).
It’s the same now. I’ve spent arguably too much time trying to avoid Python and it has cost me a whole lot of time. You keep running into bugs and have to implement much more yourself if you go off the beaten path (see also [1]). I don’t regret it since I learned a lot, but it’s definitively not always the easiest path. To this day I wonder whether maybe I should have taken the simple route.
[1]: https://huijzer.xyz/posts/killer-domain/
A showerthought I had recently was that newly-written software may have a perverse incentive to be intentionally buggy such that there will be more public complaints/solutions for said software, which gives LLMs more training data to work with.
Unity was a better choice for game engine long before the existence of LLMs.
Its not even innovation. I had a new Laravel project that i was chopping around to play with some new library and I couldn't the the dumbest stuff to work. Of course I went back to read the docs and - ah Laravel 19 or whatever is using config/boostrap.php again and no matter what chatgpt, or myself had figured, could understand why it wasnt working.
unfortunately, a lot of libraries and services - well I don't think chatGPT understands the differences or it would be hard to. At least I have found that with writing scriplets for RT, PHP tooling, etc. The web world seems to move fast enough (and RT moves hella slow) that its confusing libraries and interfaces through the versions.
It'd really need a wider project context where it can go look at how those includes, or functions, or whatever work instead of relying on 'built in' knowledge.
"Assume you know nothing, go look at this tool, api endpoint or, whatever, read the code, and tell me how to use it"
[dead]
The article title is half-true. It wasn't so much they migrated away from Rust, but that they migrated away from Bevy, which is an alpha quality game engine.
I wouldn't have read the article if it'd been labeled that, so kudos to the blog writer, I guess.
What are some non-alpha quality Rust game engines? If the answer is "there are none", then I'd say the title is accurate.
More surprising part for me is not migrating from Rust/Bevy, but migrating _to_ C#/Unity.
Although points mentioned in the post are quite valid.
Where would you migrate to?
Not OP, but it seems that there is still a huge sentiment that Unity is not a "safe" platform to migrate to because of their relatively antagonistic approach to monetization guidelines compared to other open source game engines. I do think it makes sense to also consider Godot given his coworker is his brother who is stated to be new to game development, it has a scripting language even simpler than C#, more like python. Additionally, one might expect that someone more into Rust might prefer the C++ integration that Unreal offers. I think the timeline had an effect here too, as it's not been until recently that people have been taking Godot more seriously.
Maybe godot? The unity scandal recently is not great for developers.
People forget that Unity and Unreal are industry darlings for a reason.
The amount of platforms they support, the amount of features they support, many of which could be a PhD thesis in graphics programming, the tooling, the store,....
https://news.ycombinator.com/item?id=43825086
Personally, literally anything except Unity. The fact that they tried to retroactively change terms on developers means that it will be a long time before I feel comfortable trusting they won't try it again.
They mentioned ABI and the ability to create mods, which are Rust things.
Here's a thought experiment: Would Minecraft have been as popular if it had been written in Rust instead of Java?
I mean, we already have a sort-of answer, because the "Bedrock Edition" of Minecraft is written in C++, and it is indeed less popular on PC (on console, it's the only option, so _overall_ it might win out) and does lack any real modding scene
Indeed. Java is sufficiently dynamic/decompilable a game written in it can be heavily modded without adding specific support. C++ is much harder (depending on the game engine), though not impossible. If you do add modding support then everything is much better regardless of language, though (see Factorio, written in C++ and with a huge modding scene, because it was basically written with modding in mind. Lua is certainly helping with that, of course).
I actually disagree with that. Decompilation based mods can completely change anything and everything about the game. Scripting based mods can only change things within the boundaries allowed by the devs of the original game.
True, a limited modding API can be a problem. But in something like minecraft it's not a a free-for-all with mods either, it's just that the community writes their own modding API, but has to deal with breakage whenever the game updates.
The problem with Rust is that almost everything is still at an alpha stage. The vast majority of crates are at version 0.x and are eventually abandoned, replaced, or subject to constant breaking changes
While the language itself is great and stable, the ecosystem is not, and reverting to more conservative options is often the most reasonable choice, especially for long-term projects.
I really don’t think Rust is a good match for game dev. Both because of the borrow checker which requires a lot of handles instead of pointers and because compile times are just not great.
But outside of games the situation looks very different. “Almost everything” is just not at all accurate. There are tons of very stable and productive ecosystems in Rust.
> I really don’t think Rust is a good match for game dev. Both because of the borrow checker which requires a lot of handles instead of pointers and because compile times are just not great.
I completely disagree, having been doing game dev in Rust for well over a year at this point. I've been extremely productive in Bevy, because of the ECS. And Unity compile times are pretty much just as bad (it's true, if you actually measure how long that dreaded "Reloading Domain" screen takes).
Borrow checker is mostly a strawman for this discussion, the post is about using Bevy as an engine and Bevy uses an ECS than manages the lifetime of objects for you automatically. You will never have an issue with the borrow checker when using Bevy, not even once.
Everything in every ECS system is done with handles, but the parent comment is correct that many games use hairballs of pointers all over the place (and they are handles with ECS). There is never a borrow checker issue with handles since they divorce the concept of a pointer from the concept of ownership.
I wouldn't say 'almost everything', but there are some areas which require a huge amount of time and effort to build a mature solution for, UI and game engines being one, where there are still big gaps.
I have totally disagree here.
I don't even look at crate versions but the stuff works, very well. The resulting code is stable, robust and the crates save an inordinate amount of development time. It's like lego for high end, high performance code.
With Rust and the crates you can build actual, useful stuff very quickly. Hit a bug in a crate or have missing functionality? contribute.
Software is something that is almost always a work in progress and almost never perfect, and done. It's something you live with. Try any of this in C or C++.
They might be unsafe, but there is enough tooling to pick from 60 and 50 years of industrial use, approximately.
Well, on the flip side with C++ some of it hasn't been updated beyond very basic maintenance and you can't even understand the code if you are just familiar with more modern C++…
Well it is upon each one to be good with their craft.
If not, the language they pick doesn't really make a difference in the end.
It is like complaining playing a music instrument to be in band or orchestra requires too much effort, naturally.
Except here you are a trained pianist and the tour manager gave you a pipe organ or a harpsichord.
Speaking as someone with musical background, that is where we discover those that actually understand music, from those that kind of get by.
Great musicians make a symphony out of what they can get their hands on.
It's still true for game dev indeed, but for back-end or CLI tools it hasn't been true in like 7 years or so.
> The problem with Rust is that almost everything is still at an alpha stage.
Replace Rust with Bevy and language with framework, you might have a point. Bevy is still in alpha, it's lacking plenty of things, mainly UI and an easy way to have mods.
As for almost everything is at an alpha stage, yeah. Welcome to OSS + SemVer. Moving to 1.x makes a critical statement. It's ready for wider use, and now we take backwards compatibility seriously.
But hurray! Commercial interest won again, and now you have to change engines again, once the Unity Overlords decide to go full Shittification on your poorly paying ass.
Unfortunately, it is a failing of many projects in the Rust sphere that they spend quite a lot longer in 0.x than other projects. Rust language and library features themselves often spend years in nightly before making it to a release build.
You can also always go from 1.0 to 2.0 if you want to make breaking changes.
> Unfortunately, it is a failing of many projects in the Rust sphere that they spend quite a lot longer in 0.x than other projects
Yes. Because it makes a promise about backwards compatibility.
> Rust language and library features themselves often spend years in nightly before making it to a release build.
So did Java's. And I Rust probably has a fraction of its budget.
In defense of long nightly feature more than once, stabilizing some feature like negative impl and never types early would have caused huge backwards breaking changes.
> You can also always go from 1.0 to 2.0 if you want to make breaking changes.
Yeah, just like Python!
And split the community and double your maintenance burden. Or just pretend 2.0 is 1.1 and have the downstream enjoy the pain of migration.
> And split the community and double your maintenance burden.
If you choose to support 1.0 sure. But you don't have to. Overall I find that the Rust community is way too leery of going to 1.0. It doesn't have to be as big a burden as they make it out to be, that is something that comes down to how you handle it.
> If you choose to support 1.0 sure.
If you choose not to, then people wait for x.0 where x approaches infinity. I.e. they lose confidence in your crates/modules/libraries.
I mean, a big part of why I don't 1.x my OSS projects (not just Rust) is that I don't consider them finished yet.
Godot launched 0.1 in February 2014 and got to 1.0 in December 2014.
The distance in time between the launches of Unreal Engine 4 and Unreal Engine 5 was 8 years (April 2014 to April 2022). Unreal Engine 5 development started in May 2020 and had an early access release in May 2021.
Bevy launched 0.1 in 2020 and is at 0.16 now in 2025. 5 years later and no 1.0 in sight.
If you want people to use your OSS projects (maybe you don't), you have to accept that perfect is the enemy of good.
At this point, regulators and legislators are trying to force people to use the Rust ecosystem - if you want a non-GC language that is "memory safe," it's pretty much the de facto choice. It is long past time for the ecosystem to grow up.
> Godot launched 0.1 in February 2014 and got to 1.0 in December 2014.
Yeah because that's when it was open sourced, NOT DEVELOPED.
See https://godotengine.org/article/first-public-release/
> Godot has been an in-house engine for a long time and the priority of new features were always linked to what was needed for each game and the priorities of our clients.
I checked the history and it was known by another name Larvita.
> If you want people to use your OSS project
Seeing how currently I have about 0.1 parts of me working on it, no I don't want to give people false sense of security.
> At this point, regulators and legislators are trying to force people to use the Rust ecosystem
Not ecosystem. Language. Ecosystem is a plus.
Further more the issue Bevy has is more of there aren't any good mature GUI libraries for Rust. Because cross OS GUIs were, are and will be a shit show.
Granted it's a shit show that can be directed with enough money.
>”reverting to more conservative options”
From what I’ve heard about the Rust community, you may have made an unintentionally witty pun.
It’s incredible how many projects and articles have been written around ECS with very little results.
Quake 1-3 uses a single array of structs, with sometimes unused properties. Is your game more complex than quake 3?
The “ECS” upgrade to that is having an array for each component type but just letting there be gaps:
Hype as usual, too many people waste time on how to implement engines, instead of how to make a game fun to play.
The important part of ECS (IMO) is more that it's a pattern that others recognize and less that it's necessarily the best pattern to use.
Quake 1-3 were written for computers where memory was not much slower than the CPU as is the situation today.
But yeah, probably you don't need an ECS for 90% of the games.
Memory is sometimes faster today!
In absolute terms yes, but relative to the CPU speed memory is ridiculously slow.
Quake struggled with the number of objects even in its days. What you've got in the game was already close to the maximum it could handle. Explosions spawning giblets could make it slow down to a crawl, and hit limits of the client<>server protocol.
The hardware got faster, but users' expectations have increased too. Quake 1 updated the world state at 10 ticks per second.
> Quake struggled with the number of objects even in its days.
Because of memory bandwidth of Iterating the entities? No way. Every other part - rendering, culling, network updates, etc is far worse.
Let’s restate. In 1998 this got you 1024 entities at 60 FPS. The entire array could no fit in L2 cache of a modern desktop.
And I already advised a simple change to improve memory layout.
> Quake 1 updated the world state at 10 ticks per secondo
That’s not a constraint in Quake 3 - which has the same architecture. So it’s not relevant.
> users' expectations have increased too
Your game is more complex than quake 3? In what regard?
[dead]
This comment might not be liked by the usual commenters in these threads, but I think it is worth stressing:
First: I have experience with Bevy and other game engine frameworks; including Unreal. And I consider myself a seasoned Rust, C etc developer.
I could sympathize with what was stated by the author.
I think the issue here is (mainly) Bevy. It is just not even close to the standard yet (if ever). It is hard for any generic game engine to compete with Unity/GoDot. Nevermind, the de facto standard of Unreal.
But if you are a C# developer and using Unity already, and not C++ in Unreal, going to a bloated framework that is missing features that is Bevy makes little sense. [And here is also the minor issue, that if you are a C# developer, honestly you don't care about low level code, or not having a garbage collector.]
Now if you are a C++ developer and use Unreal, they only point to move to Rust (which I would argue for the usual reasons) is if Unreal supports Rust. Otherwise, there is nothing that even compares to Unreal. (That is not custom made game engine.)
The way I read about Bevy in online discussions obfuscates this. Someone who is new to game development could be confused into thinking Bevy is a fair competitor with the other engines you mentioned. And equate Bevy with Rust, or Bevy with Rust in game dev. I think stomping this out is critical to expectation management, and perhaps rust's future in game dev.
Not only Bevy. In this very thread someone is suggesting an even less mature Rust game engine: https://news.ycombinator.com/item?id=43825564
From my experience one has to take Rust discussions with a grain of salt because often shortcomings and disclosures are handwaved and/or ommited.
I've learned to do the same. I see this in the embedded world as well.
And within rust, I've learned to look beyond the most popular and hyped tools; they are often not the best ones.
As someone who has used Bevy in the past, that was my reading as well. It is an incredible tool, but some of the things mentioned in the article like the gnarly function signature and constant migrations are known issues that stop a lot of people from using it. That's not even to mention the strict ECS requirement if your game doesn't work well around it. Here is a good reddit thread I remember reading about some more difficulties other people had with Bevy:
https://old.reddit.com/r/rust_gamedev/comments/13wteyb/is_be...
I wonder how something simpler in the rust world like macroquad[0] would have worked out for them (superpowers from Unity's maturity aside).
[0] https://macroquad.rs/
>if you are a C# developer, honestly you don't care about low level code, or not having a garbage collector.
You can go low level in C#**, just like Rust can avoid the borrow checker. It's just not a good tradeoff for most code in most games.
** value types/unsafe/pointers/stackalloc etc.
Structs in C# or F# are not low-level per se, they simply are a choice and used frequently in gamedev. So is stackalloc because using it is just 'var things = (stackalloc Thing[5])' where the type of `things` is Span<Thing>. The keyword is a bit niche but it's very normal to see it in code that cares about avoiding allocations.
Note that going more hands-on with these is not the same as violating memory safety - C# even has ref and byreflike struct lifetime analysis specifically to ensure this not an issue (https://em-tg.github.io/csborrow/).
Right, it depends on how far one wants to go to avoid allocations. structs and spans are safe. But one can go even deeper and pin pointers and do Unsafe.AsPointer and get a de-facto (unsafe) union out of it....
>https://em-tg.github.io/csborrow/
Oooh... I didn't know scoped refs existed.
Imo the place for rust in game dev isnt in games at all, but base libraries and tools. Writing your proc generation library in rust that is an isolated package you can call in isolation, or similar is where its useful.
I agree. [Unless fully adopted by a serious game engine, of course.] Rust's "superpower" is substituting critical C++ code in-place, with the goal of ensuring correctness and soundness. And increasing the development velocity as a result.
Sounds like "Migrating away from Bevy towards Unity"; the Rust to C# transition is mostly a technical consequence.
Bevy: unstable, constantly regressing, with weird APIs here and there, in flux, so LLMs can't handle it well.
Unity: rock-solid, stable, well-known, featureful, LLMs know it well. You ought to choose it if you want to build the game, not hack on the engine, be its internal language C#, Haskell, or PHP. The language is downstream from the need to ship.
Anyone else get an empty page on mobile Firefox when they try to go the article? All that renders for me is a comment entry box. If I go back to news I can see the article list just fine.
Works for me on mobile Chrome
Same on mobile safari
Don’t see any content on that article for some reason (from iPhone)
I experienced the same, I had to disable my adblocker to view it, it seems the content is inside a tag `<article class="social-sharing">` but I am unsure whether this triggered my adblocker.
Adblocking seems to cause issues with the site. Disabling uBlock Origin worked for me as did readability mode in Firefox.
[dead]
[dead]
[flagged]
[flagged]
[flagged]
Honey, a new incantation to summon Cthulhu just dropped.