Swift has little to no use outside the Apple ecosystem, and even if you are currently using Apple, you have to consider your targets as well. Writing in Swift means your code will only be usable by other Apple users, which is a rather small fraction of technology users. Rust, on the other hand, is multiplatform and super low level; there are very few other languages out there that can match the potential range of applications of Rust code. Thus you will, in time, be introduced to many other technologies as well: AI/ML, low-level programming, web, integrations between languages, IoT - and those are only a few of the possibilities.

On the other hand, even if Swift has a much more mature ecosystem, it's still only good for creating UIs in all things Apple, which is pretty telling; Apple is not willing to put in the time and effort to open its language to other fields, because it sees no value in being the one providing the tooling for other purposes. They pretty much only want people to build apps for them, and Swift delivers just fine for that. So if your current purpose is making Apple UIs, you could learn Swift, but be warned that you'll either be doing that your whole life or will eventually be forced to change languages again.

Then again, most languages nowadays aren't that different from each other. I can code in a truckload of languages, not because I actually spent time making something coherent and complete with each one of them, but because I know some underlying concepts that all programming languages follow, like OOP or functional programming, and whatever those entail. If you learn those, you will not be afraid to switch languages on a whim, because you'll know you can get familiar with any of them within a day.
Just a nit: Swift is open source and there is a Swift ecosystem outside of Apple UI things. Here's a Swift HTTP server that you can totally run on Linux.
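To give a rough idea of what that looks like, here's a minimal sketch of a Swift HTTP server using the Vapor framework (assuming a Vapor 4-style API; this is my own illustration, not the server linked above):

```swift
import Vapor

// Illustration only: a tiny Vapor app that works the same on macOS and Linux.
var env = try Environment.detect()
let app = Application(env)
defer { app.shutdown() }

// Respond to GET /hello with plain text.
app.get("hello") { req -> String in
    "Hello from Swift on Linux!"
}

try app.run()
```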
Don't get me wrong, Swift is OSS and there are things you can do with it apart from front-end dev, but there are usually better options out there for those other things. For example if I want an HTTP server, I'd choose JS, Kotlin, Rust, etc.
I wouldn't. Swift is definitely better than any of those choices... and I say that as someone with decades of experience writing HTTP services.
I don't currently use Swift for any of my HTTP servers - but only because it's relatively immature for that task and I'm generally a late adopter (also, I work in an industry where bugs are painfully expensive). But I do use Swift client side, I definitely intend to switch over to Swift for my server-side work at some point in the near future, and it's what I recommend for someone starting out today.
By far my favourite feature in Swift is the memory manager. It uses Automatic Reference Counting (ARC), which is essentially old-school C or Assembly style memory management... except the compiler writes all of the memory management code for you. This often results in your code using significantly less RAM and getting better sustained performance than other languages, and it's also just plain easier to work with - as an experienced developer I can look at Swift and know what it's going to do at a low level with the memory. With modern garbage-collected languages, even though I have plenty of experience with them, I don't really know what they're doing under the hood and I'm often surprised by how much memory they use. In server-side code, where memory is expensive and traffic can burst to levels drastically higher than your typical baseload, using less memory - and a predictable amount of memory - is really, really nice.
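To make that concrete, here's a small sketch (my own illustration) of what deterministic ARC deallocation looks like in practice:

```swift
// Illustration only: a class whose deinit prints when ARC releases it.
class Connection {
    let id: Int
    init(id: Int) {
        self.id = id
        print("Connection \(id) opened")
    }
    deinit {
        // Runs the moment the last strong reference disappears;
        // the compiler inserted the retain/release calls for us.
        print("Connection \(id) closed")
    }
}

func handleRequest() {
    let conn = Connection(id: 1)
    print("handling request on connection \(conn.id)")
    // `conn` goes out of scope here, so deinit runs immediately -
    // no collector pause, no unpredictable delay.
}

handleRequest()
print("after handleRequest")   // always printed after "Connection 1 closed"
```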
At one point, years ago, Apple had a compiler flag to choose between Garbage Collection and Automatic Reference Counting. The Garbage Collector worked just as well as in any other language... but there was no situation, ever, where it worked better than ARC, so Apple killed their GC implementation. ARC is awesome and I don't understand why it's uniquely an Apple thing. Now that Swift is open source, it's available everywhere. Yay.
I find that, compared to every other language I've ever used, with Swift I tend to catch mistakes while writing the code instead of while testing it, because the language has been carefully designed so that as many common mistakes as possible are compile-time errors, or at least warnings that require an extra step (often just a single operator) to tell the compiler that, yes, you did intend to write it like that.
That feature is especially beneficial to an inexperienced developer like OP.
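For example (my own sketch, not something from the comment above), Swift's optionals are one of those checks - the compiler refuses to let a possibly-missing value be used until you've said how to handle the missing case:

```swift
// Illustration: Int(_:) returns an optional because parsing can fail.
let input = "42a"
let parsed = Int(input)            // type is Int?, and here it will be nil

// let doubled = parsed * 2        // compile-time error: Int? must be unwrapped

// The "extra step" is a single operator making the intent explicit:
let doubled = (parsed ?? 0) * 2    // nil-coalescing: fall back to 0 if parsing failed
print(doubled)                     // prints 0 for this input

if let value = Int("42") {         // optional binding: only runs when parsing succeeds
    print(value * 2)               // prints 84
}
```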
The other thing I love about Swift is how flexible it is. For example, compare these two blocks of code - they basically do the same thing and they are both Swift:
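(The original code blocks aren't shown here, but as a hypothetical illustration of the kind of contrast meant - the same job written two very different ways, both valid Swift:)

```swift
// Spelled-out, imperative style with explicit types.
var evens: [Int] = []
for number in [1, 2, 3, 4, 5, 6] {
    if number % 2 == 0 {
        evens.append(number)
    }
}
print(evens)                // [2, 4, 6]

// Terse style: type inference, trailing closure, shorthand argument name.
let alsoEvens = [1, 2, 3, 4, 5, 6].filter { $0 % 2 == 0 }
print(alsoEvens)            // [2, 4, 6]
```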
I'm not a performance expert by any means, but... the bit about there being "no situation, ever" in which a garbage collector that "worked just as well as in any other language" outperformed reference counting seems suspect to me. The things I've read about garbage collection generally indicate that a well-tuned garbage collector can be fast but nondeterministic, whereas reference counting is deterministic but generally not faster on average. If Apple never invested significant resources in its GC, is it possible it just never performed as well as D's, Java's, or Go's?
Check out this interview with Chris Lattner - one of the world's best compiler engineers and the creator of not only the Swift language but also LLVM, which backs many other languages (including Rust). It's a high-level and easily understood discussion (you don't need to be a language expert), but it also goes into quite a few technical details.
https://atp.fm/205-chris-lattner-interview-transcript#gc
Chris briefly talks about the problems in the Apple GC implementation, but quickly moves on to comparing ARC to the best GC implementations in other languages. The fact is they could have easily fixed the flaws in their GC implementation, but there just wasn't any reason to. ARC is clearly better.
Apple's GC and ARC were both introduced at about the same time, and when ARC was immature there were situations where GC worked better. But as ARC matured those advantages vanished.
Note: that interview is six years old now - from when Swift was a brand new language. They've done a ton of work on ARC since then and made it even better than it was, while GC was already mature and about as good as it's ever going to get at the time. The reality is garbage collection just doesn't work well for a lot of situations, which is why low-level languages (like Rust) don't have a "proper" garbage collector. ARC doesn't have those limitations. The worst possible scenario is that every now and then you need to give the compiler a hint to tell it to do something other than the default - but even that is rare.
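(As an aside, and as my own illustration rather than anything from the interview: the classic example of such a hint in Swift is marking one side of a reference cycle as weak, so ARC's default strong counting doesn't keep both objects alive forever.)

```swift
class Author {
    var favouriteBook: Book?
    deinit { print("Author released") }
}

class Book {
    // The hint: without `weak`, Author and Book would each hold a strong
    // reference to the other and neither would ever be freed.
    weak var owner: Author?
    deinit { print("Book released") }
}

func demo() {
    let author = Author()
    let book = Book()
    author.favouriteBook = book
    book.owner = author
}   // both deinits run here because the cycle was broken with `weak`

demo()
```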
Thanks for sharing the interview with Lattner; that was quite interesting.
I agree with everything he said. However, I think you're either misinterpreting or glossing over the actual performance question. Lattner said:
That's optimistic, but certainly not the same as saying there are no scenarios in which GC has performance wins.
Swift only treats Apple OSes as first-class citizens - even though you can technically use it on other platforms, it's a painful and limited experience.
Not to nitpick here (I agree with pretty much everything you said), but I wouldn't go around calling Rust super low level, as it is garbage collected. The borrow checker acts as an abstraction over the actual malloc and free calls that are happening under the hood.
I think you don't know what garbage collection is. Allocation and deallocation are simply how the heap works; the heap is one of the two main memory structures, the stack being the other. No matter what language you are using, you cannot escape the heap, unless you're not running on a modern multitasking OS. ARC is a type of garbage collection that decides when to free a reference after it is allocated (malloc) by counting how many places refer to it. When the count reaches 0, it frees the memory (free). With ARC you don't know at compile time when a reference will be freed.
In Rust, the compiler makes sure, using the borrow checker, that there is only one place in your entire program where a reference can be freed, so that it can insert the free call at that place AT COMPILE TIME. That way, when the program runs there is no need for a garbage collection scheme or algorithm to take care of freeing up unused resources on the heap. Maybe you thought the borrow checker takes care of your references at runtime, but that's not the case; the borrow checker is a static analysis phase in the Rust compiler (rustc). If you want runtime borrow checking, it exists - it's called RefCell - but it's not the recommended default. Plus, when you use RefCell, you usually also use reference counting (Rc<RefCell<T>>).
Perhaps garbage collection is the wrong term to use, as it doesn't happen at runtime (I wasn't sure what else to call what Rust does). But Rust does provide an abstraction over manual memory management, and if you are experienced with Rust you can probably visualize where the compiler would put the malloc and free calls. So it is kind of a mix: you do technically have control, it is just hidden from you.
Edit: It seems the term is just compile-time garbage collection, so maybe you could consider it falling under garbage collection as an umbrella term.
Isn't that basically the same as how C++ RAII works?
Essentially, yes, although there are a few key differences:
You raised an issue that the other bulletpoint has the solution for; I really don't see how these are "key differences".
That's what unique_ptr would be for. If you don't want to leak ownership, unique pointer is exactly what you are looking for.
Well yeah, because that's what shared_ptr is for. If you need to borrow references, then it's a shared lifetime. If the code doesn't participate in the lifetime, then of course you can pass a reference safely, even to whatever a unique_ptr points to.
The last bulletpoint, sure, that's a key difference, but it's partially incorrect. I deal with performance (as well as write Rust code professionally), and this set of optimizations isn't that impactful in an average large codebase. There's no magical optimization that can be done to improve how fast objects get destroyed, but what you can optimize is aliasing, which languages like C++ and C have issues with (which is why vendor-specific keywords like __restrict exist). That can have a profound impact in very small segments of your codebase, though the average programmer is rarely ever going to run into that case.

Pretty much, with some additional rules like "you cannot mutate a reference while it is borrowed immutably elsewhere" or "you cannot borrow a reference mutably multiple times".