

All the others are not very butthole-ish, though.
There are definitely more experienced programmers using it. I can’t find the post at the moment, but there was a recent-ish blog post citing a bunch of examples. [edit: found it: https://registerspill.thorstenball.com/p/they-all-use-it ]
Personally, I don’t use AI much, but I do occasionally experiment with it (for instance, I recently gave Claude Sonnet the same live-coding interview I give candidates for my team; it…did worse than I expected, tbh). The experimenting is sufficient for me to recognize these phrases.
I discovered a fun one the other day: there is literally no way to represent word-boundary anchors that’s valid in both GNU sed and BSD sed. https://unix.stackexchange.com/a/393968/38050
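To illustrate (from memory, so double-check the exact behavior on your own systems):

# GNU sed understands \b (and \< / \>) as word-boundary anchors; BSD sed does not:
echo 'foo foobar' | sed 's/\bfoo\b/X/'
# BSD sed understands the [[:<:]] and [[:>:]] classes instead, which GNU sed rejects:
echo 'foo foobar' | sed 's/[[:<:]]foo[[:>:]]/X/'
# Each is meant to print "X foobar", but neither spelling is accepted by the other sed.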
It absolutely can be constructive. The reason people respond that way on SO is that it is genuinely common for people to think they need something because they’ve misinterpreted the core problem they have.
The one part of that that sounds weird to me is needing to change integration tests frequently when changing the code. Were you changing the behavior of existing functionality a lot?
If I explain myself or add nuance, it won’t be a “hot take” anymore, but here goes…
I definitely agree that they can be useful as both usage examples and as a way to enforce some rules (and consistency) in the API. But I’m not sure I’d go so far as to call that a “spec”, because it’s too easy to make a breaking change that isn’t detected by unit tests. I also feel that mocking is often needed to prevent unit tests from becoming integration tests, but it often hides real errors while excessively limiting the amount of code that’s actually being exercised.
I also think that actual integration tests are often a better demonstration of your API than unit tests are.
I haven’t used any of these methods as much as I’d like to, but I suspect that each of them is strictly more useful than standard unit testing:
I think TypeScript has a pretty good type system, and it’s not too hard to learn. Adding sum types (i.e. enums or tagged unions) to Go would be a huge improvement without making it much harder to learn. I also think that requiring nullability to be annotated (like for primitives in C#, but for everything) would be a good feature for a simple type system. (Of course that idea isn’t compatible with Go for various reasons.)
I also think that even before “proper” generics were added, Go should have provided the ability to represent and interact with “a slice (or map) of some type” in some way other than just interface{}. This would have needed dedicated syntax, but since slice and map are the only container types and already have special syntax, I don’t think it would have been that bad.
The programming languages you use, and the variety of languages you learn, deeply influence how you think about software design.
Software would be much more reliable (in general) if Erlang had become one of the dominant languages for development.
Go sacrifices too much for superficial simplicity; but I would like to see a language that’s nearly as easy to learn, but has a better type system and fewer footguns.
Unit testing is often overrated. It is not good for discovering or protecting against most bugs.
Build/test/deploy infrastructure is a genuinely hard problem that needs better tooling, particularly for testability.
OOP is classes, and their accompanying language features (primarily inheritance) and design patterns (e.g. factories).
Those benefits both make sense, but were they really the original motivation for Microsoft designing the Blue Screen of Death this way? They sound more like retroactive justifications, especially since BSODs were around well before security and internationalization were common concerns.
That, and the use of gitignore and other heuristics to ignore many files by default.
Plus, unlike grep, find is just…awkward. The directories to search must come before any search arguments (and . is not always the default; POSIX requires it to be specified explicitly), and using a search pattern is treated as one of many special cases requiring one of a variety of flags, rather than as the obvious default operation.
It’s a powerful DSL, but…not a convenient one by any stretch of the imagination.
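For example (GNU-flavored invocations; these aren’t equivalent operations, just a comparison of the command-line shapes):

# grep: the pattern is the first operand and recursion is a single flag
grep -r 'TODO' .
# find: the directory comes first, and matching a name requires its own flag
find . -name 'TODO*'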
I suspect this is more a symptom of “enterprise” design patterns than the language itself. Though I do think the standard library in Java is a bit more verbose than necessary.
Thanks for sharing the interview with Lattner; that was quite interesting.
I agree with everything he said. However, I think you’re either misinterpreting or glossing over the actual performance question. Lattner said:
The performance side of things I think is still up in the air because ARC certainly does introduce overhead. Some of that’s unavoidable, at least without lots of annotations in your code, but also I think that ARC is not done yet. A ton of energy’s been poured into research for garbage collection… That work really hasn’t been done for ARC yet, so really, I think there’s still a big future ahead.
That’s optimistic, but certainly not the same as saying there are no scenarios in which GC has performance wins.
GNU ls has those features too (except knowing about Git). I’d be surprised if BSD ls doesn’t at least have color support.
…not that I’m not going to check out eza and probably switch to it! But it’s often worth knowing what features the GNU/BSD coreutils do or do not support…especially when comparing other tools against them.
Edit: I just checked, and this set of options works on both BSD and GNU ls, in case anyone wants better ls behavior on a system where you can’t install eza for some reason:
ls -FH --color=auto
F appends sigils indicating executables, symlinks, or directories, and H follows any symlinks in the argument list.
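If you want that behavior by default, a simple alias in your shell startup file should work on both (assuming an interactive bash or zsh setup):

alias ls='ls -FH --color=auto'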
fd saves me so much time. I actually understand find better than I wish I did, and fd is just so, so much easier.
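A rough comparison, assuming a typical project tree (the fd flags are from memory, so check fd --help):

# find: the root comes first, the pattern hides behind a flag, and excluding a directory is verbose
find . -name '*.md' -not -path './.git/*'
# fd: the pattern/extension is the only required argument, . is implied,
# and gitignore'd files are skipped by default
fd -e md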
I’m not a performance expert by any means, but…it seems like the bit about there being “no situation, ever” in which a garbage collector that “worked just as well as in any other language” outperformed reference-counting GC is an overstatement. The things I’ve read about garbage collection generally indicate that a well-tuned garbage collector can be fast but nondeterministic, whereas reference-counting is deterministic but generally not faster on average. If Apple never invested significant resources in its GC, is it possible it just never performed as well as D’s, Java’s, or Go’s?
The discussion was about sum types. The top-level comment, the one to which you originally responded, says:
It’s a shame that sum type support is still so lacking in C++. Proper Result types (ala Haskell or Rust) are generally much nicer to deal with, especially in embedded contexts.
Was this analogy actually wrong, though? The internet is more like tubes than like trucks. The tube metaphor captures the concept of bandwidth, as well as the need for infrastructure to be in place before anything can be sent.
Well, in Rust, it’s a sum type, with functions that also let you use it like a monad instead of using explicit pattern matching.
So…like an old-fashioned camera iris?