

Do you mean Dan Luu, or one of the studies reviewed in the post?
Yeah, I understand that Option and Maybe aren’t new, but they’ve only recently become popular. IIRC several of the studies use Java, which is certainly safer than C++ and is technically statically typed, but in my opinion doesn’t do much to help ensure correctness compared to Rust, Swift, Kotlin, etc.
I don’t know; I haven’t caught up on the research over the past decade. But it’s worth noting that this body of evidence is from before the surge in popularity of strongly typed languages such as Swift, Rust, and TypeScript. In particular, mainstream “statically typed” languages still had null values rather than Option or Maybe.
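For illustration, here is a rough sketch of the Option/Maybe pattern, approximated in Python with `typing.Optional` (the `find_user` function and its data are invented for the example; note that in Python this is enforced only by a static checker such as mypy, not at runtime, whereas Rust, Swift, and Kotlin enforce it in the compiler):

```python
from typing import Optional

users = {1: "alice", 2: "bob"}

def find_user(uid: int) -> Optional[str]:
    """Return the user's name, or None if absent -- the absence is part of the type."""
    return users.get(uid)

name = find_user(3)
# A checker like mypy rejects a bare `name.upper()` here, because `name` may be
# None; the caller is forced to handle the missing case explicitly.
if name is not None:
    print(name.upper())
else:
    print("no such user")
```

The point of the pattern is that “might be absent” is visible in the signature, so forgetting the None/​Nothing case becomes a type error rather than a runtime crash.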
Note that this post is from 2014.
Partly because it’s from 2014, so the modern static typing renaissance was barely starting (TypeScript was only two years old; Rust hadn’t hit 1.0; Swift was mere months old). And partly because true evidence-based software research is very difficult (how can you possibly measure the impact of a programming language on a large-scale project without having different teams write the same project in different languages?) and it’s rarely even attempted.
Notably, this article is from 2014.
It’s valid usage if you go waaay back, i.e. centuries. You also see it in some late 19th/early 20th century newsprint and ads.
No, because the thing they are naming is “The Github Dictionary”; they’re not applying scare-quotes to the word “dictionary” implying that what they’ve written is not really a “dictionary”.
The ribbon that was introduced around… 2007, I think? Or is there a substantially different one now?
“Scare quotes” definitely precede Austin Powers, though that may have spurred a rise in popularity of the usage. (Also, “trashy people never saw Austin Powers” is honestly a pretty weird statement, IMO.)
That said, in this case, arguably the quotes are appropriate, because “the github dictionary” isn’t something that happened (i.e. a headline), but a thing they’ve made up.
Most of those comments are actually just random people arguing about the merits of the experiment, not continued discussion with the bot.
Also, the bot is supposed to be able to run builds to verify its work, but is currently prevented from doing so by a firewall rule they’re trying to fix, so its feedback is limited to what the comments provide. Humans wouldn’t do great in that scenario either. (Not to say the AI is doing “great” here, just that we’re not actually seeing the best-case scenario yet.)
There is only one mention of Python being slow, and that’s in the form of a joke where Python is crossed out and replaced with “the wrong tool for the job.” Elsewhere in the post, Python is mentioned more positively; it just isn’t what’s needed for the kind of gamedev the author wants to do.
I’m addressing the bit that I quoted, saying that an interpreted language “must” have valid semantics for all code. I’m not specifically addressing whether or not JavaScript is right in this particular case of min().
…but also, what are you talking about? Python throws a type error if you call min() with no argument.
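To make the contrast concrete, a quick sketch of what Python actually does (for comparison, JavaScript’s Math.min() with no arguments silently returns Infinity):

```python
# Python refuses to invent a value for min() with no arguments at all...
try:
    min()
except TypeError as e:
    print("raised:", e)

# ...and likewise for an empty iterable, unless you opt in with `default`:
try:
    min([])
except ValueError as e:
    print("raised:", e)

# Only an explicit fallback makes the empty case legal.
print(min([], default=0))
```

So an interpreted language with no separate compilation step can still reject erroneous calls at run time; assigning “some semantics, no matter how erroneous” is a design choice, not a necessity.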
“Without one, the run time system, must assign some semantics to the source code, no matter how erroneous it is.”
That’s just not true; as the comment above points out, Python also has no separate compilation step and yet it did not adopt this philosophy. Interpreted languages were common before JavaScript; in fact, most LISP variants are interpreted, and LISP is older than C.
Moreover, even JavaScript does sometimes throw errors, because sometimes code is simply not valid syntactically, or has no valid semantics even in a language as permissive as JavaScript.
So Eich et al. absolutely could have made more things invalid, despite the risk that end-users would see the resulting error.
The user who submitted the report that Stenberg considered the “last straw” seems to have a history of getting bounty payouts; I have no idea how many of those were AI-assisted, but it’s possible that by using an LLM to automate making reports, they’re making some money despite having a low success rate.
Every single time I’ve tried to work on a file using tabs, I’ve had to configure my tabstop to be the same width the original author used in order to make the formatting reasonable. I understand that in theory customizable tabstops are preferable, but I’ve yet to see it work well.
(For what it’s worth, I think that elastic tabstops, had they been the way tabs worked in text files to begin with, would have been far preferable.)
So…like an old fashioned camera iris?
https://askubuntu.com/q/641049
TL;DR: it’s supposed to send email to an administrator, but by default on some distros (including Ubuntu), it isn’t actually sent anywhere.