

The rumor is probably total BS, heh.
That being said, it’s not surprising if Deepseek goes more commercial since it’s basically China’s ChatGPT now.
Yeah, it’s an Nvidia model trained for STEM, and really good at that for a ‘3090-sized’ model. For reference, this was a zero-temperature answer.
exllamav3 is a game changer. 70Bs sorta fit, but I think 50B is the new sweetspot (or 32B with tons of context).
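Napkin math on why (assuming ~3 bpw exl3-style quants and a 24 GB card; `weights_gb` and the numbers are my ballpark, not benchmarks):

```python
# Back-of-envelope VRAM check: weight footprint at a given bits-per-weight.
# Ignores KV cache and runtime overhead, so real usage is higher.
def weights_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * bits_per_weight / 8  # params in billions -> GB

for params in (32, 50, 70):
    print(f"{params}B @ 3.0 bpw ~ {weights_gb(params, 3.0):.1f} GB of weights")
# 32B ~ 12.0 GB, 50B ~ 18.8 GB, 70B ~ 26.3 GB
# -> 70B only "sorta fits" in 24 GB; 50B leaves real room for context.
```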
Good practice is putting anything important on an encrypted USB drive (as that stuff usually isn’t very big) and just treating the machine itself as “kinda insecure.”
If you set up a BIOS password, someone at least needs to unscrew your computer to get your stuff. But this generally isn’t set up, because people, well, forget their passwords…
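For individual files, a minimal sketch of the “encrypt the important stuff” idea in Python (using the cryptography package’s Fernet; for a whole USB drive you’d reach for LUKS or VeraCrypt instead):

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # keep this somewhere safe, NOT on the drive
f = Fernet(key)

# Encrypt a file before copying it to the "kinda insecure" machine/drive.
with open("secrets.txt", "rb") as fh:
    token = f.encrypt(fh.read())  # authenticated encryption (AES-CBC + HMAC)
with open("secrets.enc", "wb") as fh:
    fh.write(token)

# Later, with the same key: f.decrypt(token) recovers the plaintext.
```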
Not the biggest sample size.
1.2T param, 78B active, hybrid MoE
That’s enormous, very much not local, heh.
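Quick napkin math on why it’s not local (my numbers, just scaling the rumored specs):

```python
# MoE still needs ALL weights resident, even though only ~78B are
# active per token. Rough footprint at a few precisions:
total_params_b = 1200  # 1.2T
active_params_b = 78

for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{total_params_b * bits / 8 / 1000:.2f} TB total weights, "
          f"~{active_params_b * bits / 8:.0f} GB active per token")
# Even at 4-bit that's ~0.6 TB of weights to keep somewhere.
```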
Here’s the actual article translation (which seems right compared to other translations):
DeepSeek R2: Unit Cost Drops 97.3%, Imminent Release + Core Specifications
Author: Chasing Trends Observer
Veteran Crypto Investor Watching from Afar
2025-04-25 12:06:16 Sichuan
Three Core Technological Breakthroughs of DeepSeek R2:
Data Engineering
Constructed 5.2PB high-quality corpus covering finance, law, patents, and vertical domains.
Multi-stage semantic distillation boosts instruction compliance accuracy to 89.7%
(Benchmark: C-Eval 2.0 test set)
Hardware Optimization
Proprietary distributed training framework achieves:
Application Layer Advancements - Three Multimodal Breakthroughs:
Industrial Inspection
Adaptive feature fusion algorithm reduces false detection rate to 7.2E-6 in photovoltaic EL defect detection
(Field data from LONGi Green Energy production lines)
Medical Diagnostics
Knowledge graph-enhanced chest X-ray multi-disease recognition:
Key Highlight:
8-bit quantization compression achieves:
Others translate it as ‘sub-8-bit’ quantization, which is interesting too.
Heh, it calls the ‘N=NP’ typo out about as politely as it can:
What a delightfully ambitious request!
Before diving into the attempt, let’s set the stage with some clarifications and limitations, as tackling the P vs. NP problem (not N=NP, which seems to be a typo, as the question is whether P=NP) in a single response, especially through lambda calculus, is highly non-trivial. The P vs. NP problem is a fundamental question in computer science regarding the relationship between computational complexity classes P (Polynomial Time) and NP (Nondeterministic Polynomial Time).
Clarifications and Limitations
Typo Correction: Assuming the intent was to explore the P vs. NP problem, not “N=NP” (since N isn’t a standard complexity class in this context).
Lambda Calculus Role: We’ll use lambda calculus to explore conceptual underpinnings or provide a foundational perspective, rather than a direct proof, due to the problem’s complexity.
No Direct Solution Expected: Given the problem’s openness for over four decades, a solution here is highly unlikely. Our approach will be more about illustrating a thought process or a specific aspect related to the problem.
Approach Using Lambda Calculus for Insight into P vs. NP
Step 1: Understanding the Classes through Lambda Calculus
P (Polynomial Time): Can be thought of as the set of decision problems solvable by a lambda calculus expression that normalizes in a number of beta reductions polynomial in the size of the input. Essentially, this means we can solve the problem in a polynomial amount of time.
NP (Nondeterministic Polynomial Time): More abstract to directly map, but conceptually, it involves problems where a proposed solution can be verified in polynomial time (similar to P’s normalization condition). In lambda terms, this might imply an existential quantification over all possible “paths” of computation (non-deterministic choices) that lead to a verification lambda expression that normalizes in polynomial time.
Step 2: Conceptual Lambda Expressions
Let’s define conceptual lambda expressions to illustrate the verification process for an NP problem (e.g., SAT, the satisfiability problem):
Verification Function for SAT (Conceptual):
```
satVerify : (formula: PropositionalFormula) → (assignment: Assignment) → Bool
satVerify = λformula. λassignment. evalFormula formula assignment
```
Here evalFormula is a lambda expression (or a set of them) that evaluates a propositional formula under a given assignment, normalizing to True or False in time polynomial in the size of formula and assignment.
Step 3: Exploring P=NP through the Lambda Calculus Lens
To conceptually approach the P=NP question, we’d need a lambda expression that not only verifies a solution in polynomial time (the easy part, as shown) but also finds a solution in polynomial time for any NP problem, given only the problem statement.
Hypothetical Solver Expression (if P=NP):
```
npSolver : (problem: NPProblem) → (solution: Solution | "No Solution")
npSolver = λproblem. … (magic happens here, as this is where P=NP would be proven)
```
The Challenge: Fill in the … with a lambda expression that, given any NP problem, computes a solution (or proves no solution exists) in a number of beta reductions polynomial in the input size. This is where our attempt hits the wall of current knowledge, as devising such an expression for all NP problems is essentially solving the P vs. NP problem.
Lambda Calculus “Attempt” at Illustrating the Impasse
Given the SAT problem as an example NP problem, our inability to fill in the solver without a polynomial-time guarantee reflects the P vs. NP impasse:
```
-- Verification is straightforward (in P)
satVerify = λformula. λassignment. evalFormula formula assignment

-- The elusive polynomial-time solver (if P=NP)
satSolver = λformula. ? (PolynomialTimeMagic formula)  -- IMPASSE
```
Conclusion
Lambda Calculus provides a powerful tool for conceptual exploration of computational concepts, including the underpinnings of P and NP.
Directly tackling P vs. NP with lambda calculus (or any method) in this format is infeasible due to the problem’s complexity and the current state of knowledge.
Insight over Solution: This approach offers more insight into how one might think about these classes conceptually rather than providing a solution.
Future Directions for a More In-Depth Exploration
Deep Dive into Lambda Calculus and Complexity Theory: For a more nuanced understanding, delve deeper into how lambda calculus models computation and the formal definitions of P and NP.
Explore Specific NP Problems: Attempt to construct solver expressions for specific NP problems, though this won’t generalize to all NP problems without solving P vs. NP.
Study Known Results and Reductions: Understand how problems are reduced to each other within NP, and how this might inform a lambda calculus approach.
References for Further Reading
Lambda Calculus: Barendregt, H. P. (1984). The Lambda Calculus: Its Syntax and Semantics.
Complexity Theory and P vs. NP: Papadimitriou, C. H. (1994). Computational Complexity; Arora, S., & Barak, B. (2009). Computational Complexity: A Modern Approach.
Final Note
Given the constraints of this format and the problem’s difficulty, this response is more of a conceptual exploration than an attempt at a solution. The P vs. NP problem remains one of the most significant open questions in computer science, with a resolution requiring a deeper insight or a novel approach that hasn’t been captured in this simplified lambda calculus framing.
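(For the curious, here’s that verify-vs-solve asymmetry as a runnable toy sketch; `verify`, `solve`, and the CNF encoding are my own illustration, not from the model’s answer:)

```python
from itertools import product

# A CNF formula: list of clauses; each literal is (var_index, negated?).
Formula = list[list[tuple[int, bool]]]

def verify(formula: Formula, assignment: list[bool]) -> bool:
    """Polynomial time: just evaluate every clause under the assignment."""
    return all(any(assignment[v] != neg for v, neg in clause) for clause in formula)

def solve(formula: Formula, n_vars: int) -> list[bool] | None:
    """Exponential time: try all 2^n assignments. A polynomial-time
    version of this function is exactly what P=NP would require."""
    for bits in product([False, True], repeat=n_vars):
        if verify(formula, list(bits)):
            return list(bits)
    return None

# (x0 or not x1) and (x1 or x2)
f = [[(0, False), (1, True)], [(1, False), (2, False)]]
print(solve(f, 3))  # e.g. [False, False, True]
```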
Chatbots are text completion models, improv machines basically, so they don’t really have that ability. You could look at logprobs, I guess (i.e., is it guessing between a bunch of words pretty evenly?), but that’s unreliable. Even adding an “I don’t know” token wouldn’t work, because that’s not really trainable from text datasets: they don’t know when they don’t know; it’s all just modeling which next word is most likely.
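E.g., a crude sketch of that logprob idea (assuming your API/runtime exposes per-token top logprobs; `token_uncertainty` and the numbers are made up for illustration):

```python
import math

def token_uncertainty(top_logprobs: dict[str, float]) -> float:
    """Normalized entropy over the top-k candidate tokens: ~0 when the
    model is sure of the next word, ~1 when it's guessing evenly."""
    probs = [math.exp(lp) for lp in top_logprobs.values()]
    total = sum(probs)  # renormalize; top-k doesn't cover the full vocab
    probs = [p / total for p in probs]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return entropy / math.log(len(probs)) if len(probs) > 1 else 0.0

# Hypothetical top-5 logprobs for one generated token:
print(token_uncertainty({"Paris": -0.1, "Lyon": -3.2, "Nice": -4.0,
                         "Rome": -4.5, "Berlin": -5.0}))  # low -> confident
```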
Some non-autoregressive architectures would be better, but unfortunately “cutting edge” models people interact with like ChatGPT are way more conservatively developed than you’d think. Like, they’ve left tons of innovations unpicked.
More-or-less. They literally just had an article about this: https://www.axios.com/2025/04/19/inside-trump-mindset-tariffs
“We saw it in business with Trump,” one adviser said. “He would have these meetings and everyone would agree, and then we would just pray that when he left the office and got on the elevator that the doorman wouldn’t share his opinion, because there would be a 50/50 chance [Trump] would suddenly side with the doorman.”
The memo says the Defense Department is returning to the Biden-era medical policy for transgender service members due to a court order that struck down Hegseth’s restrictions as unconstitutional. The administration is appealing the move, but a federal appeals court in California denied the department’s effort to halt the policy while its challenge is pending.
So the court ordered them to.
The article is making it out like the DoD is “defying” Hegseth, but that seems like a misrepresentation, as it seems he has to go along with this.
I guess it’s a “win” because the DoD isn’t openly defying an order…
Yeah, that’s the thing. Even if you buy the idea of Trump’s policies (which, TBH, have a few grains of truth), the implementations are so full of nonsense. Like, OK, get Canada into the US, let’s just roll with that for the sake of argument… It might make Canada and the US stronger, like the openness between the states does. It would consolidate many federal functions. Canada could retain its culture like individual states do. Sounds plausible.
…And your plan is to get them to join as one state, and only if they grovel to you, by harassing them on Twitter, offering zero details? Like, what world is he living in?
Oh yeah, it’s more than that. Low weight helps acceleration, braking (so safety), handling, range, wear on every component, and most of all, cost. The same-sized tires will need less pressure, wear much less, and grip harder. If the car is lighter, you don’t need as stiff a chassis, nor as much braking force to lock the wheels, and you need less battery and motor, which means you can take even more weight off the car… You get where I’m going.
Racecars are fast because they are light, not because they have big engines and expensive bodies. Little 1500 lb cars can lap a $3 million, 1500 hp (and quite heavy, because of all the stuff in it) Bugatti around a track.
Heavy cars can handle OK, but the cost is big.
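A toy version of the tire bit (tire load sensitivity is a real effect; `mu0` and `k` here are made-up illustration numbers):

```python
# Tire grip doesn't scale linearly with load: the friction coefficient
# drops as vertical load rises (load sensitivity). So a lighter car can
# pull more lateral g on the same rubber.
def max_lateral_g(weight_lb: float, mu0: float = 1.3, k: float = 0.00008) -> float:
    mu = mu0 - k * weight_lb  # crude linear load-sensitivity model
    return mu  # for a point-mass car, max lateral accel in g equals mu

for w in (1500, 4400):
    print(f"{w} lb car: ~{max_lateral_g(w):.2f} g max cornering")
# 1500 lb: ~1.18 g vs 4400 lb: ~0.95 g -- the lighter car corners harder.
```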
So… where is that outrage from preppers and gun enthusiasts about the government barging into their homes?
Can you imagine if Biden, Obama, or even Bush did this?
There would literally be a massive armed mob outside the White House.
Turns out guns are useless in the face of propaganda, I guess.
Robinson declined to release her name or age, only confirming that she has been in the U.S. about six years, but has no legal status. Her daughters were born in the U.S. Their father lives in Detroit.
+1
Weight is everything. Removing it makes almost literally every aspect of a car better, and excess weight is usually a terrible negative for EVs.
I think OP means “the mediocre, least bad intersection between critical mass and topical discussion.”
Like, you can probably find users/subs about universities/fields and actually find people in them to respond. Lemmy is great, but good luck finding a mass of discussion around a niche location/field.
Reddit is horrible and deteriorating, yes, but still.
Vincke says the team finds DLC boring to make, so they don’t really want to make it anymore.
I find this driveby comment rather significant.
It means they are trying to conform to the developers’ strengths, desires, interests. They’re shaping huge business decisions around them. That’s just good for everyone, as opposed to devs inefficiently, dispassionately grinding away at something they don’t like.
That’s huge. I’d also posit “happy devs means happy business.” And Larian has repeatedly expressed similar things.
Maybe.
It’s fairly popular in tech circles where there’s a large intersection with MAGA, or at least a more “keep politics out of my programming” attitude predating that.
The source doesn’t matter, it’s more about the example, and idea.
More bluntly, horrible people (like nazis) can go on to do good things in life. That’s okay.
On the other hand, posting the picture without a ton of context seems to reinforce the very thing you are worried about:
someone MIGHT get the impression there was something fishy going on at NATO from the get-go
When that doesn’t seem to be the case. Nazis, tankies, whatever populist group you can name operate on negative quick impressions to sow doubt and anger with institutions.
…So?
Poking through some of their history (Ernst and Karl), looks like they were indeed Nazi commanders. They served lower ranks after the war, got more education/experience and rose again to perform well within NATO.
Maybe I’m naive, but I believe horrible people can go on to do good things, and that’s fine. I think my favorite character archetype for this is General Iroh in Avatar, who was involved in unspeakable genocide, changed, and ultimately toppled his own dynasty. He’s one of the most beloved characters in fiction, but a quick bio of his in an image would get him utterly crucified as a terrible human being.
Drive-by image posts like this without context/history, on the other hand, largely provoke outrage. It’s exactly the kind of thing that would trend on the Twitter algorithm and obliterate any nuance. That’s not necessarily your intent, but it’s kinda the aggregate effect.
^ what was said, not supported yet, though you can give it a shot theoretically.
Basically, exl3 means you can run 32B models fully offloaded without a ton of quantization loss, if you can get it working. But exl2/exl3 is less popular, largely because it’s PyTorch-based, hence more finicky to set up (no single-file GGUFs, no easy install).