• 0 Posts
  • 63 Comments
Joined 4 months ago
Cake day: February 12th, 2025

  • I said AI isn’t close in education. That was my entire claim.

    I never said anything about any other company. I said AI in education isn’t happening soon. You keep pulling in other sectors.

    I’d also made several comments in this thread saying that before you came in.

    EDIT: give me a citation that LLMs can reason about code. Because in my experience as someone who codes professionally with AI (Copilot), it’s not capable of that. It guesses what it thinks I want to write, in small segments.

    https://x.com/leojr94_/status/1901560276488511759

    Especially when it has a nasty habit of leaking secrets.
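
    To be concrete about what “leaking secrets” looks like, here’s a minimal sketch with hypothetical names (not code from that incident): when a literal key sits anywhere in context, the autocomplete will happily reproduce it verbatim, and then it’s in your git history.

      # What autocomplete tends to produce when a key literal is in context
      # (hypothetical value, illustrative only):
      API_KEY = "sk-live-1234-example"  # committed to the repo; now it's leaked

      # What you actually want: keep the secret out of the source entirely.
      import os

      API_KEY = os.environ.get("MY_SERVICE_API_KEY")  # env var name is made up
      if API_KEY is None:
          raise RuntimeError("Set MY_SERVICE_API_KEY in the environment")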

    EDIT 2: forgot to say why I’m ignoring other fields: because we’re not talking about AI in those fields. We’re talking about education, and search engines at best. My original comment was that AI-generated educational papers still serve their original purpose.

    What the fuck does any of that have to do with Palantir?



  • My larger point: AI replacing teachers is at least a decade away.

    You’ve given no evidence that it’s close. You’ve just said you hate my sources, without actually making a single argument that it is.

    You said “well, it stores context,” but who cares? I showed that it doesn’t translate to what you think it does, and you said you don’t like that, without providing any evidence it means anything beyond looking good on a graph.

    I’ve said several times: SHOW ME IT’S CLOSE. I don’t care what law enforcement buys, because that has nothing to do with education.



  • Okay, here’s a non-Apple source, since you want one.

    https://arxiv.org/abs/2402.12091

    5 Conclusion

    In this study, we investigate the capacity of LLMs, with parameters varying from 7B to 200B, to comprehend logical rules. The observed performance disparity between smaller and larger models indicates that size alone does not guarantee a profound understanding of logical constructs. While larger models may show traces of semantic learning, their outputs often lack logical validity when faced with swapped logical predicates. Our findings suggest that while LLMs may improve their logical reasoning performance through in-context learning and methodologies such as COT, these enhancements do not equate to a genuine understanding of logical operations and definitions, nor do they necessarily confer the capability for logical reasoning.
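
    For anyone not reading the whole paper: “swapped logical predicates” means flipping which side of a rule a predicate sits on while keeping the surface wording almost identical. A rough illustration in standard notation (mine, not the paper’s exact probes):

      % Valid inference (modus ponens): from p -> q and p, conclude q.
      \[ (p \rightarrow q),\; p \;\vdash\; q \]
      % Same tokens with the roles swapped (affirming the consequent): invalid.
      \[ (p \rightarrow q),\; q \;\nvdash\; p \]

    A model that pattern-matches on wording will accept both; one that actually tracks the logic rejects the second.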