• WanderingThoughts@europe.pub
    · 22 hours ago

    LLMs are not going to be the future. The tech companies know it and are working on reasoning models that can look up stuff to fact-check themselves. These are slower, use more power, and are still a work in progress.

    • andallthat@lemmy.world
      · 21 hours ago

      Look up stuff where? Some things are verifiable more or less directly: the Moon is not 80% made of cheese, adding glue to pizza is not healthy, the average human hand does not have seven fingers. A “reasoning” model might do better with those than current LLMs.

      But for a lot of our knowledge, verifying means “I say X because here are two reputable sources that say X”. For that, having AI-generated text creeping up everywhere (including in peer-reviewed scientific papers, which tend to be considered reputable) is blurring the line between truth and “hallucination” for both LLMs and humans.

      • Aux@feddit.uk
        · 13 hours ago

        Who said that adding glue to pizza is not healthy? Meat glue is used in restaurants all the time!