• Rhaedas@kbin.social · 2 years ago

    Because in the end it is pattern matching, and if you nudge it toward one pattern’s probability over another, it gives results closer to what you expect. There’s no “knowing”.
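
    A minimal sketch (my own illustration, not from this thread) of what “nudging one pattern’s probability over another” looks like mechanically: the model scores candidate next tokens, and a small bias on one score tilts the whole distribution toward that continuation. The token names and logit values below are made up.

    ```python
    import numpy as np

    def softmax(logits):
        """Turn raw scores into a probability distribution over tokens."""
        exps = np.exp(logits - np.max(logits))
        return exps / exps.sum()

    tokens = ["cat", "dog", "car"]
    logits = np.array([2.0, 1.8, 0.5])            # hypothetical raw scores

    print(dict(zip(tokens, softmax(logits).round(3))))

    # "Nudge" one pattern: bias the score for "dog" (e.g. via prompt wording
    # or an explicit logit bias) and the distribution shifts toward it.
    logits_biased = logits + np.array([0.0, 1.0, 0.0])
    print(dict(zip(tokens, softmax(logits_biased).round(3))))
    ```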

    • circuitfarmer@lemmy.sdf.org · 2 years ago

      This is the thing about LLMs and AI/ML in general that I think people need to understand (especially corporate higher-ups throwing money at AI to save on labor costs): it doesn’t learn anything. It doesn’t generate anything novel. It finds patterns and repeats those patterns in, ultimately, pseudorandom ways (which leads to nondeterminism, which makes things look more impressive than they actually are).
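
      A toy sketch (my illustration, not the commenter’s) of the pseudorandomness point: sampled decoding draws from the same distribution differently on every run unless the seed is pinned, which is where the apparent nondeterminism comes from. The tokens and probabilities are invented.

      ```python
      import numpy as np

      def sample_sequence(probs, tokens, length, rng):
          """Draw `length` tokens independently from a fixed next-token distribution."""
          return [tokens[rng.choice(len(tokens), p=probs)] for _ in range(length)]

      tokens = ["the", "a", "one"]
      probs = [0.5, 0.3, 0.2]                       # hypothetical next-token probabilities

      print(sample_sequence(probs, tokens, 5, np.random.default_rng()))   # varies per run
      print(sample_sequence(probs, tokens, 5, np.random.default_rng()))   # varies per run
      print(sample_sequence(probs, tokens, 5, np.random.default_rng(0)))  # seeded: repeatable
      ```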

      We’ll see tremendous stagnation as AI adoption increases, because you simply can’t feed it enough fresh data if so much of it is already coming from AI to begin with. The entropy among the patterns decreases.

  • mo_ztt ✅@lemmy.world · 2 years ago

    LLMs are designed to complete the sequence. The prompt-following behavior is a hack that’s been applied on top of that; very successfully, but it’s still a hack. I don’t really know, because skimming through this article I didn’t see enough information to reproduce the experiments, but I would guess that a lot of the prompt instability the author is talking about would go away if the prompt were geared more toward completing the sequence than toward following instructions.
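
    A rough sketch of the distinction being drawn here, with made-up prompts and a placeholder generate() call (no particular API is implied): an instruction-style prompt asks the model to do something, while a completion-style prompt sets up text whose most natural continuation is the answer.

    ```python
    instruction_prompt = (
        "Label the sentiment of the following review as positive or negative.\n"
        "Review: The battery died after two days.\n"
        "Sentiment:"
    )

    completion_prompt = (
        "Review: Great screen, fast shipping.\nSentiment: positive\n"
        "Review: The battery died after two days.\nSentiment:"
    )

    def generate(prompt: str) -> str:
        """Placeholder for whatever LLM call you actually use."""
        raise NotImplementedError

    # generate(instruction_prompt)  # relies on instruction-tuned behavior
    # generate(completion_prompt)   # leans directly on sequence completion
    ```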

  • Renneder@sh.itjust.works (OP, mod) · 2 years ago
    • The development of large language models (LLMs) has led to a paradigm shift in NLP.

    • LLMs can learn new tasks in context from prompts alone.

    • LLMs’ sensitivity to prompt selection can affect their performance.

    • Structured prompting allows LLMs to annotate linguistic structure.

    • Data contamination can affect LLM performance.

    • Changing the labels used in structured prompts has a significant impact on LLM performance (see the sketch after this list).

    • Understanding the behavior of LLMs is important for their safe use.
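
    A minimal sketch (my assumption of what such a label-sensitivity probe could look like, not code from the article) of testing how much swapping the label strings in an otherwise identical structured prompt moves accuracy; query_llm() is a hypothetical stand-in for a real API call.

    ```python
    LABEL_SETS = [("positive", "negative"), ("good", "bad"), ("1", "0")]
    EXAMPLES = [("I loved it", "positive"), ("Total waste of money", "negative")]

    def build_prompt(text, labels):
        """Same structured prompt every time; only the label strings change."""
        return f"Classify the text as {labels[0]} or {labels[1]}.\nText: {text}\nLabel:"

    def query_llm(prompt):
        """Hypothetical LLM call; replace with whatever client you actually use."""
        raise NotImplementedError

    def accuracy_for(labels):
        mapping = {"positive": labels[0], "negative": labels[1]}
        hits = 0
        for text, gold in EXAMPLES:
            pred = query_llm(build_prompt(text, labels)).strip().lower()
            hits += int(pred == mapping[gold].lower())
        return hits / len(EXAMPLES)

    # for labels in LABEL_SETS:
    #     print(labels, accuracy_for(labels))   # large swings indicate label sensitivity
    ```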