• mke@lemmy.world · 1 month ago

    Except LLMs don’t actually have real reasoning capacity. Wiring in other models that translate more of the world into text could give an LLM a broader domain, but not an entirely new ability beyond what its architecture supports. That might make it more convincing, but it would still fail in the same ways it does now.

    • doodledup@lemmy.world · 1 month ago (edited)

      You’re doing reasoning based on chemical reactions. Who says a model can’t do reasoning based on text? Who says it isn’t already doing that in some capacity? Can you prove that it isn’t?