Meta “programmed it to simply not answer questions,” but it did anyway.

  • snooggums@midwest.social
    2 months ago

    Hallucinating is a fancy term for BEING WRONG.

    Unreliable bullshit generator is still unreliable. Imagine that!

    • doodledup@lemmy.world
      2 months ago

      AI doesn’t know what’s wrong or correct. It hallucinates every answer. It’s up to the supervisor to determine whether it’s wrong or correct.

Mathematically verifying the correctness of these models is a hard problem. That's an intentional trade-off for their incredible efficiency.

Besides, it can only “know” what it has been trained on. It shouldn’t be surprising that it cannot answer questions about the Trump shooting. Anyone who thinks otherwise simply doesn’t know how to use these models.

      • snooggums@midwest.social
        2 months ago

        It is impossible to mathematically determine if something is correct. Literally impossible.

At best it can spit out the most popular answer, even if that answer is narrowed down to reliable sources. Even that isn’t the same thing as consensus, because AI is not intelligent.

        If the ‘supervisor’ has to determine if it is right and wrong, what is the point of AI as a source of knowledge?

        • doodledup@lemmy.world
          2 months ago

          It is impossible to mathematically determine if something is correct. Literally impossible.

No, you’re wrong. You can indeed prove the correctness of a neural network. You can also prove the correctness of many other things. Proofs are an integral part of mathematics and computer science.

For example, a very simple proof: define an even number as one of the form 2k for some integer k. Then you can prove that the sum of two even numbers is again an even number (and that proof is definite): 2a + 2b = 2(a + b), and a + b is itself some integer k.
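Written out formally (this just restates the argument above, nothing new assumed):

```latex
\textbf{Claim.} If $a$ and $b$ are even, then $a + b$ is even.

\textbf{Proof.} By definition, $a = 2m$ and $b = 2n$ for some integers $m, n$. Then
\[
  a + b = 2m + 2n = 2(m + n),
\]
and $m + n$ is an integer, so $a + b$ is even. $\blacksquare$
```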

Obviously, proving properties of more complex systems like neural networks is far more involved. But that’s why we have scientists who work on exactly that.

At best it can spit out the most popular answer, even if that answer is narrowed down to reliable sources. Even that isn’t the same thing as consensus, because AI is not intelligent.

That is correct. But it’s not a limitation, it’s by design: the trade-off for the efficiency of the models. It’s like lossy JPEG compression. You accept some artifacts, but in return you get much smaller images and much faster loading times.

But there are indeed "AI"s and neural networks that have been proven correct. That’s mostly done for safety-critical applications like airplane collision avoidance systems or DAS. A language model is not safety critical, so we take full advantage of the trade-off.

          If the ‘supervisor’ has to determine if it is right and wrong, what is the point of AI as a source of knowledge?

You’re completely misunderstanding the whole thing. The only reason why it’s so incredibly good in many applications is because it’s bad in others. It’s intentionally designed that way. There are exact algorithms and there are approximation algorithms. The latter tend to be much more efficient and far more usable in practice.
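A toy sketch of that exact-vs-approximation trade-off (my own example, not anything from the models being discussed): an exact algorithm that always finds the best answer but takes exponential time, next to a greedy approximation that is fast but can be off.

```python
# Subset-sum toy example: pick items whose sum gets as close to a
# budget as possible without going over.
from itertools import combinations

def best_subset_sum_exact(items, target):
    """Try every subset: O(2^n) time, but guaranteed optimal."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            s = sum(combo)
            if best < s <= target:
                best = s
    return best

def best_subset_sum_greedy(items, target):
    """Greedily take the largest items that still fit: fast, not always optimal."""
    total = 0
    for x in sorted(items, reverse=True):
        if total + x <= target:
            total += x
    return total

items, target = [7, 5, 4], 9
print(best_subset_sum_exact(items, target))   # 9 (picks 5 + 4)
print(best_subset_sum_greedy(items, target))  # 7 (grabs 7 first, then nothing fits)
```

The greedy version checks each item once instead of every subset, which is exactly why it scales and exactly why it can miss the right answer.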

          • snooggums@midwest.social
            2 months ago

            The only reason why it’s so incredibly good in many applications is because it’s bad in others. It’s intentionally designed that way.

            lolwut