Am I missing something? The article seems to suggest it works via hidden text characters. Has OpenAI never heard of pasting text into a utf8 notepad before?
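
For what it's worth, if the "watermark" really were just hidden text characters, defeating it would be trivial. A toy sketch (the character set below is an assumption; zero-width code points are the usual suspects for this kind of trick):

```python
# Hypothetical cleanup: strip common zero-width / invisible code points
# that a naive hidden-character watermark might rely on.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def strip_hidden(text: str) -> str:
    """Remove zero-width characters, leaving visible text untouched."""
    return "".join(ch for ch in text if ch not in ZERO_WIDTH)

marked = "Hello\u200b world\u200c"
print(strip_hidden(marked))  # Hello world
```

Which is presumably why any serious scheme can't live in the characters themselves.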

    • deadcade@lemmy.deadca.de · 1 month ago

      Research on this topic exists, and it is possible to alter the output of an LLM in minor ways that statistically “watermark” the results without drastically changing the quality of the output. OpenAI has probably implemented something like this in ChatGPT.

      https://www.youtube.com/watch?v=2Kx9jbSMZqA

      I think the tool exists and is (at least close to) as good as they claim. They can’t release it because, once the public can tell with high accuracy whether ChatGPT wrote some text, another AI can be trained to circumvent detection by this method, making the tool useless.
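
      The statistical watermarking described above can be sketched like this. This is a hypothetical toy version of the "green list" logit-bias idea from public research, not OpenAI's actual method; the vocabulary size, bias `delta`, and detection threshold are all made-up parameters:

```python
import hashlib
import random

def green_list(prev_token: int, vocab_size: int, fraction: float = 0.5) -> set:
    # Seed a PRNG with the previous token so the "green" subset is
    # reproducible by anyone who knows the scheme.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(range(vocab_size), int(vocab_size * fraction)))

def watermark_logits(logits: list, prev_token: int, delta: float = 2.0) -> list:
    # Nudge the logits of "green" tokens upward; sampling then
    # statistically favors them without wrecking fluency.
    greens = green_list(prev_token, len(logits))
    return [x + delta if i in greens else x for i, x in enumerate(logits)]

def detect(tokens: list, vocab_size: int) -> float:
    # Fraction of tokens that land in their predecessor's green list.
    # Unmarked text hovers near the 50% chance baseline; watermarked
    # text scores well above it.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab_size))
    return hits / max(len(tokens) - 1, 1)
```

      The detector only needs the token sequence and the shared seeding scheme, not the model, which is also why publishing the scheme hands attackers everything they need to paraphrase around it.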

  • MagicShel@programming.dev · 1 month ago

    Am I the only one who rewrites most of ChatGPT’s output into my own words because its “voice” is garbage anyway? I ask it to write me a cover letter, and that gives me a rough outline and some points to make, but I have to do massive editing to avoid redundancy, awkward phrasing, outright lies, etc.

    I can’t imagine turning in raw ChatGPT output. I had one of my developers use Bing AI to write code and submit that shit raw, and it was immediately obvious because some relatively simple code had really weird artifacts, like overwriting a value that had no reason to even be touched.

    • ValenThyme@reddthat.com · 1 month ago

      I use it to make outlines, which are usually very good, and then I use the class materials to flesh out the outlines in my own words. All my words, but ChatGPT told me what to include and in what order.

      • Thorny_Insight@lemm.ee · 1 month ago

        A few years ago the output of GPT was complete gibberish, and a few years before that, even producing such gibberish would’ve been impressive.

        It doesn’t take anyone’s job until it does.

      • Angry_Autist (he/him)@lemmy.world · 1 month ago

        LLMs aren’t going to take coding jobs; there are special-purpose AIs being trained for that. They write code that works but doesn’t make sense to human eyes. It’s fucking terrifying, but EVERYONE just keeps focusing on the LLMs.

        There are at least 2 more dangerous model types being used right now to influence elections and manipulate online spaces, and all anyone cares about is their fucking parrot bots…

  • JackbyDev@programming.dev · 1 month ago

    As someone who has fiddled with Stable Diffusion, which also has optional invisible watermarks, this is a good feature. It exists so that AI training can avoid content that marks itself as AI generated. If people want to hide that their content is AI generated, then, sadly, it becomes harder to detect.

    • Todd Bonzalez@lemm.ee · 1 month ago

      Watermarking everything I digitally publish to keep my original content out of a training set.

      Publishing a website full of de-watermarked AI slop to ruin future LLMs.

  • TerkErJerbs@lemm.ee · 1 month ago

    It’s probably some type of cipher. Which will take people exactly one (1) afternoon to crack.