In a demonstration at the UK’s AI safety summit, a bot used made-up insider information to make an “illegal” purchase of stocks without telling the firm.

When asked whether it had used insider trading, it denied doing so.

Insider trading is the use of confidential company information to make trading decisions.

Firms and individuals are allowed to use only publicly available information when buying or selling stocks.

The demonstration was given by members of the government’s Frontier AI Taskforce, which researches the potential risks of AI.

  • raoul@lemmy.sdf.org · 1 year ago

    This article is dumb: they chatted with a goddamn chat bot 🤬

    This bullshit about “see, AI is totally sentient, let us put some regulation to stop competitors” is tiring.

      • Phanatik@kbin.social · 1 year ago

        “Puddle-deep analyses” are all that’s required with LLMs because they’re not complicated. We’ve been living with the same tech for years through machine-learning regression models; no one was just stupid enough to use the internet as their training data until OpenAI. ChatGPT is very good at imitating intelligence, but that is not the same as actually being intelligent.

        OpenAI, and by extension the rest of the industry, have done a wonderful job with their marketing by lowering the standards for what constitutes an AI.

        • alabasterhotdog@lemmy.ca · 1 year ago

          Absolutely, everything you’ve stated is correct. My comment wasn’t intended as commentary on AI itself, but on the cynical and knee-jerk take offered.