Alt. Profile @Th4tGuyII

  • 0 Posts
  • 23 Comments
Joined 26 days ago
Cake day: June 11th, 2024

  • So providing a fine-tuned model shouldn’t either.

    I didn’t mean in terms of providing. I meant that if someone provided a base model, and someone else took that and built on top of it, then used it for a harmful purpose - of course the person who modified it should be liable, not the base provider.

    It’s like if someone took a version of Linux, modified it, then used that modified version for a similar purpose - you wouldn’t go after the person who made the unmodified version.


  • SB 1047 is a California state bill that would make large AI model providers – such as Meta, OpenAI, Anthropic, and Mistral – liable for the potentially catastrophic dangers of their AI systems.

    Now this sounds like a complicated debate - but it seems to me that everyone against this bill stands to benefit monetarily from not having to deal with the safety aspect of AI, and that does sound suspicious to me.

    Another technical piece of this bill relates to open-source AI models. […] There’s a caveat that if a developer spends more than 25% of the cost to train Llama 3 on fine-tuning, that developer is now responsible. That said, opponents of the bill still find this unfair and not the right approach.

    In regard to the open-source models, it makes sense that if a developer takes the model and does a significant portion of the fine-tuning, they should be liable for the result of that…

    But should the main developer still be liable if a bad actor does less than 25% fine-tuning and uses exploits in the base model?

    One could argue that developers should be trying to examine their black boxes for vulnerabilities, rather than shrugging, saying it can’t be done, and then demanding they not be held liable.



  • The TL;DR for the article is that the headline isn’t exactly true. At this moment in time their PPU can potentially double a CPU’s performance - the 100x claim comes with the caveat of “further software optimisation”.

    Tbh, I’m sceptical of the caveat. It feels like me telling someone I can only draw a stickman right now, but I could paint the Mona Lisa with some training.

    Of course that could happen, but it’s not very likely to - so I’ll believe it when I see it.

    Having said that, they’re not wrong about CPU bottlenecks and the slowed rate of CPU performance improvements - so a doubling of performance would be huge in the current market.

  • Trump’s lawyers are expected to argue that none of the memos should have been given to prosecutors on the crime-fraud exception, which allows prosecutors to see privileged communications between a defendant and a lawyer, if their legal advice was used in furtherance of a crime.

    They’re expected to argue on the basis that these memos didn’t amount to using Corcoran’s legal advice.

    But surely that’s a moot point, because while Trump didn’t use any specific legal advice, he absolutely abused the privileged information obtained from Corcoran (such as the date of the inspection and the date of his return) in obstructing the return of classified documents to the White House.

    The memos make quite clear that Trump abused attorney-client privilege in furtherance of a crime. Plain and simple.

    And I severely doubt Corcoran didn’t know what Trump’s intentions were with that information - and if not before, he certainly should’ve known afterwards when being “asked” to pluck out documents.

    I legitimately have to wonder how it can be legal for Trump to be tried by judges he put into power, who have shown such clear and demonstrable biases in his favour - even going so far as to deliberately delay cases for as long as legally possible.

    As @ptz@dubvee.org put it, it’s like he’s got an extra special justice system especially for him - who knew all you needed to do was appoint your own judges.