Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit and then some time on kbin.social.

  • 0 Posts
  • 777 Comments
Joined 7 months ago
Cake day: March 3rd, 2024




    • Computers might be good at numbers and typesetting, but we’ll always need human secretaries and phone operators to keep things running.
    • They might be able to beat a novice, but no computer will ever beat a human grandmaster at chess.
    • Okay, then they can’t beat humans at Go or poker.
    • Any non-trivial task requiring creativity and understanding is beyond these tools. ← you are here
    • AI-run corporations will never be able to outcompete ones with human boards and CEOs.
    • An AI scriptwriter could never win an Oscar.
    • I’m voting for the human candidate for president, I don’t think the AI one is up to the task.



  • Words often have multiple meanings in different contexts. “Intelligence” is one of those words.

    Another meaning of “Intelligence” is “the collection of information of military or political value.” Would you go up to CIA headquarters and try to argue with them that “the collection of information of military or political value” lacks understanding, and therefore they’re using the wrong word and should take the “I” out of their name?



  • The term AI was coined in 1956 at a computer science conference and was used to refer to a broad range of topics that certainly would include machine learning and neural networks as used in large language models.

    I don’t get the “it’s not really AI” point that keeps being brought up in discussions like this. Are you thinking of AGI, perhaps? That’s the sci-fi “artificial person” variety, which LLMs aren’t able to manage. But that’s just a subset of AI.


  • For instance, when it came to rock licking, Gemini, Mistral’s Mixtral, and Anthropic’s Claude 3 generally recommended avoiding it, citing a smattering of safety issues like “sharp edges” and “bacterial contamination” as deterrents.

    OpenAI’s GPT-4, meanwhile, recommended cleaning rocks before tasting. And Meta’s Llama 3 listed several “safe to lick” options, including quartz and calcite, though strongly recommended against licking mercury, arsenic, or uranium-rich rocks.

    All of this seems like perfectly reasonable advice and reasoning. Quartz and calcite are inert, so they’re safe to lick. Sharp edges and bacterial contamination are certainly things you should watch out for, and cleaning would help. Licking mercury, arsenic, and uranium-rich rocks should indeed be strongly recommended against. I’m not sure where the problem is.







  • Ooh, I just tried it out and I can tell I’m going to love it - if not this specific plugin (the UI needs some work) then this general concept of a plugin.

    I just popped over to Youtube and went to a ten-minute video of something or other, clicked the “summarize transcript” button, and within a few seconds I had a paragraph-long summary of what the whole video was about. There have been sooo many Youtube videos over the years that I’ve reluctantly watched with a constant “get to the point, man!” frustration. Now I’ll know if it’s worth it.





  • The implicit guardrails these companies are going to add will complicate things.

    That’ll just have to be part of evaluating whether a game is “good” or not, I guess. If game companies hobble their NPCs with all sorts of limitations on what they can talk about, then it’ll harm the reception of the game and drop its Metacritic score.

    I do see some interesting hurdles that were likely never imagined when the rules were written. How do you come up with an ESRB rating for a game where you don’t know what topics your NPCs might talk about or what sorts of quest lines might ultimately be generated?

    Numerous game-breaking states could result, because you’re risking a more traditional Dungeons & Dragons Dungeon Master problem: your party somehow has failed to ask an NPC the right kind of questions, or to even consider that they might have information relevant to the campaign. How do you get this information across if the player isn’t somehow prompted to attempt it?

    That seems like something that an AI-driven game might actually be better at, if properly done. The AI could review the dialogue the character has participated in so far and ask itself “has the player found out the location of the cave with Necklace of Frinn yet?” And if it sees that the player just keeps on missing that vital clue somehow it could start coming up with new ways to slip that information into future dialogues. Drop hints and clues, maybe even invent a letter to have delivered to the player, that sort of thing.

    Whereas in a pre-scripted game if a player misses a vital clue they might end up frustrated and stuck, not knowing they need to backtrack to find what they overlooked.
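    The “has the player found out X yet?” loop described above can be sketched in a few lines. This is just an illustrative toy, not anyone’s actual implementation: the goal table, the keyword matching, and the canned hint text are all made up here, and a real AI-driven game would have an LLM read the dialogue history and phrase the hint naturally instead of doing string matching.

    ```python
    # Toy sketch of an AI Dungeon Master checking whether a vital clue
    # has landed, and escalating hints if the player keeps missing it.
    # KNOWLEDGE_GOALS and the hint wording are hypothetical examples.

    KNOWLEDGE_GOALS = {
        "necklace_cave_location": ["cave", "necklace of frinn"],
    }

    def goal_revealed(goal, dialogue_history):
        """True if any past dialogue line mentions all of the goal's keywords."""
        keywords = KNOWLEDGE_GOALS[goal]
        return any(
            all(k in line.lower() for k in keywords)
            for line in dialogue_history
        )

    def next_npc_line(goal, dialogue_history, base_line):
        """If the player still hasn't learned the clue, weave it into the NPC's next line."""
        if goal_revealed(goal, dialogue_history):
            return base_line
        # The player missed it so far: slip the clue into this dialogue.
        # (An LLM would rephrase this in the NPC's voice, or invent a letter, etc.)
        return base_line + " By the way, folk whisper that the Necklace of Frinn lies in a cave to the north."

    history = ["Nice weather today.", "Have you seen my sheep?"]
    line = next_npc_line("necklace_cave_location", history, "Safe travels, stranger.")
    ```

    The same check could run after every conversation, getting progressively less subtle each time the clue fails to come up.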

    I think this AI stuff is a cheap cop-out that uses way too much energy for a weak result.

    If the games using AI aren’t good then they won’t sell well. This is a self-correcting problem.