cross-posted from: https://lemmings.world/post/21993947
Since I suggested that I'm willing to hook my computer up to an LLM and a Mastodon account, I've gotten vocal anti-AI sentiment. I'm wondering if the fediverse has a plugin to find bots larping as people. As of now I haven't made the bot, and I won't disclose when I do make it.
It's like I said: people on platforms like Reddit complain a lot about bots. This platform, on the other hand, is kind of supposed to be the better version of that, so hopefully without the same negative dynamics. And I can still tell ChatGPT's unique style apart from a human's. Once you go into detail, you'll notice the quirks or the intelligence of your conversational partner. So yeah, some people use ChatGPT without disclosing it; you'll stumble across that when reading AI-generated article summaries and so on. You're definitely not the first person with that idea.
Reddit is different from the fediverse. They work on different principles, and I'd argue the fediverse is very libertarian.
Is there any way you can rule out survivorship bias? Plus, I'm already doing preliminary work: I'm looking into making responses shorter so there's less information to go on, and I'm trying different models.
What kind of models are you planning to use? Some of the LLMs you run yourself? Or the usual ChatGPT/Grok/Claude?
So far I've experimented with Llama 3.2 via Ollama (I don't have enough RAM for 3.3) and DeepSeek-R1 7B (I discovered that it's verbose and asks a lot of questions), and I'll try Phi-4 later. I could use the ChatGPT models since I have tokens. Ironically, I'm thinking about making a genetic algorithm of prompt templates plus a confidence check. It's oddly meta.
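To make that idea concrete, here's a minimal sketch, assuming the official `ollama` Python client and a locally pulled `llama3.2` model. The prompt templates, style fragments, and the fitness heuristic are placeholders I made up for illustration, and the `num_predict` cap is how I'd keep replies short so there's less text to go on.

```python
# Rough sketch of a genetic algorithm over prompt templates with a "confidence check".
# Assumes the official `ollama` Python client and a locally pulled llama3.2 model;
# the fitness function is a placeholder heuristic, not a real "sounds human" score.
import random
import ollama

MODEL = "llama3.2"

# Initial population: system-prompt templates that shape the bot's writing style.
POPULATION = [
    "Reply in one short, casual sentence. No lists, no emojis.",
    "Answer briefly and informally, like a tired forum regular.",
    "Keep it under 25 words and avoid sounding like an assistant.",
    "Respond tersely; occasional typos are fine.",
]

# Extra instructions a mutation can bolt onto a template.
STYLE_FRAGMENTS = [
    "Never apologise.",
    "Don't ask follow-up questions.",
    "Use lowercase.",
    "Drop some punctuation.",
]

def generate_reply(template: str, post: str) -> str:
    response = ollama.chat(
        model=MODEL,
        messages=[
            {"role": "system", "content": template},
            {"role": "user", "content": post},
        ],
        options={"num_predict": 60},  # cap the reply length: less text to analyse
    )
    return response["message"]["content"]

def fitness(reply: str) -> float:
    # Placeholder confidence check: penalise length and obvious assistant tells.
    score = 100.0 - len(reply.split())
    for tell in ("as an ai", "i'm happy to help", "certainly!"):
        if tell in reply.lower():
            score -= 50
    return score

def mutate(template: str) -> str:
    # Mutation: append one random extra style instruction.
    return template + " " + random.choice(STYLE_FRAGMENTS)

def evolve(post: str, generations: int = 3) -> str:
    population = list(POPULATION)
    for _ in range(generations):
        ranked = sorted(population,
                        key=lambda t: fitness(generate_reply(t, post)),
                        reverse=True)
        survivors = ranked[: len(ranked) // 2]                    # selection
        population = survivors + [mutate(t) for t in survivors]   # reproduction
    return population[0]

if __name__ == "__main__":
    print(evolve("What do you all think about federated moderation?"))
```

Selection here is just "keep the top half and mutate it", which is crude, but it's enough to test whether evolving the system prompt actually changes how detectable the replies are before spending ChatGPT tokens on it.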