AI-generated replies

Is anyone else noticing AI-generated replies in topics asking for help? I don’t get the point. Here’s one example: IO cards added by ipad not shows up on desktop - #3 by brandon698sherrick
Maybe the login/signup page should have CAPTCHAs?

Just found out that it might give the user a notification, lol.
Actually, that might be a good way to tell whether a reply came from ChatGPT.

On one hand, I don’t want to believe this is AI-generated; on the other hand, it doesn’t completely sound like a human to me. Let’s see if they reply to this topic.

That one is not as obvious as others have been, but yes. It’s been happening in the subreddit too.

What bothers me is that they are vague to the point of uselessness, akin to someone responding to a post with “I don’t know” and then wandering off-topic. The worst of them simply give incorrect advice.

[Screenshotting for purposes of discussion]


A bit off-topic, but why are some people so confident that something we can’t control now will be controllable in the future?
This is honestly scary. AI development should be halted immediately, and we should start researching how to control this thing first.

This, by the way, is also happening on Facebook, Instagram, and YouTube. On YouTube you will find channels with AI-generated videos that have hundreds of thousands of views, and they even market this stuff as “AI stories.”

The usual pro-AI argument is that people are always scared of new things. Well, that’s just a blatant lie: for at least the last couple of centuries, people have generally believed the future would be better.

Anyway, I don’t think we will have good CAPTCHAs in the future; an AI-generated reply will be about as obvious as the fact that this post is being written by a person living in Asia.


For now, I think we should be able to flag replies as “apparently AI-generated.” I don’t know how the identification would actually work, though.

(BTW, it would be kind of ironic if these bots were run by an anti-AI group trying to build negative public opinion about AI.)

If you search his handle on Google, you will see a bunch of new posts starting from yesterday. So someone is probably using this to create an online identity.
I feel like this is also the case for the other users with AI-like replies: they’re trying to create an identity.
The reason, though? No idea…


Not sure AI needs any campaign like that. When I used ChatGPT to check whether that reply was AI-generated, it said it was indeed the case because the reply was missing the entire context, lol.


There have been other situations where it’s been a naive-but-apparently-well-meaning new user who claims they “just want to help.” Still leaves me wondering how posting bad/vague advice is helpful.


I’ve deleted a few obvious ones in the past when I spot them. I’ve always assumed spam was their end game.


Well this is a good feature at least:

It can’t be abused… right?

Discourse grants different flagging powers by “trust level,” so it’s a fairly robust system:

  • “New” users at trust level 0 can’t flag posts.
  • “Basic” users at trust level 1 and “Member” users at trust level 2 can flag posts.
  • For “Regular” users at trust level 3, flagging a “New”-user post will immediately hide the post.
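
The trust-level rules above can be sketched roughly like this. This is only an illustrative model of the three bullet points, not Discourse’s actual implementation; `can_flag` and `flag_hides_post` are made-up names:

```python
def can_flag(trust_level: int) -> bool:
    """Trust level 0 ("New") users can't flag; level 1+ ("Basic" and up) can."""
    return trust_level >= 1

def flag_hides_post(flagger_level: int, author_level: int) -> bool:
    """A flag from a level-3 ("Regular") user on a level-0 ("New") user's
    post hides the post immediately; other flags just go to review."""
    return can_flag(flagger_level) and flagger_level >= 3 and author_level == 0
```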

I only learned about it today. It’s better than what we have on other platforms. But I don’t understand why anyone should lose the Regular tag for not visiting on 50% of days. That seems like a pretty arbitrary rule; not visiting for a few months doesn’t make a person any less trustworthy.

But it does make you less of a “regular” on the forums. It’s about what’s good for the forums – who is around and active, who knows what’s been going on here in the past few months. It’s not about aggrandizing particular users.


Yeah, I agree. The criteria just need to be a bit different: they should only be about how much time you spend reading, what percentage of topics you read, and so on.

Topics viewed and topics read are both already included in the criteria.


I lost my trust level because of that. I was one of the first users to reach Regular when the Anki forums migrated to Discourse.


LOL I just found an old post of mine while trying to find the criteria for trust levels: