Description of the problem (please keep it simple and short):
I am developing a report system for one of my websites, and I would like to use Replit AI Modelfarm to check whether a report is legitimate. However, if the user includes the keyword "Bullying"/"Bully", an error is returned.
Is there any way around this?
I don’t think so, unless you maybe file a ticket asking for those words to be allowed (I don’t know whether that would do anything), or detect the keywords yourself without AI and automatically replace them with different words that aren’t on the banned list, I guess.
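The replace-before-sending idea could be sketched like this. The `BLOCKED_TERMS` mapping here is hypothetical — you'd have to discover by trial and error which words actually trip Modelfarm's filter, and pick substitutes that preserve enough meaning for the model to judge the report:

```python
import re

# Hypothetical mapping of filter-tripping keywords to neutral substitutes.
BLOCKED_TERMS = {
    "bullying": "harassment",  # longest forms first so the regex prefers them
    "bully": "harasser",
}

def mask_blocked_terms(text: str) -> str:
    """Swap banned keywords for substitutes before calling the model."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, BLOCKED_TERMS)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(lambda m: BLOCKED_TERMS[m.group(0).lower()], text)

print(mask_blocked_terms("He keeps Bullying me"))  # He keeps harassment me
```

The substitutions won't always be grammatical, but the model only needs the gist of the report to classify it.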
What do you mean by report system?
Probably to report bad messages in a chat app, or something along those lines.
They can train their own mini AI using spaCy and NLP; it’s not difficult. I’ve done it myself. You’ll just need a large knowledge base (text, JSON, etc.).
I’m currently training a profanity detection model. It should be compact enough to run on Replit… I think?
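To illustrate the train-your-own-classifier route without pulling in spaCy, here is a tiny dependency-free naive Bayes sketch of the same idea — a bag-of-words model scoring reports as abusive or not. The four training sentences are made up; a real model would need the large corpus mentioned above (and spaCy's `textcat` pipeline would do this properly):

```python
import math
from collections import Counter

# Toy training data -- a real model needs a much larger labelled corpus.
TRAIN = [
    ("you are an idiot and everyone hates you", "abusive"),
    ("stop messaging me you loser", "abusive"),
    ("thanks for the quick reply", "ok"),
    ("the new feature works great", "ok"),
]

def train(data):
    """Count word occurrences per label."""
    counts = {"abusive": Counter(), "ok": Counter()}
    totals = Counter()
    for text, label in data:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the highest log-likelihood for the text."""
    vocab = {w for c in counts.values() for w in c}
    scores = {}
    for label in counts:
        score = 0.0  # uniform prior, so only word likelihoods matter
        for w in text.lower().split():
            # Laplace smoothing keeps unseen words from zeroing a label out
            score += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

counts, totals = train(TRAIN)
print(classify("you are a loser", counts, totals))  # abusive
```

Something this small runs comfortably on Replit; the size concern only really kicks in once you load a full spaCy model or a large corpus into memory.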
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.