Description of the problem (please keep it simple and short):
I am developing a report system for one of my websites, and I would like to use Replit AI Modelfarm to check whether a report is legitimate. However, if the user includes the keyword "Bullying"/"Bully", an error is returned.
I don’t think there’s a direct fix, unless you open a support ticket to allow those words (I don’t know if that would do anything), or detect the keywords yourself without AI and automatically replace them with different words that aren’t on the banned list.
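The keyword-replacement idea above could look something like this. Note that the word list and substitutes here are invented placeholders; Modelfarm's actual banned-keyword list isn't publicly documented, so you'd have to build the map from the words you observe triggering errors:

```python
import re

# Hypothetical replacement map: flagged keywords -> neutral stand-ins.
# These entries are examples, not Modelfarm's real banned list.
REPLACEMENTS = {
    "bullying": "harassing",
    "bully": "harasser",
}

def sanitize(text: str) -> str:
    """Replace flagged keywords (case-insensitive, whole words only)
    before sending the text to the moderation model."""
    # Longer keys first so "bullying" isn't partially matched as "bully".
    pattern = re.compile(
        r"\b(" + "|".join(sorted(map(re.escape, REPLACEMENTS), key=len, reverse=True)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(lambda m: REPLACEMENTS[m.group(0).lower()], text)

print(sanitize("He keeps Bullying me"))  # -> "He keeps harassing me"
```

One caveat with this approach: swapping words can change the meaning the model sees, so the substitutes should stay as close in meaning to the originals as possible.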
You could also train your own mini AI using spaCy and NLP; it’s not difficult — I’ve done it myself. You’ll just need a large knowledge base (text, JSON, etc.).
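In practice you'd use spaCy's `textcat` pipeline component for this, but to illustrate the underlying idea without any dependencies, here is a tiny bag-of-words Naive Bayes classifier in pure Python. The training examples and labels are invented for illustration; a real report classifier would need a much larger labeled dataset:

```python
import math
from collections import Counter

# Toy labeled data: (report text, label). Invented examples for illustration.
TRAIN = [
    ("he keeps insulting and threatening me", "legit"),
    ("this player broke the rules again", "legit"),
    ("asdf asdf test test", "spam"),
    ("hello hello hello", "spam"),
]

def train(examples):
    """Count word frequencies per label and label frequencies overall."""
    word_counts = {}          # label -> Counter of words
    label_counts = Counter()  # label -> number of examples
    for text, label in examples:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest log-probability for the text,
    using add-one smoothing for unseen words."""
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(counts.values())
        for word in text.split():
            score += math.log((counts[word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(TRAIN)
print(classify("he keeps threatening players", *model))  # -> "legit"
```

With enough labeled reports, the same idea (done properly with spaCy's training loop) could decide "legit vs. not" locally, sidestepping the banned-keyword problem entirely.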