The SOBIE Conference Platform includes a comprehensive content moderation system designed to maintain professional academic standards and to keep inappropriate, harmful, or unprofessional content (for example, embedded `<script>` or `<iframe>` markup) out of user profiles and community interactions.

The content moderation system automatically checks the following user profile fields:
- `profile.bio` - User biography
- `profile.interests` - Array of interest tags
- `profile.expertiseAreas` - Array of expertise areas
- `name.prefixCustom` - Custom title prefix
- `name.suffixCustom` - Custom suffix/credentials
- `name.pronounsCustom` - Custom pronouns
- `nametag.preferredSalutation` - Conference nametag salutation
- `nametag.displayName` - Alternative display name
- `affiliation.jobTitle` - Professional job title
- `affiliation.position` - Position description
- `profile.socialLinks[].title` - Link titles
- `profile.socialLinks[].description` - Link descriptions
- `profile.socialLinks[].customCategory` - Custom category names
- `profile.socialLinks[].url` - URLs (checked for suspicious patterns)

Content moderation runs automatically during the Mongoose `pre('save')` middleware:
```javascript
// High-severity violations reject the entire save operation
if (moderationErrors.length > 0) {
  return next(new Error('Content moderation failed: ' + moderationErrors.join(' ')));
}

// Medium-severity violations clean the content automatically
this.profile.bio = bioCheck.cleanedText;
```
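
The excerpt above omits the surrounding hook. Below is a minimal sketch of how such a hook could be wired up; the schema shape, the single-field check, and the `contentModerator` import path are assumptions rather than the platform's exact middleware.

```javascript
// Minimal sketch of a pre('save') moderation hook (assumed wiring, not the
// platform's exact middleware). Only profile.bio is checked here for brevity.
const mongoose = require('mongoose');
const contentModerator = require('../utils/contentModeration');

const userSchema = new mongoose.Schema({
  profile: { bio: String }
});

userSchema.pre('save', function (next) {
  if (!this.isModified('profile.bio')) return next();

  const bioCheck = contentModerator.checkContent(this.profile.bio);

  // High severity: reject the save with a descriptive error
  if (!bioCheck.isClean && bioCheck.severity === 'high') {
    return next(new Error('Content moderation failed: ' + bioCheck.violations.join(' ')));
  }

  // Medium severity: persist the automatically cleaned text instead
  if (!bioCheck.isClean && bioCheck.severity === 'medium') {
    this.profile.bio = bioCheck.cleanedText;
  }

  next();
});
```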
Administrators can run manual content checks using the User model method:
```javascript
const user = await User.findById(userId);
const moderationResult = user.runContentModerationCheck();

if (!moderationResult.isClean) {
  console.log('Violations found:', moderationResult.violations);
  console.log('User-friendly messages:', moderationResult.userMessages);
}
```
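
The same instance method can also be run in bulk. The script below is a rough sketch of a one-off audit; the model import path, query, and output format are assumptions.

```javascript
// Hypothetical audit script: run the moderation check over all users and
// collect accounts with violations (a sketch, not production code).
const User = require('../models/User'); // assumed model path

async function auditAllUsers() {
  const flagged = [];
  const users = await User.find({}); // consider batching/cursors for large collections

  for (const user of users) {
    const result = user.runContentModerationCheck();
    if (!result.isClean) {
      flagged.push({ userId: user._id, violations: result.violations });
    }
  }
  return flagged;
}

auditAllUsers().then((flagged) => console.log(`${flagged.length} profiles flagged`));
```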
Content moderation can be used independently in API endpoints:
```javascript
const contentModerator = require('../utils/contentModeration');

// Check individual content
const result = contentModerator.checkContent(userInput);
if (!result.isClean && result.severity === 'high') {
  return res.status(400).json({ error: 'Content not acceptable' });
}

// Check arrays of content
const arrayResult = contentModerator.checkArray(interests);

// Check social links
const linksResult = contentModerator.checkSocialLinks(socialLinks);
```
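
As a fuller illustration, the sketch below combines the three helpers in an Express profile-update route. The route path, request shape, and response format are assumptions, not the platform's actual endpoint; the table that follows summarizes how each severity level is handled.

```javascript
// Hypothetical Express route combining the three helpers for a profile update.
const express = require('express');
const contentModerator = require('../utils/contentModeration');

const router = express.Router();

router.put('/api/profile', async (req, res) => {
  const { bio, interests, socialLinks } = req.body;

  const bioCheck = contentModerator.checkContent(bio || '');
  const interestsCheck = contentModerator.checkArray(interests || []);
  const linksCheck = contentModerator.checkSocialLinks(socialLinks || []);

  // Reject outright on high-severity content (only bio severity checked for brevity)
  if (bioCheck.severity === 'high') {
    return res.status(400).json({ error: 'Content not acceptable', violations: bioCheck.violations });
  }

  // Otherwise persist the cleaned values (actual update logic omitted)
  const update = {
    'profile.bio': bioCheck.cleanedText,
    'profile.interests': interestsCheck.cleanedItems,
    'profile.socialLinks': linksCheck.cleanedLinks
  };
  // await User.findByIdAndUpdate(req.user.id, update); // assumes auth middleware
  res.json({ success: true, cleaned: update });
});

module.exports = router;
```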
Severity | Trigger Conditions | Automatic Action | User Impact |
---|---|---|---|
None | Clean content | Content saved as-is | No impact |
Low | Unprofessional language | Content saved, flagged for review | No immediate impact |
Medium | Profanity, personal info | Content cleaned automatically | Profanity replaced with asterisks |
High | Script injection, malicious content | Save operation rejected | User receives error message |
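
The mapping in the table can be expressed in application code roughly as follows; the function name and the flag-for-review hook are illustrative assumptions.

```javascript
// Illustrative mapping from moderation severity to the actions described in the
// table above (the function name and flagForReview hook are hypothetical).
function applyModerationResult(text, result, flagForReview) {
  switch (result.severity) {
    case 'high':
      // Reject the save operation entirely
      throw new Error('Content moderation failed: ' + result.violations.join(' '));
    case 'medium':
      // Save the automatically cleaned text (profanity replaced with asterisks)
      return result.cleanedText;
    case 'low':
      // Save as-is, but flag the content for later review
      flagForReview(result.violations);
      return text;
    default:
      // 'none': clean content saved unchanged
      return text;
  }
}
```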
To customize the moderation rules, edit `src/utils/contentModeration.js`:

```javascript
// Extend the profanity word list
this.profanityPatterns = [
  /\b(your|new|words|here)\b/gi,
  // Add new patterns here
];

// Extend the suspicious URL patterns
const suspiciousPatterns = [
  /newspam\.domain/i,
  // Add new suspicious domains
];

// Adjust how violation types map to severity levels
calculateSeverity(violations) {
  if (violations.includes('your_new_high_severity_type')) return 'high';
  // Add custom severity rules
}
```
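
After adding a pattern, it is worth confirming it is actually picked up. The quick check below assumes a spam domain has been added to `suspiciousPatterns`; the domain and link shape are hypothetical.

```javascript
// Quick sanity check for a newly added suspicious-domain pattern
// (the domain and link shape here are hypothetical).
const contentModerator = require('./src/utils/contentModeration');

const links = [{ title: 'My site', url: 'https://newspam.domain/offer' }];
const result = contentModerator.checkSocialLinks(links);

console.log(result.isClean);    // expected: false once the pattern is in place
console.log(result.violations); // details of the flagged link
```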
```bash
# Test content moderation directly
node -e "
const cm = require('./src/utils/contentModeration');
console.log(cm.checkContent('test content here'));
"
```
Run the included test suite:
```bash
npm test -- --testPathPattern=contentModeration
```
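
If you add custom rules, extending the test suite keeps them covered. A minimal example test, assuming Jest and the return shape documented in the API reference below:

```javascript
// tests/contentModeration.test.js (illustrative example, not part of the shipped suite)
const contentModerator = require('../src/utils/contentModeration');

describe('contentModeration', () => {
  test('clean content passes untouched', () => {
    const result = contentModerator.checkContent('Researching supply chain analytics.');
    expect(result.isClean).toBe(true);
    expect(result.severity).toBe('none');
  });

  test('script injection is treated as high severity', () => {
    const result = contentModerator.checkContent('<script>alert(1)</script>');
    expect(result.isClean).toBe(false);
    expect(result.severity).toBe('high');
  });
});
```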
For production use, consider integrating with a professional content moderation service and recording violations in your logging system:

```javascript
// Add to your logging system
if (!moderationResult.isClean) {
  logger.warn('Content moderation violation', {
    userId: user._id,
    violations: moderationResult.violations,
    severity: moderationResult.severity,
    field: 'profile.bio'
  });
}
```
Consider also building admin interfaces for reviewing flagged content.
`contentModerator.checkContent(text)`

Parameters:
- `text` (string): Content to check

Returns:

```javascript
{
  isClean: boolean,
  violations: string[],
  cleanedText: string,
  severity: 'none' | 'low' | 'medium' | 'high'
}
```

`contentModerator.checkArray(items)`

Parameters:
- `items` (string[]): Array of content to check

Returns:

```javascript
{
  isClean: boolean,
  violations: string[],
  cleanedItems: string[]
}
```

`contentModerator.checkSocialLinks(links)`

Parameters:
- `links` (object[]): Array of social link objects

Returns:

```javascript
{
  isClean: boolean,
  violations: string[],
  cleanedLinks: object[]
}
```

`user.runContentModerationCheck()`

Returns:

```javascript
{
  isClean: boolean,
  hasViolations: boolean,
  hasWarnings: boolean,
  violations: object[],
  warnings: object[],
  userMessages: string[]
}
```
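
The distinction between `violations` and `warnings` is not exercised in the earlier examples; the sketch below shows one way the result might drive a response. The return shape of the helper is an assumption for illustration.

```javascript
// Illustrative handling of runContentModerationCheck() results:
// hard violations block the action, warnings are surfaced but allowed.
function summarizeModeration(result) {
  if (result.hasViolations) {
    return { allowed: false, messages: result.userMessages };
  }
  if (result.hasWarnings) {
    return { allowed: true, messages: result.userMessages, flagged: result.warnings };
  }
  return { allowed: true, messages: [] };
}
```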
Note: This content moderation system is designed for academic conference environments and prioritizes professional communication standards while balancing user expression with community safety.