CONTENT MODERATION POLICY
Effective Date: June 1, 2025
1. Purpose
This Content Moderation Policy (“Policy”) is established to define the mechanisms and standards by which ClamCams ensures that all content disseminated through its platform complies with applicable laws, card brand rules (including Visa and Mastercard), and ClamCams’ internal codes of ethics. This Policy is binding on all users, including but not limited to models, studios, content managers, and third-party providers.
2. Scope of Application
This Policy applies to all content—live or pre-recorded—submitted, streamed, uploaded, or otherwise distributed through the ClamCams platform. It also applies to all associated metadata, audiovisual assets, promotional materials, thumbnails, usernames, profile images, and documents submitted for account verification.
3. Legal Framework and Industry Standards
- United States federal and state laws, including 18 U.S.C. § 2257 (record-keeping requirements) and the Children's Online Privacy Protection Act (COPPA)
- The Trafficking Victims Protection Act (TVPA)
- The EARN IT Act and FOSTA-SESTA requirements
- Visa’s adult content compliance requirements
- Mastercard’s Business Risk Assessment and Mitigation (BRAM) program
- The General Data Protection Regulation (GDPR), for users in the European Economic Area
4. Content Categories Subject to Moderation
- Public live streams
- Private live streams (if recorded or accessible by ClamCams)
- Profile pictures and usernames
- Model-generated promotional content (including banners, avatars, teaser videos)
- Pre-recorded content (clips, replays, previews)
- Account documentation and ID verification media
5. Prohibited Content
- Any content involving or simulating minors (real or fictional)
- Bestiality, incest, non-consensual acts, or rape (real or simulated)
- Sex trafficking, exploitation, or coercion of any kind
- Physical violence, mutilation, or torture
- Drug use, intoxication, or the depiction of illegal substances
- Any material that violates U.S. law, local law, or card brand compliance requirements
- Pre-recorded videos presented as live without a disclaimer
6. Moderation Process
6.1 Pre-Publication Review
- All pre-recorded content is subject to mandatory pre-publication moderation.
- ClamCams uses a hybrid system combining automated AI detection with human moderation.
- Content is screened for prohibited visuals, metadata inconsistencies, and other rule violations; an illustrative sketch of this routing follows below.
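For illustration only, the following Python sketch shows how the hybrid review flow in Section 6.1 could route a pre-recorded upload to automatic rejection, human review, or approval. All names and threshold values are hypothetical assumptions, not ClamCams' actual parameters.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration; not ClamCams' actual values.
AUTO_REJECT_THRESHOLD = 0.90   # very high model confidence of prohibited content
HUMAN_REVIEW_THRESHOLD = 0.30  # anything above this is escalated to a moderator

@dataclass
class UploadedClip:
    clip_id: str
    risk_score: float          # output of the automated detection model, 0.0-1.0
    metadata_consistent: bool  # result of the metadata consistency check

def route_for_moderation(clip: UploadedClip) -> str:
    """Decide the next step for a pre-recorded upload before publication."""
    if clip.risk_score >= AUTO_REJECT_THRESHOLD:
        return "reject"        # clearly prohibited content is blocked outright
    if clip.risk_score >= HUMAN_REVIEW_THRESHOLD or not clip.metadata_consistent:
        return "human_review"  # ambiguous or inconsistent material goes to a human
    return "approve"           # low-risk, consistent content may be published

# Example: a mid-range score with inconsistent metadata is escalated to a human.
print(route_for_moderation(UploadedClip("clip-001", 0.42, False)))  # human_review
```

In a design of this kind, the automated score determines routing only; the publication decision for anything in the review band remains with a human moderator.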
6.2 Live Content Supervision
- Live broadcasts are supervised in real time using machine learning models trained to detect risk indicators.
- Alerts are escalated to the compliance team for human intervention if necessary.
- Content flagged for review may be interrupted and subject to audit.
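Similarly, the real-time supervision in Section 6.2 can be pictured as a rule applied to each alert emitted by the risk-detection models. The sketch below is illustrative only; the threshold values and action names are assumptions.

```python
import time
from dataclasses import dataclass, field

# Hypothetical thresholds for illustration; not production values.
ESCALATE_THRESHOLD = 0.60   # notify a human compliance reviewer
INTERRUPT_THRESHOLD = 0.85  # pause the broadcast pending review

@dataclass
class LiveAlert:
    stream_id: str
    risk_score: float
    timestamp: float = field(default_factory=time.time)

def handle_alert(alert: LiveAlert) -> list[str]:
    """Return the actions taken for one real-time risk alert."""
    actions = []
    if alert.risk_score >= ESCALATE_THRESHOLD:
        actions.append("notify_compliance_team")
    if alert.risk_score >= INTERRUPT_THRESHOLD:
        actions.append("interrupt_stream")
        actions.append("flag_for_audit")
    return actions

# Example: a high-risk alert triggers escalation, interruption, and an audit flag.
print(handle_alert(LiveAlert("stream-42", 0.90)))
# ['notify_compliance_team', 'interrupt_stream', 'flag_for_audit']
```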
6.3 Document and Identity Verification
- All models must pass KYC verification, including:
  - Government-issued ID
  - Facial biometric match (live selfie with liveness detection)
  - Age validation (18+)
- Document forgery or impersonation results in immediate suspension.
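As an illustration of the verification rules above, the sketch below encodes them as a single check. Field names are hypothetical; detected forgery or impersonation short-circuits to immediate suspension, mirroring Section 6.3.

```python
from dataclasses import dataclass

@dataclass
class KycSubmission:
    has_government_id: bool
    biometric_match: bool        # live selfie matches the ID photo, liveness passed
    age: int
    forgery_detected: bool
    impersonation_detected: bool

def evaluate_kyc(submission: KycSubmission) -> str:
    """Apply the verification rules described in Section 6.3."""
    if submission.forgery_detected or submission.impersonation_detected:
        return "suspend_immediately"
    checks = (
        submission.has_government_id,
        submission.biometric_match,
        submission.age >= 18,
    )
    return "verified" if all(checks) else "rejected"

# Example: a complete, consistent submission from an adult model is verified.
print(evaluate_kyc(KycSubmission(True, True, 25, False, False)))  # verified
```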
7. Enforcement and Sanctions
- Temporary or permanent suspension of accounts
- Forfeiture of earnings or bonuses
- Reporting to relevant law enforcement or regulatory bodies
- Blacklisting from future access to the platform
8. Appeals and Redress
Models may appeal content moderation decisions within seventy-two (72) hours of the decision by emailing compliance@clamcams.com. Appeals must include:
- Reference to the flagged content or session
- Statement of facts
- Any supporting documents or media
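For illustration, the admissibility conditions above (the 72-hour window plus the required elements) could be checked as follows. Field names are hypothetical, and supporting documents or media are treated as optional, as in the list above.

```python
from datetime import datetime, timedelta, timezone

APPEAL_WINDOW = timedelta(hours=72)
# Supporting documents or media are optional; these two elements are required.
REQUIRED_FIELDS = {"content_reference", "statement_of_facts"}

def appeal_is_admissible(decision_time: datetime, appeal: dict) -> bool:
    """Check the 72-hour deadline and the required elements of an appeal."""
    on_time = datetime.now(timezone.utc) - decision_time <= APPEAL_WINDOW
    complete = REQUIRED_FIELDS.issubset(appeal.keys())
    return on_time and complete

# Example: an appeal filed 10 hours after the decision with both required elements.
decision_time = datetime.now(timezone.utc) - timedelta(hours=10)
print(appeal_is_admissible(decision_time, {
    "content_reference": "session-0012",
    "statement_of_facts": "The flagged scene did not involve any prohibited conduct.",
}))  # True
```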
9. Recordkeeping
All moderation actions, including AI flags and human decisions, are logged and retained for a minimum of two (2) years, in accordance with regulatory expectations and potential audits by card brands; an illustrative retention check is sketched below.
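The retention rule above can be expressed as a simple check on each log record. The two-year figure comes from this section; everything else in the sketch (names, and purging as the disposal path) is an assumption for illustration.

```python
from datetime import datetime, timedelta, timezone

# Two-year minimum retention from Section 9; records may be kept longer.
MIN_RETENTION = timedelta(days=2 * 365)

def may_purge(logged_at: datetime, now: datetime | None = None) -> bool:
    """A moderation log record may only be disposed of after the minimum retention."""
    now = now or datetime.now(timezone.utc)
    return now - logged_at >= MIN_RETENTION

# Example: a record from January 2023 checked against June 2025.
print(may_purge(datetime(2023, 1, 15, tzinfo=timezone.utc),
                now=datetime(2025, 6, 1, tzinfo=timezone.utc)))  # True
```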
10. Policy Updates
This Policy is reviewed quarterly or upon changes to applicable law or card brand rules. The latest version will be published at https://clamcams.com/content-policy.
ClamCams is committed to ethical, lawful, and transparent operation. Moderation is not censorship—it is protection.