Content Moderation Policy
Last Updated: January 2, 2026
Effective Date: January 2, 2026
Table of Contents
1. Introduction
2. Prohibited Content Categories
3. Age Verification Requirements
4. Moderation Process
5. Appeals & Due Process
6. Transparency & Accountability
7. Special Protections for Marginalized Communities
8. Regional Compliance
9. Reporting Mechanisms
10. Contact Information
1. Introduction
Pinporn.app ("we," "us," "our," or "the Platform") is committed to maintaining a safe, legal, and respectful environment for all users. This Content Moderation Policy outlines what types of content are prohibited on our Platform, how we enforce these rules, and how users can report violations or appeal moderation decisions.
This policy applies to all content uploaded, shared, or distributed on our Platform, including:
- Images, videos, and GIFs (pins)
- User profiles, biographies, and display names
- Board titles and descriptions
- Comments and direct messages
- Links to external websites
Our Commitment to Safety
We are committed to:
- Zero tolerance for CSAM: Child sexual abuse material is illegal and will always result in immediate removal, account termination, and reporting to law enforcement.
- Protecting privacy: Non-consensual intimate imagery (NCII) violates victims' privacy and dignity and will be removed promptly.
- Respecting intellectual property: Copyright infringement harms creators and will be addressed through our DMCA process.
- Preventing harm: Content that promotes violence, self-harm, hate speech, or illegal activity has no place on our Platform.
- Supporting marginalized communities: We actively protect LGBTQ+ individuals, sex workers, and other marginalized groups from harassment and discrimination.
Note: This policy should be read in conjunction with our Terms of Service, Privacy Policy, and DMCA Policy.
2. Prohibited Content Categories
The following categories of content are strictly prohibited on our Platform and will result in immediate removal and potential account termination:
2.1 Child Sexual Abuse Material (CSAM)
⛔ ZERO TOLERANCE POLICY
Pinporn.app has zero tolerance for child sexual abuse material (CSAM) of any kind. This includes:
- Any sexually explicit content depicting individuals under 18 years of age
- Depictions of minors in sexually suggestive poses or contexts (even if not explicitly sexual)
- Computer-generated imagery (CGI), cartoons, anime, or "loli/shota" content depicting minors in sexual contexts
- Age-regressed content depicting adults portrayed as minors in sexual contexts
- Text, stories, or role-play content sexualizing minors
- Links to external sites hosting CSAM
Enforcement Actions for CSAM:
- Immediate Content Removal: Content is removed from the Platform immediately upon detection or report.
- Permanent Account Termination: The user's account is permanently banned, with no possibility of reinstatement.
- IP Address Ban: The user's IP address and device fingerprints are permanently banned from accessing the Platform.
- Law Enforcement Reporting: We file a report with the National Center for Missing & Exploited Children (NCMEC) via CyberTipline as required by 18 U.S.C. § 2258A.
- Evidence Preservation: We preserve all evidence for law enforcement investigations, including user data, IP logs, and uploaded content.
- Cooperation with Authorities: We fully cooperate with law enforcement agencies (FBI, Interpol, local police) in investigations and prosecutions.
⚠️ Reporting CSAM
If you encounter suspected CSAM on our Platform:
- DO NOT download, share, or interact with the content - this may be illegal in your jurisdiction
- Report it immediately using our reporting tools or email [email protected]
- You can also report directly to NCMEC at https://report.cybertip.org
Legal Framework: 18 U.S.C. § 2251 (sexual exploitation of children), 18 U.S.C. § 2252 (distribution of CSAM), 18 U.S.C. § 2258A (reporting requirements), Directive 2011/93/EU (EU child sexual abuse directive), UK Sexual Offences Act 2003.
2.2 Non-Consensual Intimate Imagery (NCII)
We strictly prohibit the sharing of non-consensual intimate images or videos, commonly known as "revenge porn." This includes:
- Sexually explicit images or videos shared without the subject's consent
- Content obtained through hacking, phishing, or unauthorized access to devices
- Upskirt, downblouse, or other voyeuristic content captured without consent
- Content recorded in private settings (bedrooms, bathrooms, changing rooms) without consent
- Deepfakes or AI-generated sexually explicit content depicting real individuals without consent
- Content shared after consent was withdrawn
- Content from previous relationships shared without ongoing consent
Enforcement Actions for NCII:
- Immediate Content Removal: Content is removed within 24 hours of receiving a valid report.
- Account Suspension or Termination: First offense may result in permanent ban; repeat offenses always result in permanent ban.
- Hash-Based Blocking: We create digital hashes of removed NCII content to prevent re-uploads (a simplified sketch of this mechanism appears after this list).
- Victim Support: We provide victims with information about legal resources, counseling services, and organizations like StopNCII.org.
- Law Enforcement Cooperation: We cooperate with law enforcement in jurisdictions where NCII is criminal (46 U.S. states, UK, Australia, etc.).
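To make the hash-based blocking step above concrete, here is a minimal sketch. It uses exact SHA-256 file hashes for simplicity; production matching (including StopNCII.org's) relies on perceptual hashes such as PDQ or PhotoDNA so that re-encoded or lightly edited copies also match. The names below are illustrative, not our production code.

```python
import hashlib

# Illustrative in-memory blocklist; a real system would persist hashes
# and share them with industry hash databases.
blocked_hashes: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying this exact file's bytes."""
    return hashlib.sha256(data).hexdigest()

def block(data: bytes) -> None:
    """Record the hash of removed content so identical re-uploads are rejected."""
    blocked_hashes.add(fingerprint(data))

def is_blocked(data: bytes) -> bool:
    """Check an incoming upload against the blocklist before storing it."""
    return fingerprint(data) in blocked_hashes
```

An exact hash only catches byte-identical files; that limitation is why perceptual hashing is the standard approach for NCII and CSAM matching.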
Reporting NCII as a Victim
If you are a victim of NCII and find your content on our Platform:
- Email [email protected] with the URL of the content
- You do NOT need to prove you are the person depicted (we prioritize victim privacy)
- We will remove the content within 24 hours and notify you
- Visit StopNCII.org to create a hash of your content to prevent it from being re-uploaded across multiple platforms
All reports are handled confidentially. We will never contact the uploader with your personal information.
Legal Framework: State revenge porn laws (46 U.S. states), UK Criminal Justice and Courts Act 2015 (s. 33), Australia Enhancing Online Safety Act 2015, EU Victims' Rights Directive (2012/29/EU).
2.3 Copyright Infringement
We respect the intellectual property rights of content creators. Uploading copyrighted content without authorization is prohibited. This includes:
- Professionally produced adult content (DVDs, streaming service content, studio content)
- Content from subscription platforms (OnlyFans, Patreon, etc.) uploaded without creator permission
- Watermarked content with watermarks removed or obscured
- Content from other adult platforms uploaded without authorization
Copyright Enforcement: Copyright infringement is handled through our DMCA Policy. Copyright owners can submit takedown notices, and we operate a three-strike repeat infringer policy. See our DMCA Policy for full details.
2.4 Violence, Gore, & Extreme Content
Content depicting or promoting violence, self-harm, or extreme gore is prohibited. This includes:
- Content depicting graphic violence, death, or serious injury
- Content promoting or glorifying self-harm, suicide, or eating disorders
- Content depicting animal abuse or cruelty
- Content depicting non-consensual violence (rape, assault, abuse)
- "Snuff" content or claims of real harm
- Content promoting dangerous activities or challenges that could result in injury
Note: Consensual BDSM content is permitted on our Platform, provided all participants are consenting adults. However, content must not depict or simulate non-consent, serious injury, or activities that would be illegal if real.
2.5 Hate Speech & Harassment
We do not tolerate hate speech, harassment, or content that promotes discrimination or violence against individuals or groups based on protected characteristics. Prohibited content includes:
- Content promoting violence, hatred, or discrimination based on race, ethnicity, national origin, religion, caste, sexual orientation, gender identity, disability, or immigration status
- Slurs, derogatory terms, or symbols associated with hate groups (swastikas, Confederate flags, etc.)
- Holocaust denial, genocide denial, or promotion of hate ideologies
- Targeted harassment, doxxing (sharing private information), or brigading
- Content that dehumanizes or objectifies individuals based on protected characteristics
- Threatening, intimidating, or abusive messages sent to other users
Special Protections for Marginalized Communities
We recognize that LGBTQ+ individuals, sex workers, and other marginalized groups face disproportionate levels of harassment and discrimination. We apply heightened scrutiny to reports of harassment against these communities and will take swift action against bad-faith reporting or coordinated harassment campaigns. See Section 7 for details.
2.6 Illegal Content & Activities
Content promoting, facilitating, or depicting illegal activities is prohibited. This includes:
- Drug trafficking, sale, or distribution (including prescription drugs)
- Human trafficking, sex trafficking, or prostitution solicitation
- Sale of firearms, explosives, or regulated weapons
- Hacking, phishing, malware distribution, or other cybercrime
- Fraud, scams, or Ponzi schemes
- Bestiality or zoophilia
- Incest (real or simulated with actors portraying family members)
- Content depicting individuals who appear to be intoxicated to the point of incapacitation
Note: What constitutes "illegal content" varies by jurisdiction. We apply U.S. federal law as our baseline but also consider EU, UK, and other major jurisdictions when evaluating content.
2.7 Spam, Manipulation, & Inauthentic Behavior
Activities designed to manipulate the Platform, deceive users, or artificially inflate engagement are prohibited. This includes:
- Creating multiple accounts to circumvent bans or manipulate voting/engagement
- Using bots or automated tools to mass-upload content, post comments, or inflate engagement metrics
- Engaging in vote manipulation, like-farming, or artificially inflating view counts
- Repetitive, unsolicited messages or spam comments
- Misleading links, clickbait titles, or deceptive content
- Phishing attempts or social engineering attacks
2.8 Impersonation & Misleading Identity
Impersonating other individuals, brands, or organizations is prohibited. This includes:
- Creating profiles that falsely claim to be another person (celebrity, content creator, etc.)
- Using another person's name, likeness, or branding without authorization
- Impersonating Platform staff, moderators, or administrators
- Parody or fan accounts that do not clearly disclose they are unofficial
Note: Parody and fan accounts are permitted if they clearly indicate they are unofficial in the profile name and bio (e.g., "Not affiliated with [Person]").
3. Age Verification Requirements
All users must be at least 18 years of age to access or use our Platform. We implement multiple layers of age verification:
- Age Gate: Users must affirm they are 18+ before accessing the Platform.
- Account Registration Age Check: Users must provide a date of birth during registration (a minimal sketch of this check follows the list below).
- ID Verification for Uploaders: Users who upload content must verify their identity by submitting government-issued ID, in compliance with 18 U.S.C. § 2257 record-keeping requirements. See our 18 U.S.C. § 2257 Statement for details.
- Content Analysis: We use automated tools and human review to detect content that may depict minors.
- User Reports: Users can report content or accounts suspected of involving minors.
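To illustrate the registration-time age check, here is a minimal sketch of the date-of-birth arithmetic. The function and its convention are hypothetical, and a self-reported birth date is only one layer alongside the ID verification described above.

```python
from datetime import date

def is_adult(dob: date, today: date | None = None, min_age: int = 18) -> bool:
    """True if the person born on `dob` has turned `min_age`.

    Subtracting years alone overcounts anyone whose birthday has not
    yet occurred this year; comparing (month, day) tuples corrects that.
    """
    today = today or date.today()
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= min_age

# Born 2008-03-01: still 17 on 2026-01-02, so registration is refused.
assert not is_adult(date(2008, 3, 1), today=date(2026, 1, 2))
assert is_adult(date(2008, 1, 2), today=date(2026, 1, 2))
```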
Consequences for Underage Access: Any user discovered to be under 18 will have their account immediately and permanently terminated. If content depicting a minor is discovered, we will report it to NCMEC and law enforcement as described in Section 2.1.
⚠️ State Age Verification Laws
Several U.S. states (Louisiana, Arkansas, Mississippi, Utah, Virginia, Montana, North Carolina, Texas) have enacted laws requiring age verification for adult content platforms. We comply with these laws by implementing age gates and may implement additional verification methods as required.
4. Moderation Process
We use a combination of automated detection, user reports, and human review to enforce this policy. Here's how our moderation process works:
4.1 Detection Methods
Automated Systems
- Hash Matching: We maintain databases of known CSAM hashes (from NCMEC) and NCII hashes (from StopNCII.org) to block uploads of previously identified illegal content.
- Machine Learning: We use AI/ML models to detect potential violations (CSAM, NCII, violence, etc.). In line with DSA Art. 14, automated flags are routed to human review rather than triggering automatic removal (a simplified sketch of this flag-for-review flow follows this list).
- Keyword Filtering: We monitor for keywords associated with prohibited content (e.g., terms suggesting minors, trafficking, etc.)
- Metadata Analysis: We analyze image EXIF data, upload patterns, and user behavior for anomalies
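The sketch below shows, with assumed names and toy data, how these signals can be combined: exact matches against previously identified illegal content are blocked outright, while keyword or model flags only enqueue the upload for a human decision.

```python
from dataclasses import dataclass, field

# Toy signal sources; real systems draw on NCMEC and StopNCII.org hash
# databases and trained classifiers, not a hand-written keyword list.
KNOWN_BAD_HASHES = {"deadbeef"}            # placeholder hash values
FLAGGED_KEYWORDS = {"example-banned-term"}

@dataclass
class Upload:
    content_hash: str
    title: str
    flags: list[str] = field(default_factory=list)

def triage(upload: Upload) -> str:
    """Route an upload to one of three outcomes. Only exact hash matches
    are blocked automatically; every other signal goes to a moderator."""
    if upload.content_hash in KNOWN_BAD_HASHES:
        return "block"
    for keyword in FLAGGED_KEYWORDS:
        if keyword in upload.title.lower():
            upload.flags.append(f"keyword:{keyword}")
    return "human_review" if upload.flags else "publish"

print(triage(Upload("deadbeef", "anything")))              # block
print(triage(Upload("cafe", "example-banned-term clip")))  # human_review
print(triage(Upload("cafe", "an ordinary title")))         # publish
```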
User Reports
Users can report content or accounts that violate this policy. All reports are reviewed by our Trust & Safety team. See Section 9 for reporting instructions.
Proactive Human Review
Our Trust & Safety team proactively reviews high-risk uploads, new accounts, and content flagged by automated systems.
4.2 Review & Enforcement
When a potential violation is detected:
- Initial Review (Automated or Human): Content is evaluated against this policy.
- Decision: We determine whether the content violates this policy. For borderline cases, we err on the side of removal to protect user safety.
- Action Taken: Depending on the severity of the violation, we may:
- Remove the content
- Issue a warning to the user
- Temporarily suspend the account (7-30 days)
- Permanently terminate the account
- Ban the user's IP address and device fingerprints
- Report to law enforcement (for CSAM, trafficking, etc.)
- Notification (DSA Art. 17 - Statement of Reasons): Users are notified of moderation actions with a statement of reasons (sketched as a structured record after this list), including:
- Which content was removed or which action was taken
- Which policy provision was violated
- Information about the appeals process
- Information about out-of-court dispute resolution (for EU users)
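Purely as an illustration, the statement-of-reasons fields listed above could be modeled as a structured record like the following; the field names are assumptions, not a schema the DSA prescribes.

```python
from dataclasses import dataclass

@dataclass
class StatementOfReasons:
    """One moderation notice, mirroring the fields listed above."""
    content_id: str               # which content was removed or actioned
    action_taken: str             # e.g. "removal" or "suspension_7d"
    policy_provision: str         # e.g. "Section 2.5 Hate Speech & Harassment"
    detected_automatically: bool  # whether automated means flagged the content
    appeal_instructions: str      # how to use the internal appeals process
    odr_information: str | None = None  # out-of-court options (EU users)

notice = StatementOfReasons(
    content_id="pin/12345",
    action_taken="removal",
    policy_provision="Section 2.7 Spam, Manipulation, & Inauthentic Behavior",
    detected_automatically=True,
    appeal_instructions="Email [email protected] with your case number.",
    odr_information="Contact [email protected] for certified bodies.",
)
```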
4.3 Response Times
- CSAM: Immediate removal upon detection + law enforcement reporting within 24 hours
- NCII: Removal within 24 hours of valid report
- Other Violations: Review and action within 48-72 hours
- Appeals: Review within 7 business days
5. Appeals & Due Process
If you believe your content was removed or your account was suspended or terminated in error, you have the right to appeal our decision.
5.1 How to File an Appeal
- Submit an Appeal: Email [email protected] with:
- Your account username or email
- The content URL (if applicable) or case number from the moderation notice
- A clear explanation of why you believe the decision was incorrect
- Any supporting evidence (e.g., proof of consent, age verification, etc.)
- Review: Our Appeals team will review your case within 7 business days. This review is conducted by a different moderator than the one who made the original decision.
- Decision: We will notify you of the outcome via email. If we uphold the original decision, we will provide additional explanation. If we overturn the decision, your content will be restored or your account reinstated.
5.2 Limitations on Appeals
- CSAM: Decisions involving CSAM are not appealable and are always permanent.
- Repeat Offenders: Users with multiple violations may have limited or no appeal rights.
- Frivolous Appeals: Repeatedly filing baseless appeals may result in loss of appeal privileges.
5.3 Out-of-Court Dispute Resolution (EU Users - DSA Art. 21)
If you are located in the European Union and are not satisfied with the outcome of your appeal, you may submit your dispute to an out-of-court dispute settlement body. We will engage with certified dispute resolution providers as required by the EU Digital Services Act.
Contact [email protected] for information about available out-of-court dispute resolution options.
6. Transparency & Accountability
We are committed to transparency in our content moderation practices. To promote accountability, we publish regular transparency reports that include:
- Number of content removals by category (CSAM, NCII, copyright, etc.)
- Number of account suspensions and terminations
- Number of user reports received and acted upon
- Number of appeals filed and outcomes
- Response times for different violation categories
- NCMEC CyberTipline reports filed (aggregate number, no identifying details)
- Government requests for content removal or user data
DSA Transparency Reporting (EU): For EU users, we publish additional transparency reports as required by DSA Articles 15, 24, and 42, including information about our content moderation systems, automated decision-making, and complaints handling.
7. Special Protections for Marginalized Communities
We recognize that certain communities face disproportionate levels of harassment, discrimination, and abusive reporting on adult content platforms. We are committed to protecting these communities:
7.1 LGBTQ+ Protections
- LGBTQ+ content (including trans, non-binary, and gender-nonconforming content) is explicitly permitted on our Platform
- We do not tolerate homophobic, transphobic, or anti-LGBTQ+ harassment
- We apply heightened scrutiny to mass-reporting campaigns targeting LGBTQ+ creators
- Misgendering, deadnaming, or "outing" individuals without consent is prohibited
7.2 Sex Worker Protections
- We support the rights of consensual adult sex workers and do not discriminate against sex worker content
- We distinguish between consensual adult sex work and human trafficking/exploitation
- We do not permit content promoting or facilitating prostitution solicitation (arranging meetings for paid sex), but we permit sex workers to share adult content
- We apply heightened scrutiny to reports targeting sex worker accounts, recognizing that sex workers face stigma-based harassment
7.3 Protection Against Abusive Reporting
We are aware that bad actors may abuse reporting systems to harass creators, particularly marginalized creators. We combat this by:
- Reviewing patterns of mass-reporting and coordinated harassment campaigns
- Penalizing users who file false or bad-faith reports
- Providing creators with aggregate information about reports targeting their content (never individual reporter identities) when safe to do so
- Offering enhanced protections (e.g., requiring higher thresholds for removal) for creators targeted by harassment
8. Regional Compliance
We operate globally and comply with content moderation requirements in major jurisdictions:
8.1 European Union (Digital Services Act)
For users in the EU, we comply with the Digital Services Act (Regulation 2022/2065), including:
- Art. 16 - Notice and Action: We provide an easy-to-use reporting mechanism for illegal content
- Art. 17 - Statement of Reasons: We notify users of content removal decisions with clear explanations
- Art. 20 - Internal Complaints: We provide an internal appeals process
- Art. 21 - Out-of-Court Dispute Resolution: We engage with certified dispute settlement bodies
- Art. 24 - Transparency Reporting: We publish annual transparency reports with moderation statistics
- Art. 27 - Recommender System Transparency: We disclose how our content recommendation algorithm works (see our Terms of Service)
8.2 United Kingdom (Online Safety Act)
For users in the UK, we comply with the Online Safety Act 2023, including:
- Implementing systems to prevent children from accessing adult content (age verification)
- Removing illegal content (CSAM, NCII, terrorism, etc.) expeditiously
- Assessing risks of harm to UK users and implementing mitigation measures
- Publishing transparency reports on content moderation
- Cooperating with Ofcom (UK regulator) on compliance matters
8.3 Australia (Online Safety Act)
For users in Australia, we comply with the Online Safety Act 2021 and cooperate with the eSafety Commissioner on:
- Removal of NCII within 24 hours of receiving a removal notice from the eSafety Commissioner
- Removal of CSAM and other illegal content
- Compliance with the Basic Online Safety Expectations (BOSE)
8.4 United States (Federal & State Laws)
- 18 U.S.C. § 2258A: We report CSAM to NCMEC as required by federal law
- CDA Section 230: We operate as a platform under Section 230 protections but take proactive steps to remove illegal content
- State Age Verification Laws: We implement age verification in states requiring it (LA, AR, MS, UT, VA, MT, NC, TX)
- State Revenge Porn Laws: We remove NCII in compliance with state laws
9. Reporting Mechanisms
We provide multiple ways for users to report content that violates this policy:
How to Report Violations
1. In-App Reporting (Recommended)
Use the "Report" button on any content, profile, or comment. Select the appropriate violation category and provide details.
2. Email Reporting (a routing sketch follows these options)
- CSAM: [email protected] (urgent - reviewed immediately)
- NCII: [email protected] (urgent - reviewed within 24 hours)
- General violations: [email protected]
- Copyright (DMCA): [email protected] or use our DMCA form
3. External Reporting
- CSAM: Report directly to NCMEC at https://report.cybertip.org
- NCII: Create a hash at StopNCII.org to prevent re-uploads across platforms
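For a sense of how these channels line up with the response targets in Section 4.3, here is a hypothetical routing table; the addresses come from this policy, while the structure and function are illustrative only.

```python
# Intake address and review target per report category. Addresses are
# taken from this policy; the mapping itself is a sketch, not our tooling.
ROUTES = {
    "csam":      ("[email protected]", "reviewed immediately"),
    "ncii":      ("[email protected]", "reviewed within 24 hours"),
    "copyright": ("[email protected]", "handled under the DMCA Policy"),
    "other":     ("[email protected]", "reviewed within 48-72 hours"),
}

def route_report(category: str) -> tuple[str, str]:
    """Return (intake address, review target), defaulting to the general queue."""
    return ROUTES.get(category, ROUTES["other"])

print(route_report("ncii"))  # ('[email protected]', 'reviewed within 24 hours')
```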
9.1 What to Include in Your Report
- The URL of the content or profile you're reporting
- The type of violation (select from categories in Section 2)
- A brief description of why the content violates this policy
- Any supporting evidence (if applicable and safe to provide)
- Your contact information (for follow-up, kept confidential)
⚠️ Important Notes for Reporters
- Do NOT download or share CSAM - simply report the URL
- All reports are confidential - we do not share reporter identities with the reported user (except as required by law)
- False reports have consequences - filing intentionally false reports to harass creators may result in your account being terminated
10. Contact Information
For questions about this Content Moderation Policy or our enforcement practices:
Trust & Safety Team: [email protected]
Appeals: [email protected]
General Support: [email protected]
DMCA/Copyright: [email protected]
Help Center: https://pinporn.app/help
Related Policies
- Terms of Service - Legal framework and prohibited conduct
- Privacy Policy - How we handle your personal data
- DMCA Policy - Copyright infringement process
- 18 U.S.C. § 2257 Statement - Record-keeping requirements
Community Resources
- NCMEC CyberTipline: https://report.cybertip.org - Report CSAM
- StopNCII.org: https://stopncii.org - Prevent NCII re-uploads
- Cyber Civil Rights Initiative: https://cybercivilrights.org - NCII victim support
- 988 Suicide & Crisis Lifeline: 988 (U.S.) - Crisis support
- RAINN: 1-800-656-4673 - Sexual assault support
Effective Date: January 2, 2026
Last Updated: January 2, 2026
Version: 1.0
Replaces: Community Etiquette, Child Sexual Abuse Material Policy, and Non-Consensual Content Policy (all superseded by this comprehensive policy)