TikTok Community Guidelines and Safety Features

TikTok’s community guidelines establish rules that apply to all content, accounts, and user interactions on the platform. These guidelines cover content moderation, safety features, age-specific protections, and enforcement mechanisms designed to maintain a safe environment for the platform’s over 1 billion global users. The platform combines automated technology with human moderators to enforce these standards, removing violative content before it reaches audiences.


How TikTok Enforces Community Standards

TikTok’s enforcement system operates through a multi-layered approach combining artificial intelligence and human oversight. In 2024, over 80% of policy-violating videos were identified and removed through automated technology, representing a 15% increase from the previous year. The platform invested more than $2 billion in trust and safety operations throughout 2024.

The automated moderation system employs several detection methods. Vision-based models identify prohibited objects like weapons or hate symbols appearing in videos. Audio-based systems review sound clips against violation databases, detecting similar or modified versions of previously flagged content. Text-based detection models scan written elements including comments, captions, and hashtags for policy violations. The platform recently introduced large language models (LLMs) to improve moderation precision and speed, particularly for complex tasks like extracting misinformation claims.
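
The interplay of these layers can be pictured as a pipeline that fans a new upload out to per-modality detectors and acts when any signal crosses a confidence threshold. The sketch below is a hypothetical illustration, not TikTok’s actual architecture; the detector functions, thresholds, and Upload fields are invented for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class Upload:
    video_frames: list            # sampled frames for the vision model
    audio_clip: bytes             # soundtrack for audio matching
    caption: str                  # caption, hashtags, on-screen text
    comments: list = field(default_factory=list)

# Hypothetical per-modality detectors; each returns a violation
# confidence in [0, 1]. A real system would call trained models here.
def vision_score(frames) -> float:
    return 0.0   # e.g., weapons or hate symbols detected in frames

def audio_score(clip) -> float:
    return 0.0   # e.g., fuzzy match against a database of flagged audio

def text_score(texts) -> float:
    return 0.0   # e.g., policy classifier over captions and comments

REMOVE_THRESHOLD = 0.9   # confident violations come down automatically
REVIEW_THRESHOLD = 0.5   # ambiguous cases route to human moderators

def moderate(upload: Upload) -> str:
    scores = [
        vision_score(upload.video_frames),
        audio_score(upload.audio_clip),
        text_score([upload.caption, *upload.comments]),
    ]
    worst = max(scores)  # any single modality can trigger action
    if worst >= REMOVE_THRESHOLD:
        return "remove"        # taken down before reaching feeds
    if worst >= REVIEW_THRESHOLD:
        return "human_review"
    return "allow"
```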

Over 96% of content removed through automation in 2024 was taken down before receiving any views. This proactive approach significantly reduces harmful content exposure. The automated systems also prevented more than 2 billion spam accounts from being created during the same period.

Human moderators complement the technology. TikTok employs approximately 40,000 content moderation specialists globally, covering over 70 languages. These moderators provide cultural context and handle complex cases requiring nuanced judgment. Moderation operates across every market; in Q1 2024, the U.S. accounted for approximately 35.15 million video removals, the most of any country, followed by Pakistan and Indonesia.

When content violates guidelines, TikTok removes it and notifies creators with specific reasons for the action. Users can appeal decisions through the in-app Safety Center. The platform introduced a warning strike system that gives first-time violators an educational notice without counting toward their account’s violation tally. Subsequent violations result in progressive consequences including content removal, feature restrictions, temporary suspensions, or permanent account bans.
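
As a rough mental model, the strike system behaves like an escalating ladder keyed to an account’s accumulated violations. The sketch below is illustrative only; the tier boundaries are assumptions drawn from the description above, not TikTok’s published thresholds.

```python
def enforcement_action(prior_strikes: int, severe: bool) -> str:
    """Map an account's violation history to a consequence.

    Tier boundaries are hypothetical; the guidelines state only that
    consequences escalate, not where the cutoffs fall.
    """
    if severe:
        return "permanent_ban"           # severe violations skip the ladder
    if prior_strikes == 0:
        # A first violation draws an educational warning that does not
        # count toward the account's violation tally.
        return "warning_notice"
    if prior_strikes <= 2:
        return "content_removal"
    if prior_strikes <= 4:
        return "feature_restriction"     # e.g., no upload/comment/LIVE for 24-48h
    if prior_strikes <= 6:
        return "temporary_suspension"
    return "permanent_ban"
```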

The accuracy rate for automated moderation technologies reached 99.1% in the second half of 2024. Of content removed during this period, over 98% was taken down within 24 hours of posting. These metrics demonstrate the platform’s capacity to maintain safety standards while managing massive content volumes—users upload approximately 34 million videos daily.


Core Content Violations and Prohibited Behavior

TikTok’s community guidelines prohibit several categories of harmful content and behavior. Understanding these restrictions helps users create compliant content and maintain account standing.

Violence and dangerous activities fall under strict prohibition. The platform removes content depicting graphic violence, promoting dangerous acts or challenges, or encouraging self-harm. This includes challenges that involve physical risk, content showing severe injuries, and material promoting suicide or eating disorders. In Q3 2024, approximately 147 million videos were removed for various policy violations, with safety-related content representing a significant portion.

Hate speech and discrimination receive zero tolerance. TikTok prohibits content attacking individuals or groups based on protected characteristics including race, ethnicity, religion, sexual orientation, gender identity, disability, or immigration status. The 2024 guideline updates expanded definitions around hate speech, explicitly prohibiting deadnaming, misgendering, and content supporting conversion therapy programs. These clarifications responded to feedback from civil society organizations about the need for explicit policy language.

Harassment and bullying prevention extends beyond direct attacks. The guidelines prohibit threats, doxxing, unwanted sexual advances, and coordinated harassment campaigns. Content that mocks, intimidates, or demeans individuals can be removed, and accounts engaging in persistent harassment face permanent bans. The platform also restricts content that reveals private information without consent.

Adult content restrictions protect younger users and maintain platform standards. TikTok removes sexually explicit material, nudity, and sexually suggestive content involving minors. The platform prohibits sexual solicitation, non-consensual intimate imagery, and content sexualizing youth. These protections operate in conjunction with age verification systems and content filtering technology.

Misinformation policies address false content that could cause harm. The platform removes or labels misleading information about civic processes, public health, and emergencies. During the 2024 U.S. elections, TikTok implemented enhanced policies for state-affiliated media and introduced transparency measures for election-related content. The platform works with fact-checkers and authoritative sources to verify claims before taking action.

Intellectual property violations trigger swift responses. Content infringing copyrights, trademarks, or other IP rights can be removed following valid takedown requests. The platform provides tools for rights holders to report violations and maintains a repeat infringer policy for accounts with multiple substantiated claims.

Platform manipulation and spam undermine authentic community interactions. TikTok prohibits artificial engagement schemes including purchased likes, followers, or comments. In 2022, the platform prevented 46 billion fake likes and 28 billion fake follows. Through Q1 2023, an additional 8.3 billion fake likes and 6.5 billion fake follows were blocked. Accounts participating in coordinated inauthentic behavior or operating fake personas face removal.

Enforcement consequences scale with violation severity and frequency. Minor violations might result in content removal with a warning. Repeated violations lead to temporary feature restrictions—users may lose the ability to upload, comment, message, edit profiles, or go LIVE for 24-48 hours. Severe or persistent violations result in permanent account bans. In Q1 2024, approximately 195 million accounts were removed for community guidelines violations, including 21.6 million suspected underage accounts.


Age-Specific Protections and Teen Safety

TikTok implements differentiated safety measures based on user age, recognizing that younger users require additional protections. The platform divides teen users into two groups—ages 13-15 and 16-17—each with tailored restrictions and default settings.

For users aged 13-15, accounts automatically start as private. This default setting means only approved followers can view their content, giving young users control over their audience. These users cannot change certain features: direct messaging remains completely disabled, preventing stranger contact. Comments default to “Friends only” (followers they follow back) and cannot be expanded to “Everyone.” Their content is ineligible for recommendation in the For You feed to people they don’t know, limiting exposure to wider audiences.

Additional restrictions apply to content reuse features. Users aged 13-15 cannot have their videos used in Duets or Stitches by others—these settings are locked to “Only you.” Video downloads are disabled entirely for this age group: these users cannot download others’ content, and their own content cannot be downloaded. The “Suggest your account to others” feature also remains disabled by default, reducing unsolicited contact.

For users aged 16-17, restrictions relax slightly while maintaining core protections. Accounts still default to private but can be changed to public if users choose. Direct messaging becomes available, though message requests from non-friends default to off. Comment settings on private accounts default to “Followers,” expandable to “Friends” or “No one” but not “Everyone.”

Content reuse permissions expand for 16-17 year olds with public accounts. They can choose whether others can Duet or Stitch with their videos, with the default set to “Friends only.” Video downloads default to disabled but can be enabled. The account suggestion feature remains off by default but becomes toggleable. These gradual permission increases align with developing digital literacy.

Universal teen protections apply across both age groups. All accounts for users under 18 start with a 60-minute daily screen time limit, requiring passcode entry to continue beyond this threshold. Push notifications automatically disable overnight—at 9pm for ages 13-15 and 10pm for ages 16-17. Users under 18 cannot access TikTok LIVE streaming as hosts, preventing real-time exposure to unpredictable interactions. They also cannot send or receive virtual gifts, eliminating financial transaction risks.
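
The tiered defaults described above reduce to a lookup keyed by age band. This sketch condenses the settings from the preceding paragraphs into one function; the key names are invented, and only the settings discussed here are covered.

```python
def default_settings(age: int) -> dict:
    """Illustrative default settings for a new account by age tier.

    Key names are hypothetical; the restricted under-13 experience
    offered in some regions is out of scope here.
    """
    if age < 13:
        raise ValueError("below the minimum age for the standard app")
    teen = age < 18
    young_teen = age < 16
    return {
        "account_private": teen,                 # under-18 default: private
        "direct_messages": "off" if young_teen else "requests_off",
        "comments": "friends_only" if young_teen else "followers",
        "duet_stitch": "only_you" if young_teen else "friends",
        "video_downloads_enabled": not teen,     # defaults off for all teens
        "suggest_account_to_others": not teen,
        "for_you_eligible_to_strangers": not young_teen,
        "daily_limit_minutes": 60 if teen else None,
        "quiet_notifications_from": (
            ("21:00" if young_teen else "22:00") if teen else None
        ),
        "can_host_live": not teen,
        "virtual_gifts": not teen,
    }
```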

Content filtering protects teens from mature themes. TikTok’s Content Levels system categorizes content by thematic maturity, working to minimize overtly mature material from reaching under-18 users. Age-restricted content categories—including material related to disordered eating, dangerous challenges, graphic imagery, gambling, and substance use—receive additional filtering for teen accounts.

The Wind Down feature promotes healthy evening habits for users under 16. When these users open TikTok after 10pm, a full-screen prompt with calming music appears, encouraging mindful usage and rest. A second, harder-to-dismiss reminder appears if they continue scrolling. Testing showed many teens chose to keep these alerts enabled even when given opt-out options, indicating positive reception for sleep-protective features.

For the under-13 experience available in some regions, TikTok provides an extremely restricted environment. These accounts are completely private with disabled interactive features including commenting, sharing, messaging, and posting visible content. Videos can be created but not published. Screen time is limited to one hour daily, with parental passcode required for 30-minute extensions. This limited experience aims to accommodate users who claim to be in this age bracket while providing age-appropriate restrictions.

Recent updates to teen privacy expanded protections further. In 2025, TikTok updated notification schedules, content preference controls, and default privacy settings based on feedback from youth councils and parental advocacy groups. The platform continues consulting with adolescent development experts and safety organizations to refine age-appropriate experiences as digital norms evolve.


Family Pairing: Parental Control Features

Family Pairing enables parents and guardians to link their TikTok account to their teen’s account, creating customizable oversight without requiring password sharing. Launched in 2020 and continuously expanded, Family Pairing now offers over 15 adjustable safety and privacy settings, giving families tools to tailor the TikTok experience to individual needs.

Screen time management forms the foundation of Family Pairing controls. Parents can set daily time limits ranging from 40 minutes to 120 minutes, with the flexibility to establish different limits for each day of the week. This allows shorter limits on school nights and extended time on weekends or holidays. When the teen reaches their limit, they must request a randomized passcode from the parent to continue, ensuring active parental involvement in time extension decisions. Parents can view a dashboard showing cumulative time spent on TikTok over the previous four weeks, enabling informed conversations about usage patterns.

The Time Away feature, introduced in 2025, allows parents to block TikTok access during specific periods. Parents can set recurring schedules blocking the app during family meals, school hours, bedtime, or other designated times. If circumstances change, teens can request extra time, but parents retain final approval authority. This feature addresses research showing that structured digital boundaries, combined with family discussion, promote healthier tech relationships than restriction alone.
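
Taken together, the daily limits and Time Away windows act as two independent gates the app checks before allowing a session, with a parent-supplied passcode overriding either gate. Below is a minimal sketch of that logic; the data shapes and function names are hypothetical.

```python
from datetime import datetime, time

# Per-weekday limits in minutes (0 = Monday), set by the parent.
daily_limits = {0: 40, 1: 40, 2: 40, 3: 40, 4: 60, 5: 120, 6: 120}

# Recurring Time Away windows: (weekday, start, end) when the app is blocked.
time_away = [
    (0, time(8, 0), time(15, 0)),    # school hours on Monday
    (0, time(21, 0), time(23, 59)),  # Monday bedtime
]

def can_use_app(now: datetime, minutes_used_today: int,
                parent_passcode_granted: bool) -> bool:
    wd = now.weekday()
    # Gate 1: inside a blocked window, only parental approval unlocks the app.
    for day, start, end in time_away:
        if day == wd and start <= now.time() <= end:
            return parent_passcode_granted
    # Gate 2: past the daily limit, a parent-supplied passcode is needed.
    if minutes_used_today >= daily_limits[wd]:
        return parent_passcode_granted
    return True
```

Modeling limits per weekday rather than as a single number is what allows the shorter school-night, longer weekend pattern described above.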

Content customization gives parents several filtering options. Through Family Pairing, parents can enable Restricted Mode, which limits the appearance of content potentially inappropriate for all audiences. They can filter keywords by adding specific terms they want excluded from their teen’s For You and Following feeds—TikTok checks video descriptions and removes posts containing these keywords. The STEM feed feature can be activated, prioritizing science, technology, engineering, and mathematics content in recommendations. This feed, available in over 100 countries, is used by millions of teens weekly and helps parents direct their teen’s feed toward educational material.
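
The keyword filter is conceptually a screen over video descriptions before a post enters the teen’s feed. A minimal sketch, assuming a plain blocklist and case-insensitive substring matching (the actual matching rules are not documented here):

```python
def filter_feed(posts: list, blocked_keywords: set) -> list:
    """Drop posts whose descriptions contain any parent-blocked keyword.

    The matching strategy is an assumption; TikTok documents only that
    video descriptions are checked for the parent's terms.
    """
    blocked = {k.lower() for k in blocked_keywords}
    return [
        post for post in posts
        if not any(k in post["description"].lower() for k in blocked)
    ]

# Example: a parent blocks "diet" and "gamble"
feed = [
    {"description": "Try this 5-minute study hack"},
    {"description": "Crazy crash diet results!!"},
]
print(filter_feed(feed, {"diet", "gamble"}))  # keeps only the study post
```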

Communication controls manage interaction risks. Parents can restrict who can send direct messages to teens aged 16-17—options include “Everyone,” “Friends,” or “No one.” For younger teens, messaging remains completely disabled regardless of Family Pairing settings. Parents can also control comment permissions on their teen’s posts, choosing between “Everyone” (for public accounts aged 16-17), “Friends,” or disabling comments entirely. These controls reduce exposure to cyberbullying and predatory contact.

Privacy settings oversight provides parents visibility into their teen’s privacy choices. Through Family Pairing, parents can review whether their teen’s account is public or private, who can Duet or Stitch with their content, whether content downloads are enabled, and if the account appears in suggestions to others. Parents cannot directly control all these settings but gain awareness that facilitates discussions about privacy implications. For some settings, parents can enforce restrictions that teens cannot override.

Account blocking functionality, rolling out in phases starting in Europe, lets parents block specific accounts from interacting with their teen. When blocked through Family Pairing, the account cannot view the teen’s profile, see their content in feeds, or send messages. The teen cannot view the blocked account either. Parents can see their teen’s existing blocked list and add accounts they deem problematic. Teens can request unblocking, but parents make the final decision. This tool recognizes that parents may identify concerning accounts or individuals that teens might not recognize as problematic.

Network visibility features help parents understand their teen’s social connections. Parents can view who their teen follows, who follows them, and which accounts the teen has blocked. This transparency enables conversations about online relationships, helps parents identify concerning connections, and allows monitoring for potential grooming or peer pressure situations. The feature doesn’t restrict teens from following accounts but provides information parents can use for guidance.

Content upload notifications, currently in testing, automatically alert parents when their teen uploads a video, story, or photo visible to others. This proactive notification system helps parents stay informed about their teen’s creative output without requiring constant app monitoring. It opens opportunities for supportive conversations about content choices while respecting teen autonomy.

Report alerting allows teens to notify parents when they report content or accounts for policy violations. While parents don’t see the specific reported content, they receive notification that a report was filed. This optional feature encourages teens to involve trusted adults in concerning situations without fear of overreaction, as parents understand the platform has safety teams reviewing the issue.

Setting up Family Pairing requires both parent and teen participation. Parents navigate to Family Pairing in settings, select “Parent,” and generate a QR code. The teen opens their Family Pairing settings, selects “Teen,” and scans the parent’s QR code. This linking process respects teen privacy by requiring active consent from both parties. Once linked, teens can increase their own restrictions but cannot reduce those set by parents, maintaining parental authority while encouraging self-regulation.
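
The QR linking handshake can be thought of as a short-lived token exchange: the parent’s device embeds a one-time token in the QR code, and scanning it binds the two accounts. This sketch is a hypothetical reconstruction of that flow, not TikTok’s actual protocol.

```python
import secrets
import time

pending_tokens = {}  # token -> (parent_id, expiry timestamp)
links = {}           # teen_id -> parent_id

def generate_pairing_qr(parent_id: str) -> str:
    """Parent side: mint a one-time token to embed in the QR code."""
    token = secrets.token_urlsafe(16)
    pending_tokens[token] = (parent_id, time.time() + 300)  # 5-minute expiry
    return token  # in practice, rendered on screen as a QR code

def scan_pairing_qr(teen_id: str, token: str) -> bool:
    """Teen side: scanning the code completes the link (active consent)."""
    entry = pending_tokens.pop(token, None)
    if entry is None or time.time() > entry[1]:
        return False  # unknown or expired token
    links[teen_id] = entry[0]
    return True
```

Requiring the teen to scan, rather than letting the parent link unilaterally, is what makes consent from both parties part of the mechanism itself.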

Family Pairing complements rather than replaces parental involvement. TikTok emphasizes that technical controls work best alongside ongoing conversations about digital citizenship, online safety, and responsible content creation. The platform provides resources through its Guardian’s Guide and partners with organizations like the Family Online Safety Institute to support families in navigating these discussions.


Content Moderation Transparency and Reporting

TikTok publishes quarterly Community Guidelines Enforcement Reports providing detailed insights into moderation activities, demonstrating the platform’s commitment to accountability. These reports, available through the Transparency Center, break down violation types, removal volumes, detection methods, and regional enforcement patterns.

In Q3 2024, TikTok removed over 147 million videos for policy violations globally. This represents approximately 1% of all uploaded content during that period, indicating that the vast majority of content complies with guidelines. The removal rate has remained relatively stable over time, suggesting consistent enforcement standards as the platform scales.

Proactive detection capabilities continue improving. Over 99% of removed content in 2024 was identified before any user reported it, reflecting the effectiveness of automated systems. The platform’s enforcement speed accelerated as well—over 98% of violative content was removed within 24 hours of being posted. For content caught by automated systems, 88.8% was removed before receiving any views, preventing exposure entirely.

Regional enforcement patterns reveal varying compliance challenges. In Q1 2024, the United States led in video removals with 35.15 million, followed by Pakistan with approximately 20 million and Indonesia close behind. These patterns reflect multiple factors including user base size, local content norms, and enforcement priorities for specific violation types in different markets. TikTok adjusts moderation approaches to respect regional cultures and legal requirements while maintaining core safety standards globally.

Livestreaming moderation presents unique challenges due to real-time content. In Q3 2024, approximately 860,000 live-streaming sessions initially suspended for policy violations were reinstated after review, roughly 7% of the approximately 12 million suspended sessions, indicating that most suspension decisions were correct on first pass. Earlier quarter-over-quarter declines, from 10 million interrupted sessions in Q3 2023 to 8 million in Q4 2023, suggest either improved creator compliance or more precise violation detection.

Comment moderation operates at massive scale. In Q1 2024, over 976 million video comments were removed for violating community guidelines, representing 1.6% of all comments posted. TikTok’s safety tools also enabled creators to filter or remove over 3.3 billion comments during this period, empowering users to curate their own comment sections. This creator-controlled filtering supplements platform-level moderation.

Account enforcement targets both policy violators and underage users. In Q1 2024, approximately 195 million accounts were removed for community guideline violations. Of these, 21.6 million were suspected of belonging to users under age 13, demonstrating active enforcement of minimum age requirements. The platform uses machine learning to detect age misrepresentation based on behavioral patterns, content creation styles, and other signals.
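
Conceptually, age-misrepresentation detection aggregates weak signals into a single likelihood score and routes high-scoring accounts to review rather than banning them automatically. The features and weights below are entirely hypothetical, chosen only to illustrate the shape of such a model.

```python
def underage_likelihood(signals: dict) -> float:
    """Combine weak behavioral signals into one score in [0, 1].

    Feature names and weights are invented for illustration; the report
    says only that behavioral patterns, content creation styles, and
    other signals feed the real models.
    """
    weights = {
        "stated_age_edits": 0.3,       # repeated birthday changes
        "school_hours_activity": 0.2,  # heavy use during school hours
        "content_style_score": 0.4,    # classifier over created content
        "user_reports": 0.1,
    }
    score = sum(weights[k] * signals.get(k, 0.0) for k in weights)
    return min(score, 1.0)

# Accounts above a review threshold would go to human verification,
# not to automatic removal.
```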

Advertising moderation maintains brand safety standards. In Q3 2024, TikTok removed 3.68 million ads for policy violations. The following quarter saw 1.92 million ad removals, indicating continued scrutiny of promotional content. These removals ensure that advertising doesn’t circumvent content policies or expose users to inappropriate commercial material.

User reporting systems complement automated detection. TikTok provides multiple reporting pathways for users who encounter violative content, accounts, or behavior. Reports can be filed directly from videos, profiles, comments, or messages. After filing a report, users can track its status through their Safety Center, receiving updates about review outcomes. This transparency helps users understand enforcement actions and builds confidence in the reporting process.

For EU users under the Digital Services Act (DSA), TikTok provides enhanced transparency. The fourth DSA transparency report covering July-December 2024 showed approximately 18 million pieces of violative content removed proactively in the EU. Trusted Flaggers—approved organizations with expertise in identifying illegal content—submitted 59 reports through a dedicated channel launched mid-2024. Out-of-court disputes received 173 appeals, with Dispute Settlement Bodies closing 59 cases during the period. These mechanisms provide additional accountability layers for European users.

The accuracy metrics demonstrate continuous improvement. Automated moderation accuracy reached 99.1% in H2 2024, maintaining high precision despite increasing content volumes. This accuracy reflects ongoing investment in training detection models, expanding violation databases, and incorporating machine learning advances.

TikTok’s transparency extends to acknowledging limitations. The platform recognizes that no moderation system achieves perfection and that cultural context can make some decisions difficult. By publishing enforcement data, explaining methodologies, and providing appeal mechanisms, TikTok aims to maintain trust while acknowledging the inherent challenges of content moderation at global scale.


Special Safety Initiatives and Resources

Beyond core guidelines and automated enforcement, TikTok implements targeted initiatives addressing specific safety challenges and vulnerable populations.

Election integrity measures protect democratic processes. During the 2024 U.S. elections, TikTok drew on its more than $2 billion annual trust and safety investment to protect the vote. The platform expanded policies for state-affiliated media, requiring disclosure labels on government-controlled accounts. A dedicated transparency report tracks influence operations and state-sponsored manipulation attempts. These measures aim to prevent misinformation from undermining civic participation while preserving legitimate political discourse.

AMBER Alerts partnership with the National Center for Missing & Exploited Children brings critical child safety information to users. When an AMBER Alert is issued for a specific region, TikTok displays the alert to users in that geographic area, amplifying reach for urgent child abduction cases. This integration leverages TikTok’s massive user base for public safety purposes.

Mental health resources address the platform’s impact on user wellbeing. TikTok partners with crisis intervention organizations to provide in-app resources when users search for terms related to suicide, self-harm, or eating disorders. Instead of showing potentially harmful content, searches trigger safety screens with crisis helpline information and professional resources. The platform also restricts content that glorifies eating disorders or self-harm while allowing educational content that promotes recovery.

AI-generated content labeling increases transparency as synthetic media becomes more prevalent. TikTok automatically labels AI-generated content in many cases, helping users distinguish between authentic and synthetic media. Creators using AI tools are encouraged to disclose this in their content. As deepfakes and other convincing fake content proliferate, these labeling efforts help maintain platform authenticity and prevent manipulation.

Security Checkup tool provides an all-in-one dashboard for account security. Users can review login activity, check for unauthorized access, update passwords, enable two-factor authentication, and review connected devices. This proactive tool helps users identify security vulnerabilities before they’re exploited, particularly important as account takeover attempts increase across social platforms.

Well-being Missions encourage healthy digital habits through gamified challenges. This feature, developed in consultation with the Digital Wellness Lab at Boston Children’s Hospital, presents short missions designed to build long-term balanced technology use. Rather than relying solely on restriction, Well-being Missions use positive reinforcement and education to encourage self-regulation. Research informing this feature suggests that behavior change rooted in understanding and motivation proves more effective than imposed limits alone.

STEM Feed initiative directs users toward educational content. Available in over 100 countries, the STEM feed prioritizes science, technology, engineering, and mathematics content in recommendations. Millions of teens engage with this feed weekly, discovering educational creators covering topics from physics and astronomy to coding and engineering. Parents can enable this feed through Family Pairing, providing an alternative to typical entertainment-focused algorithms.

Creator safety resources support individuals who face harassment or coordinated attacks. TikTok provides guidance for creators experiencing targeted harassment, including temporary privacy features that limit unwanted interactions. The platform also offers direct support channels for creators dealing with severe safety issues, recognizing that public figures and high-visibility accounts face unique risks.

Safety partnerships with over 100 organizations globally inform policy development. TikTok consults regional Advisory Councils, youth councils, child safety organizations, mental health experts, and civil society groups when updating guidelines. This collaborative approach ensures policies reflect diverse perspectives and expert guidance rather than unilateral corporate decisions.

The Guardian’s Guide and Teen Safety Center provide comprehensive educational resources for families. These materials explain safety features, offer conversation starters for discussing online behavior, and provide age-appropriate guidance for navigating the platform. Videos created by TikTok creators and safety advocates demonstrate features and share best practices, making resources engaging and accessible.

Digital Safety Partnership agreements developed with the Family Online Safety Institute provide families a framework for establishing TikTok usage boundaries. These downloadable contracts help parents and teens negotiate rules together, addressing topics like screen time, content creation, privacy settings, and interaction boundaries. The participatory approach recognizes that collaborative rule-setting often proves more effective than unilateral restrictions.

These initiatives complement technical safeguards, acknowledging that platform safety requires more than just automated content removal. By combining technology, education, partnerships, and resources, TikTok aims to address safety holistically across prevention, detection, intervention, and support.


Platform Accountability and Future Commitments

TikTok’s safety infrastructure continues evolving in response to emerging challenges, regulatory requirements, and community feedback. The platform has committed to ongoing investment in trust and safety, pledging over $2 billion annually for these operations through 2025 and beyond.

Regulatory compliance shapes many safety features. The European Union’s Digital Services Act requires enhanced transparency, complaint mechanisms, and accountability measures for large platforms. TikTok’s DSA compliance includes detailed transparency reports, Trusted Flagger programs, out-of-court dispute resolution, and risk assessments for systemic harms. Similar regulatory frameworks in Australia, the United Kingdom, and other jurisdictions drive additional safety innovations.

Age verification technology represents a significant challenge and opportunity. TikTok employs machine learning to detect age misrepresentation based on behavioral signals, but acknowledges that no current verification method perfectly balances accuracy, privacy, and accessibility. The platform participates in industry initiatives exploring age assurance technologies that could verify age without excessive data collection. As these technologies mature, TikTok may implement more robust age verification while maintaining privacy protections.

Content Levels refinement continues addressing the nuance of content appropriateness. Not all content fits neatly into “allowed” or “prohibited” categories. The Content Levels system categorizes content by maturity themes, limiting certain material from teen feeds without removing it entirely. Ongoing work refines these categorizations, improves detection accuracy, and calibrates filtering thresholds based on research about adolescent development and content impacts.

For You feed eligibility standards distinguish between content that’s allowed on the platform and content recommended to wide audiences. Content that technically complies with community guidelines but doesn’t meet For You feed standards may remain visible to followers without receiving algorithmic amplification. Recent updates allow TikTok to temporarily make entire accounts ineligible for For You recommendations if creators repeatedly post borderline content. Creators receive notifications of this status and can appeal the restriction.
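
This two-tier model separates “allowed on the platform” from “eligible for recommendation,” including the temporary account-level flag for repeat borderline posting. A schematic sketch with invented field names:

```python
def distribution_tier(post: dict, account: dict) -> str:
    """Decide how far a compliant post travels. Field names are invented."""
    if post["violates_guidelines"]:
        return "removed"
    if account["fyf_ineligible"]:
        # Repeat borderline posting can make the whole account temporarily
        # ineligible for For You recommendation, pending appeal.
        return "followers_only"
    if post["borderline"]:
        return "followers_only"   # stays visible, but is not amplified
    return "for_you_eligible"
```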

Creator Code of Conduct establishes additional standards for participants in TikTok programs, features, events, and campaigns. Beyond community guidelines and terms of service, this code applies both on and off platform, setting behavioral expectations for creators representing TikTok in official capacities. The code addresses concerns about creator accountability and ensures program participants meet higher standards.

Account Check feature empowers creators to self-audit compliance. This tool allows creators to review their last 30 posts for potential guideline violations, see their account’s standing, and identify restrictions in place. Proactive compliance checking helps creators avoid unintentional violations and understand how their content aligns with policies before issues escalate. The transparency encourages creators to align with standards rather than discovering violations reactively.
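
In effect, Account Check is a read-only sweep over a creator’s 30 most recent posts plus the account’s standing. A toy version, assuming each post record carries a precomputed potential_violation flag:

```python
def account_check(posts: list, strikes: int) -> dict:
    """Summarize compliance for the last 30 posts (illustrative only)."""
    recent = posts[-30:]
    flagged = [p["id"] for p in recent if p.get("potential_violation")]
    return {
        "posts_reviewed": len(recent),
        "flagged_posts": flagged,            # worth rechecking against policy
        "account_standing": "at_risk" if strikes > 2 else "good",
        "active_restrictions": [],           # e.g., For You ineligibility
    }
```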

Moderation training investments ensure human reviewers apply policies consistently and accurately. As guidelines expand and become more nuanced, moderators require comprehensive training on definitions, edge cases, cultural context, and enforcement standards. TikTok continually updates training programs as policies evolve and invests in moderator wellbeing programs, recognizing the psychological toll of reviewing disturbing content.

Multi-modal LLM integration represents the next frontier in automated moderation. These AI systems can analyze video, audio, text, and context simultaneously, enabling more nuanced violation detection. For example, multi-modal systems can extract specific misinformation claims from videos, assess context around potentially problematic statements, and flag cultural nuances that single-modality systems miss. TikTok deploys these technologies cautiously, setting quality benchmarks before full implementation.
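
One way to picture multi-modal review is a single model that receives a transcript, frame descriptions, and the caption together and returns structured claims for fact-checking. The prompt-and-parse pattern below is a hypothetical sketch; call_llm is a stand-in for whatever model endpoint is actually used.

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a multi-modal model endpoint (an assumption)."""
    return "[]"  # a real deployment would call the hosted model here

def extract_claims(transcript: str, frame_notes: str, caption: str) -> list:
    """Ask the model for checkable factual claims across modalities."""
    prompt = (
        "Given this video's transcript, visual description, and caption, "
        "list every checkable factual claim as a JSON array of strings.\n"
        f"Transcript: {transcript}\n"
        f"Visuals: {frame_notes}\n"
        f"Caption: {caption}"
    )
    return json.loads(call_llm(prompt))  # claims then go to fact-checkers
```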

The platform acknowledges that safety work never reaches completion. As TikTok’s community grows, new content formats emerge, bad actors develop evasion techniques, and societal understanding of online harm evolves, the platform must continuously adapt. The commitment to transparency through regular reporting, engagement with external stakeholders, and willingness to adjust approaches based on evidence demonstrates institutional accountability essential for maintaining user trust.

Community safety ultimately relies on shared responsibility between platform, creators, and users. While TikTok provides tools, policies, and enforcement systems, creators must understand and follow guidelines, and users must report violations and use safety features. This ecosystem approach recognizes that technology alone cannot eliminate all harmful content—effective safety requires collective participation in maintaining community standards.


Frequently Asked Questions

What happens if I violate TikTok’s community guidelines?

Consequences depend on violation severity and frequency. First-time minor violations typically result in content removal with a warning strike that doesn’t count toward your account’s violation tally. Repeated or more serious violations lead to progressive penalties including temporary feature restrictions lasting 24-48 hours, during which you cannot upload videos, comment, message, edit your profile, or go LIVE. Severe violations or persistent patterns result in permanent account bans. You receive notifications explaining specific violations and can appeal decisions through the in-app Safety Center.

How does Family Pairing work and what can parents control?

Family Pairing links a parent’s TikTok account to their teen’s account through QR code scanning, enabling customizable safety settings without password sharing. Parents can set daily screen time limits with different durations for each day, schedule “Time Away” periods blocking app access during specific hours, enable Restricted Mode to filter mature content, add keyword filters, control direct messaging permissions, manage comment settings, and view their teen’s follower/following lists. Parents also see their teen’s privacy settings and can block specific accounts from interacting with their teen. Teens can increase but not decrease parent-set restrictions.

At what age can someone use TikTok and what restrictions apply?

The minimum age is 13 years old globally, though some regions set different minimums based on local laws. Users aged 13-15 have accounts automatically set to private, cannot use direct messaging, cannot have their content used in Duets or Stitches, have comments restricted to friends only, and their content isn’t recommended to non-followers in the For You feed. Users aged 16-17 have slightly fewer restrictions but still default to private accounts and have DM requests disabled by default. All users under 18 face a 60-minute daily screen time limit, cannot host LIVE streams, cannot send or receive virtual gifts, and have overnight push notifications disabled.

How does TikTok detect and remove inappropriate content?

TikTok uses a hybrid approach combining artificial intelligence and human moderators. Automated systems employing computer vision, audio analysis, and text detection identify and remove over 80% of policy-violating content. Over 96% of the content automation removed in 2024 came down before receiving any views. Approximately 40,000 human moderators covering 70+ languages review complex cases, provide cultural context, and handle appeals. The automated moderation accuracy rate reached 99.1% in late 2024, with over 98% of violations removed within 24 hours of posting.


Sources

  1. TikTok Community Guidelines (2024-2025) – Official TikTok Documentation
  2. TikTok Transparency and Accountability Center Reports (Q1-Q4 2024)
  3. TikTok Newsroom Announcements on Safety Updates (2024-2025)
  4. Digital Services Act Transparency Reports (H1-H2 2024)
  5. Community Guidelines Enforcement Reports (2024)
  6. Family Pairing Feature Documentation – TikTok Support
  7. Teen Safety Center Resources – TikTok Safety Portal
  8. Guardian’s Guide – TikTok Educational Resources