Brand Safety @X

A safer X is a better X

At X, our purpose is to serve the public conversation. And we’re committed to providing a safe environment where everyone – including brands – can participate freely and confidently.

We believe brand safety is about people – the people who use our service, the people in our communities, and the people who manage the brands that count on X every day. So we’re continuously improving our policies, products, and partnerships to help keep people safe.

There’s still much work to be done. And we won’t rest until we have an X that’s welcoming, exciting, and empowering for all.

For more perspective and research on our commitment to building a better, safer X, check out our Brand Safety Marketing Collection.

X's approach to brand safety

Policies that lead
Products that protect
Partnerships that drive industry-wide change
Policies that lead

Our rules are in place to ensure all people can participate in the public conversation freely and safely. These policies are enforced for all people who use X and set the standard for content and behavior not permitted on the platform. These policies address violence, terrorism/violent extremism, child sexual exploitation, abuse/harassment, hateful conduct, suicide or self-harm, sensitive media, illegal or regulated goods and services, private information, non-consensual nudity, platform manipulation and spam, civic integrity, impersonation, synthetic and manipulated media, and copyright and trademark.

Learn more about our Rules, and our range of options to enforce them.

X’s Brand Safety policies

Our Brand Safety policies, as well as the controls we offer people and advertisers, build upon the foundation laid by our Rules to promote a safe advertising experience for customers and brands. Specifically, our policies and controls are designed to avoid placement immediately above or below content that we determine may be unsafe or unsuitable, including content that falls below the GARM Brand Safety floor.  Our Brand Safety policies inform the context in which we serve ads, and include, but are not limited to:

  • Adult sexual content

  • Hate or extremist content

  • Profanity and offensive language

  • Restricted and illegal products and services 

  • Sensitive content

  • Violent, objectionable, or graphic content 

For more information on how our Brand Safety policies are enforced across the platform, see the ‘Products that protect’ section. Learn more about our Brand Safety Policy and its application in the Amplify Pre-Roll program.

Transparency Reporting

First published in July 2012, our twice-yearly X Transparency Report highlights trends in legal requests, intellectual property-related requests, our Rules enforcement, platform manipulation, and email privacy best practices. The report also provides insight into whether or not we take action on these requests.

In August 2020, we completely revamped these reports and consolidated them into a comprehensive Transparency Center. In July 2022, we released reporting covering the period from July through December 2021. As part of this release, we shared a new metric for the first time –  impressions – which represents the number of views violative Posts received prior to removal. We found that impressions on violative Posts accounted for less than 0.1% of all impressions of all Posts during the reporting time frame and that 71% of these Posts received fewer than 100 impressions prior to removal.

Products that protect

X is committed to providing advertisers with a safe environment where they can connect with their customers. To do so, we leverage a combination of machine learning, human review, and targeted policies to ensure that ads do not serve around potentially objectionable content. We also strongly believe in empowering our advertisers to customize their X campaigns in ways that help keep their unique brands safe. In addition to these controls, advertisers are also able to take advantage of the health and safety protections available to all people using X.

 
Platform-wide protections
Adjacency to sensitive media in Timeline and Search

X prevents ad placement adjacent to Posts that have been labeled as “Sensitive Media” by our X Service Team or by the posts’ authors, including media containing graphic violence and consensually produced adult content as defined under our Sensitive Media policy. 

Ensuring brand safety in the X Amplify Program

Every video from our content partners goes through a manual human review to ensure it meets our brand safety standards before it can be monetized. We supplement this review with a wide array of algorithmic health and safety checks that apply to all Posts on the platform.

We also hold regular proactive educational sessions with our content partners to help them successfully monetize their content on X within our brand safety standards. 

Promoting brand-safe placement in Search

X monitors conversations and trending topics around the world 24 hours a day and removes ads from search results we deem unsafe for ads. This internal keyword denylist is updated on a regular basis and applies to all campaigns globally. When a search is conducted, the denylist is referenced, and if the search term appears on the list, no Promoted Ads will serve on that term’s search results page. The same denylist applies when users click a trending topic and are taken to the results page for that trend.

Brand safety controls for ads on Profiles

Every time a Profile is updated, our machine learning model searches the content of the Profile page with the goal of ensuring that content is brand safe, according to our brand safety policies, before a Promoted Ad is served. We only serve ads on Profiles that we deem to be safe for ads. We may also block ads from serving on individual user profiles based on the content or behavior of the account and lack of alignment with our brand safety policies.

Brand safety protections for ads in Post Replies

Post Replies is an ad placement launched in 2022 to help drive more scale and performance by serving in areas with high engagement and visibility. We recently expanded it to additional objectives, including App Installs, Reach, Website Traffic, and Conversions. We use modeling similar to that applied to our Profiles placement and do not place ads within the replies to posts from accounts that we determine to be unsafe for ads. We build on that level of protection with additional modeling to avoid placing ads within reply conversations that may be considered unsafe or unsuitable. In addition to this modeling, we do not place ads on newly created Profiles, to help ensure safe ad placement.

Keyword Targeting restrictions

X maintains a global denylist of Keyword Targeting terms that are not permitted to be used as parameters for positive keyword targeting (audiences associated with these terms can still be excluded through keyword exclusion targeting). This list is continually updated and includes a wide variety of terms that most brands would consider to be unsafe, as well as terms that are not allowed to be targeted per our Ads Policies. Learn more about our policies for Keyword Targeting.
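To make the shape of this rule concrete, here is a minimal, hypothetical sketch (the real denylist is internal to X, and the term shown is a placeholder, not an actual entry):

```python
# Placeholder stand-in for X's internal, continually updated global denylist.
DENYLISTED_TERMS = {"example_unsafe_term"}

def validate_keyword_targeting(include: set[str], exclude: set[str]) -> None:
    """Reject campaign configurations that positively target denylisted
    keywords. Exclusion targeting is always permitted, even for denylisted
    terms, so `exclude` needs no check here."""
    blocked = {k for k in include if k.lower() in DENYLISTED_TERMS}
    if blocked:
        raise ValueError(
            f"Keywords not permitted for positive targeting: {sorted(blocked)}"
        )
```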

Audience filtering and validation

X excludes accounts we suspect may be automated from monetizable audiences, helping to ensure valid traffic on ads. We also offer viewability measurement through integrations with multiple MRC-accredited third parties.

Private conversations

X is a public platform, and we work to ensure this open forum remains healthy through our policies and platform capabilities. Direct Messages, while private between the sender and recipients (up to a maximum of 50), are subject to our Rules, which apply to all individuals and content on X. In a Direct Message conversation, when a participant reports another person, we will stop the violator from sending messages to the person who reported them. The conversation will also be removed from the reporter's inbox. We review reports and take action as appropriate.

 
Advertiser Controls
Adjacency Controls

We give advertisers control over which Posts their Promoted Posts won’t appear adjacent to in all versions of Home Timeline (“For You” and “Following”). This control empowers advertisers to ensure their ads are not served alongside content that doesn’t align with their brand’s message and values. Advertisers can choose up to 4,000 keywords that they don’t want their paid campaigns to be displayed directly before or after. Matching is case-insensitive and also covers hashtag and plural forms (e.g., ‘problem’ will also exclude ‘Problem’, ‘problems’, and ‘#problem’), and controls work for keywords in all languages. In addition to excluding keywords in Home Timeline, brands can leverage author exclusions by choosing up to 2,000 account handles that they don’t want their paid campaigns to be displayed directly before or after. Adjacency Controls are set at the account level, so you don’t need to re-add them for every campaign, and advertisers can bulk upload their selected keywords and author exclusions rather than adding them one at a time. A sketch of the matching behavior appears below.

While we strive to prevent any ads from appearing adjacent to content excluded through Adjacency Controls settings, we cannot guarantee 100% effectiveness.
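For illustration only, here is a minimal sketch of the matching behavior described above. The plural handling shown is a naive "-s" suffix check, an assumption for illustration; X's actual matching runs inside its ad-serving systems and is not public:

```python
import re

def _normalize(token: str) -> str:
    # Lowercase and strip a leading '#' so 'Problem' and '#problem' compare equal.
    return token.lstrip("#").lower()

def blocks_adjacency(post_text: str, excluded_keywords: set[str]) -> bool:
    """Return True if a campaign with these keyword exclusions should not
    appear directly before or after this post. Matching ignores case,
    hashtags, and simple plurals, per the documented behavior."""
    excluded = {_normalize(k) for k in excluded_keywords}
    for raw in re.findall(r"#?\w+", post_text):
        token = _normalize(raw)
        if token in excluded or (token.endswith("s") and token[:-1] in excluded):
            return True
    return False

# 'problem' also blocks 'Problem', 'problems', and '#problem':
print(blocks_adjacency("Big Problems ahead #problem", {"problem"}))  # True
```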

Campaign Placement Controls

We give advertisers control over the areas of the X platform where their campaigns may be displayed, so that they can customize delivery based on their comfort level. Most campaign objectives allow for excluding ads from serving on profiles, within search results, or within Post replies. Follower campaigns cannot opt out of running in search results, and Pre-roll views campaigns cannot opt out of running in either profiles or search results.

X Amplify Brand Safety Controls

Amplify pairs brands with the most premium, timely publisher video content, and the audiences coming to X for it. Advertisers can choose to align their content with premium publishers from within any of the standard IAB categories. When setting up this type of campaign, advertisers can choose to exclude any of the IAB content categories and can also exclude up to 50 specific Content Partner handles. A full list of our Content Partners may be downloaded within the “publisher content” section of the campaign creator.

Advertisers can also take advantage of X curated content categories beyond the standard slate. When creating this type of campaign, they will be provided a list of the Content Partners contributing to each category and can choose to exclude up to five of those handles.

X Audience Platform Controls

Advertisers running campaigns on the X Audience Platform (TAP) can select up to 2,000 apps to exclude from delivery. Note that TAP placement is only available as an option for Website Clicks, App Download, or App Re-engagement objectives.

Keyword Targeting

Keyword targeting allows our advertisers to reach people on X based on their behavior, including keywords used in their search queries or their recent Posts, and keywords used in Posts they recently engaged with. This targeting option can help brands reach their most relevant audiences. Advertisers can also exclude keywords from their campaigns to prevent Ads from appearing among search results for excluded terms, and from serving to audiences who have Posted or engaged with these terms.

 
Measurement Solutions
3rd Party Brand Safety Adjacency Measurement Solutions

We have selected DoubleVerify and Integral Ad Science to be our preferred partners for providing independent reporting on the context in which ads appear on X. These solutions give advertisers a better understanding of the types of content that appear adjacent to their ads, helping them make informed decisions to reach their marketing goals. This measurement solution is now live for advertisers in the US.

These services monitor and quantify the prevalence of ad placement adjacent to English-language content deemed either unsafe or unsuitable for monetization by the Global Alliance for Responsible Media (GARM) in X’s Home Timeline. These feed-based solutions are the first of their kind to be made broadly available, and underscore our commitment to independent validation of X’s efforts to uphold industry brand safety standards.

X also provides additional transparency into campaign performance through measurement solutions and third-party studies based on your objectives. Our goal is to empower advertisers with measurement that helps you understand how your campaigns contribute to your broader marketing and business goals.

X advertising customers interested in these brand safety measurement solutions can work directly with DoubleVerify or Integral Ad Science; they can also contact their X sales representative.

 
Controls for everyone
Hidden replies

All people on X can hide any replies to their Posts that they deem abusive or irrelevant to the conversation. Note that by hiding a reply, a Post author does not remove it from the platform entirely, but rather keeps it from appearing in the conversation below their Post. In August 2020 we released an API endpoint for this capability to allow our API Partners to build more automated ways to employ this feature, as sketched below.
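For reference, a minimal sketch of how an API partner might hide a reply through the public v2 hide-replies endpoint. The Post ID and token are placeholders, a real call requires user-context authentication for the account that authored the original Post, and the host shown is the historically documented one:

```python
import requests

REPLY_ID = "1234567890"              # placeholder: ID of the reply to hide
ACCESS_TOKEN = "USER_CONTEXT_TOKEN"  # placeholder: OAuth user-context token

# PUT /2/tweets/:id/hidden toggles a reply's hidden state.
resp = requests.put(
    f"https://api.twitter.com/2/tweets/{REPLY_ID}/hidden",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json={"hidden": True},  # set to False to unhide the reply
)
resp.raise_for_status()
print(resp.json())  # expected shape: {"data": {"hidden": true}}
```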

Conversation settings

In August of 2020, we made new conversation settings available to everyone on X, allowing people to have more control over the conversations they start. These conversation settings let anyone on X choose who can reply to their Posts with three options: 1) everyone (standard X and the default setting), 2) only people they follow, or 3) only people they mention. 

Beginning in March of 2021, we made these capabilities available to our advertisers when they compose Posts directly through Post Composer or through our Ads API. This update extended the ability to apply conversation settings to Promoted-only Posts and to those that use our most popular ad formats, in addition to organic Posts.
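As a rough sketch of how these settings appear when composing through the public v2 API (an assumption for illustration; the Post Composer and Ads API flows described above may differ), reply permissions are expressed as a reply_settings field on the new Post:

```python
import requests

ACCESS_TOKEN = "USER_CONTEXT_TOKEN"  # placeholder: OAuth user-context token

resp = requests.post(
    "https://api.twitter.com/2/tweets",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        "text": "Campaign announcement",
        # Omit reply_settings for the default (everyone can reply).
        # "following"      -> only people the author follows can reply.
        # "mentionedUsers" -> only people mentioned in the Post can reply.
        "reply_settings": "following",
    },
)
resp.raise_for_status()
print(resp.json())
```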

Partnerships that drive industry-wide change

We believe our industry needs to work together to drive industry-wide change. Because trust and transparency go hand-in-hand, and the people and brands on X deserve both. 

X is active within industry organizations such as the 4As, the Association of National Advertisers (ANA), and the Brand Safety Institute (BSI). Through our work with these and other partners, we are a proud leader in the Brand Safety space.

Additionally, X is a founding member of the Global Alliance for Responsible Media (GARM). As part of this organization, X and other platforms work with advertisers and publishers from across industries to collaborate on how to best address harmful and misleading media environments –  and to develop and deliver against a concrete set of actions, processes, and protocols for protecting brands. 

In December 2022, we met with the leadership teams of the World Federation of Advertisers and the Global Alliance for Responsible Media to reaffirm our commitment to brand safety. The conversation focused on X’s continued commitments to GARM around common definitions, common measures, common tools and independent verification.

X’s leadership has committed to working with GARM to develop an accelerated roadmap for future brand safety improvements. See below for an update on our progress: 

Demonstrate Commitment to GARM Safety Standards

  • Our platform-wide Rules and Brand Safety policies are intact and continue to be enforced. The only changes we’ve made to our Rules since the acquisition have been to clarify or strengthen them.
  • We rolled out third-party post-bid measurement solutions with IAS & DV that provide reporting on the context in which campaigns served, aligned with the GARM framework. Independent reporting solutions with IAS and DV have shown >99% GARM Brand Safety Floor compliance.

Provide Consistent Read via Enhanced Reporting on Harmful Content

  • First-party hate speech reporting is available to advertisers on demand via their X representative.
  • Independent assessment of hate speech published by Sprinklr in March.

Drive Enhanced Controls to Protect Advertiser Media 

  • We introduced keyword and author-based Adjacency Controls in December, and have shipped multiple enhancements to these solutions since launch. These controls are globally available, apply for adjacent posts in all languages, and cover both the For You and Following Timelines.

Continue to Progress Audit of Brand Safety Controls + Measurement

  • X is a member of the MRC and remains committed to undergoing an MRC audit for Brand Safety. We completed a pre-assessment in December 2022 and will re-evaluate whether to initiate a full audit when appropriate. At present, X is not in the MRC audit/accreditation process.
  • We renewed our global TAG Brand Safety Certification for 2023 in March. We’ve been TAG Certified for Brand Safety since 2020.
 

Our commitment to health and safety over time

We’ve made significant improvements in Platform Health and Safety over the past several years. Health has always been and will remain a top priority for X and our work is ever-evolving. Here are a few notable improvements and announcements we’ve made in the last few years:

2023

  • August

    • We expanded our existing partnership with industry-leading brand safety partner, Integral Ad Science (IAS), to offer X’s US advertisers premium, vetted inventory within the context of the GARM Safety & Suitability Framework so brands can further optimize their campaigns.
    • We introduced Sensitivity Settings, an automated solution that enables advertisers to align their brand’s messaging with content on X that meets their unique sensitivity needs.
    • We launched an industry-standard blocklist that aims to protect advertisers from appearing adjacent to unsafe keywords in the Home Timeline (i.e. For You and Following). 
  • June

    • Our brand safety adjacency measurement solution, in partnership with DoubleVerify, expanded to advertising customers based in the UK, Canada, Australia, and New Zealand. These solutions provide advertisers with independent reporting on the context in which their ads serve on X, in accordance with GARM industry standards. These solutions underscore our commitment to independent validation of X’s efforts to uphold industry brand safety standards.
  • April

    • We published our 21st Transparency Report, with data on our policy enforcement for the first half of 2022. We believe it’s important to share data from H1 2022 on our health & safety efforts as part of our continuous commitment to safety and transparency. 
    • Verified Organizations, a new way for organizations and their affiliates to show up, verify, and distinguish themselves on X, launched for all eligible organizations globally. As an X Verified Organizations subscriber, an organization can verify itself as a business with a gold check and link any number of affiliated individuals, businesses, and brands to its account. When it does, affiliated accounts get a small badge of the parent company's profile picture next to their blue or gold checkmark.
    • We launched our new enforcement philosophy, Freedom of Speech, Not Freedom of Reach. Our mission at X is to promote and protect the public conversation. We believe X users have the right to express their opinions and ideas without fear of censorship. Where appropriate, we will restrict the reach of Posts that violate our policies by making the content less discoverable. We are also adding more transparency to the enforcement actions we take on Posts. You’ll see labels on some Posts identified as potentially violating our rules around Hateful Conduct letting you know that we’ve limited their visibility.
  • March

    • Adjacency Controls expanded once more: advertisers are now able to exclude up to 2,000 author handles and 4,000 keywords to prevent adjacency one slot above and one slot below their ads, a notable increase from our previous 1,000 keyword and handle limit. This reflects our ongoing commitment to continually improve and to offer brand safety protections that are as effective as possible for our advertisers.
    • We partnered with @Sprinklr for an independent assessment of hate speech on X. The findings point to X’s progress in reducing the number of times hateful conduct is seen (impressions) via our content moderation strategy of limiting reach. Specifically, Sprinklr’s analysis found that hate speech on X is even lower than X’s own model quantified and receives 67% fewer impressions per Post than non-toxic slur Posts. Transparency through third-party data validation is a top priority.
  • February

    • Following our commitment to bolster brand safety protections, Adjacency Controls were expanded to offer author exclusion capabilities, with brands able to select up to 1,000 author handles to prevent adjacency to. Further, to make Adjacency Controls as accessible and convenient as possible, we improved the controls to support bulk uploading of keywords and handles (separately).
  • January

    • DV/IAS measurement partnerships became available in the US to give advertisers insight into the context in which their ads are served. Initial beta tests showed that more than 99% of measured impressions appeared adjacent to content that was deemed safe in accordance with the GARM brand safety floor criteria.

    • The first Adjacency Controls improvements rolled out, extending the controls to all versions of Home Timeline (“For You” and “Following”) and to keywords in all languages, making this a comprehensive feed-based solution.
    • Internal analysis showed that the overwhelming majority of Promoted Post impressions — 99.98% of them — are not adjacent to hateful speech in Home Timeline.

2022

  • December
    • We successfully launched DV/IAS 3rd Party Measurement Beta with a select group of advertising partners to provide independent reporting on the context in which ads appear on X. This is part of an effort to build solutions that will give advertisers a better understanding of the types of content that appear adjacent to their ads, helping them make informed decisions to reach their marketing goals.
    • We launched Adjacency Controls to give advertisers more control over where their ads are placed in the ranked home timeline (the timeline that is used by over 80% of our users).
    • Following our agreement made in 2021 to complete the MRC Brand Safety Accreditation pre-assessment, X completed the pre-assessment work in a major step towards achieving accreditation. 
    • X met with the World Federation of Advertisers (WFA) and GARM to reiterate our ongoing commitment to brand safety and an accelerated brand safety roadmap.
  • September 
    • We stopped the monetization of newly created user profiles to ensure safe and suitable ad placement. 
    • We expanded opt-out capabilities for all campaign objectives including Amplify Pre-Roll. 
  • August 
    • We launched X Circle, a way to send posts to select people, and share your thoughts with a smaller crowd. This feature allows you to choose who’s in your X Circle, and only the individuals you’ve added can reply to and interact with the posts you share in the circle.
  • July 

    • X expanded its testing of Toxic Reply Nudges in additional markets: Mexico, Turkey, and Saudi Arabia. We want customers to have meaningful, relevant, and safe conversations on X, which is why we work hard to keep abusive and hateful speech off of X. This not only involves enforcing against violators of rules and taking content down, but also encouraging and reinforcing positive, pro-social behaviors and norms. 

  • May

    • X is where people go to find reliable information in real time during periods of crisis. We introduced our crisis misinformation policy which guides our efforts to elevate credible, authoritative information and help ensure that viral misinformation isn’t amplified or recommended by us during crises.

    • In an effort to help people on X better understand the information we collect, how it is used and the control they have, we have rewritten our Privacy Policy. Our goal is to make it as simple and useful as possible by emphasizing clear language and moving away from legal jargon.

    • We began testing a new feature called X Circles which allows people to add up to 150 people who can see their posts when they want to share with a smaller crowd.

  • April

    • On Earth Day, we announced that misleading advertisements on X that contradict the scientific consensus on climate change are prohibited, in line with our inappropriate content policy. The introduction of this formalized policy reinforces our commitment to sustainability, drawing on Intergovernmental Panel on Climate Change (IPCC) assessment reports and input from global environmental experts.

    • We began experimenting with Unmentioning, a new feature that allows people on X to remove themselves from conversations. This tool is intended to help people have more control over their experience on the platform.

    • We announced some updates to how we are approaching policy in light of the ongoing conflict in Ukraine:

      • We will not amplify or recommend content from X Accounts of governments that are actively engaged in armed conflict and limiting access to internet services for their state. 

      • We are taking enforcement action on all media from government accounts that purports to depict prisoners of war in the Russia-Ukraine conflict.

      • We have disabled autoplay for videos posted by state-affiliated media accounts.

  • March

    • In light of the ongoing conflict in Ukraine, X's top priority is to promote the safety of people on and off the service. To do so, we have been focused on:

      • Elevating reliable information through curated X Moments, prompts within our Search and Home Timeline environments and adjustments to recommendations within Ukraine and Russia.

      • Building on our existing approach to state-affiliated media by adding warning labels to posts with links to Russian and Belarusian state-affiliated media and rolling out government account labels to accounts associated with Ukrainian government officials.

      • Pausing advertising in Ukraine and Russia to ensure that ads do not distract from critical public safety information, and significantly broadening our rule enforcement.

      • Proactively monitoring for violations of our Rules, resulting in the removal of more than 75K accounts for violations of our platform manipulation policy and the labeling or removal of 50K+ pieces of content for violations of our synthetic & manipulated media policy between the start of the war in Ukraine and March 16, 2022.

    • We announced a new partnership with Jigsaw to launch a new tool designed to allow NGOs and nonprofits to help people stay safe on X.

    • In January 2021, we began testing Birdwatch, a new way to combat misinformation on X by allowing users to add context to posts they believe are misleading. Throughout 2021, we made significant improvements based on feedback from our contributors, the public, and academic researchers. We have expanded the test, making Birdwatch notes visible and rateable by a small group of people on X in the United States.

  • February

    • In September 2021, we introduced Safety Mode to a small test audience in the United States. This feature allows people to engage in the public conversation in safe and healthy ways by limiting unwanted interactions. Given the success of this limited test, we rolled out the feature to a larger audience in several additional English-speaking markets.

    • In July 2021, we began testing a new way for a small subset of English-speaking people using X on iOS to express whether replies to a post were relevant to the conversation. This capability is intended to better understand what users believe is relevant content within replies as opposed to what we as X believe is relevant content. We have now expanded this test to a subset of all people using X globally on web, with Android and iOS to follow shortly.

    • In May 2021, we introduced prompts to people using X in English. These prompts encourage people to pause and reconsider a potentially harmful or offensive reply before they hit send. We have found that these prompts cause people to reconsider their replies 30% of the time. Given this success, we published new research intended to serve as the foundation for how we can improve X for everyone, while encouraging others outside of X to learn from this research and explore ways to promote healthier conversations online. We also extended the feature as an experiment in Portuguese for users in Brazil.

    • In December 2021, we began experimenting with a new way for post authors to indicate that one of their posts includes sensitive media. This functionality builds upon the ways in which people on X or X’s enforcement teams can already place sensitive media warnings on posts. Based on the success of this pilot, we have extended this capability to everyone using X globally on web and Android and to a subset of all people on X using iOS.

  • January

    • We released our latest update to the X Transparency Center, inclusive of data from January 1 to June 30, 2021. Notably, impressions on violative posts accounted for less than 0.1% of all impressions for all posts during the reporting time frame, consistent with the previous reporting period. Additionally, X required account holders to remove 4.7M posts that violated our Rules during these six months, an increase from the previous reporting period.

    • X announced a new partnership with OpenMined, an open-source nonprofit organization pioneering privacy-preserving machine learning technology. This collaboration is intended to test and explore the potential for privacy-enhancing technologies at X as part of our ongoing commitment to responsible machine learning.

    • We released our 2021 annual report detailing the impact of X’s ongoing efforts in the areas of Inclusion, Diversity, Equity and Accessibility on our global workforce.
2021

  • December

    • In an effort to better support people using X in getting the help and support they need, we began testing a new reporting flow. This updated process is aimed at ensuring that everyone feels safe and heard and at making it easier for people to report unhealthy or unwanted content.

    • We began testing a new way for post authors to indicate that one of their posts includes sensitive media. This functionality builds upon the ways in which people on X or X’s enforcement teams can already place sensitive media warnings to posts. X proactively prevents ad placement adjacent to posts that have been labeled as “Sensitive Media”.

    • We disclosed an additional 3,465 accounts to our archive of accounts linked to state-linked information operations. We have been periodically making these disclosures since October 2018 and, this year, have shared relevant data about these operations with key independent research partners. We announced that we will be updating our approach for future disclosures with the introduction of the X Moderation Research Consortium (TMRC). The TMRC, set to launch in early 2022, will bring together a global group of experts from across academia, civil society, NGOs, and journalism to study platform governance issues.
  • November

    • Beginning in early 2020, X introduced labels to alert people to posts including potentially misleading information around synthetic and manipulated media, civic integrity and voting, and COVID-19 vaccine misinformation. Now, we’re introducing a new design for these labels, which has resulted in more people clicking into the labels to learn more, and fewer people reposting or liking potentially misleading posts with these labels. 

    • X Japan earned JICDAQ’s Brand Safety Certification, confirming that X Japan meets the standards set out by JICDAQ for providing a safe and high-quality environment for advertisers. 
  • October

    • To give people more control over their followers and how they interact with others on X, we launched a test that allows people to remove a follower without blocking them.
  • September

    • We began testing a feature that allows automated accounts to identify themselves to give people more context about who they’re interacting with on X. 

    • In an effort to ensure that people are able to engage with the public conversation in safe and healthy ways, we began a public test of a new feature called Safety Mode. When someone on X activates this feature, it autoblocks, for 7 days, accounts that may use harmful language or send repetitive, uninvited replies or mentions.

  • August

    • We began testing a new reporting flow in the United States, South Korea, and Australia which allows people to report posts that seem misleading. The intention of this pilot is to better understand whether this is an effective approach to address misinformation on the platform. We plan to iterate on this workflow as we learn from our test.

    • To promote credible information about vaccines, we served a COVID-19 PSA at the top of people’s Timelines in 14 global markets. These prompts direct people to local information covering a wide range of topics relevant to each country, including vaccine safety, effectiveness, availability, and distribution plans.

    • Stemming from growing concerns around the impact of certain types of ads on physical health, mental health, and body image, particularly for minors, we updated our global advertising policies to include restrictions on weight loss content, including a prohibition on targeting minors.

    • X condemns racism in all its forms – our aim is to become the world’s most diverse, inclusive, and accessible tech company, and lead the industry in stopping such abhorrent views from being shared on our platform. We published a blog post detailing our analysis of the conversation around the Euro 2020 final and laying out the steps we put in place to quickly identify and remove racist, abusive posts targeting the England team, the wider Euros conversation, and the football conversation in general.

    • We announced new partnerships with @AP and @Reuters as one part of our ongoing efforts to help people understand the conversation happening on X. People experience a range of public conversations on our service every day, and we’re committed to continuing our work to elevate credible information and context. 

  • July

    • As part of our ongoing effort to improve X’s accessibility, we introduced captions for voice posts, allowing more people to join the conversation.

    • We announced that we signed an agreement with the Media Ratings Council (MRC) for the Brand Safety pre-assessment. This represents a milestone in our progress towards our commitment to earning all four of the MRC’s accreditations in viewability, sophisticated invalid traffic filtration, audience measurement, and brand safety.

    • We released our latest update to the X Transparency Center, inclusive of data from July 1 to December 31, 2020. As part of this release, we shared a new metric for the first time – impressions – which is the number of views violative posts received prior to removal. We found that impressions on violative posts accounted for less than 0.1% of all impressions of all posts during the reporting time frame and that 77% of these posts received fewer than 100 impressions prior to removal.

    • In an update to the conversation settings we introduced in August of 2020, we made it possible for people on X to change who can reply to a post after it has been posted. This tweak is designed to give people more control over their conversations in overwhelming moments, when their posts may be getting more attention than they anticipated.

    • Abuse and harassment disproportionately affect women and underrepresented communities online and our top priority is keeping everyone who uses X safe and free from abuse. Following a year-long consultative process working alongside partner NGOs, X committed to the Web Foundation’s framework to end online gender-based violence as part of the @UN_Women #GenerationEquality initiative. 

  • June

    • In collaboration with key industry partners, X released an open letter in response to the Digital Services Act, calling on the European Commission to protect the Digital Single Market, fair competition, and the Open Internet.

    • We updated the X Help Center to more clearly articulate when we will take enforcement action under our hateful conduct and abusive behavior policies, which prohibit abuse and harassment of protected categories and cover a wide range of behaviors. Specifically, we do not permit the denial of violent events, including abusive references to specific events where protected categories were the primary victims. This policy now covers targeted and non-targeted content.

  • May

    • X engaged OpenSlate to provide third-party verification of the safety and suitability of the content in our X Amplify offering. The study found that of the over 455,000 monetized videos analyzed, 100% fell above the industry-standard GARM Brand Safety Floor. They also found that 99.9% of analyzed videos were considered low risk, based on OpenSlate’s proprietary video content categorization and the GARM Brand Suitability Framework.

    • For people on X with English-language settings enabled, we introduced prompts that encourage people to pause and reconsider a potentially harmful or offensive reply before they hit send. We know that people come to X to find, read about and discuss their interests and that sometimes when things get heated, people say mean things they might regret. In an effort to make X a better place, when we detect potentially harmful or offensive post replies, we'll prompt people and ask them to review their replies before posting. This change comes after multiple tests resulting in people sending fewer potentially offensive replies across the service, and improved behavior on X. 

  • April

    • X testified before the United States Senate Judiciary Committee regarding our approach to responsible machine learning technology focused on taking responsibility for our algorithmic decisions, equity and fairness of outcomes, transparency about our decisions, and enabling agency and algorithmic choice.

    • We introduced an interstitial addressing COVID-19 vaccines at the top of people’s timelines in 16 markets around the world as part of World Immunization Week. The prompts directed users to market-specific information on vaccine safety, effectiveness, and availability, ensuring access to credible sources and combatting public health misinformation. 

    • We introduced X’s first Global Impact Report, a cohesive representation of our work across corporate responsibility, sustainability, and philanthropy. We consider this report to be a big step in our commitment to sharing more about the work we know is important to the people we serve. 

  • March 

    • We officially launched new Curated Categories within our X Amplify offering in the US, the UK, Brazil, and MENA. These categories are X-curated sets of publishers that are bundled together around specific themes and they are designed to help Advertisers reach their audiences by aligning with brand-safe, feel-good content.

    • We put out a call for responses to a public survey to help inform the future of our policy approach to world leaders. Politicians and government officials are constantly evolving how they use our service, and we look to our community to help us ensure that our policies remain relevant to the ever-changing nature of political discourse on X and protect the health of the public conversation.

    • X successfully earned the Trustworthy Accountability Group (TAG) Brand Safety Certified Seal, which covers X’s global operations and was attained via independent audit.

    • Following the launch of conversation settings for everyone on X in August 2020, we made it possible for our advertisers to use conversation settings when they compose posts in our Ads Manager. This update extends the ability to apply conversation settings to Promoted-only posts and to those that use our most popular ad formats, in addition to organic posts.

    • We announced that moving forward we will apply labels to posts that may contain misleading information about COVID-19 vaccines, in addition to our continued efforts to remove the most harmful COVID-19 misleading information from the service. These changes are made in accordance with our COVID-19 policy which we expanded in December of 2020.

  • February

    • We disclosed the removal of 373 accounts related to independent, state-affiliated information operations for violations of our platform manipulation policies. These operations were attributed to Armenia, Russia, and a previously disclosed network from Iran.

  • January

    • We further expanded our Hateful Conduct policy to prohibit inciting behavior that targets individuals or groups of people belonging to protected categories. This includes incitement of fear or spreading fearful stereotypes, incitement of harassment on or off-platform, and incitement to deny economic support.

    • We launched a pilot for a community-driven approach to address misinformation on X, which we're calling Birdwatch. In this pilot, we will allow a select group of participants in the United States to identify posts they believe are misleading, write public notes to add context, and rate the quality of other participants’ notes. 

    • We updated the X Transparency Center with data reflecting the timeframe of January 1, 2020 - June 30, 2020. We released a blog post highlighting the trends and insights surfaced in this latest disclosure, including the impact of COVID-19 during the reporting timeframe.

    • In the wake of the events at the US Capitol on January 6, we took unprecedented action to enforce our policies against Glorification of Violence. In light of these events, we took additional action to protect the conversation on our service from attempts to incite violence, organize attacks, and share deliberately misleading information about the election outcome.
2020

  • December

    • As the world continues to fight the COVID-19 pandemic and prepare for the global distribution of vaccines, we announced that we will be expanding our COVID-19 policy. Moving forward, we may require people to remove Posts that advance harmful false or misleading narratives about COVID-19 vaccinations, and beginning in early 2021, we may label or place a warning on Posts that advance unsubstantiated rumors, disputed claims, as well as incomplete or out-of-context information about vaccines.

    • We announced that we've selected Integral Ad Science (IAS) and DoubleVerify (DV) to be X's preferred partners for providing independent reporting on the context in which ads appear on X. 

    • We announced that we have committed to working with the Media Ratings Council (MRC) to begin the accreditation process across all four of their offered Accreditation Services: Viewability, Sophisticated Invalid Traffic Filtration, Audience Measurement, and Brand Safety. 

    • We expanded our hateful conduct policy to extend to Posts that seek to dehumanize people on the basis of race, ethnicity, and national origin.

  • November

    • In the week following the 2020 US Elections, we shared some key statistics about the labels, warnings, and additional restrictions we applied to Posts that included potentially misleading information about the US Election from October 27 to November 11:

      • Approximately 300,000 Posts were labeled under our Civic Integrity Policy for content that was disputed and potentially misleading. These represent 0.2% of all US election-related Posts sent during this time period.

      • 456 of those posts were also covered by a warning message and had engagement features limited (Posts could be Quote Posted but not Reposted, replied to, or liked).

      • Approximately 74% of the people who viewed those Posts saw them after we applied a label or warning message.

      • We saw an estimated 29% decrease in Quote Posts of these labeled posts due in part to a prompt that warned people prior to sharing.

  • October

    • Ahead of the 2020 US Elections, we implemented a slate of additional, significant product and enforcement updates aimed at increasing context and encouraging more thoughtful consideration before Posts are amplified. These updates included:

      • In accordance with our expanded civic integrity policy, we announced that people on X, including candidates for office, may not claim an election win before it is authoritatively called. Posts that include premature claims will be labeled and will direct people to our official US election page. Additionally, Posts meant to incite interference with the election process or with the implementation of election results, such as through violent action, will be subject to removal. 

      • We introduced enhanced prompts and warnings on Posts that feature misleading information including a prompt that provides credible information for people before they are able to amplify misleading messages. We also added additional warnings and restrictions on Posts with a misleading information label from US political figures and US-based accounts with more than 100,000 followers, or that obtain significant engagement.

      • To encourage more thoughtful amplification of information on the platform, we implemented some temporary changes for the period surrounding the election. These changes included encouraging people to add their own commentary prior to amplifying content by prompting them to Quote Post instead of Repost, and only surfacing Trends in the “For You” tab in the United States that include additional context.

  • September

    • We launched a new feature to prompt people to read news articles before they amplify them. This has resulted in people opening articles 40% more often after seeing the prompt and a 33% increase in people opening articles before they Repost.

    • We expanded our Civic Integrity Policy to help us more effectively address attempts to abuse X in a manner that could lead to suppression of voting and other harms to civic processes. We will now label or remove false or misleading information intended to undermine voter turnout and/or erode public confidence in an election or other civic process.

    • X is part of the inaugural group of companies to hold the Brand Safety Certified Seal from TAG (the Trustworthy Accountability Group) as part of their new TAG Brand Safety Certified Program. This indicates that X meets all of the requirements of upholding an industry-regulated framework for Brand Safety in the UK.

  • August

    • We introduced the X Transparency Center which highlights our efforts across a broader array of topics than had previously been shared in our X Transparency Reports. We now include intuitive, interactive sections covering information requests, removal requests, copyright notices, trademark notices, email security, X Rules enforcement, platform manipulation, and state-backed information operations. We have also newly introduced reporting on actions broken out by both content type and geographic location.

    • We began labeling accounts belonging to state-affiliated media entities and official representatives of the US, UK, France, Russia, and China. We will also no longer amplify state-affiliated media accounts through our recommendation systems including on the home timeline, notifications, and search.

  • July

    • We expanded our policy to address links to websites that feature hateful conduct or violence. Our goal is to block links in a way that’s consistent with how we remove Posts that violate our rules and reduce the amount of harmful content on X from outside sources.

  • June 

    • We made our latest disclosure of information on more than 30,000 accounts in our archive of state-linked information operations, the only one of its kind in the industry, regarding three distinct operations that we attributed to the People's Republic of China (PRC), Russia, and Turkey.

  • May 

    • We began testing new settings that let you choose who can reply to your post and join your conversation.

    • We introduced new labels and warning messages that provide additional context and information on some Posts containing disputed or misleading information.

  • April

    • X UK was certified against the IAB’s Gold Standard v1.1. This certification reinforces our commitment to reduce ad fraud, improve the digital advertising experience, and increase brand safety within the UK market.

  • March

    • We further expanded our rules against dehumanizing speech to prohibit language that dehumanizes on the basis of age, disability, or disease.

    • We broadened our definition of harm to address content that goes directly against guidance on COVID-19 from authoritative sources of global and local public health information.

  • February

    • Informed by public feedback, we launched our policy on synthetic and manipulated media, outlining how we’ll treat this content when we identify it. 

  • January

    • We launched a dedicated search prompt intended to protect the public conversation and help people find authoritative health information around COVID-19. This work is constantly evolving, so stay up to date on the latest information.
2019

  • December

    • We launched the X Privacy Center to provide more clarity around what we’re doing to protect the information people share with us. We believe companies should be accountable to the people who trust them with their personal information, and responsible not only for protecting that information but also for explaining how they do it.

  • November

    • We made the decision to globally prohibit the promotion of political content. We made this decision based on our belief that political message reach should be earned, not bought.

    • We made the option to hide replies to Posts available to everyone globally.

    • X became certified against the DTSG Good Practice Principles from JICWEBS.

    • We asked the public for feedback on a new rule to address synthetic and manipulated media.

  • October

    • We clarified our principles & approach to reviewing reported Posts from world leaders.

    • We published our most recent Transparency Report covering H1 2019.

    • We launched 24/7 internal monitoring of trending topics to promote brand safety on search results.

  • August

    • We updated our advertising policies to reflect that we would no longer accept advertising from state-controlled news media entities.

  • July

    • Informed by public feedback, we launched our policy prohibiting dehumanizing speech on the basis of religion.

  • June 

    • We joined the Global Alliance for Responsible Media at Cannes.

    • We refreshed our Rules with simple, clear language, paring down from 2,500 words to under 600.

    • We clarified our criteria for allowing certain Posts that violate our rules to remain on X because they are in the public’s interest.

  • April

    • We shared an update on our progress towards improving the health of the public conversation, one year after declaring it a top company priority.
2018

  • October

    • We released all of the accounts and related content associated with potential information operations that we found on our service since 2016. This was the first of many disclosures we’ve since made for our public archive of state-backed information operations.

  • September

    • We asked the public for feedback on an upcoming policy expansion around dehumanizing speech, and took this feedback into consideration to update our rules.

  • May

    • We made the decision to exclude accounts we suspect may be automated from monetizable audiences, meaning we do not serve ads to these accounts. Learn more about how we identify automated accounts.

  • March 

    • We launched 24/7 human review of all monetized publisher content for Amplify Pre-Roll, along with an all-new Brand Safety policy for the program.

    • Jack publicly announced our commitment and approach to making X a safer place.

Ready to get started?