
The noindex tag: What it is, why you need it, and when to use it for better SEO

Author: Vinnie Wong


Google may be a search giant, but it still has its limits. Serving Google too many irrelevant or low-quality pages can hurt your site’s crawlability and indexation, eventually resulting in lower rankings, traffic, and revenue.


But what if you have a handful of pages that need to stay live without appearing in search results (e.g., gated content, internal search results, checkout pages)? Enter the noindex tag—your tool for telling search engines to keep a page out of search results while still making it available to the users who need it.


By strategically applying noindex tags, you can streamline your site’s structure, prioritize your most valuable content, and maximize the time Google spends crawling your website.


In this article, I’ll dive into the world of noindex tags and explore how they can help you take control of your website’s SEO. 



What is a noindex tag?


A ‘noindex tag’ is a piece of code that, when implemented correctly, instructs search engines not to include a particular webpage in their indexes, preventing the page from showing up in search results.


This tag is part of a larger family of meta directives known as ‘robots meta tags,’ which provide search engine crawlers with important instructions about how to interact with a website’s content.


The noindex tag takes the following format when placed within the <head> section of a page’s HTML:


<meta name="robots" content="noindex">

Alternatively, the noindex tag can target a specific search engine’s crawler (such as Google) by replacing “robots” with the crawler’s name, as shown below:


<meta name="googlebot" content="noindex">

The ‘index’ instruction is the default for search engines (allowing your pages to show up in search results), while the noindex tag explicitly tells crawlers not to add the page to their indexes.
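To make the crawler’s side of this concrete, here’s a minimal Python sketch (standard library only) of how a bot might read these meta tags and decide whether a page is noindexed. The class and function names are my own for illustration, not part of any real crawler:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects directives from robots meta tags as it parses the HTML."""

    def __init__(self, bot="robots"):
        super().__init__()
        self.bot = bot.lower()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = (attrs.get("name") or "").lower()
        # "robots" applies to all bots; a tag named after a specific bot
        # (e.g., "googlebot") applies only to that bot.
        if name in ("robots", self.bot):
            content = (attrs.get("content") or "").lower()
            self.directives.update(d.strip() for d in content.split(","))

def is_noindexed(html, bot="googlebot"):
    """True if the page's meta tags noindex it for the given bot."""
    parser = RobotsMetaParser(bot)
    parser.feed(html)
    return "noindex" in parser.directives

page = '<html><head><meta name="robots" content="noindex"></head></html>'
print(is_noindexed(page))  # True
```

Because the directives are collected into a set, the same function also handles combined values like "noindex, nofollow" without any extra logic.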


Infographic of five tips related to noindex tags
Here’s a breakdown of which types of content you should noindex.

It’s crucial to understand that the noindex directive operates on a page-level basis—it only applies to the specific URL on which you implement it.


Why the noindex tag is important for SEO


It may seem counterintuitive to exclude pages from search engine indexes, but there are crucial scenarios where preventing certain pages from appearing in search results is beneficial for your website’s overall SEO health and user experience.


Below are four ways to use the noindex tag to support your business’s online success:


01. Avoid duplicate content issues

02. Optimize crawl budget

03. Maintain content quality and relevance

04. Control access and visibility


01. Avoid duplicate content issues

When search engines encounter multiple pages with identical (or very similar) content, they may have difficulty determining which version is most relevant to rank and show users. This can lead to several problems: ranking signals get diluted across the duplicates, the wrong version may appear in search results, and crawl budget gets wasted on redundant pages.


Strategically applying the noindex tag to duplicate pages (such as printer-friendly versions) signals to search engines which version should get indexed and ranked. This consolidates ranking signals and helps ensure that the original, high-quality page is what gets shown to users in search results.


02. Optimize crawl budget

As huge as Google is, the search engine giant has confirmed that the web contains more pages than it can crawl. To make the most of its resources, Google limits how long it will crawl any one site—this is what SEOs often refer to as ‘crawl budget.’


For larger websites—those with over 10,000 pages—crawl budget optimization can meaningfully strengthen your site’s SEO.


A screenshot of best practices related to noindex tags from Google
Straight from the horse’s mouth, Google advises site owners to take stock of which pages they want Googlebot to crawl to ensure their high-value pages are indexed.

The noindex tag allows SEOs and site owners to manage their crawl budget by instructing search engine bots not to waste time indexing low-value or non-public pages, such as:


  • Internal search result pages

  • Filter or sorting pages for eCommerce websites

  • User-specific content (private profiles or account pages)

  • Auto-generated pages with minimal unique content


By keeping these pages out of the index, search engines can focus on discovering and ranking the site’s most important, user-facing content.


03. Maintain content quality and relevance

Over time, content will naturally become outdated (or less relevant). Deleting this content outright isn’t always the best solution. The noindex tag lets you keep the content on your site while preventing it from appearing in search results and potentially harming your overall content quality signals.


This is useful for:


  • Older blog posts or news articles that are no longer timely

  • Product pages for discontinued or out-of-stock items

  • Thin or low-quality pages that don’t meet current standards


Noindexing this content helps ensure users find your most valuable, relevant content when searching related keywords.


04. Control access and visibility

Many websites create content intended for a specific audience or requiring special access, such as:


  • Members-only content

  • Staging or development pages

  • Paid resources or course materials

  • Conversion funnel pages (e.g., ‘thank you’ pages)


The noindex tag provides a simple way to shield these pages from search engine discovery, maintaining control over who can find and access your content.


Noindex vs. Robots.txt

An infographic that says “Noindex vs. Robots.txt” with definitions of both (as defined in the next paragraph of this document)

While both the noindex tag and the robots.txt file provide instructions to search engine crawlers, they serve different purposes:


  • Robots.txt controls crawling, specifying which parts of the site search engine bots are allowed to crawl and which are off-limits.

  • The noindex tag allows bots to crawl the page but prevents them from indexing it.


Here’s a simple robots.txt example:


User-agent: *
Disallow: /private/

This instructs all search engine bots not to crawl any pages within the “/private/” directory of your website.
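If you want to sanity-check rules like this before deploying them, Python’s standard-library urllib.robotparser can evaluate a robots.txt file against specific URLs. The snippet below parses the example rules directly (the URLs are placeholders):

```python
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# can_fetch(user_agent, url) answers: may this bot crawl this URL?
print(rp.can_fetch("Googlebot", "https://www.example.com/private/report"))  # False
print(rp.can_fetch("Googlebot", "https://www.example.com/blog/post"))       # True
```

This is a quick way to catch an overly broad Disallow rule before it accidentally blocks pages you wanted crawled.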


The key distinction is that robots.txt prevents search engine bots from accessing and crawling certain pages altogether, but it doesn’t directly impact whether a page can appear in search results. In contrast, the noindex tag allows bots to crawl the page but prevents indexing, keeping the page out of search results.

It’s a subtle difference that has important implications:


Difference in impact:


  • Crawling vs. indexing: With robots.txt, disallowed pages won’t be crawled at all, so search engines won’t see their content. With noindex, pages will be crawled, but their content won’t be indexed—search engines can still analyze the page and follow its links.

  • Link equity flow: With robots.txt, links on blocked pages won’t be followed or pass link equity (PageRank). With noindex, Google eventually treats a long-term noindex as noindex, nofollow: once it removes the page from its index completely, it stops following the page’s links.

  • Control level: Robots.txt operates on URL patterns, which makes it best suited to disallowing entire directories or site sections. Noindex controls indexation on a page-by-page basis, providing more granular control.


In practice, robots.txt and noindex are often used together. For example, you might use robots.txt to prevent crawling of sensitive pages and apply the noindex tag on specific pages that shouldn’t appear in search results (e.g., ‘thank you’ pages or faceted navigation).


Noindex vs. Nofollow


Noindex and nofollow are two distinct meta directives with specific purposes, often used together but serving different functions.


The nofollow directive is a meta tag that instructs search engine crawlers not to follow any of the links on the page (effectively applying the nofollow link attribute to every link), acting as a ‘stop sign’ for link equity flow. Here’s what it looks like in the <head> section:


<meta name="robots" content="nofollow">

The meta robots nofollow directive is beneficial in two common scenarios:


  • To tell Google that you don’t endorse a link: You might use a nofollow link if you’re linking to a website that you don’t necessarily recommend or trust. By using a nofollow link, you’re telling Google that you don’t want to pass any of your page’s ranking power to the linked page.

  • In user-generated content to avoid link spam: If your website allows users to add to your content, such as comments or forum posts, you might want to use nofollow links for any links that users add. This can help to prevent spammers from adding links to their own websites in your content.


The link attribute menu within the Wix editor.
You can also set link attributes on an individual basis.

Crawl prioritization

On large sites, you can use nofollow on certain pages (or page types) to manage crawl budget and direct search engine bots to your most important content. By reducing the number of links bots have to follow, you streamline the crawling and indexing process.


You can use noindex and nofollow together on a single page:


<meta name="robots" content="noindex, nofollow">

This instructs search engines not to index the page or follow its links (common for pages like login screens or ‘thank you’ pages that are necessary but shouldn’t be discoverable through search or pass authority).


However, using noindex and nofollow together incorrectly can have unintended consequences:


  • Accidentally noindexing and nofollowing important pages could prevent indexing and cut off link equity flow to other key pages.

  • Noindexing and nofollowing large site sections can hinder search engines from discovering and ranking your most valuable content.


Use noindex on pages that shouldn’t appear in search results and nofollow only when necessary to control link equity flow. If unsure, err on the side of indexation and allow links to be followed to help search engines understand and rank your site effectively.


How to noindex a page


Adding the noindex tag to a page is relatively simple and requires access to your site’s HTML code. There are two primary methods:


01. Meta robots tag

The most common way to noindex a page is adding a meta robots tag to the <head> section of the page’s HTML:


<meta name="robots" content="noindex">

This instructs all search engine bots not to index the page. To target a specific bot, replace “robots” with the bot’s name:


<meta name="googlebot" content="noindex">

You can combine noindex with other directives, like "nofollow," by separating them with a comma:


<meta name="robots" content="noindex, nofollow">

The process for adding this tag depends on your content management system or web platform:


  • Wix: Use the settings in the Wix editor, the SEO settings panel, or the Edit by Page section. I’ll cover these options in more detail later.

  • Other platforms: For closed content management systems, you can typically noindex a page within that page’s settings. For open source platforms, you may need to install a plugin. Refer to your specific platform’s documentation.


After you add the noindex tag, save your changes and publish or update the page. The tag will take effect the next time a search engine bot crawls the page.


Before you start putting this tactic to work, it is absolutely crucial that you:


Avoid using both robots.txt disallow rules and the noindex tag on the same page. If the robots.txt file blocks crawlers from accessing the page, they will never see the noindex tag—and the page can still end up indexed anyway (for example, if other sites link to it). Let crawlers reach the page so the noindex directive can do its job.

02. X-Robots-Tag HTTP header

An alternative method for specifying noindex is using the X-Robots-Tag HTTP header, which you can add to your server’s HTTP response for a particular page (or group of pages):


X-Robots-Tag: noindex

This method is great for non-HTML files (PDFs, images, videos) and situations where you can’t directly access a page’s HTML code, but can configure your server’s response headers.


Implementing the X-Robots-Tag header requires modifying your server configuration files (e.g., .htaccess for Apache servers or nginx.conf for NGINX). The process depends on your server setup, but here’s an example for Apache:


<Files "example.pdf">
  Header set X-Robots-Tag "noindex"
</Files>

This code snippet instructs Apache to add the noindex X-Robots-Tag header to the HTTP response for the file "example.pdf."
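From the crawler’s side, an X-Robots-Tag header value can carry several comma-separated directives, optionally prefixed with a specific user agent (e.g., "googlebot: noindex"). The Python sketch below shows a simplified version of that parsing logic—it’s an illustration of the header’s format, not Google’s actual implementation, and it ignores date-valued directives like unavailable_after:

```python
def x_robots_directives(header_value, bot="googlebot"):
    """Return the set of X-Robots-Tag directives that apply to `bot`.

    Unprefixed directives apply to every bot; a directive prefixed with a
    user-agent token (e.g., "googlebot: noindex") applies only to that bot.
    Simplified sketch: date-valued directives are not handled.
    """
    applies = set()
    for raw in header_value.split(","):
        token = raw.strip().lower()
        agent, sep, rest = token.partition(":")
        if sep and rest.strip():
            # Bot-prefixed directive: keep it only if it targets our bot.
            if agent.strip() == bot:
                applies.add(rest.strip())
        else:
            applies.add(token)
    return applies

print("noindex" in x_robots_directives("noindex, nofollow"))    # True
print("noindex" in x_robots_directives("bingbot: noindex"))     # False
```

Parsing it this way makes the per-bot targeting explicit: the same header can noindex a page for one crawler while leaving it indexable for others.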


Note that using HTTP headers requires more technical knowledge and server access compared to adding meta tags to HTML. If you’re not comfortable modifying server configurations, stick with the meta tag method.


Regardless of the implementation method, search engines will recognize the noindex directive and exclude the page from their indexes.


How to noindex pages on Wix & Wix Studio 


Whether you’re on Wix or Wix Studio, it’s easy to add noindex tags to your pages through the built-in SEO settings. Here’s how:


  1. Open the Wix editor:

    1. Log in to your Wix account and open the editor for the site you want to modify.

  2. Access your Page Settings:

    1. In the editor, choose the page you want to noindex from the Pages & Menu options on the left-hand panel.

    2. Click on the ‘more actions’ (three dots), then click SEO basics.

  3. Apply the noindex:

    1. At the bottom of the SEO basics tab, toggle the switch for “Let search engines index this page (include in search results)” to the off position. This adds a noindex meta tag to the page.


Screenshot of how to noindex a page via Wix’s editor

The noindex tag is now included in the page’s HTML code, and search engines won’t index it the next time they crawl your site.


To view your noindexed pages on Wix, use the Site Inspection tool:


  1. Access your Site Inspection dashboard:

    1. From your Wix dashboard, go to Site & Mobile App > Website & SEO > SEO.

    2. Under Tools and settings, click on Site Inspection.

  2. Check your page status in Google’s index:

    1. In the Site Inspection report, open the filtering options.

    2. In the Index Status drop-down filter, select Excluded to filter for pages that are not indexed. Look for the status “Excluded by ‘noindex’ tag” to indicate pages that are noindexed.



To apply noindex tags to all pages of a certain type (e.g., all blog posts in a category, all product pages, etc.), use the Edit by Page feature in the Wix dashboard:

  • In your Wix dashboard, go to Site & Mobile App > Website & SEO > SEO.

  • Under Tools and settings, select SEO Settings.

  • From there, choose your desired category of pages and go to the Edit by page tab (shown below).


Screenshot of Wix’s Main Pages feature to noindex a whole category of pages
Noindexing a whole category of pages in one go is easy with Wix, especially for pages like gated content.

How to check if a particular page is noindexed


There are a few ways to check if a specific page is noindexed, including:


  • Checking the page’s HTML code

  • Google Search Console’s URL Inspection Tool

  • Browser extensions

  • Crawling tools


Check the page’s HTML Code

To check for a noindex tag in a page’s HTML:


  1. Open the page in your web browser.

  2. Right-click anywhere on the page and select “View Page Source” (or use Ctrl+U on Windows or Option+Command+U on Mac).

  3. In the new tab showing the page’s HTML code, use your browser’s search function (Ctrl+F or Command+F) to search for "noindex".

  4. If the page has a noindex tag, you should see a line like <meta name="robots" content="noindex"> in the code.


The noindex tag in a page’s HTML.

Note that this method only checks for the presence of the noindex tag and doesn’t confirm if search engines have respected the tag and excluded the page from their indexes.


You can also use this method to check for noindex tags on any webpage you can access—not just the ones on your own site.


Use the URL Inspection Tool in Google Search Console

To check the index status of one of your own webpages using Google Search Console:


  1. Log in to your Google Search Console account and select the property for your website.

  2. In the left-hand menu, click on “URL Inspection.”

  3. Enter the URL of the page you want to check in the search bar and press enter.

  4. Google will display information about the page, including its indexing status. If the page is noindexed, you’ll see a message like “URL is not on Google” or “Excluded by ‘noindex’ tag.”


The URL inspection tool in GSC showing that a particular page is not on Google. The page indexing status reads ‘page is not indexed: excluded by ‘noindex’ tag’

This tool provides a definitive answer on whether Google has excluded the page from its index based on the noindex tag, but it only works for pages on sites you have verified ownership of in Search Console.


Use a browser extension

Browser extensions, like Meta SEO Inspector for Chrome, can quickly check a page’s robots meta tags, including noindex.


  1. Install the Meta SEO Inspector extension from the Chrome Web Store.

  2. Open the page you want to check in Chrome.

  3. Click on the Meta SEO Inspector icon in your browser toolbar (it looks like a magnifying glass).

  4. The extension will display a summary of the page’s meta tags, including any robots directives like noindex or nofollow.


Keep in mind that extensions can be handy for spot-checking individual pages, but aren’t as definitive as Google Search Console, as they only look at the page’s HTML and not its actual indexing status.


Crawl the website

Website crawling tools like Screaming Frog or DeepCrawl can check the status of multiple pages simultaneously, providing a comprehensive overview of your site’s indexation.


A screenshot of noindex and nofollow tags on the crawling tool screaming frog
While Screaming Frog’s interface can be overwhelming at first, pages with noindex or nofollow tags can be found with the right filters.

To find noindexed pages using Screaming Frog:


  1. Enter your site’s URL in the tool and click “Start.”

  2. After the crawl is finished, click on the “Directives” tab in the bottom window.

  3. Click on the “Filter” dropdown and select “Noindex.”

  4. The tool will display a list of all the pages on your site with a noindex tag.
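In the same spirit as these crawling tools, a quick-and-dirty batch check over saved HTML can be scripted in a few lines of Python. This regex-based approach is a rough sketch—it assumes the name attribute appears before content, whereas real crawlers (and Screaming Frog) parse the HTML properly:

```python
import re

# Matches a robots/googlebot meta tag whose content includes "noindex".
# Simplification: assumes name="..." comes before content="..." in the tag.
NOINDEX_RE = re.compile(
    r'<meta[^>]+name=["\'](?:robots|googlebot)["\'][^>]+content=["\'][^"\']*noindex',
    re.IGNORECASE,
)

def noindexed_pages(pages):
    """pages: dict of url -> html; returns the URLs carrying a noindex meta tag."""
    return [url for url, html in pages.items() if NOINDEX_RE.search(html)]

pages = {
    "/thank-you": '<head><meta name="robots" content="noindex, nofollow"></head>',
    "/blog/post": '<head><meta name="robots" content="index, follow"></head>',
}
print(noindexed_pages(pages))  # ['/thank-you']
```

For anything beyond a spot check, a dedicated crawler is the better tool—but a script like this can be handy for auditing a handful of exported pages.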


Best practices: How to use the noindex tag correctly


Implementing noindex tags incorrectly can lead to unintended consequences, such as important pages being excluded from search results or search engines misinterpreting your site’s structure. 


While it’s easy to implement a noindex tag, it’s also easy to do it wrong. Here are some tips to ensure your noindex tags lead to SEO improvements, not errors.


  • Don’t block noindexed pages with robots.txt

  • Include self-referential canonical tags on noindexed pages

  • Regularly monitor site indexation


Don’t block noindexed pages with robots.txt

If there’s ever been a case for less is more, it applies to robots.txt and noindex tags. Specifically, the noindex tag only works if search engines can actually crawl the page. 


If you use the robots.txt file to disallow search engines from a page entirely, they won’t be able to see and respect the noindex tag. 

This can lead to a situation where you think a page is excluded from the index, but it actually still shows up in search results.


Instead of using robots.txt to block noindexed pages, prioritize the noindex tag itself. Allow search engines to crawl the page so they can see the noindex tag and understand that it shouldn’t be included in their indexes.


Include self-referential canonical tags on noindexed pages

When you noindex a page, you can also include a self-referential canonical tag. This means adding a canonical tag that points to the page itself as the canonical (or ‘preferred’) version. It might seem counterintuitive to do this on a page that’s being excluded from search results, but it can actually help search engines better understand your site’s structure.


Here’s an example of what a self-referential canonical tag looks like (if our example page’s URL was https://www.example.com/noindexed-page):


<link rel="canonical" 
href="https://www.example.com/noindexed-page">

Including this tag on your noindexed pages helps avoid potential confusion if the page is accessible through multiple URLs (such as with parameters or tracking codes). Without the self-referential canonical, search engines might choose one of these alternate URLs as the canonical by default, which could lead to unexpected indexing behavior.


By specifying the page’s own URL as the canonical, you’re reinforcing the noindex signal and telling search engines that this specific URL is the authoritative version, even though it’s intentionally excluded from the index.
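To illustrate the check, here’s a short Python sketch (standard library only) that extracts the canonical URL from a page’s HTML and tests whether it is self-referential, ignoring query strings such as tracking parameters. The names are mine, for illustration only:

```python
from html.parser import HTMLParser

class CanonicalParser(HTMLParser):
    """Captures the href of the first <link rel="canonical"> tag."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.canonical is None:
            attrs = dict(attrs)
            if (attrs.get("rel") or "").lower() == "canonical":
                self.canonical = attrs.get("href")

def is_self_canonical(page_url, html):
    """True if the page declares its own (query-stripped) URL as canonical."""
    parser = CanonicalParser()
    parser.feed(html)
    return parser.canonical is not None and parser.canonical == page_url.split("?")[0]

html = '<link rel="canonical" href="https://www.example.com/noindexed-page">'
print(is_self_canonical("https://www.example.com/noindexed-page?utm_source=x", html))  # True
```

Running a check like this across your noindexed URLs can confirm that parameterized variants all point back to one authoritative version.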


Regularly monitor site indexation

Even if you’re careful about implementing noindex tags correctly, mistakes can happen. A noindex tag might be accidentally removed during a site update, or a valuable page might get noindexed unintentionally.


To catch these issues early, make a habit of conducting regular site audits. Tools like Google Search Console are invaluable for this purpose. In the Page Indexing report, you can see a list of all the pages on your site that Google has crawled and whether they’re indexed or excluded (and why).


The page indexing report in Google Search Console showing reasons why pages aren’t indexed and the number of affected pages.

If you notice any important pages that are unexpectedly noindexed, or any noindexed pages that suddenly show up in search results, you can take action quickly to resolve the issue before it has a significant impact on your search traffic and business.


Stay on the pulse with your noindex tags


The noindex tag is just one piece of the SEO puzzle, but it’s a crucial one. By strategically using noindex tags in conjunction with other technical SEO tactics like canonicalization, structured data, and smart internal linking, you can create a website that’s not only search engine-friendly but also laser-focused on delivering value to your target audience.


Just remember that, as with any SEO tactic, using noindex tags isn’t a ‘set-it-and-forget-it’ task. Have a system to monitor your pages, whether it’s through regular site audits or checking your Wix dashboard, and you’ll be on track to prioritizing your site’s most important content.


 

Vinnie Wong

Vinnie is a content expert with over 5 years of SEO and content marketing experience. He's worked with Ahrefs, Empire Flippers, and is committed to crafting exceptional content and educating others on the symbiotic relationship between content creation and effective link building.

