Googlebot IP Addresses: Complete List (Updated Daily from JSON)

Nati Elimelech·November 24, 2025·7 min read

Official list of nearly 1,800 Googlebot IP addresses and CIDR ranges, updated daily from Google's JSON API. Covers all three crawler types: Common Crawlers, Special-Case Crawlers, and User-Triggered Fetchers.


Complete and up-to-date list of all Google crawler IP addresses and networks (CIDR ranges). Data is fetched directly from Google Search Central’s official API and updates daily.

Network (CIDR)          Version  Type             Reverse DNS
2001:4860:4801:10::/64  IPv6     Common Crawlers  crawl-***.googlebot.com
2001:4860:4801:12::/64  IPv6     Common Crawlers  crawl-***.googlebot.com
2001:4860:4801:13::/64  IPv6     Common Crawlers  crawl-***.googlebot.com
2001:4860:4801:14::/64  IPv6     Common Crawlers  crawl-***.googlebot.com
2001:4860:4801:15::/64  IPv6     Common Crawlers  crawl-***.googlebot.com
2001:4860:4801:16::/64  IPv6     Common Crawlers  crawl-***.googlebot.com
2001:4860:4801:17::/64  IPv6     Common Crawlers  crawl-***.googlebot.com
2001:4860:4801:18::/64  IPv6     Common Crawlers  crawl-***.googlebot.com
2001:4860:4801:19::/64  IPv6     Common Crawlers  crawl-***.googlebot.com
2001:4860:4801:1a::/64  IPv6     Common Crawlers  crawl-***.googlebot.com

Showing the first 10 of 1,795 IP networks.

About This IP Database

This database contains all official Googlebot IP addresses and CIDR ranges published by Google.

  • Total Networks: 1,795 IP networks (CIDR blocks)
  • Update Frequency: Updated daily from Google’s official JSON API
  • Data Source: Google Search Central Documentation
  • Last Update: 2025-11-24
  • Coverage: IPv4 and IPv6 addresses
  • Format: Searchable table with CIDR notation

JSON File Access

Download the raw JSON data directly from Google: developers.google.com/static/search/apis/ipranges/googlebot.json

Why Do You Need This List?

If you manage a website, application, or server, it’s important to identify legitimate Googlebot requests. For technical SEO professionals and site administrators, this list is essential for:

  1. Security - Block fake bots pretending to be Googlebot
  2. Optimization - Prioritize legitimate Google requests
  3. Monitoring - Identify Googlebot crawl patterns in logs
  4. Debugging - Troubleshoot indexing issues in Search Console

Google’s 3 Types of Crawlers

Google uses 3 main types of crawlers, each with its own purpose and IP list:

1. Common Crawlers (Regular Googlebot)

Regular crawlers used for Google products like Google Search. Always respect robots.txt.

  • Reverse DNS: crawl-***-***-***-***.googlebot.com or geo-crawl-***-***-***-***.geo.googlebot.com
  • Quantity: 164 networks (128 IPv6 + 36 IPv4)

2. Special-Case Crawlers

Crawlers that perform specific functions (like AdsBot) where there’s an agreement between the site and the product. May ignore robots.txt.

  • Reverse DNS: rate-limited-proxy-***-***-***-***.google.com
  • Quantity: 280 networks (128 IPv6 + 152 IPv4)

3. User-Triggered Fetchers

Tools and functions where the end user triggers the fetch (like Google Site Verifier). Ignore robots.txt because they’re user-initiated.

  • Reverse DNS:
    • ***-***-***-***.gae.googleusercontent.com
    • google-proxy-***-***-***-***.google.com
  • Quantity: 1,351 networks (724 IPv6 + 627 IPv4)
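The three families can be told apart by the reverse-DNS suffixes listed above. A minimal Python sketch of that mapping (the suffix patterns come from this section; `classify_crawler` is an illustrative helper, not an official API):

```python
# Classify an already reverse-DNS-verified Google crawler hostname
# by the suffix patterns listed above.

def classify_crawler(hostname: str) -> str:
    host = hostname.rstrip(".").lower()
    # google.com hosts are distinguished by their label prefix.
    if host.endswith(".google.com"):
        label = host.split(".")[0]
        if label.startswith("rate-limited-proxy-"):
            return "Special-Case Crawlers"
        if label.startswith("google-proxy-"):
            return "User-Triggered Fetchers"
        return "Unknown"
    # crawl-* and geo-crawl-* hosts both end in .googlebot.com.
    if host.endswith(".googlebot.com"):
        return "Common Crawlers"
    if host.endswith(".gae.googleusercontent.com"):
        return "User-Triggered Fetchers"
    return "Unknown"
```

Anything that does not match one of the published suffixes should be treated as unverified, even if the User-Agent header claims to be Googlebot.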

How many IP addresses does Googlebot use?
Google uses hundreds of different IP networks. The list includes approximately 1,800 prefixes (CIDR blocks) covering both IPv4 and IPv6. The list automatically updates from Google's official API.
What's the difference between Common Crawlers and Special Crawlers?
Common Crawlers (regular Googlebot) always respect robots.txt rules. Special Crawlers (like AdsBot) may ignore robots.txt in some cases, depending on the agreement with the site owner.
Does Googlebot always respect robots.txt?
It depends on the crawler type: Common Crawlers always respect robots.txt. Special case crawlers may ignore it in certain cases. User triggered fetchers ignore robots.txt because they're initiated by user request.

IP Address Statistics

Network Distribution by Type

Crawler Type             IPv4 Networks  IPv6 Networks  Total
Common Crawlers          36             128            164
Special Crawlers         152            128            280
User-Triggered Fetchers  627            724            1,351
Total                    815            980            1,795

CIDR Block Ranges

Among the IPv4 ranges, the smallest block is a /32 (a single IP) and the largest is a /19 (8,192 IPs); most are /24 blocks (256 IPs each).
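These block sizes are easy to verify with Python's standard ipaddress module:

```python
import ipaddress

# Address counts for the IPv4 block sizes mentioned above.
for cidr in ["66.249.64.0/19", "66.249.64.0/24", "66.249.64.1/32"]:
    net = ipaddress.ip_network(cidr, strict=False)
    print(f"{cidr} -> {net.num_addresses} addresses "
          f"({net.network_address} - {net.broadcast_address})")
# 66.249.64.0/19 -> 8192 addresses (66.249.64.0 - 66.249.95.255)
# 66.249.64.0/24 -> 256 addresses (66.249.64.0 - 66.249.64.255)
# 66.249.64.1/32 -> 1 addresses (66.249.64.1 - 66.249.64.1)
```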

Comparing Googlebot to Other Crawlers

Unlike Googlebot’s 1,795 published IP ranges, other crawlers take different approaches:

  • Googlebot: Full list published and updated daily
  • Bingbot: Publishes its own IP range list (bingbot.json), plus reverse DNS verification
  • Facebook Crawler: Limited published ranges
  • Semrush Bot: Published list available

Having a complete, updated list lets you whitelist legitimate crawlers in your firewall, analyze crawler behavior in logs, and optimize server resources for important bots.

How often is the Googlebot IP list updated?
Google updates the official Googlebot IP list daily, typically around midnight UTC. The JSON files are refreshed every 24 hours to reflect any changes in Google's crawler infrastructure.
Can I download the Googlebot IP address list?
Yes! Google provides the complete IP list in JSON format. You can download it directly from Google's API at developers.google.com/static/search/apis/ipranges/googlebot.json. The list includes separate JSON files for Common Crawlers, Special Crawlers, and User-Triggered Fetchers.
What's the difference between IPv4 and IPv6 Googlebot addresses?
Googlebot uses both IPv4 and IPv6 addresses. IPv6 prefixes typically start with '2001:4860:' and use 128 bit addresses, while IPv4 addresses are in the familiar dotted decimal format. The list contains 815 IPv4 networks and 980 IPv6 networks.

How to Use This List

Check Using the On-Page Tool

Enter an IP address in the search field above and the tool will automatically check if it’s in one of the official networks.
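The same check can be done offline with Python's standard ipaddress module, given the CIDR list. A minimal sketch (the two sample ranges are taken from this page; in practice you would load the full list from googlebot.json):

```python
import ipaddress

# Sample ranges from this page; load the full list from
# Google's googlebot.json in production.
GOOGLEBOT_NETWORKS = [
    ipaddress.ip_network(cidr)
    for cidr in ["66.249.64.0/19", "2001:4860:4801:10::/64"]
]

def is_googlebot_ip(ip: str) -> bool:
    """Return True if ip falls inside any known Googlebot network."""
    addr = ipaddress.ip_address(ip)
    # `in` returns False on an IPv4/IPv6 version mismatch,
    # so mixed lists are safe.
    return any(addr in net for net in GOOGLEBOT_NETWORKS)
```

Note that a list lookup alone can go stale between daily updates; combining it with the reverse DNS check below gives a stronger guarantee.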

Check Using Reverse DNS

# Linux / macOS
host 66.249.66.1

# Windows
nslookup 66.249.66.1

If it’s legitimate Googlebot, you’ll get a result like:

1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com
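The full verification Google recommends is a two-step "forward-confirmed" lookup: reverse-resolve the IP, check the hostname's domain, then forward-resolve that hostname and confirm it maps back to the same IP. A Python sketch (the resolver functions are injectable parameters, my own design choice, so the logic can be tested without network access):

```python
import socket

def verify_googlebot(ip: str,
                     reverse=lambda ip: socket.gethostbyaddr(ip)[0],
                     forward=lambda host: socket.gethostbyname(host)) -> bool:
    """Forward-confirmed reverse DNS check for Googlebot."""
    try:
        hostname = reverse(ip)          # step 1: reverse lookup
    except OSError:
        return False
    # step 2: hostname must belong to a Google crawler domain
    if not hostname.endswith((".googlebot.com", ".google.com",
                              ".googleusercontent.com")):
        return False
    try:
        return forward(hostname) == ip  # step 3: forward-confirm
    except OSError:
        return False
```

The forward-confirmation step matters: anyone can point reverse DNS for their own IP at a `googlebot.com`-looking name, but they cannot make Google's forward DNS resolve that name back to their IP.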

Block/Allow in Firewall

If you want to block or allow only legitimate Googlebot, use the CIDR block list. Example for iptables:

# Allow Googlebot
iptables -A INPUT -s 66.249.64.0/19 -j ACCEPT

Use in Nginx

# geo block to identify Googlebot
geo $is_googlebot {
    default 0;
    66.249.64.0/19 1;
    # ... other networks
}
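Maintaining that geo block by hand is impractical with nearly 1,800 networks, so in practice you would generate it from the JSON feed. A sketch that renders a prefix list as an nginx include file (`generate_geo_conf` is an illustrative helper of my own; the prefix dictionaries follow the JSON structure shown in the next section):

```python
# Render Googlebot prefixes as the body of an nginx `geo` block.

def generate_geo_conf(prefixes):
    lines = ["geo $is_googlebot {", "    default 0;"]
    for p in prefixes:
        # Each entry has either an ipv4Prefix or an ipv6Prefix key.
        cidr = p.get("ipv4Prefix") or p.get("ipv6Prefix")
        if cidr:
            lines.append(f"    {cidr} 1;")
    lines.append("}")
    return "\n".join(lines)

sample = [{"ipv4Prefix": "66.249.64.0/19"},
          {"ipv6Prefix": "2001:4860:4801:10::/64"}]
print(generate_geo_conf(sample))
```

Write the output to a file and pull it into your server config with nginx's `include` directive, regenerating it on the same daily cadence as Google's updates.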

Use the JSON API Programmatically

For automated systems, fetch the JSON directly:

# Download all Googlebot IPs
curl https://developers.google.com/static/search/apis/ipranges/googlebot.json

# Example response structure
{
  "creationTime": "2025-11-12T00:00:00.000000Z",
  "prefixes": [
    {
      "ipv4Prefix": "66.249.64.0/19"
    },
    {
      "ipv6Prefix": "2001:4860::/32"
    }
  ]
}

Google updates these files daily, usually around midnight UTC.
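The same fetch-and-parse step in Python might look like this (shown against the sample payload above so it runs offline; uncomment the urllib call to fetch the live file):

```python
import json
from urllib.request import urlopen

GOOGLEBOT_JSON = ("https://developers.google.com/static/search/"
                  "apis/ipranges/googlebot.json")

def extract_prefixes(payload: dict) -> list:
    """Flatten the JSON structure into a list of CIDR strings."""
    return [p.get("ipv4Prefix") or p.get("ipv6Prefix")
            for p in payload.get("prefixes", [])]

# Live fetch (requires network access):
#   payload = json.load(urlopen(GOOGLEBOT_JSON))
# Offline demo using the sample response structure above:
payload = json.loads("""
{"creationTime": "2025-11-12T00:00:00.000000Z",
 "prefixes": [{"ipv4Prefix": "66.249.64.0/19"},
              {"ipv6Prefix": "2001:4860::/32"}]}
""")
print(extract_prefixes(payload))  # ['66.249.64.0/19', '2001:4860::/32']
```

The `creationTime` field tells you when Google last regenerated the file, which is useful for deciding whether a cached copy is still fresh.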

Should I whitelist Googlebot IP addresses in my firewall?
If you use IP based access control, yes. Whitelisting Googlebot's official IP ranges ensures Google can crawl your site even if you block other traffic. Also verify the reverse DNS hostname ends in googlebot.com or google.com for additional security.
Does Googlebot use different IPs for different countries?
Yes, Google uses geo distributed crawling with IP addresses from multiple countries, not just the USA. This helps Google understand how your site performs from different geographic locations, especially for locale adaptive pages.
What is the CIDR notation in the IP list?
CIDR (Classless Inter Domain Routing) notation shows IP address ranges efficiently. For example, '66.249.64.0/19' represents 8,192 IP addresses from 66.249.64.0 to 66.249.95.255. The '/19' indicates the network prefix length.

Sources

  • Google Search Central documentation: developers.google.com/static/search/apis/ipranges/googlebot.json
Last updated: Data updates daily from Google’s official API.

AUTHOR
Nati Elimelech
Leading Tech SEO expert with 20+ years of experience. Former Head of SEO & Accessibility at Wix, where I built SEO systems serving millions of websites. I specialize in solving complex technical SEO challenges at enterprise scale and translating SEO requirements into language that product and engineering teams understand.