How To Respond To Google’s Latest Warning – “Googlebot Cannot Access Your Javascript and CSS Files”

You may have noticed an increase in messages from Google Search Console stating that Googlebot cannot access the CSS and JS files for a given URL. The message is a result of Google’s recent push for greater transparency around search ranking factors, and it has triggered warnings across many sites’ Search Console accounts. While the warning message itself is new, the underlying requirement was added to Google’s Webmaster Guidelines last October.
Below is an example of what the message looks like:
[Image: Google Search Console warning message]

Here are a couple of things you need to know.

The message in Search Console is new, but Google’s initiative isn’t.

Google wants access to crawl EVERYTHING. The message is for those sites where Google can’t crawl everything efficiently or accurately.

The message says that this issue “can result in suboptimal rankings”.

This issue appears to be a low-impact search ranking factor. As the ranking factor itself is not new, you shouldn’t see a drastic drop in your rankings.

If the warning is received, fixing the problem is fairly easy.

If you manage your website account and are comfortable making edits to your robots.txt file, go ahead with the steps below. If you’re not comfortable making changes to your website, contact your site’s webmaster for an update. For clients of Boostability, your Account Manager can address any concerns you have or changes needed.

Steps from Search Engine Land on how to fix your robots.txt file:

Look through the robots.txt file for any of the following lines of code:

Disallow: /*.js$

Disallow: /*.inc$

Disallow: /*.css$

Disallow: /*.php$

If you see any of those lines, remove them. That’s what’s blocking Googlebot from crawling the files it needs to render your site the way users see it.
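Once the blocking lines are removed, you can also add explicit Allow rules so there is no ambiguity about script and style files. Here is a minimal sketch; the patterns are generic, so adjust them to wherever your site actually keeps its assets:

User-agent: Googlebot
Allow: /*.js$
Allow: /*.css$

Googlebot supports the * wildcard and the $ end-of-URL anchor in robots.txt, so these two rules permit it to fetch any URL ending in .js or .css.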

After these steps are completed, you’ll want to run your site through Google’s Fetch and Render tool to confirm the problem is fixed. If you are still experiencing problems, the Fetch and Render tool will provide further instructions on the changes that need to be made.

In addition to Google’s Fetch tool, you can use the robots.txt Tester in Search Console to identify any remaining issues in crawling your website.
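If you’d like to sanity-check a robots.txt file yourself before re-running Fetch and Render, a short script can serve as a rough first pass. Below is a minimal sketch using Python’s standard urllib.robotparser module; the example.com URLs are placeholders for your own domain and asset paths. Note that the standard-library parser follows the original robots.txt specification and doesn’t understand Google’s * and $ wildcard extensions, so Google’s own tools remain the final word.

from urllib.robotparser import RobotFileParser

# Placeholder URLs: swap in your own domain and real asset paths.
ROBOTS_URL = "https://example.com/robots.txt"
ASSET_URLS = [
    "https://example.com/scripts/site.js",
    "https://example.com/styles/main.css",
]

parser = RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # downloads and parses the live robots.txt

for url in ASSET_URLS:
    # can_fetch() answers: may this user agent crawl this URL?
    allowed = parser.can_fetch("Googlebot", url)
    print(url, "allowed" if allowed else "BLOCKED")

If either file prints BLOCKED, look back through robots.txt for the Disallow lines described above.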

 



  1. Jason Corgiat on July 30, 2015 at 8:13 am

    Very helpful, thanks! One question – is there any reason (security or otherwise) a webmaster would NOT want to allow Googlebot access to these files?

    • Andrew Eagar on July 30, 2015 at 12:19 pm

      Great question, Jason. Ultimately it is up to the webmaster. Most of these bits of information don’t pose a security threat in any way. It should be rare for a local business to need to block pages.

  2. Risa Casperson on July 30, 2015 at 9:53 am

    My team and I were actually just discussing this issue this morning, as several of our accounts have received these warnings. We were already taking these steps, so it’s good to know that we were heading in the right direction!

    • Andrew Eagar on July 30, 2015 at 12:17 pm

      Good call Risa! I personally don’t think it is a huge deal as far as rankings are concerned, but it is good to fix when possible.

    • Caz* on August 4, 2015 at 6:12 pm

      Hopefully some of the information in this post, as well as some of the comments, helps you figure out how to fix it!

  3. Andrew Williams on July 30, 2015 at 11:18 am

    We have been seeing this a lot lately, and in most cases WordPress sites are blocking wp-content files that Google wants to be able to access.

    • Andrew Eagar on July 30, 2015 at 12:13 pm

      I have seen that too. What’s interesting is that I don’t think Google NEEDS to see that information; mostly, they just want to see it.

      • Caz* on August 4, 2015 at 6:11 pm

        They want to know it all!

    • Caz* on August 4, 2015 at 6:11 pm

      That’s absolutely true. It seems to be a CMS issue. Platforms like WordPress are doing it all wrong.

  4. Jamison Michael Furr on July 30, 2015 at 2:50 pm

    Thanks for the tip, Kyle!

  5. Maria Williams on July 30, 2015 at 4:50 pm

    Thank you for sharing this information, Caz. Personally, I haven’t experienced or had to deal with this issue so far.

    • Caz* on August 4, 2015 at 6:11 pm

      That’s good to hear!

  6. Andrew Williams on July 31, 2015 at 8:13 am

    I have read that if there is a page you want to block, it is better to put a noindex tag on the page than to include it in the robots.txt file.

    • Caz* on August 4, 2015 at 6:11 pm

      I have found, especially in WordPress, that nofollow just means the SEO credit for links to or from that page isn’t passed. The page is still very much searchable, however. That’s why our “nofollow” “robots” Social Media Challenge blog was moved back to the Knowledge Base as an intranet-accessed page only. Just searching “Caz Bevan” online, it came up as one of the higher results despite the nofollow.

  7. Caz* on August 4, 2015 at 6:09 pm

    Thanks for adding that info!
