
Hiding content from search engines can also become an SEO issue

SEO work usually revolves around getting pages indexed, but sometimes the opposite, keeping content out of the search engines, also becomes a problem, and an increasingly common one. Confidential information, duplicate content, advertising links and the like may all need to be blocked from indexing. Methods commonly used in the past include password protection, putting content behind forms, using JS/Ajax, using Flash, and so on. Today I saw a post on the Google Webmaster blog pointing out that none of these methods is reliable anymore.

Using Flash

Google began trying to crawl Flash files a few years ago, and simple text content in Flash can already be crawled. Links inside Flash can also be followed.

Forms

Google’s spider can also fill out forms and crawl the pages returned by POST requests. This has been visible in server logs for a while now.
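Roughly the kind of form in question looks like this (the /search action and field name are made up for illustration). Googlebot has been seen submitting simple forms like this, so the result pages behind such a form are not safely hidden:

    <!-- hypothetical search form; the result pages behind it may still be crawled -->
    <form action="/search" method="post">
      <input type="text" name="q">
      <input type="submit" value="Search">
    </form>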

JS/Ajax

Using JS links was long considered a non-search-engine-friendly method, so it was assumed to keep spiders from crawling. But two or three years ago I already noticed that JS links cannot stop Google’s spider: not only are URLs that appear inside JS crawled, simple JS can also be executed to find more URLs.
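As a rough sketch (the URLs are invented for illustration), both of the following patterns were once thought to hide the target URL, but Googlebot can pick it up anyway:

    <!-- URL placed in an onclick handler instead of href -->
    <a href="javascript:void(0)" onclick="window.location='/hidden-page.html'">More</a>

    <!-- URL that only appears inside a script -->
    <a id="more" href="#">More products</a>
    <script>
      document.getElementById('more').href = '/products/secret-item.html';
    </script>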

A few days ago it was found that comments left through the Facebook comments plugin, which many websites use, are being crawled and indexed, and that plugin itself is AJAX. This is good news for me. The product review feature on one of my experimental e-commerce sites cost me a lot of thought precisely because of this. The Facebook comments plugin has big advantages (what they are, I will get to when I have time); the only problem was that the comments are rendered with AJAX and could not be crawled, while getting product reviews indexed is one of the purposes (they generate original content). For a long time I thought there was no solution, so I kept the Facebook comments plugin and also turned on the shopping cart’s own comment feature. Now that Facebook comments can be indexed, there is no need for two comment systems.

Robots file

The only way to make sure content is not crawled is to disallow it in the robots file. But this has a drawback too: it loses link weight. Although the content cannot be crawled, the page becomes a bottomless pit that only receives link weight and passes none of it out. And disallowing crawling does not necessarily mean the URL will not be indexed.
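For reference, a minimal robots file sketch looks like this (the /private/ directory is just an example). It stops crawling, but as noted above, the disallowed URLs can still end up in the index if other pages link to them:

    # robots.txt - block all spiders from crawling a directory
    User-agent: *
    Disallow: /private/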

Nofollow

Nofollow does not guarantee the page will not be indexed either. Even if every link to the page on your own site carries nofollow, there is no guarantee that other people’s sites will not link to it, and search engines can still find the page.
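Nofollow is added per link, something like the following (the URL is only an example). It describes this one link and says nothing about the target page itself:

    <!-- the nofollow applies to this link only, not to /hidden-page.html itself -->
    <a href="/hidden-page.html" rel="nofollow">hidden page</a>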

Meta Noindex + Follow

(Added November 3) Reader no1se points out that to block indexing without trapping link weight, you can use meta noindex plus follow on the page: the page itself is not indexed, but its link weight can still flow out. This is indeed a better method. There is still one problem: it wastes the spider’s crawl time. If any reader knows a method that blocks indexing, loses no link weight, and wastes no crawl time, please leave a comment; your service to SEO will be immeasurable.
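A minimal sketch of this combination, placed in the page’s head, would be something like:

    <!-- noindex keeps the page out of the index, follow lets link weight flow out -->
    <head>
      <meta name="robots" content="noindex,follow">
    </head>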

How to keep pages out of the index is a question worth thinking about. Anyone who does not feel how serious it is can count how much duplicate content, low-quality content, and how many URLs with no search value (but convenient and useful for users, so they cannot simply be removed), such as category-filter URLs, exist on their site.
