Webmaster level: Intermediate
We've recently discussed several ways of handling duplicate content on a single website; today we'll look at ways of handling similar duplication across different websites, across different domains. For some sites, there are legitimate reasons to duplicate content across different websites: for instance, to migrate to a new domain name using a web server that cannot create server-side redirects. To help with issues that arise on such sites, we're announcing our support of the cross-domain rel="canonical" link element. Ways of handling cross-domain content duplication include 301 (permanent) redirects where possible and, where redirects aren't feasible, the cross-domain rel="canonical" link element; both are discussed in the questions below.
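As a minimal sketch, a page on the old domain can declare its preferred cross-domain equivalent with a link element in its head. The domain names and path below are hypothetical placeholders:

<!-- Hypothetical page served from the old domain, e.g. https://old-example.net/widgets/blue-widget -->
<!DOCTYPE html>
<html>
<head>
  <title>Blue Widget</title>
  <!-- Cross-domain canonical: points search engines to the preferred URL on the new domain -->
  <link rel="canonical" href="https://www.example.com/widgets/blue-widget">
</head>
<body>
  <!-- Same (or very similar) content as the preferred page -->
</body>
</html>

Because the link element lives in the HTML itself, it can be used even when the web server can't be configured to send server-side redirects.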
Still have questions?

Q: Do the pages have to be identical?
A: No, but they should be similar. Slight differences are fine.

Q: For technical reasons I can't include a 1:1 mapping for the URLs on my sites. Can I just point the rel="canonical" at the homepage of my preferred site?
A: No; this could result in problems. A mapping from old URL to new URL for each URL on the old site is the best way to use rel="canonical" (see the sketch after these questions).

Q: I'm offering my content / product descriptions for syndication. Do my publishers need to use rel="canonical"?
A: We leave this up to you and your publishers. If the content is similar enough, it might make sense to use rel="canonical", if both parties agree.

Q: My server can't do a 301 (permanent) redirect. Can I use rel="canonical" to move my site?
A: If it's at all possible, you should work with your web host or web server to do a 301 redirect. Keep in mind that we treat rel="canonical" as a hint, and other search engines may handle it differently. But if a 301 redirect is impossible for some reason, then a rel="canonical" may work for you. For more information, see our guidelines on moving your site.

Q: Should I use a noindex robots meta tag on pages with a rel="canonical" link element?
A: No, since those pages would not be equivalent with regard to indexing: one would be allowed while the other would be blocked. Additionally, it's important that these pages are not disallowed from crawling through a robots.txt file; otherwise, search engine crawlers will not be able to discover the rel="canonical" link element.

We hope this makes it easier for you to handle duplicate content in a user-friendly way. Are there still places where you feel that duplicate content is causing your sites problems? Let us know in the Webmaster Help Forum!
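As a brief, hypothetical sketch of the per-URL mapping described above (the domain names and paths are invented for illustration), each page on the old site points to its own equivalent on the preferred site rather than to the homepage, and no noindex meta tag is added alongside the canonical:

<!-- Old page: https://old-example.net/products/red-widget -->
<head>
  <title>Red Widget</title>
  <link rel="canonical" href="https://www.example.com/products/red-widget">
  <!-- Do not add <meta name="robots" content="noindex"> here, and do not block
       these URLs in robots.txt, or crawlers cannot discover the canonical. -->
</head>

<!-- Old page: https://old-example.net/products/green-widget -->
<head>
  <title>Green Widget</title>
  <link rel="canonical" href="https://www.example.com/products/green-widget">
</head>

Pointing both of these pages at the homepage instead would collapse distinct pages into one, which is the problem the answer above warns against.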
Labels: crawling and indexing, intermediate