
Webmaster Level: Intermediate

Today we are announcing a new user agent for robots.txt called Googlebot-News that gives publishers even more control over their content. In case you haven't heard of robots.txt, it's a web-wide standard that has been in use since 1994 and which has support from all major search engines and well-behaved "robots" that process the web. When a search engine checks whether it has permission to crawl and index a web page, the "check if we're allowed to crawl this page" mechanism is robots.txt.
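If you want to check programmatically which of your URLs a given crawler may fetch, Python's standard library ships a robots.txt parser. The sketch below is only a rough sanity check using placeholder URLs (example.com is hypothetical); note that Python's parser picks the first matching group in file order rather than the most specific one, so its answers can occasionally differ from Google's own precedence rules.

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # placeholder site
rp.read()  # fetches and parses the live robots.txt

# Ask whether each crawler may fetch a given page
print(rp.can_fetch("Googlebot", "https://www.example.com/some-article.html"))
print(rp.can_fetch("Googlebot-News", "https://www.example.com/some-article.html"))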

Until now, publishers who wanted to be in Google's web search index but not in Google News had to contact us via a form. Now, publishers can manage their content in Google News in a more automated way: site owners can simply add Googlebot-News specific directives to their robots.txt file. Similar to the Googlebot and Googlebot-Image user agents, the new Googlebot-News user agent can be used to specify which pages of a website should be crawled and ultimately appear in Google News.

Here are a few examples for publishers:

Include pages in both Google web search and News:
User-agent: Googlebot
Disallow:

This is the easiest case; in fact, a robots.txt file is not even required here.

Include pages in Google web search, but not in News:
User-agent: Googlebot
Disallow:

User-agent: Googlebot-News
Disallow: /

This robots.txt file says that no files are disallowed from Google's general web crawler, called Googlebot, but the user agent "Googlebot-News" is blocked from all files on the website.

Include pages in Google News, but not Google web search:
User-agent: Googlebot
Disallow: /

User-agent: Googlebot-News
Disallow:

When parsing a robots.txt file, Google obeys the most specific directive. The first two lines tell us that Googlebot (the user agent for Google's web index) is blocked from crawling any pages from the site. The next directive, which applies to the more specific user agent for Google News, overrides the blocking of Googlebot and gives permission for Google News to crawl pages from the website.
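To make the precedence rule concrete, here is a toy Python sketch (not Google's actual parser; the group names and rules come from the example above) that picks the group a crawler should obey, preferring the most specific matching user agent and falling back to the wildcard group:

# Toy illustration of "most specific user agent wins"; not Google's parser.
def group_for(crawler, groups):
    # groups maps a user-agent name to its list of Disallow paths
    crawler = crawler.lower()
    matches = [ua for ua in groups
               if ua != "*" and crawler.startswith(ua.lower())]
    if matches:
        # The longest (most specific) matching name wins, so "Googlebot-News"
        # beats "Googlebot" for the News crawler.
        return groups[max(matches, key=len)]
    return groups.get("*", [])

rules = {"Googlebot": ["/"], "Googlebot-News": []}
print(group_for("Googlebot", rules))       # ['/']  -> blocked from web search
print(group_for("Googlebot-News", rules))  # []     -> free to crawl for News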

Block different sets of pages from Google web search and Google News:
User-agent: Googlebot
Disallow: /latest_news

User-agent: Googlebot-News
Disallow: /archives

The pages blocked from Google web search and Google News can be controlled independently. This robots.txt file blocks recent news articles (URLs in the /latest_news folder) from Google web search, but allows them to appear on Google News. Conversely, it blocks premium content (URLs in the /archives folder) from Google News, but allows them to appear in Google web search.

Stop Google web search and Google News from crawling pages:
User-agent: Googlebot
Disallow: /

This robots.txt file tells Google that Googlebot, the user agent for our web search crawler, should not crawl any pages from the site. Because no specific directive for Googlebot-News is given, our News search will abide by the general guidance for Googlebot and will not crawl pages for Google News.

For some queries, we display results from Google News in a discrete box or section on the web search results page, along with our regular web search results. We sometimes do this for Images, Videos, Maps, and Products, too. These are known as Universal search results. Since Google News powers Universal "News" search results, if you block the Googlebot-News user agent then your site's news stories won't be included in Universal search results.

We are currently testing our support for the new user agent. If you see any problems, please let us know. Note that in some situations it is possible for Google to return a link to a page even when we didn't crawl that page. If you'd like to read more about robots.txt, we provide additional documentation on our website. We hope webmasters will enjoy the flexibility and easier management that the Googlebot-News user agent provides.

Webmaster level: Beginner/Intermediate

So there you are, minding your own business, using Webmaster Tools to check out how awesome your site is... but, wait! The Crawl errors page is full of 404 (Not found) errors! Is disaster imminent??


Fear not, my young padawan. Let’s take a look at 404s and how they do (or do not) affect your site:

Q: Do the 404 errors reported in Webmaster Tools affect my site’s ranking?
A:
404s are a perfectly normal part of the web; the Internet is always changing, new content is born, old content dies, and when it dies it (ideally) returns a 404 HTTP response code. Search engines are aware of this; we have 404 errors on our own sites, and we find them all over the web. In fact, we actually prefer that, when you get rid of a page on your site, you make sure that it returns a proper 404 or 410 response code (rather than a “soft 404”). Keep in mind that in order for our crawler to see the HTTP response code of a URL, it has to be able to crawl that URL; if the URL is blocked by your robots.txt file, we won’t be able to crawl it and see its response code. The fact that some URLs on your site no longer exist / return 404s does not affect how your site’s other URLs (the ones that return 200 (Successful)) perform in our search results.
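If you want to double-check what a removed URL actually returns before trusting a report, a quick script is enough. This is a minimal sketch using only Python's standard library; the URLs are placeholders:

from urllib.request import Request, urlopen
from urllib.error import HTTPError

def http_status(url):
    # Returns the HTTP status code for the URL. Note that urlopen follows
    # redirects, so a 301 shows up as the status of the final destination.
    try:
        return urlopen(Request(url, method="HEAD"), timeout=10).status
    except HTTPError as err:
        return err.code  # 404, 410, 500, ...

print(http_status("https://www.example.com/awesome"))  # ideally 200
print(http_status("https://www.example.com/gone"))     # ideally 404 or 410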

Q: So 404s don’t hurt my website at all?
A:
If some URLs on your site 404, this fact alone does not hurt you or count against you in Google’s search results. However, there may be other reasons that you’d want to address certain types of 404s. For example, if some of the pages that 404 are pages you actually care about, you should look into why we’re seeing 404s when we crawl them! If you see a misspelling of a legitimate URL (www.example.com/awsome instead of www.example.com/awesome), it’s likely that someone intended to link to you and simply made a typo. Instead of returning a 404, you could 301 redirect the misspelled URL to the correct URL and capture the intended traffic from that link. You can also make sure that, when users do land on a 404 page on your site, you help them find what they were looking for rather than just saying “404 Not found.”
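As a minimal sketch of that typo-fixing 301 (hypothetical paths, Python standard library only; in practice you would usually configure this in your web server rather than in application code):

from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical map of common misspellings to the URLs people actually meant
TYPO_REDIRECTS = {"/awsome": "/awesome"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = TYPO_REDIRECTS.get(self.path)
        if target:
            self.send_response(301)               # permanent redirect
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(404)                  # genuinely not found

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()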

Q: Tell me more about “soft 404s.”
A:
A soft 404 is when a web server returns a response code other than 404 (or 410) for a URL that doesn’t exist. A common example is when a site owner wants to return a pretty 404 page with helpful information for his users, and thinks that in order to serve content to users he has to return a 200 response code. Not so! You can return a 404 response code while serving whatever content you want. Another example is when a site redirects any unknown URLs to their homepage instead of returning 404s. Both of these cases can have negative effects on our understanding and indexing of your site, so we recommend making sure your server returns the proper response codes for nonexistent content. Keep in mind that just because a page says “404 Not Found,” doesn’t mean it’s actually returning a 404 HTTP response code—use the Fetch as Googlebot feature in Webmaster Tools to double-check. If you don’t know how to configure your server to return the right response codes, check out your web host’s help documentation.
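Here is a sketch of the fix (hypothetical paths, Python standard library only): the handler serves a friendly "not found" page while still sending a real 404 status code, so it is not a soft 404.

from http.server import BaseHTTPRequestHandler, HTTPServer

PAGES = {"/": b"<h1>Home</h1>"}  # stand-in for your real content

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = PAGES.get(self.path)
        if body is None:
            # A helpful, "pretty" error page is fine, as long as the
            # status code sent with it is 404.
            body = b"<h1>Sorry, that page is gone.</h1><p><a href='/'>Back to the homepage</a></p>"
            self.send_response(404)
        else:
            self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()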

Q: How do I know whether a URL should 404, or 301, or 410?
A:
When you remove a page from your site, think about whether that content is moving somewhere else, or whether you no longer plan to have that type of content on your site. If you’re moving that content to a new URL, you should 301 redirect the old URL to the new URL—that way when users come to the old URL looking for that content, they’ll be automatically redirected to something relevant to what they were looking for. If you’re getting rid of that content entirely and don’t have anything on your site that would fill the same user need, then the old URL should return a 404 or 410. Currently Google treats 410s (Gone) the same as 404s (Not found), so it’s immaterial to us whether you return one or the other.

Q: Most of my 404s are for bizarro URLs that never existed on my site. What’s up with that? Where did they come from?
A:
If Google finds a link somewhere on the web that points to a URL on your domain, it may try to crawl that link, whether any content actually exists there or not; and when it does, your server should return a 404 if there’s nothing there to find. These links could be caused by someone making a typo when linking to you, some type of misconfiguration (if the links are automatically generated, e.g. by a CMS), or by Google’s increased efforts to recognize and crawl links embedded in JavaScript or other embedded content; or they may be part of a quick check from our side to see how your server handles unknown URLs, to name just a few. If you see 404s reported in Webmaster Tools for URLs that don’t exist on your site, you can safely ignore them. We don’t know which URLs are important to you vs. which are supposed to 404, so we show you all the 404s we found on your site and let you decide which, if any, require your attention.

Q: Someone has scraped my site and caused a bunch of 404s in the process. They’re all “real” URLs with other code tacked on, like http://www.example.com/images/kittens.jpg" width="100" height="300" alt="kittens"/></a... Will this hurt my site?
A:
Generally you don’t need to worry about “broken links” like this hurting your site. We understand that site owners have little to no control over people who scrape their site, or who link to them in strange ways. If you’re a whiz with regular expressions, you could consider 301 redirecting these URLs to their clean equivalents (a sketch follows below), but generally it’s not worth worrying about. Remember that you can also file a takedown request when you believe someone is stealing original content from your website.
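If you do decide to attempt it, the idea is simply to recover the clean path from the mangled one and 301 redirect to it. A hedged Python sketch, with an illustrative pattern and paths only:

import re

# Illustrative pattern: capture the real /images/... path and drop whatever
# HTML-attribute junk a scraper accidentally tacked on after the quote.
MANGLED = re.compile(r'^(/images/[^"]+\.(?:jpg|png|gif))".*$')

def clean_path(path):
    match = MANGLED.match(path)
    return match.group(1) if match else None

print(clean_path('/images/kittens.jpg" width="100" height="300" alt="kittens"/></a'))
# -> /images/kittens.jpg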

Q: Last week I fixed all the 404s that Webmaster Tools reported, but they’re still listed in my account. Does this mean I didn’t fix them correctly? How long will it take for them to disappear?
A:
Take a look at the ‘Detected’ column on the Crawl errors page—this is the most recent date on which we detected each error. If the date(s) in that column are from before the time you fixed the errors, that means we haven’t encountered these errors since that date. If the dates are more recent, it means we’re continuing to see these 404s when we crawl.

After implementing a fix, you can check whether our crawler is seeing the new response code by using Fetch as Googlebot. Test a few URLs and, if they look good, these errors should soon start to disappear from your list of Crawl errors.

Q: Can I use Google’s URL removal tool to make 404 errors disappear from my account faster?
A:
No; the URL removal tool removes URLs from Google’s search results, not from your Webmaster Tools account. It’s designed for urgent removal requests only, and using it isn’t necessary when a URL already returns a 404, as such a URL will drop out of our search results naturally over time. See the bottom half of this blog post for more details on what the URL removal tool can and can’t do for you.

Still want to know more about 404s? Check out 404 week from our blog, or drop by our Webmaster Help Forum.

After all the problems that arose from the publication of blasphemous content on a Facebook Page, which led to the social network being banned in Pakistan, the authorities are loosening their grip and may soon lift the ban. However, they will continue to block each and every link that leads to blasphemous content. That sounds fairly justified.

Honestly, you cannot censor the web; blocking Facebook entirely is impossible given that there is simply too much data out there, which is why the PTA blocked nearly 800 sites in Pakistan, including YouTube and Wikipedia. The PTA itself has stated that the ban will last until May 31, but there is speculation that it will last longer, or perhaps even become permanent, given that nearly 70 percent of people favored it. Although there is no confirmation of the date on which the ban will be officially lifted, we can only hope that things like this do not happen again, especially on Facebook, which in my opinion should put proper terms and conditions in place.

This past week has been quite dramatic, an emotional roller coaster that exposed a divide in the mindset of the country's population, and a few cheap copy-paste imitations of Facebook also appeared in Pakistan. With all of this going away, or soon to, I hope we try to find better ways to fight this kind of activity than simply banning websites, even though this ban was largely imposed because of political unrest in the country, which might have been made worse had the ban not happened.