News and Tutorials from Votre Codeur | SEO | Website Creation | Software Development

From web contents: How search results may differ based on accented characters and interface languages (2013)

When a searcher enters a query that includes a word with accented characters, our algorithms consider web pages that contain versions of that word both with and without the accent. For instance, if a searcher enters [México], we'll return results for pages about both "Mexico" and "México."



Conversely, if a searcher enters a query without using accented characters, but a word in that query could be spelled with them, our algorithms consider web pages with both the accented and non-accented versions of the word. So if a searcher enters [Mexico], we'll return results for pages about both "Mexico" and "México."
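Google doesn't publish its matching algorithm, but the equivalence described above is essentially accent folding: reduce each word to its unaccented base form before comparing. A minimal sketch in Python (the function name and approach are ours, purely illustrative):

    import unicodedata

    def strip_accents(text):
        # Decompose each character (NFD), then drop the combining
        # accent marks, keeping only the base characters.
        decomposed = unicodedata.normalize("NFD", text)
        return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

    # Both spellings fold to the same string, so a query for either
    # form can match pages containing either form.
    print(strip_accents("México"))  # -> Mexico
    print(strip_accents("Mexico"))  # -> Mexico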



How the searcher's interface language comes into play
The searcher's interface language is taken into account during this process. For instance, the set of accented characters that are treated as equivalent to non-accented characters varies based on the searcher's interface language, as language-level rules for accenting differ.

Also, documents in the chosen interface language tend to be considered more relevant. If a searcher's interface language is English, our algorithms assume that queries are in English and that the searcher prefers documents in English.

This means that search results for the same query can vary depending on the searcher's interface language. They can also vary depending on the searcher's location (based on IP address) and on whether the searcher chooses to see results only in a specified language. If the searcher has personalized search enabled, that will also influence the results.

For example, when a searcher queries [Mexico] with the interface language set to Spanish, more results with accented characters are returned, even though the query didn't include the accented character.

How to restrict search results
To obtain search results for only a specific version of the word (with or without accented characters), you can place a + before the word. For instance, the search [+Mexico] returns only pages about "Mexico" (and not "México"), while the search [+México] returns only pages about "México" (and not "Mexico"). Note that you may see some search results that don't appear to use the version of the word you specified in your query; that version may appear within the content of the page or in anchor text pointing to the page, rather than in the title or description listed in the results. (You can see the top anchor text used to link to your site by choosing Statistics > Page analysis in Webmaster Tools.)
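Google's actual matching isn't public, but the behavior described above can be mimicked in a few lines. A hypothetical sketch in Python (fold() and term_matches() are our illustrative names, not Google's implementation):

    import unicodedata

    def fold(text):
        # Accent folding, as in the earlier sketch.
        nfd = unicodedata.normalize("NFD", text)
        return "".join(ch for ch in nfd if not unicodedata.combining(ch))

    def term_matches(query_term, page_term):
        # A leading "+" requests the exact form, accents included;
        # otherwise both terms are accent-folded before comparison.
        if query_term.startswith("+"):
            return query_term[1:] == page_term
        return fold(query_term).lower() == fold(page_term).lower()

    print(term_matches("Mexico", "México"))   # True: folded match
    print(term_matches("+Mexico", "México"))  # False: exact form required
    print(term_matches("+México", "México"))  # True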


From web contents: Supplemental goes mainstream (2013)


When Google originally introduced Supplemental Results in 2003, our main web index had billions of web pages. The supplemental index made it possible to index even more web pages and, just like our main web index, make this content available when generating relevant search results for user queries. This was especially useful for queries that did not return many results from the main web index, since for these the supplemental index allowed us to query even more web pages. Because we place fewer constraints on sites we crawl for the supplemental index, web pages that are not in the main web index can be included in the supplemental index. These are often pages with lower PageRank or with more complex URLs. Thus the supplemental index (read more, and see Matt's talk about it on video) serves a very important purpose: to index as much of the relevant content that we crawl as possible.

The changes we make must focus on improving the search experience for our users. Since 2006, we've completely overhauled the system that crawls and indexes supplemental results. The current system provides deeper and more continuous indexing. Additionally, we are indexing URLs with more parameters and are continuing to place fewer restrictions on the sites we crawl. As a result, Supplemental Results are fresher and more comprehensive than ever. We're also working towards showing more Supplemental Results by ensuring that every query is able to search the supplemental index, and expect to roll this out over the course of the summer.

The distinction between the main and the supplemental index is therefore continuing to narrow. Given all the progress that we've been able to make so far, and thinking ahead to future improvements, we've decided to stop labeling these URLs as "Supplemental Results." Of course, you will continue to benefit from Google's supplemental index being deeper and fresher.

From web contents: Better geographic choices for webmasters (2013)

Written by Amanda Camp, Webmaster Tools, and Trystan Upstill, International Search Quality Team

Starting today Google Webmaster Tools helps you better control the country association of your content on a per-domain, per-subdomain, or per-directory level. The information you give us will help us determine how your site appears in our country-specific search results, and also improves our search results for geographic queries.

We currently allow you to associate your site with only a single country and location. If your site is relevant to an even more specific area, such as a particular state or region, feel free to tell us that. Or let us know if your site isn't relevant to any particular geographic location at all. If no information is entered in Webmaster Tools, we'll continue to make geographic associations largely based on the top-level domain (e.g. .co.uk or .ca) and the IP address of the web server from which the content was served.
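A rough sketch of that order of precedence (Python; the mapping, function, and outputs are hypothetical illustrations of the behavior described above, not Google's code):

    # Hypothetical illustration of the association order described above.
    CCTLD_COUNTRY = {".co.uk": "United Kingdom", ".ca": "Canada", ".ru": "Russia"}

    def geo_association(domain, explicit_setting=None, server_ip_country=None):
        # 1. An explicit Webmaster Tools setting wins (non-ccTLD sites only).
        if explicit_setting is not None:
            return explicit_setting
        # 2. Otherwise a country-code TLD decides.
        for suffix, country in CCTLD_COUNTRY.items():
            if domain.endswith(suffix):
                return country
        # 3. Otherwise fall back to the location of the web server's IP.
        return server_ip_country  # may be None: no association

    print(geo_association("example.co.uk"))                        # United Kingdom
    print(geo_association("example.com", server_ip_country="US"))  # US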

For example, you could associate www.google.com with Hungary while leaving www.google.com/webmasters/tools unassociated with any particular country.
This feature is not available for sites with a country-code top-level domain, as we'll always associate such a site with its domain's country. (For example, google.ru will always be the version of Google associated with Russia.)


Note that in the same way that Google may show your business address if you register your brick-and-mortar business with the Google Local Business Center, we may show the information that you give us publicly.

This feature was largely initiated by your feedback, so thanks for the great suggestion. Google is committed to helping more sites and users get better and more relevant results. This is a new step as we continue to think about how to improve searches around the world.

We encourage you to tell us what you think in the Webmaster Tools section of our discussion group.

From web contents: URL removal explained, Part I: URLs & directories (2013)

Webmaster level: All

There's a lot of content on the Internet these days. At some point, something may turn up online that you would rather not have out there—anything from an inflammatory blog post you regret publishing, to confidential data that accidentally got exposed. In most cases, deleting or restricting access to this content will cause it to naturally drop out of search results after a while. However, if you urgently need to remove unwanted content that has gotten indexed by Google and you can't wait for it to naturally disappear, you can use our URL removal tool to expedite the removal of content from our search results as long as it meets certain criteria (which we'll discuss below).

We've got a series of blog posts lined up for you explaining how to successfully remove various types of content, and common mistakes to avoid. In this first post, I'm going to cover a few basic scenarios: removing a single URL, removing an entire directory or site, and reincluding removed content. I also strongly recommend our previous post on managing what information is available about you online.

Removing a single URL

In general, in order for your removal requests to be successful, the owner of the URL(s) in question—whether that's you, or someone else—must have indicated that it's okay to remove that content. For an individual URL, this can be indicated in any of three ways: by blocking the URL in robots.txt, by adding a noindex meta tag to the page, or by having the page return a 404 or 410 status code. Before submitting a removal request, you can check whether the URL is correctly blocked (a scripted version of these checks is sketched after this list):
  • robots.txt: You can check whether the URL is correctly disallowed using either the Fetch as Googlebot or Test robots.txt features in Webmaster Tools.
  • noindex meta tag: You can use Fetch as Googlebot to make sure the <meta name="robots" content="noindex"> tag appears somewhere between the <head> and </head> tags. If you want to check a page you can't verify in Webmaster Tools, you can open the URL in a browser, go to View > Page source, and make sure the meta tag is between the <head> and </head> tags.
  • 404 / 410 status code: You can use Fetch as Googlebot, or tools like Live HTTP Headers or web-sniffer.net, to verify whether the URL is actually returning the correct code. Sometimes "deleted" pages may say "404" or "Not found" on the page, but actually return a 200 status code in the HTTP response; so it's good to use a proper header-checking tool to double-check.
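The last two checks are easy to script yourself. A minimal sketch using only Python's standard library (the URL is a placeholder, and the noindex scan is deliberately naive about attribute order and quoting):

    import urllib.error
    import urllib.request

    def check_removal_readiness(url):
        # Report the real HTTP status code: a page can display
        # "404 Not Found" as text yet still return 200.
        try:
            with urllib.request.urlopen(url) as resp:
                status = resp.status
                body = resp.read().decode("utf-8", errors="replace")
        except urllib.error.HTTPError as e:
            return e.code, False  # a 404 or 410 here is what we want to see
        # Naive scan for a robots noindex meta tag in the HTML.
        has_noindex = 'content="noindex"' in body.lower()
        return status, has_noindex

    print(check_removal_readiness("http://www.example.com/embarrassing-stuff.html"))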
If unwanted content has been removed from a page but the page hasn't been blocked in any of the above ways, you will not be able to completely remove that URL from our search results. This is most common when you don't own the site that's hosting the content. We cover what to do in this situation in Part II of our removals series.

If a URL meets one of the above criteria, you can remove it by going to http://www.google.com/webmasters/tools/removals, entering the URL that you want to remove, and selecting the "Webmaster has already blocked the page" option. Note that you should enter the URL where the content was hosted, not the URL of the Google search where it's appearing. For example, enter
   http://www.example.com/embarrassing-stuff.html
not
   http://www.google.com/search?q=embarrassing+stuff

This article has more details about making sure you're entering the proper URL. Remember that if you don't tell us the exact URL that's troubling you, we won't be able to remove the content you had in mind.

Removing an entire directory or site

In order for a directory or site-wide removal to be successful, the directory or site must be disallowed in the site's robots.txt file. For example, in order to remove the http://www.example.com/secret/ directory, your robots.txt file would need to include:
   User-agent: *
   Disallow: /secret/

It isn't enough for the root of the directory to return a 404 status code, because it's possible for a directory to return a 404 but still serve out files underneath it. Using robots.txt to block a directory (or an entire site) ensures that all the URLs under that directory (or site) are blocked as well. You can test whether a directory has been blocked correctly using either the Fetch as Googlebot or Test robots.txt features in Webmaster Tools.
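If you want to test a rule outside Webmaster Tools as well, Python's standard urllib.robotparser applies the same kind of matching locally (a quick sketch against the example rule above; note that Googlebot's own parsing may differ in edge cases):

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("http://www.example.com/robots.txt")
    rp.read()  # fetches and parses the live robots.txt

    # With "Disallow: /secret/" for all user-agents, the directory and
    # everything under it should be reported as blocked.
    print(rp.can_fetch("Googlebot", "http://www.example.com/secret/"))
    print(rp.can_fetch("Googlebot", "http://www.example.com/secret/file.html"))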

Only verified owners of a site can request removal of an entire site or directory in Webmaster Tools. To request removal of a directory or site, click on the site in question, then go to Site configuration > Crawler access > Remove URL. If you enter the root of your site as the URL you want to remove, you'll be asked to confirm that you want to remove the entire site. If you enter a subdirectory, select the "Remove directory" option from the drop-down menu.

Reincluding content

You can cancel removal requests for any site you own at any time, including those submitted by other people. In order to do so, you must be a verified owner of this site in Webmaster Tools. Once you've verified ownership, you can go to Site configuration > Crawler access > Remove URL > Removed URLs (or > Made by others) and click "Cancel" next to any requests you wish to cancel.

Still have questions? Stay tuned for the rest of our series on removing content from Google's search results. If you can't wait, much has already been written about URL removals, and troubleshooting individual cases, in our Help Forum. If you still have questions after reading others' experiences, feel free to ask. Note that, in most cases, it's hard to give relevant advice about a particular removal without knowing the site or URL in question. We recommend sharing your URL by using a URL shortening service so that the URL you're concerned about doesn't get indexed as part of your post; some shortening services will even let you disable the shortcut later on, once your question has been resolved.

Edit: Read the rest of this series:
Part II: Removing & updating cached content
Part III: Removing content you don't own
Part IV: Tracking requests, what not to remove

Companion post: Managing what information is available about you online
