News and Tutorials from Votre Codeur | SEO | Website creation | Software development

Using schema.org markup for videos (2013)

Webmaster level: All

Videos are one of the most common types of results on Google and we want to make sure that your videos get indexed. Today, we're also launching video support for schema.org. Schema.org is a joint effort between Google, Microsoft, Yahoo! and Yandex and is now the recommended way to describe videos on the web. The markup is very simple and can be easily added to most websites.

Adding schema.org video markup is just like adding any other schema.org data. Simply define an itemscope, an itemtype="http://schema.org/VideoObject", and make sure to set the name, description, and thumbnailUrl properties. You'll also need either the embedURL — the location of the video player — or the contentURL — the location of the video file. A typical video player with markup might look like this:

<div itemscope itemtype="http://schema.org/VideoObject">
  <h2>Video: <span itemprop="name">Title</span></h2>
  <meta itemprop="duration" content="T1M33S" />
  <meta itemprop="thumbnailUrl" content="thumbnail.jpg" />
  <meta itemprop="embedURL"
    content="http://www.example.com/videoplayer.swf?video=123" />
  <object ...>
    <embed type="application/x-shockwave-flash" ...>
  </object>
  <span itemprop="description">Video description</span>
</div>


Using schema.org markup will not affect any Video Sitemaps or mRSS feeds you're already using. In fact, we still recommend that you also use a Video Sitemap, because it alerts us to new or updated videos faster and provides advanced functionality such as country and platform restrictions.
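
If you don't yet publish a Video Sitemap, here is a minimal sketch (not Google-provided code) that writes a single Video Sitemap entry using Python's standard ElementTree module. The URLs, the output file name, and the particular set of tags shown are illustrative assumptions; consult the Video Sitemaps documentation for the full list of required and recommended tags.

import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
VIDEO_NS = "http://www.google.com/schemas/sitemap-video/1.1"
ET.register_namespace("", SITEMAP_NS)
ET.register_namespace("video", VIDEO_NS)

urlset = ET.Element("{%s}urlset" % SITEMAP_NS)
url = ET.SubElement(urlset, "{%s}url" % SITEMAP_NS)
# Hypothetical landing page that hosts the video.
ET.SubElement(url, "{%s}loc" % SITEMAP_NS).text = "http://www.example.com/videos/123.html"

video = ET.SubElement(url, "{%s}video" % VIDEO_NS)
ET.SubElement(video, "{%s}thumbnail_loc" % VIDEO_NS).text = "http://www.example.com/thumbnail.jpg"
ET.SubElement(video, "{%s}title" % VIDEO_NS).text = "Title"
ET.SubElement(video, "{%s}description" % VIDEO_NS).text = "Video description"
# Either a content location (the video file) or a player location is needed.
ET.SubElement(video, "{%s}content_loc" % VIDEO_NS).text = "http://www.example.com/video123.flv"

ET.ElementTree(urlset).write("video_sitemap.xml", encoding="utf-8", xml_declaration=True)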

Since this means that there are now a number of ways to tell Google about your videos, choosing the right format can seem difficult. In order to make the video indexing process as easy as possible, we’ve put together a series of videos and articles about video indexing in our new Webmasters EDU microsite.

For more information, you can go through the Webmasters EDU video articles, read the full schema.org VideoObject specification, or ask questions in the Webmaster Help Forum. We look forward to seeing more of your video content in Google Search.


URL removal explained, Part III: Removing content that you don't own (2013)

Webmaster Level: All

Welcome to the third episode of our URL removals series! In episodes one and two, we talked about expediting the removal of content that's under your control and requesting expedited cache removals. Today, we're covering how to use Google's public URL removal tool to request removal of content from Google’s search results when the content originates on a website not under your control.

Google offers two tools that provide a way to request expedited removal of content:

1. Verified URL removal tool: for requesting to remove content from Google’s search results when it’s published on a site of which you’re a verified owner in Webmaster Tools (like your blog or your company’s site)

2. Public URL removal tool: for requesting to remove content from Google’s search results when it’s published on a site for which you can’t verify ownership (like your friend’s blog)

Sometimes a situation arises where the information you want to remove originates from a site that you don't own or can't control. Since each individual webmaster controls their site and their site’s content, the best way to update or remove results from Google is for the site owner (where the content is published) to either block crawling of the URL, modify the content source, or remove the page altogether. If the content isn't changed, it would just reappear in our search results the next time we crawled it. So the first step to remove content that's hosted on a site you don't own is to contact the owner of the website and request that they remove or block the content in question.
  • Removed or blocked content

    If the website owner removes a page, requests for the removed page should return a "404 Not Found" response or a "410 Gone" response. If they choose to block the page from search engines, then the page should either be disallowed in the site's robots.txt file or contain a noindex meta tag. Once one of these requirements is met, you can submit a removal request using the "Webmaster has already blocked the page" option.



    Sometimes a website owner will claim that they’ve blocked or removed a page but they haven’t technically done so. If they claim a page has been blocked you can double check by looking at the site’s robots.txt file to see if the page is listed there as disallowed.
    User-agent: *
    Disallow: /blocked-page/
    Another place to check if a page has been blocked is within the page’s HTML source code itself. You can visit the page and choose “View Page Source” from your browser. Is there a meta noindex tag in the HTML “head” section?
    <html>
    <head>
    <title>blocked page</title>
    <meta name="robots" content="noindex">
    </head>
    ...
    If they inform you that the page has been removed, you can confirm this by using an HTTP response testing tool like the Live HTTP Headers add-on for the Firefox browser. With this add-on enabled, you can request any URL in Firefox to test that the HTTP response is actually 404 Not Found or 410 Gone. A small script that combines all three checks (robots.txt, meta noindex, and HTTP status) appears after this list.

  • Content removed from the page

    Once you've confirmed that the content you're seeking to remove is no longer present on the page, you can request a cache removal using the 'Content has been removed from the page' option. This type of removal--usually called a "cache" removal--ensures that Google's search results will not include the cached copy or version of the old page, or any snippets of text from the old version of the page. Only the current updated page (without the content that's been removed) will be accessible from Google's search results. However, the current updated page can potentially still rank for terms related to the old content as a result of inbound links that still exist from external sites. For cache removal requests you’ll be asked to enter a "term that has been removed from the page." Be sure to enter a word that is not found on the current live page, so that our automated process can confirm the page has changed -- otherwise the request will be denied. Cache removals are covered in more detail in part two of the "URL removal explained" series.


  • Removing inappropriate webpages or images that appear in our SafeSearch filtered results

    Google introduced the SafeSearch filter with the goal of providing search results that exclude potentially offensive content. For situations where you find content that you feel should have been filtered out by SafeSearch, you can request that this content be excluded from SafeSearch filtered results in the future. Submit a removal request using the 'Inappropriate content appears in our SafeSearch filtered results' option.
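
As referenced under "Removed or blocked content" above, here is a rough sketch — not an official Google tool — that combines the three checks: is the page disallowed in robots.txt, does it carry a robots meta noindex tag, and does it return a 404 or 410 status? The page URL is hypothetical.

import urllib.error
import urllib.request
import urllib.robotparser
from html.parser import HTMLParser
from urllib.parse import urlsplit

PAGE_URL = "http://www.example.com/blocked-page/"  # hypothetical page to verify

class NoindexFinder(HTMLParser):
    """Flags a <meta name="robots" content="noindex"> tag anywhere in the page."""
    noindex = False
    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            name = (attrs.get("name") or "").lower()
            content = (attrs.get("content") or "").lower()
            if name == "robots" and "noindex" in content:
                self.noindex = True

def robots_txt_disallows(page_url):
    """True if the site's robots.txt disallows the page for all user-agents."""
    parts = urlsplit(page_url)
    parser = urllib.robotparser.RobotFileParser(
        "%s://%s/robots.txt" % (parts.scheme, parts.netloc))
    parser.read()
    return not parser.can_fetch("*", page_url)

def check(page_url):
    try:
        response = urllib.request.urlopen(page_url)
        finder = NoindexFinder()
        finder.feed(response.read().decode("utf-8", errors="replace"))
        print("HTTP status:", response.getcode(), "| meta noindex:", finder.noindex)
    except urllib.error.HTTPError as error:
        # 404 Not Found or 410 Gone here means the page really has been removed.
        print("HTTP status:", error.code)
    print("robots.txt disallows:", robots_txt_disallows(page_url))

check(PAGE_URL)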

If you encounter any issues with the public URL removal tool or have questions not addressed here, please post them to the Webmaster Help Forum or consult the more detailed removal instructions in our Help Center. If you do post to the forum, remember to use a URL shortening service to share any links to content you want removed.

Edit: Read the rest of this series:
Part I: Removing URLs & directories
Part II: Removing & updating cached content
Part IV: Tracking requests, what not to remove
Companion post: Managing what information is available about you online


Showing more results from a domain (2013)

Webmaster Level: All

Today we’ve launched a change to our ranking algorithm that will make it much easier for users to find a large number of results from a single site. For queries that indicate a strong user interest in a particular domain, like [exhibitions at amnh], we’ll now show more results from the relevant site:



Prior to today’s change, only two results from www.amnh.org would have appeared for this query. Now, we determine that the user is likely interested in the American Museum of Natural History’s website, so seven results from the amnh.org domain appear. Since the user is looking for exhibitions at the museum, it’s far more likely that they’ll find what they’re looking for, faster. The last few results for this query are from other sites, preserving some diversity in the results.

We’re always reassessing our ranking and user interface, making hundreds of changes each year. We expect today’s improvement will help users find deeper results from a single site, while still providing diversity on the results page.



Make the most of Search Queries in Webmaster Tools (2013)

Level: Beginner to Intermediate

If you’re intrigued by the Search Queries feature in Webmaster Tools but aren’t sure how to make it actionable, we have a video that we hope will help!


Maile shares her approach to Search Queries in Webmaster Tools

This video explains the vocabulary of Search Queries, such as:
  • Impressions
  • Average position (only the top-ranking URL for the user’s query is factored into our calculation)
  • Click
  • CTR
The video also reviews an approach to investigating Top queries and Top pages:
  1. Prepare by understanding your website’s goals and your target audience (then using Search Queries “filters” to support your knowledge)
  2. Sort by clicks in Top queries to understand the top queries bringing searchers to your site (for the given time period)
  3. Sort by CTR to notice any missed opportunities (a short sketch covering steps 2 and 3 follows this list)
  4. Categorize queries into logical buckets that simplify tracking your progress and staying in touch with users’ needs
  5. Sort Top pages by clicks to find the URLs on your site most visited by searchers (for the given time period)
  6. Sort Top pages by impressions to find valuable pages that can be used to help feature your related, high-quality, but lower-ranking pages
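
As a rough illustration of steps 2 and 3 above, here is a sketch that sorts a downloaded Top queries CSV. The file name and the column names ("Query", "Impressions", "Clicks", "CTR") are assumptions about the export layout, and pandas is assumed to be installed; adjust the names to whatever your download actually contains.

import pandas as pd

queries = pd.read_csv("top_queries.csv")  # hypothetical export from Search Queries

# Step 2: the queries that brought the most searchers to the site in this period.
print(queries.sort_values("Clicks", ascending=False).head(10))

# Step 3: high-impression, low-CTR queries are potential missed opportunities
# (for example, a title or snippet that could be made more compelling).
queries["CTR"] = queries["CTR"].astype(str).str.rstrip("%").astype(float)
missed = queries[queries["Impressions"] > 1000].sort_values("CTR").head(10)
print(missed)
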
After you’ve watched the video and combined your knowledge of your site with the findings from Search Queries, you’ll likely have several improvement ideas to help searchers find your site. If you’re up for it, let us know in the comments what Search Queries information you find useful (and why!), and of course, as always, feel free to share any tips or feedback.


Upcoming changes in Google's HTTP Referrer (2013)

Webmaster level: All

Protecting users’ privacy is a priority for us and it’s helped drive recent changes. Helping users save time is also very important; it’s explicitly mentioned as a part of our philosophy. Today, we’re happy to announce that Google Web Search will soon be using a new proposal to reduce latency when a user of Google’s SSL-search clicks on a search result with a modern browser such as Chrome.

Starting in April, for browsers with the appropriate support, we will be using the "referrer" meta tag to automatically simplify the referring URL that is sent by the browser when visiting a page linked from an organic search result. This results in a faster time to result and more streamlined experience for the user.

What does this mean for sites that receive clicks from Google search results? You may start to see "origin" referrers—Google’s homepages (see the meta referrer specification for further detail)—as a source of organic SSL search traffic. This change will only affect the subset of SSL search referrers which already didn’t include the query terms. Non-HTTPS referrals will continue to behave as they do today. Again, the primary motivation for this change is to remove an unneeded redirect so that signed-in users reach their destination faster.

Website analytics programs can detect these organic search requests by detecting bare Google host names using SSL (like "https://www.google.co.uk/"). Webmasters will continue to see the same data in Webmaster Tools—just as before, you’ll receive an aggregated list of the top search queries that drove traffic to your site.
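
For example, here is a minimal sketch — not Google-supplied code, and the function name is ours — of how an analytics script might classify a bare, query-less Google SSL homepage referrer as organic search traffic:

from urllib.parse import urlsplit

def is_google_origin_referrer(referrer):
    """True for bare Google host referrers such as https://www.google.co.uk/."""
    if not referrer:
        return False
    parts = urlsplit(referrer)
    return (parts.scheme == "https"
            and parts.netloc.startswith("www.google.")
            and parts.path in ("", "/")
            and not parts.query)

print(is_google_origin_referrer("https://www.google.co.uk/"))               # True
print(is_google_origin_referrer("http://www.google.com/search?q=flowers"))  # False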

We will continue to look into further improvements to how search query data is surfaced through Webmaster Tools. If you have questions, feedback or suggestions, please let us know through the Webmaster Tools Help Forum.


Page layout algorithm improvement (2013)

Webmaster Level: All

In our ongoing effort to help you find more high-quality websites in search results, today we’re launching an algorithmic change that looks at the layout of a webpage and the amount of content you see on the page once you click on a result.

As we’ve mentioned previously, we’ve heard complaints from users that if they click on a result and it’s difficult to find the actual content, they aren’t happy with the experience. Rather than scrolling down the page past a slew of ads, users want to see content right away. So sites that don’t have much content “above-the-fold” can be affected by this change. If you click on a website and the part of the website you see first either doesn’t have a lot of visible content above-the-fold or dedicates a large fraction of the site’s initial screen real estate to ads, that’s not a very good user experience. Such sites may not rank as highly going forward.

We understand that placing ads above-the-fold is quite common for many websites; these ads often perform well and help publishers monetize online content. This algorithmic change does not affect sites that place ads above-the-fold to a normal degree, but it does affect sites that load the top of the page with an excessive number of ads, or that make it hard to find the actual original content on the page. This new algorithmic improvement tends to impact sites where there is only a small amount of visible content above-the-fold or where relevant content is persistently pushed down by large blocks of ads.

This algorithmic change noticeably affects less than 1% of searches globally. That means that in less than one in 100 searches, a typical user might notice a reordering of results on the search page. If you believe that your website has been affected by the page layout algorithm change, consider how your web pages use the area above-the-fold and whether the content on the page is obscured or otherwise hard for users to discern quickly. You can use our Browser Size tool, among many others, to see how your website would look under different screen resolutions.

If you decide to update your page layout, the page layout algorithm will automatically reflect the changes as we re-crawl and process enough pages from your site to assess the changes. How long that takes will depend on several factors, including the number of pages on your site and how efficiently Googlebot can crawl the content. On a typical website, it can take several weeks for Googlebot to crawl and process enough pages to reflect layout changes on the site.

Overall, our advice for publishers continues to be to focus on delivering the best possible user experience on your websites and not to focus on specific algorithm tweaks. This change is just one of the over 500 improvements we expect to roll out to search this year. As always, please post your feedback and questions in our Webmaster Help forum.


The Ultimate Fate of Supplemental Results (2013)

In 2003, Google introduced a "supplemental index" as a way of showing more documents to users. Most webmasters will probably snicker about that statement, since supplemental docs were famous for refreshing less often and showing up in search results less often. But the supplemental index served an important purpose: it stored unusual documents that we would search in more depth for harder or more esoteric queries. For a long time, the alternative was to simply not show those documents at all, but this was always unsatisfying—ideally, we would search all of the documents all of the time, to give users the experience they expect.

This led to a major effort to rethink the entire supplemental index. We improved the crawl frequency and decoupled it from which index a document was stored in, and once these "supplementalization effects" were gone, the "supplemental result" tag itself—which only served to suggest that otherwise good documents were somehow suspect—was eliminated a few months ago. Now we're coming to the next major milestone in the elimination of the artificial difference between indices: rather than searching some part of our index in more depth for obscure queries, we're now searching the whole index for every query.

From a user perspective, this means that you'll be seeing more relevant documents and a much deeper slice of the web, especially for non-English queries. For webmasters, this means that good-quality pages that were less visible in our index are more likely to come up for queries.

Hidden behind this are some truly amazing technical feats; serving this much larger of an index doesn't happen easily, and it took several fundamental innovations to make it possible. At this point it's safe to say that the Google search engine works like nothing else in the world. If you want to know how it actually works, you'll have to come join Google Engineering; as usual, it's all triple-hush-hush secrets.*



* Originally, I was going to give the stock Google answer, "If I told you, I'd have to kill you." However, I've been informed by management that killing people violates our "Don't be evil" policy, so I'm forced to replace that with sounding mysterious and suggesting that good engineers come and join us. Which I'm dead serious about; if you've got the technical chops and want to work on some of the most complex and advanced large-scale software infrastructure in the world, we want you here.

Blast from the past (2013)

Written by Sahala Swenson, Webmaster Tools Team

As you know, the queries used to find your website in search results can change over time. Your website content changes, as do the needs of all the busy searchers out there. Whether the queries associated with your site change subtly or dramatically, it's pretty useful to see how they transform over time.

Recognizing this, Top Search Queries in Webmaster Tools now presents historical data and other enhancements. Let's take a closer look:


Up to 6 months of historical data:
Previously we only showed query stats for the last 7 days. Now you can jump between 9 query stats snapshots ranging from now to 6 months ago. Note that the time interval for each of these snapshots is different. For the 7 day, 2 week, and 3 week snapshots, we report the top queries for the previous week. For the 1 to 6 month snapshots, we report statistics for the previous month. Some of you may also notice that your query stats data doesn't go all the way back to 6 months ago. We hope to improve that experience in the future. :)

Top query percentages:
You might have noticed a new column in the top query listings. Previously we just ranked your query results and clicks. While useful, this didn't really tell you how much one query outweighed another. Now we show what percentage each query result or click represents out of the top 20 queries. This should help you see how well the result or click volume is distributed across the top 20.

Downloads:

Since we're now showing historical data on the Top Search Queries screen, we figured it would be rude to not let you download it all and play with the data yourself (spreadsheet masochists, I'm looking at you). We added a “Download data” link that lets you download all the stats in CSV format. Note that this exports all query stats historical data across all snapshots as well as search types and languages, so you can slice and dice to your satisfaction. The “Download all stats (including subfolders)” link, however, will still only show query stats for your site and sub-folders for the last 7 days.
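
To give one example of slicing the download, here is a sketch — with assumed file names and an assumed "Query"/"Clicks" column layout, and pandas assumed available — that compares two snapshots to see which queries gained or lost clicks over time:

import pandas as pd

old = pd.read_csv("top_queries_6_months_ago.csv")   # hypothetical exports
new = pd.read_csv("top_queries_this_week.csv")

merged = old.merge(new, on="Query", how="outer", suffixes=("_old", "_new")).fillna(0)
merged["clicks_change"] = merged["Clicks_new"] - merged["Clicks_old"]
print(merged.sort_values("clicks_change", ascending=False).head(15))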

Freshness:

We've improved data freshness in Webmaster Tools a couple of times in the past, and we've done it again with the new Top Search Queries. Statistics are now updated constantly. Top query results and clicks may visibly change rank a lot more often now, sometimes daily.


So enough talk. Sign in and play around with the new improvements for yourself. As always we welcome feedback (especially in the form of beer), so feel free to drop us a note in the Webmaster Help Group and let us know what you think.

Troubleshooting Instant Previews in Webmaster Tools (2013)

Webmaster level: All

In November, we launched Instant Previews to help users better understand if a particular result was relevant for their search query. Since launch, our Instant Previews team has been keeping an eye on common complaints and problems related to how pages are rendered for Instant Previews.

When we see issues with preview images, they are frequently due to:
  • Blocked resources due to a robots.txt entry
  • Cloaking: Erroneous content being served to the Googlebot user-agent
  • Poor alternative content when Flash is unavailable
To help webmasters diagnose these problems, we have a new Instant Preview tool in the Labs section of Webmaster Tools (in English only for now).
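
As a quick self-check for the first issue listed above (resources blocked by robots.txt), and separate from the Webmaster Tools feature itself, a sketch along these lines can flag blocked resources. The robots.txt URL and the resource URLs are hypothetical; in practice you would list the CSS, JavaScript, and image URLs your page actually loads.

import urllib.robotparser

ROBOTS_URL = "http://www.example.com/robots.txt"
RESOURCE_URLS = [
    "http://www.example.com/css/style.css",
    "http://www.example.com/js/app.js",
    "http://www.example.com/images/hero.jpg",
]

parser = urllib.robotparser.RobotFileParser(ROBOTS_URL)
parser.read()
for resource in RESOURCE_URLS:
    if not parser.can_fetch("Googlebot", resource):
        print("Blocked for Googlebot:", resource)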



Here, you can input the URL of any page on your site. We will then fetch the page from your site and try to render it both as it would display in Chrome and through our Instant Preview renderer. Please keep in mind that both of these renders are done using a recent build of Webkit which does not include plugins such as Flash or Silverlight, so it's important to consider the value of providing alternative content for these situations. Alternative content can be helpful to search engines, and visitors to your site without the plugin would benefit as well.

Below the renders, you’ll also see automated feedback on problems our system can detect, such as missing resources or resources blocked by robots.txt. And, in the future, we plan to add more informative and timely feedback to help improve your Instant Previews!

Please direct your questions and feedback to the Webmaster Forum.


Webmasters can now provide feedback on Sitelinks (2013)


Sitelinks are extra links that appear below some search results in Google. They serve as shortcuts to help users quickly navigate to the important pages on your site.

Selecting pages to appear as sitelinks is a completely automated process. Our algorithms parse the structure and content of websites and identify pages that provide fast navigation and relevant information for the user's query. Since our algorithms consider several factors to generate sitelinks, not all websites have them.

Now, Webmaster Tools lets you view potential sitelinks for your site and block the ones you don't want to appear in Google search results. Because sitelinks are extremely useful in helping users navigate your site, we don't typically recommend blocking them. However, occasionally you might want to exclude a page from your sitelinks, for example: a page that has become outdated or unavailable, or a page that contains information you don't want emphasized to users. Once you block a page, it won't appear as a sitelink for 90 days unless you choose to unblock it sooner. It may take a week or so to remove a page from your sitelinks, but we are working on making this process faster.

To view and manage your sitelinks, go to the Webmaster Tools Dashboard and click the site you want. In the left menu click Links, then click Sitelinks.
Thanks for your feedback and stay tuned for more updates!



Update: the user-interface for this feature has changed. For more information, please see the Sitelinks Help Center article.

Accessing search query data for your sites (2013)

Webmaster level: All

SSL encryption on the web has been growing by leaps and bounds. As part of our commitment to provide a more secure online experience, today we announced that SSL Search on https://www.google.com will become the default experience for signed in users on google.com. This change will be rolling out over the next few weeks.

What is the impact of this change for webmasters? Today, a web site accessed through organic search results on http://www.google.com (non-SSL) can see both that the user came from google.com and their search query. (Technically speaking, the user’s browser passes this information via the HTTP referrer field.) However, for organic search results on SSL search, a web site will only know that the user came from google.com.

Webmasters can still access a wealth of search query data for their sites via Webmaster Tools. For sites which have been added and verified in Webmaster Tools, webmasters can do the following:
  • View the top 1000 daily search queries and top 1000 daily landing pages for the past 30 days.
  • View the impressions, clicks, clickthrough rate (CTR), and average position in search results for each query, and compare this to the previous 30 day period.
  • Download this data in CSV format.
In addition, users of Google Analytics’ Search Engine Optimization reports have access to the same search query data available in Webmaster Tools and can take advantage of its rich reporting capabilities.

We will continue to look into further improvements to how search query data is surfaced through Webmaster Tools. If you have questions, feedback or suggestions, please let us know through the Webmaster Tools Help Forum.


Matt Cutts on ranking, spam and the future of search (2013)

During a recent visit to the Mountain View Googleplex, I had the chance to interview Matt Cutts for our German Webmaster Blog. While enjoying the California sunshine we chatted about how to rank in Google, resources for webmasters and Matt's first encounter with spam. As these topics are not only interesting for a German audience, I want to share them with you as well. So watch the video and find out how difficult it can be for the head of webspam engineering to surf the Internet. :)


If you speak German you might want to check out the German translation of the interview.


Taking feeds out of our web search results (2013)

As a webmaster, you may have been concerned about your RSS/Atom feeds crowding out their associated HTML pages in Google's search results. By serving feeds, we could cause a poor user experience:
  1. Feeds increase the likelihood that users see duplicate search results.
  2. Users clicking on a feed may miss valuable content available only in the HTML page.
To address these concerns, we prevent feeds from being returned in Google's search results, with the exception of podcasts (feeds with multimedia enclosures). We continue to allow podcasts, because we noticed a significant number of them are standalone documents (i.e. no HTML page has the same content) or they have more complete item descriptions than the associated HTML page. However, if, as a webmaster, you'd like your podcasts to be excluded from Google's search results (e.g. if you have a vlog, its feed is probably a podcast), you can use Yahoo's spec for noindex feeds. If you use FeedBurner, making your podcast noindex is as simple as checking a box ("Noindex" under the "Publicize" tab).

As a user, you may ask yourself whether Google has a way to search for feeds. The answer is yes; both Google Reader and iGoogle allow searching for feeds to subscribe to.

We're aware that there are a few non-podcast feeds out there with no associated HTML pages, and thus removing these feeds for now from the search results might be less than ideal. We remain open to other feedback on how to improve the handling of feeds, and especially welcome your comments and questions in the Crawling, Indexing and Ranking subtopic of our Webmaster Help Group.

For the German version of this post, go to "Wir entfernen Feeds aus unseren Suchergebnissen".

Petits fours in your search results (2013)

Webmaster Level: All

Recently we made a change to show more results from a domain for certain types of queries -- this helped searchers get to their desired result even faster. Today we’re expanding the feature so that, when appropriate, more queries show additional results from a domain. As a webmaster, you’ll appreciate the fact that these results may bring targeted visitors directly to the pages they’re interested in.

Here’s an example: in the past, the query [moma] (the Museum of Modern Art), might have triggered two results from the official site:


With this iteration, our search results may show:
  • Up to four web results from each domain (i.e., several domains may have multiple results)
  • Single-line snippets for the additional results, to keep them compact
As before, we still provide links to results from a variety of domains to ensure people find a diverse set of sources relevant to their searches. However, when our algorithms predict pages from a particular site are likely to be most relevant, it makes sense to provide additional direct links in our search results.


Like all the hundreds of changes we make a year, we’re trying to help users quickly reach their desired result. Even though we’re constantly improving our algorithms, our general advice still holds true: create compelling, search-engine friendly sites in order to attract users, buzz, and often targeted traffic!


Requesting removal of content from our index (2013)

Note: The user-interface of the described features has changed.

As a site owner, you control what content of your site is indexed in search engines. The easiest way to let search engines know what content you don't want indexed is to use a robots.txt file or robots meta tag. But sometimes, you want to remove content that's already been indexed. What's the best way to do that?

As always, the answer begins: it depends on the type of content that you want to remove. Our webmaster help center provides detailed information about each situation. In general, once you've removed or blocked the content and we recrawl the page, we'll remove the content from our index automatically. But if you'd like to expedite the removal rather than wait for the next crawl, the way to do that has just gotten easier.

For sites that you've verified ownership for in your webmaster tools account, you'll now see a new option under the Diagnostic tab called URL Removals. To get started, simply click the URL Removals link, then New Removal Request. Choose the option that matches the type of removal you'd like.



Individual URLs
Choose this option if you'd like to remove a URL or image. In order for the URL to be eligible for removal, one of the following must be true:
  • the URL returns a 404 (Not Found) or 410 (Gone) HTTP status code
  • the URL is disallowed in the site's robots.txt file
  • the page contains a noindex robots meta tag
Once the URL is ready for removal, enter the URL and indicate whether it appears in our web search results or image search results. Then click Add. You can add up to 100 URLs in a single request. Once you've added all the URLs you would like removed, click Submit Removal Request.

A directory
Choose this option if you'd like to remove all files and folders within a directory on your site. For instance, if you request removal of the following:

http://www.example.com/myfolder

this will remove all URLs that begin with that path, such as:

http://www.example.com/myfolder
http://www.example.com/myfolder/page1.html
http://www.example.com/myfolder/images/image.jpg

In order for a directory to be eligible for removal, you must block it using a robots.txt file. For instance, for the example above, http://www.example.com/robots.txt could include the following:

User-agent: Googlebot
Disallow: /myfolder

Your entire site
Choose this option only if you want to remove your entire site from the Google index. This option will remove all subdirectories and files. Do not use this option to remove the non-preferred version of your site's URLs from being indexed. For instance, if you want all of your URLs indexed using the www version, don't use this tool to request removal of the non-www version. Instead, specify the version you want indexed using the Preferred domain tool (and do a 301 redirect to the preferred version, if possible). To use this option, you must block the site using a robots.txt file.
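
As a side note on the preferred-version advice above, here is a tiny sketch — with hypothetical host names — for confirming that the non-preferred host answers with a 301 redirect to the www version:

import http.client

connection = http.client.HTTPConnection("example.com")
connection.request("HEAD", "/")
response = connection.getresponse()
# Expect something like: 301 http://www.example.com/
print(response.status, response.getheader("Location"))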

Cached copies

Choose this option to remove cached copies of pages in our index. You have two options for making pages eligible for cache removal.

Using a meta noarchive tag and requesting expedited removal
If you don't want the page cached at all, you can add a meta noarchive tag to the page and then request expedited cache removal using this tool. By requesting removal using this tool, we'll remove the cached copy right away, and by adding the meta noarchive tag, we will never include the cached version. (If you change your mind later, you can remove the meta noarchive tag.)

Changing the page content
If you want to remove the cached version of a page because it contained content that you've removed and don't want indexed, you can request the cache removal here. We'll check to see that the content on the live page is different from the cached version and if so, we'll remove the cached version. We'll automatically make the latest cached version of the page available again after six months (and at that point, we likely will have recrawled the page and the cached version will reflect the latest content) or, if you see that we've recrawled the page sooner than that, you can request that we reinclude the cached version sooner using this tool.

Checking the status of removal requests
Removal requests show as pending until they have been processed, at which point, the status changes to either Denied or Removed. Generally, a request is denied if it doesn't meet the eligibility criteria for removal.


To reinclude content
If a request is successful, it appears in the Removed Content tab and you can reinclude it any time simply by removing the robots.txt or robots meta tag block and clicking Reinclude. Otherwise, we'll exclude the content for six months. After that six month period, if the content is still blocked or returns a 404 or 410 status message and we've recrawled the page, it won't be reincluded in our index. However, if the page is available to our crawlers after this six month period, we'll once again include it in our index.

Requesting removal of content you don't own
But what if you want to request removal of content that's located on a site that you don't own? It's just gotten easier to do that as well. Our new Webpage removal request tool steps through the process for each type of removal request.

Since Google indexes the web and doesn't control the content on web pages, we generally can't remove results from our index unless the webmaster has blocked or modified the content or removed the page. If you would like content removed, you can work with the site owner to do so, and then use this tool to expedite the removal from our search results.

If you have found search results that contain specific types of personal information, you can request removal even if you've been unable to work with the site owner. For this type of removal, provide your email address so we can work with you directly.



If you have found search results that shouldn't be returned with SafeSearch enabled, you can let us know using this tool as well.

You can check on the status of pending requests, and as with the version available in webmaster tools, the status will change to Removed or Denied once it's been processed. Generally, the request is denied if it doesn't meet the eligibility criteria. For requests that involve personal information, you won't see the status available here, but will instead receive an email with more information about next steps.

What about the existing URL removal tool?
If you've made previous requests with this tool, you can still log in to check on the status of those requests. However, make any new requests with this new and improved version of the tool.

Pros and cons of watermarked images (2013)

Webmaster Level: All

What's our take on watermarked images for Image Search? It's a complicated topic. I talked with Peter Linsley—my friend at the 'plex, video star, and Product Manager for Image Search—to hear his thoughts.

Maile: So, Peter... "watermarked images". Can you break it down for us?
Peter: It's understandable that webmasters find watermarking images beneficial.
Pros of watermarked images
  • Photographers can claim credit/be recognized for their art.
  • Unknown usage of the image is deterred.
If search traffic is important to a webmaster, then he/she may also want to consider some of our findings:
Findings relevant to watermarked images
  • Users prefer large, high-quality images (high-resolution, in-focus).
  • Users are more likely to click on quality thumbnails in search results. Quality pictures (again, high-res and in-focus) often look better at thumbnail size.
  • Distracting features such as loud watermarks, text over the image, and borders are likely to make the image look cluttered when reduced to thumbnail size.
In summary, if a feature such as watermarking reduces the user-perceived quality of your image or your image's thumbnail, then searchers may select it less often. Preview your images at thumbnail size to get an idea of how the user might perceive it.
Maile: Ahh, I see: Webmasters concerned with search traffic likely want to balance the positives of watermarking with the preferences of their users -- keeping in mind that sites that use clean images without distracting artifacts tend to be more popular, and that this can also impact rankings. Will Google rank an image differently just because it's watermarked?
Peter: Nope. The presence of a watermark doesn't itself cause an image to be ranked higher or lower.

Do you have questions or opinions on the topic? Let's chat in the webmaster forum.


Taking advantage of universal search (2013)

Yesterday, at Searchology, we unveiled exciting changes in our search results. With universal search, we've begun blending results from more than just the web in order to provide the most relevant and useful results possible. In addition to web pages, for instance, the search results may include video, news, images, maps, and books. Over time, we'll continue to enhance this blending so that searchers can get the exact information they need right from the search results.

This is great news for the searcher, but what does it mean for you, the webmaster? It's great news for you as well. Many people do their searches from web search and aren't aware of our many other tools to search for images, news, videos, maps, and books. Since more of those results may now be returned in web search, if you have content that is returned in these other searches, more potential visitors may see your results.

Want to make sure you're taking full advantage of universal search? Here are some tips:

Google News results
If your site includes news content, you can submit your site for inclusion in Google News. Once your site is included, you can let us know about your latest articles by submitting a News Sitemap. (Note: News Sitemaps are currently available for English sites only.)

News Archive results
If you have historical news content (available for free or by subscription), you can submit it for inclusion in News Archive Search.

Image results
If your site includes images, you can opt-in to enhanced Image search in webmaster tools, which will enable us to gather additional metadata about your images using our Image Labeler. This helps us return your images for the most relevant queries. Also ensure that you are fully taking advantage of the images on your site.

Local results
If your site is for a business in a particular geographic location, you can provide information to us using our Local Business Center. By providing this information, you can help us provide the best, locally relevant results to searchers both in web search and on Google Maps.

Video results
If you have video content, you can host it on Google Video, YouTube, or a number of other video hosting providers. If the video is a relevant result for the query, searchers can play the video directly from the search results page (for Google Video and YouTube) or can view a thumbnail of the video then click over to the player for other hosting providers. You can easily upload videos to Google Video or to YouTube.

Our goal with universal search is to provide the most relevant and useful results, so for those of you who want to connect to visitors via search, our best advice remains the same: create valuable, unique content that is exactly what searchers are looking for.

New hacked site notifications in search results (2013)

Webmaster level: All

Today we’ve added a new notification to our search results that helps people know when a site may have been hacked. We’ve provided notices for malware for years, which also involve a separate warning page. Now we’re expanding the search results notifications to help people avoid sites that may have been compromised and altered by a third party, typically for spam. When a user visits a site, we want her to be confident the information on that site comes from the original publisher.

Here’s what the notification looks like:


Clicking the “This site may be compromised” link brings you to an article in our Help Center which explains more about the notice. Meanwhile, clicking the result itself brings you to the target website, as expected.

We use a variety of automated tools to detect common signs of a hacked site as quickly as possible. When we detect something suspicious, we’ll add the notification to our search results. We’ll also do our best to contact the site’s webmaster via their Webmaster Tools account and any contact email addresses we can find on the webpage. We hope webmasters will also appreciate these notices, because it will help you more quickly discover when someone may be abusing your site so you can correct the problem.

Of course, we also understand that webmasters may be concerned that these notices are impacting their traffic from search. Rest assured, once the problem has been fixed, the warning label will be automatically removed from our search results, usually in a matter of days. You can also request a review of your site to accelerate removal of the notice.

If you see this notification appearing on your site’s listing, please take a look at the instructions in our Help Center to learn how you can begin to address the problem. Together, we can make the web a safer place.


Easier URL removals for site owners (2013)

Webmaster Level: All

We recently made a change to the Remove URL tool in Webmaster Tools to eliminate the requirement that the webpage's URL must first be blocked by a site owner before the page can be removed from Google's search results. Because you've already verified ownership of the site, we can eliminate this requirement to make it easier for you, as the site owner, to remove unwanted pages (e.g. pages accidentally made public) from Google's search results.

Removals persist for at least 90 days
When a page’s URL is requested for removal, the request is temporary and persists for at least 90 days. We may continue to crawl the page during the 90-day period but we will not display it in the search results. You can still revoke the removal request at any time during those 90 days. After the 90-day period, the page can reappear in our search results, assuming you haven’t made any other changes that could impact the page’s availability.

Permanent removal
In order to permanently remove a URL, you must ensure that one of the following page blocking methods is implemented for the URL of the page that you want removed:
  • the URL returns a 404 (Not Found) or 410 (Gone) HTTP status code
  • the URL is disallowed in your site's robots.txt file
  • the page contains a noindex robots meta tag
This will ensure that the page is permanently removed from Google's search results for as long as the page is blocked. If at any time in the future you remove the previously implemented page blocking method, we may potentially re-crawl and index the page. For immediate and permanent removal, you can request that a page be removed using the Remove URL tool and then permanently block the page’s URL before the 90-day expiration of the removal request.



For more information about URL removals, see our “URL removal explained” blog series covering this topic. If you still have questions about this change or about URL removal requests in general, please post in our Webmaster Help Forum.


Best practices for Product Search (2013)

Webmaster Level: Beginner to Intermediate

If you run an e-commerce site and you'd like your products to be eligible to be shown in Google search results, then check out our "Product Search for Webmasters" video. In addition to the basics on Product Search, I cover:
  • Attributes to include in your feed
  • FAQs
    • Will my products' rankings improve if I include custom attributes in my feed?
    • Do product listings expire after 30 days?
    • How often should I submit my feed?



More information can be found in the Product Search Help Center.
