News and Tutorials from Votre Codeur | SEO | Website Creation | Software Development

from web contents: Update to Top Search Queries data 2013

Webmaster level: All

Starting today, we’re updating our Top Search Queries feature to make it better match expectations about search engine rankings. Previously we reported the average position of all URLs from your site for a given query. As of today, we’ll instead average only the top position that a URL from your site appeared in.

An example
Let’s say Nick searched for [bacon] and URLs from your site appeared in positions 3, 6, and 12. Jane also searched for [bacon] and URLs from your site appeared in positions 5 and 9. Previously, we would have averaged all these positions together and shown an Average Position of 7. Going forward, we’ll only average the highest position your site appeared in for each search (3 for Nick’s search and 5 for Jane’s search), for an Average Position of 4.
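To make the arithmetic concrete, here is a small illustrative Python sketch (not an official tool, just the math from the example above) contrasting the old and the new calculation:

    # Positions at which URLs from the site appeared, per search.
    searches = {
        "Nick": [3, 6, 12],
        "Jane": [5, 9],
    }

    # Old method: average every position from every search.
    all_positions = [p for positions in searches.values() for p in positions]
    old_average = sum(all_positions) / len(all_positions)   # (3+6+12+5+9)/5 = 7

    # New method: average only the top (best) position per search.
    top_positions = [min(positions) for positions in searches.values()]
    new_average = sum(top_positions) / len(top_positions)   # (3+5)/2 = 4

    print("Old average position:", old_average)   # 7.0
    print("New average position:", new_average)   # 4.0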

We anticipate that this new method of calculation will more accurately match your expectations about how a link's position in Google Search results should be reported.

How will this affect my Top Search Queries data?
This change will affect your Top Search Queries data going forward. Historical data will not change. Note that the change in calculation means that the Average Position metric will usually stay the same or decrease, as we will no longer be averaging in lower-ranking URLs.

Check out the updated Top Search Queries data in the Your site on the web section of Webmaster Tools. And remember, you can also download Top Search Queries data programmatically!

We look forward to providing you a more representative picture of your Google Search data. Let us know what you think in our Webmaster Forum.


from web contents: Dealing with Sitemap cross-submissions 2013


Since the launch of Sitemaps, webmasters have been asking if they could submit their Sitemaps for multiple hosts on a single dedicated host. A fair question -- and now you can!

Why would someone want to do this? Let's say that you own www.example.com and mysite.google.com and you have Sitemaps for both hosts, e.g. sitemap-example.xml and sitemap-mysite.xml. Until today, you would have to store each Sitemap on its respective host. If you tried to place sitemap-mysite.xml on www.example.com, you would get an error because, for security reasons, a Sitemap on www.example.com can only contain URLs from www.example.com. So how do we solve this? Well, if you can "prove" that you own or control both of these hosts, then either one can host a Sitemap containing URLs for the other. Just follow the normal verification process in Google Webmaster Tools and any verified site in your account will be able to host Sitemaps for any other verified site in the same account.

Here is an example showing both sites verified:

And now, from a single host, you can submit Sitemaps for both sites without any errors. sitemap-example.xml contains URLs from www.example.com and sitemap-mysite.xml contains URLs from mysite.google.com but both now reside on www.example.com:
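To illustrate (the URLs below are just placeholders), sitemap-mysite.xml hosted on www.example.com is an ordinary Sitemap file that simply lists URLs from the other host:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://mysite.google.com/</loc>
      </url>
      <url>
        <loc>http://mysite.google.com/about.html</loc>
      </url>
    </urlset>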
We've also added more information on handling cross-submits in our Webmaster Help Center.
For those of you wondering how this affects the other search engines that support the Sitemap Protocol, rest assured that we're talking to them about how to make cross-submissions work seamlessly across all of them. Until then, this specific solution will work only for users of Google Webmaster Tools.

from web contents: About badware warnings 2013

Some of you have asked about the warnings we show searchers when they click on search results leading to sites that distribute malicious software. As a webmaster, you may be concerned about the possibility of your site being flagged. We want to assure you that we take your concerns very seriously, and that we are very careful to avoid flagging sites incorrectly. It's our goal to avoid sending people to sites that would compromise their computers. These exploits often result in real people losing real money. Compromised bank accounts and stolen credit card numbers are just the tip of this identity theft iceberg.

If your site has been flagged for badware, we let you know this in webmaster tools. Often, we find that webmasters aren't aware that their sites have been compromised, and this warning in search results is a surprise. Fixing a compromised site can be quite hard. Simply cleaning up the HTML files is seldom sufficient. If a rootkit has been installed, for instance, nothing short of wiping the machine and starting over may work. Even then, if the underlying security hole isn't also fixed, the site may be compromised again within minutes.

We are looking at ways to provide additional information to webmasters whose sites have been flagged, while balancing our need to keep malicious site owners from hiding from Google's badware protection. We aim to be responsive to any misidentified sites too. If your site has been flagged, you'll see information on the appeals process in webmaster tools. If you can't find anything malicious on your site and believe it was misidentified, go to http://stopbadware.org/home/review to request an evaluation. If you'd like to discuss this with us or have ideas for how we can better communicate with you about it, please post in our webmaster discussion forum.

Update: this post has been updated to provide a link to the new form for requesting a review.


Update: for more information, please see our Help Center article on malware and hacked sites.

from web contents: Webmaster Tools spring cleaning 2013

Webmaster level: All

Webmaster Tools added lots of new functionality over the past year, such as improvements to Sitemaps and Crawl errors, as well as the new User Administration feature. In recent weeks, we also updated the look & feel of our user interface to match Google's new style. In order to keep bringing you improvements, we occasionally review each of our features to see if they’re still useful in comparison to the maintenance and support they require. As a result of our latest round of spring cleaning, we'll be removing the Subscriber stats feature, the Create robots.txt tool, and the Site performance feature in the next two weeks.

Subscriber stats reports the number of subscribers to a site’s RSS or Atom feeds. This functionality is currently provided in Feedburner, another Google product which offers its own subscriber stats as well as other cool features specifically geared for feeds of all types. If you are looking for a replacement to Subscriber stats in Webmaster Tools, check out Feedburner.

The Create robots.txt tool provides a way to generate robots.txt files for the purpose of blocking specific parts of a site from being crawled by Googlebot. This feature has very low usage, so we've decided to remove it from Webmaster Tools. While many websites don't even need a robots.txt file, if you feel that you do need one, it's easy to make one yourself in a text editor or use one of the many other tools available on the web for generating robots.txt files.
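For example, a minimal hand-written robots.txt that blocks a single directory (the directory name here is just a placeholder) while leaving the rest of the site crawlable looks like this:

    User-agent: *
    Disallow: /private/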

Site performance is a Webmaster Tools Labs feature that provides information about the average load time of your site's pages. This feature is also being removed due to low usage. Now you might have heard our announcement from a couple of years ago that the latency of a site's pages is a factor in our search ranking algorithms. This is still true, and you can analyze your site's performance using the Site Speed feature in Google Analytics or using Google's PageSpeed online. There are also many other site performance analysis tools available like WebPageTest and the YSlow browser plugin.

If you have questions or comments about these changes please post them in our Help Forum.


from web contents: More webmaster questions - Answered! 2013

When it comes to answering your webmaster related questions, we just can't get enough. I wanted to follow-up and answer some additional questions that webmasters asked in our latest installment of Popular Picks. In case you missed it, you can find our answers to image search ranking, sitelinks, reconsideration requests, redirects, and our communication with webmasters in this blog post.



Check out these resources for additional details on questions answered in the video:
Video Transcript:

Hi everyone, I'm Reid from the Search Quality team. Today I'd like to answer some of the unanswered questions from our latest round of popular picks.

Searchmaster had a question about duplicate content. Understandably, this is a popular concern from webmasters. You should check out the Google Webmaster Central Blog, where my colleague Susan Moskwa recently posted "Demystifying the 'duplicate content penalty'," which answers many questions and concerns about duplicate content.

Jay is the Boss wanted to know if e-commerce websites suffer if they have two or more different themes. For example, you could have a site that sells auto parts, but also sightseeing guides. In general, I'd encourage webmasters to create a website that they feel is relevant for users. If it makes sense to sell auto parts and sightseeing guides, then go for it. Those are the sites that perform well, because users want to visit those sites, and they'll link to them as well.

emma2 wanted to know if Google will follow links on a page using the "noindex" attribute in the "robots" meta tag. To answer this question, Googlebot will follow links on a page which uses the meta "noindex" tag, but that page will not appear in our search results. As a reminder, if you would like to prevent Googlebot from crawling any links on a page, use the "nofollow" attribute in the "robots" meta tag.
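For reference, these robots meta tags go in a page's <head> section; the snippets below show the variants discussed above:

    <!-- Links on this page may be followed, but the page won't appear in search results -->
    <meta name="robots" content="noindex">

    <!-- The page may be indexed, but links on it won't be followed -->
    <meta name="robots" content="nofollow">

    <!-- Both restrictions combined -->
    <meta name="robots" content="noindex, nofollow">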

Aaron Pratt wanted to know about some ways a webmaster can rank well for local searches. A quick recommendation is to add your business to the Local Business Center. There, you can add contact information, as well as store operating hours and coupons. Another tip is to take advantage of a country-specific top-level domain, or use the geotargeting feature in Webmaster Tools.

jdeb901 said it would be helpful if we could let webmasters know if we are having problems with Webmaster Tools. This is an excellent point, and we're always thinking about better ways to communicate with webmasters. If you're having problems with Webmaster Tools, chances are someone else is as well, and they've posted to the Google Webmaster Help Group about this. In the past, if we've experienced problems with Webmaster Tools, we've also created a "sticky" post to let users know that we know about these issues with Webmaster Tools, and we're working to find a solution.

Well, that about wraps it up with our Popular Picks. Thanks again for all of your questions, and I look forward to seeing you around the group.


from web contents: Where in the world is your site? 2013



The Set Geographic Target tool in Webmaster Tools lets you associate your site with a specific region. We've heard a lot of questions from webmasters about how to use the tool, and here Webmaster Trends Analyst Susan Moskwa explains how it works and when to use it.



The http://www.google.ca/ example in the video is a little hard to see, so here's a screenshot:

the Google Canada home page

Want to know more about setting a geographic target for your site? Check out our Help Center. And if you like this video, you can see more on our Webmaster Tools playlist on YouTube.

from web contents: System maintenance 2013

We're currently doing routine system maintenance, and some data may not be available in your webmaster tools account today. We're working as quickly as possible, and all information should be available again by Thursday, 8/24. Thank you for your patience in the meantime.

Update: We're still finishing some things up, so thanks for bearing with us. Note that the preferred domain feature is currently unavailable, but will be available again as soon as our maintenance is complete.

from web contents: Download search queries data using Python 2013

Webmaster level: Advanced

For all the developers who have expressed interest in getting programmatic access to the search queries data for their sites in Webmaster Tools, we've got some good news. You can now get access to your search queries data in CSV format using an open source Python script from the webmaster-tools-downloads project. Search queries data is not currently available via the Webmaster Tools API, which has been a common API user request that we're considering for the next API update. For those of you who need access to search queries data right now, let's look at an example of how the search queries downloader Python script can be used to download your search queries data and upload it to a Google Spreadsheet in Google Docs.

Example usage of the search queries downloader Python script
1) If Python is not already installed on your machine, download and install Python.
2) Download and install the Google Data APIs Python Client Library.
3) Create a folder and add the downloader.py script to the newly created folder.
4) Copy the example-create-spreadsheet.py script to the same folder as downloader.py and edit it to replace the example values for “website,” “email” and “password” with valid values for your Webmaster Tools verified site.
5) Open a Terminal window and run the example-create-spreadsheet.py script by entering "python example-create-spreadsheet.py" at the Terminal window command line:
python example-create-spreadsheet.py
6) Visit Google Docs to see a new spreadsheet containing your search queries data.


If you just want to download your search queries data in a .csv file without uploading the data to a Google spreadsheet use example-simple-download.py instead of example-create-spreadsheet.py in the example above.

You could easily configure these scripts to be run daily or monthly to archive and view your search queries data across larger date ranges than the current one month of data that is available in Webmaster Tools, for example, by setting up a cron job or using Windows Task Scheduler.
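For example, on Linux or Mac a crontab entry along these lines (the path is a placeholder) would run the simple download script every day at 6am and append its output to a log file:

    0 6 * * * cd /path/to/your/scripts && python example-simple-download.py >> download.log 2>&1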

An important point to note is that this script example includes user name and password credentials within the script itself. If you plan to run this in a production environment you should follow security best practices like using encrypted user credentials retrieved from a secure data storage source. The script itself uses HTTPS to communicate with the API to protect these credentials.

Take a look at the search queries downloader script and start using search queries data in your own scripts or tools. Let us know if you have questions or feedback in the Webmaster Help Forum.


from web contents: Year in Review 2013

2008 was another great year for the Webmaster Central team. We experienced tremendous user growth with our blogs (97% increase in monthly pageviews), Help Center (25%), Help Forums (225%), and Webmaster Tools (35%). We would like to welcome the new users who joined us in '08, and thank our loyal and passionate user base that has been with us for the last couple of years. We focused on two basic goals for 2008, and here's how we think we did:

Goal #1: Educate and grow our webmaster community
  • We had our first ever online webmaster chat in February '08 to answer your top questions, and followed it up with three more. They have been incredibly successful, and we're planning for more this year.
  • We'd like to send a special thank you to our Bionic Posters, who have played a huge part in supporting our growing community.
  • Localization has been a big focus for us, so we launched our blog and Help Center in additional languages, and made Webmaster Tools available in 40 languages. We hope this makes it easier for people in other parts of the world to adopt our tools and gain a better understanding of how search works.
  • We launched a new Help Forum in English and Polish, with a broader rollout planned in other languages this year.
  • Our SEO starter guide was released and it has been one of our most successful articles to date.
  • We placed an emphasis on sharing material via YouTube and created seven video series totaling two hours of content. We kicked off '09 with a bang on the video front with Matt's "Virtual Blight" presentation.
Goal #2: Iterate early and often on Webmaster Tools
Thank you once again and we hope for another exciting and eventful year!


from web contents: Improved handling of URLs with parameters 2013

Webmaster level: Advanced

You may have noticed that the Parameter Handling feature disappeared from the Site configuration > Settings section of Webmaster Tools. Fear not; you can now find it under its new name, URL Parameters! Along with renaming it, we refreshed and improved the feature. We hope you’ll find it even more useful. Configuration of URL parameters made in the old version of the feature will be automatically visible in the new version. Before we reveal all the cool things you can do with URL parameters now, let us remind you (or introduce, if you are new to this feature) of the purpose of this feature and when it may come in handy.

When to use
URL Parameters helps you control which URLs on your site should be crawled by Googlebot, depending on the parameters that appear in these URLs. This functionality provides a simple way to prevent crawling duplicate content on your site. Now, your site can be crawled more effectively, reducing your bandwidth usage and likely allowing more unique content from your site to be indexed. If you suspect that Googlebot's crawl coverage of the content on your site could be improved, using this feature can be a good idea. But with great power comes great responsibility! You should only use this feature if you're sure about the behavior of URL parameters on your site. Otherwise you might mistakenly prevent some URLs from being crawled, making their content no longer accessible to Googlebot.

A lot more to do
Okay, let’s talk about what’s new and improved. To begin with, in addition to assigning a crawl action to an individual parameter, you can now also describe the behavior of the parameter. You start by telling us whether or not the parameter changes the content of the page. If the parameter doesn’t affect the page’s content then your work is done; Googlebot will choose URLs with a representative value of this parameter and will crawl the URLs with this value. Since the parameter doesn’t change the content, any value chosen is equally good. However, if the parameter does change the content of a page, you can now assign one of four possible ways for Google to crawl URLs with this parameter:
  • Let Googlebot decide
  • Every URL
  • Only crawl URLs with value=x
  • No URLs
We also added the ability to provide your own specific value to be used, with the “Only URLs with value=x” option; you’re no longer restricted to the list of values that we provide. Optionally, you can also tell us exactly what the parameter does--whether it sorts, paginates, determines content, etc. One last improvement is that for every parameter, we’ll try to show you a sample of example URLs from your site that Googlebot crawled which contain that particular parameter.

Of the four crawl options listed above, “No URLs” is new and deserves special attention. This option is the most restrictive and, for any given URL, takes precedence over settings of other parameters in that URL. This means that if the URL contains a parameter that is set to the “No URLs” option, this URL will never be crawled, even if other parameters in the URL are set to “Every URL.” You should be careful when using this option. The second most restrictive setting is “Only URLs with value=x.”

Feature in use
Now let’s do something fun and exercise our brains on an example.
- - -
Once upon a time there was an online store, fairyclothes.example.com. The store’s website used parameters in its URLs, and the same content could be reached through multiple URLs. One day the store owner noticed that too many redundant URLs could be preventing Googlebot from crawling the site thoroughly. So he sent his assistant CuriousQuestionAsker to the Great WebWizard to get advice on using the URL parameters feature to reduce the duplicate content crawled by Googlebot. The Great WebWizard was famous for his wisdom. He looked at the URL parameters and proposed the following configuration:

Parameter name     Effect on content?   What should Googlebot crawl?
trackingId         None                 One representative URL
sortOrder          Sorts                Only URLs with value = ‘lowToHigh’
sortBy             Sorts                Only URLs with value = ‘price’
filterByColor      Narrows              No URLs
itemId             Specifies            Every URL
page               Paginates            Every URL

The CuriousQuestionAsker couldn’t avoid his nature and started asking questions:

CuriousQuestionAsker: You’ve instructed Googlebot to choose a representative URL for trackingId (value to be chosen by Googlebot). Why not select the Only URLs with value=x option and choose the value myself?
Great WebWizard: While crawling the web Googlebot encountered the following URLs that link to your site:
  1. fairyclothes.example.com/skirts/?trackingId=aaa123
  2. fairyclothes.example.com/skirts/?trackingId=aaa124
  3. fairyclothes.example.com/trousers/?trackingId=aaa125
Imagine that you were to tell Googlebot to only crawl URLs where “trackingId=aaa125”. In that case Googlebot would not crawl URLs 1 and 2 as neither of them has the value aaa125 for trackingId. Their content would neither be crawled nor indexed and none of your inventory of fine skirts would show up in Google’s search results. No, for this case choosing a representative URL is the way to go. Why? Because that tells Googlebot that when it encounters two URLs on the web that differ only in this parameter (as URLs 1 and 2 above do) then it only needs to crawl one of them (either will do) and it will still get all the content. In the example above, two URLs will be crawled: either 1 & 3, or 2 & 3. Not a single skirt or trouser will be lost.
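A rough way to picture "one representative URL" is to normalize away the parameter that doesn't change content and then deduplicate. The Python sketch below is only an illustration of that idea (it is not how Googlebot actually works):

    from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

    def strip_param(url, param):
        # Remove a query parameter that does not affect page content.
        parts = urlparse(url)
        query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
        return urlunparse(parts._replace(query=urlencode(query)))

    urls = [
        "http://fairyclothes.example.com/skirts/?trackingId=aaa123",
        "http://fairyclothes.example.com/skirts/?trackingId=aaa124",
        "http://fairyclothes.example.com/trousers/?trackingId=aaa125",
    ]

    # Keyed by the normalized URL, so only one URL per piece of content remains:
    # one for /skirts/ and one for /trousers/ (either trackingId value will do).
    representatives = {strip_param(u, "trackingId"): u for u in urls}
    for original in representatives.values():
        print(original)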

CuriousQuestionAsker: What about the sortOrder parameter? I don’t care if the items are listed in ascending or descending order. Why not let Google select a representative value?
Great WebWizard: As Googlebot continues to crawl it may find the following URLs:
  1. fairyclothes.example.com/skirts/?page=1&sortBy=price&sortOrder=’lowToHigh’
  2. fairyclothes.example.com/skirts/?page=1&sortBy=price&sortOrder=’highToLow’
  3. fairyclothes.example.com/skirts/?page=2&sortBy=price&sortOrder=’lowToHigh’
  4. fairyclothes.example.com/skirts/?page=2&sortBy=price&sortOrder=’highToLow’
Notice how the first pair of URLs (1 & 2) differs only in the value of the sortOrder parameter, as do the URLs in the second pair (3 & 4). However, URLs 1 and 2 will produce different content: the first showing the least expensive of your skirts while the second showing the priciest. That should be your first hint that using a single representative value is not a good choice for this situation. Moreover, if you let Googlebot choose a single representative from among a set of URLs that differ only in their sortOrder parameter it might choose a different value each time. In the example above, from the first pair of URLs, URL 1 might be chosen (sortOrder=’lowToHigh’), whereas from the second pair URL 4 might be picked (sortOrder=’highToLow’). If that were to happen, Googlebot would crawl only the least expensive skirts (twice):
  • fairyclothes.example.com/skirts/?page=1&sortBy=price&sortOrder=’lowToHigh’
  • fairyclothes.example.com/skirts/?page=2&sortBy=price&sortOrder=’highToLow’
Your most expensive skirts would not be crawled at all! When dealing with sorting parameters consistency is key. Always sort the same way.

CuriousQuestionAsker: How about the sortBy value?
Great WebWizard: This is very similar to the sortOrder attribute. You want the crawled URLs of your listing to be sorted consistently throughout all the pages, otherwise some of the items may not be visible to Googlebot. However, you should be careful which value you choose. If you sell books as well as shoes in your store, it would be better not to select the value ‘title’ since URLs pointing to shoes never contain ‘sortBy=title’, so they will not be crawled. Likewise setting ‘sortBy=size’ works well for crawling shoes, but not for crawling books. Keep in mind that a parameter's configuration applies across your whole site.

CuriousQuestionAsker: Why not crawl URLs with parameter filterByColor?
Great WebWizard: Imagine that you have a three-page list of skirts. Some of the skirts are blue, some of them are red and others are green.
  • fairyclothes.example.com/skirts/?page=1
  • fairyclothes.example.com/skirts/?page=2
  • fairyclothes.example.com/skirts/?page=3
This list is filterable. When a user selects a color, she gets two pages of blue skirts:
  • fairyclothes.example.com/skirts/?page=1&filterByColor=blue
  • fairyclothes.example.com/skirts/?page=2&filterByColor=blue
They seem like new pages (the set of items are different from all other pages), but there is actually no new content on them, since all the blue skirts were already included in the original three pages. There’s no need to crawl URLs that narrow the content by color, since the content served on those URLs was already crawled. There is one important thing to notice here: before you disallow some URLs from being crawled by selecting the “No URLs” option, make sure that Googlebot can access the content in another way. Considering our example, Googlebot needs to be able to find the first three links on your site, and there should be no settings that prevent crawling them.
- - -

If your site has URL parameters that are potentially creating duplicate content issues, then you should check out the new URL Parameters feature in Webmaster Tools. Let us know what you think, or if you have any questions, post them to the Webmaster Help Forum.


from web contents: Google SEO resources for beginners 2013

Webmaster Level: Beginner

Want to eat healthier and exercise more in 2010? That's tough! Want to learn about search engine optimization (SEO) so you can disregard the rumors and know what's important? That's easy! Here's how to gain SEO knowledge as you go about your new start to 2010:

Step 1: Absorb the basics
  • If you like to learn by reading, download our SEO Starter Guide for reading while you're on an exercise bike, training for Ironman.
  • Or, if you're more a video watcher, try listening to my "Search Friendly Development" session while you're cleaning your house. Keep in mind that some parts of the presentation are a little more technical.

  • For good measure, and because at some point you'll hear references to them, check out our webmaster guidelines for yourself.

Step 2: Explore details that pique your interest
Are you done with the basics but now you have some questions? Good for you! Try researching a particular topic in our Webmaster Help Center. For example, do you want more information about crawling and indexing or understanding what links are all about?


Step 3: Verify ownership of your site in Webmaster Tools
It takes a little bit of skill, but we have tons of help for verification. Once you verify ownership of your site (i.e., signal to Google that you're the owner), you can:


A sample message regarding the crawlability of your site


Step 4: Research before you do anything drastic
Usually the basics (e.g., good content/service and a crawlable site with indexable information) are the necessities for SEO. You may hear or read differently, but before you do anything drastic on your site such as robots.txt disallow'ing all of your directories or revamping your entire site architecture, please try:

from web contents: Hey Google, I no longer have badware 2013

This post is for anyone who has been emailed or notified by Google about badware, received a badware warning when browsing their own site using Firefox, or has come across malware-labeled search results for their own site(s).  As you know, these warnings are produced by our automated scanning systems, which we've put in place to ensure the quality of our results by protecting our users.  Whatever the case, if you are dealing with badware, here are a few recommendations that can help you out. 





1.  If you have badware, it usually means that your web server, your website, or a database used by your website has been compromised. We have a nifty post on how to handle being hacked.  Be very careful when inspecting for malware on your site so as to avoid exposing your computer to infection.

2. Once everything is clear and dandy, you can follow the steps in our post about malware reviews via Webmaster Tools. Please note the screen shot on the previous post is outdated, and the new malware review form is on the Overview page and looks like this:



  • Other programs, such as Firefox, also use our badware data and may not recognize the change immediately due to their caching of the data.  So even if the badware label in search is removed, it may take some time for that to be visible in such programs.

3. Lastly, if you believe that your rankings were somehow affected by the malware, such as compromised content that violated our Webmaster Guidelines [i.e. hacked pages with hidden pharmacy text links], you should fill out a reconsideration request. To clarify, reconsideration requests are usually used for when you notice issues stemming from violations of our Webmaster Guidelines and are separate from malware requests.

If you have additional questions, please review our documentation or post to the discussion group with the URL of your site. We hope you find this updated feature in Webmaster Tools useful in discovering and fixing any malware-related problems. 


from web contents: Better understanding of your site 2013

SES Chicago was wonderful. Meeting so many of you made the trip absolutely perfect. It was as special as if (Chicago local) Oprah had joined us!

While hanging out at the Google booth, I was often asked about how to take advantage of our webmaster tools. For example, here's one tip on Common Words.

Common Words: Our prioritized listing of your site's content
The common words feature lists in order of priority (from highest to lowest) the prevalent words we've found in your site, and in links to your site. (This information isn't available for subdirectories or subdomains.) Here are the steps to leveraging common words:

1. Determine your website's key concepts. If it offers getaways to a cattle ranch in Wyoming, the key concepts may be "cattle ranch," "horseback riding," and "Wyoming."

2. Verify that Google detected the same phrases you believe are of high importance. Log in to webmaster tools, select your verified site, and choose Page analysis from the Statistics tab. Here, under "Common words in your site's content," we list the phrases detected from your site's content in order of prevalence. Do the common words lack any concepts you believe are important? Are they listing phrases that have little direct relevance to your site?

2a. If you're missing important phrases, you should first review your content. Do you have solid, textual information that explains and relates to the key concepts of your site? If in the cattle-ranch example, "horseback riding" was absent from common words, you may then want to review the "activities" page of the site. Does it include mostly images, or only list a schedule of riding lessons, rather than conceptually relevant information?

It may sound obvious, but if you want to rank for a certain set of keywords, but we don't even see those keyword phrases on your website, then ranking for those phrases will be difficult.

2b. When you see general, non-illustrative common words that don't relate helpfully to your site's content (e.g. a top listing of "driving directions" or "contact us"), then it may be beneficial to increase the ratio of relevant content on your site. (Although don't be too worried if you see a few of these common words, as long as you also see words that are relevant to your main topics.) In the cattle ranch example, you would give visitors "driving directions" and "contact us" information. However, if these general, non-illustrative terms surface as the highest-rated common words, or the entire list of common words is only these types of terms, then Google (and likely other search engines) could not find enough "meaty" content.

2c. If you find that many of the common words still don't relate to your site, check out our blog post on unexpected common words.

3. Here are a few of our favorite posts on improving your site's content:
Target visitors or search engines?

Improving your site's indexing and ranking

NEW! SES Chicago - Using Images

4. Should you decide to update your content, please keep in mind that we will need to recrawl your site in order to recognize changes, and that this may take time. Of course, you can notify us of modifications by submitting a Sitemap.

Happy holidays from all of us on the Webmaster Central team!

SES Chicago: Googlers Trevor Foucher, Adam Lasnik and Jonathan Simon

from web contents: Easier management of website verifications 2013


Webmaster level: All

To help webmasters manage the verified owners for their websites in Webmaster Tools, we’ve recently introduced three new features:

  • Verification details view: You can now see the methods used to verify an owner for your site. In the Manage owners page for your site, you can now find the new Verification details link. This screenshot shows the verification details of a user who is verified using both an HTML file uploaded to the site and a meta tag:

    Where appropriate, the Verification details will have links to the correct URL on your site where the verification can be found to help you find it faster.

  • Requiring the verification method be removed from the site before unverifying an owner: You now need to remove the verification method from your site before unverifying an owner from Webmaster Tools. Webmaster Tools now checks the method that the owner used to verify ownership of the site, and will show an error message if the verification is still found. For example, this is the error message shown when an unverification was attempted while the DNS CNAME verification method was still found on the DNS records of the domain:

  • Shorter CNAME verification string: We’ve slightly modified the CNAME verification string to make it shorter to support a larger number of DNS providers. Some systems limit the number of characters that can be used in DNS records, which meant that some users were not able to use the CNAME verification method. We’ve now made the CNAME verification string use fewer characters. Existing CNAME verifications will continue to be valid.
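For reference, the meta tag variant of site verification mentioned in the first bullet generally looks like the snippet below, placed in the home page's <head>; the content value here is a made-up placeholder, so use the token that Webmaster Tools generates for you:

    <meta name="google-site-verification" content="your-verification-token-here" />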

We hope these changes make it easier for you to use Webmaster Tools. As always, please post in our Verification forum if you have any questions or feedback.


from web contents: Get Cooking with the Webmaster Tools API 2013




As the days grow longer and summer takes full stage, many of us are flocking to patios and parks to engage in the time-honored tradition of grilling food. When it comes to cooking outdoors, the types of grills used span a spectrum from primitive to high tech. For some people a small campfire is all that's required for the perfect outdoor dining experience. For other people the preferred tool for outdoor cooking is a quad grill gas-powered stainless steel cooker with enough features to make an Iron Chef rust with envy.



An interesting off-shoot of outdoor cooking techniques is solar cooking, which combines primitive skills and modern ingenuity. At its most basic, solar cooking involves creating an "oven" that is placed in the sun and passively cooks the food it contains. It is simple to get started with solar cooking because a solar oven is something people can make themselves with inexpensive materials and a bit of effort. The appeal of simplicity, inexpensiveness and the ability to "do it yourself" has created a growing group of people who are making solar ovens themselves.
How all this relates to webmasters is that the webmaster community is also made up of a diverse group of people who use a variety of tools in a myriad of ways. Just like how within the outdoor cooking community there's a contingent of people creating their own solar ovens, the webmaster community has a subgroup of people creating and sharing their own tools. From our discussions with webmasters, we've consistently heard requests to open Webmaster Tools for third-party integration. The Webmaster Tools team has taken this request to heart and I'm happy to announce that we're now releasing an API for Webmaster Tools. The supported features in the first version of the Webmaster Tools API are the following:
  • Managing Sites
    • Retrieve a list of your sites in Webmaster Tools

    • Add your sites to Webmaster Tools

    • Verify your sites in Webmaster Tools

    • Remove your sites from Webmaster Tools

  • Working with Sitemaps
    • Retrieve a list of your submitted Sitemaps
    • Add Sitemaps to Webmaster Tools

    • Remove Sitemaps from Webmaster Tools



Although the current API offers a limited subset of all the functionality that Webmaster Tools provides, this is only the beginning. Get started with the Developer's Guide for the Webmaster Tools Data API to begin working with the API.
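As a very rough sketch of what "retrieve a list of your sites" could look like over the wire, the snippet below fetches an Atom feed of sites. The feed URL and the auth header format are assumptions based on the Google Data APIs conventions of the time; treat the Developer's Guide as the authoritative reference:

    import urllib.request

    # Assumed GData sites feed and a placeholder auth token.
    SITES_FEED = "https://www.google.com/webmasters/tools/feeds/sites/"
    AUTH_TOKEN = "your-auth-token-here"  # obtained via the Google Data APIs auth flow

    request = urllib.request.Request(
        SITES_FEED,
        headers={"Authorization": "GoogleLogin auth=" + AUTH_TOKEN},
    )
    with urllib.request.urlopen(request) as response:
        # The response is an Atom feed listing the sites in your account.
        print(response.read().decode("utf-8"))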



Webmasters... fire up your custom tools and get cooking!


from web contents: 'New software version' notifications for your site 2013

Webmaster level: All

One of the great things about working at Google is that we get to take advantage of an enormous amount of computing power to do some really cool things. One idea we tried out was to let webmasters know about their potentially hackable websites. The initial effort was successful enough that we thought we would take it one step further by expanding our efforts to cover other types of web applications—for example, more content management systems (CMSs), forum/bulletin-board applications, stat-trackers, and so on.

This time, however, our goal is not just to isolate vulnerable or hackable software packages, but to also notify webmasters about newer versions of the software packages or plugins they're running on their website. For example, there might be a Drupal module or Joomla extension update available but some folks might not have upgraded. There are a few reasons a webmaster might not upgrade to the newer version and one of the reasons could be that they just don't know a new version exists. This is where we think we can help. We hope to let webmasters know about new versions of their software by sending them a message via Webmaster Tools. This way they can make an informed decision about whether or not they would like to upgrade.

One of the ways we identify sites to notify is by parsing source code of web pages that we crawl. For example, WordPress and other CMS applications include a generator meta tag that specifies the version number. This has proven to be tremendously helpful in our efforts to notify webmasters. So if you're a software developer, and would like us to help you notify your users about newer versions of your software, a great way to start would be to include a generator meta tag that tells the version number of your software. If you're a plugin or a widget developer, including a version number in the source you provide to your users is a great way to help too.
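For example, a generator meta tag of the kind described above is a single line in the page's <head>; the product name and version below are placeholders:

    <meta name="generator" content="ExampleCMS 2.3.1">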

We've seen divided opinions over time about whether it's a good security practice to include a version number in source code, because it lets hackers or worm writers know that the website might be vulnerable to a particular type of exploit. But as Matt Mullenweg pointed out, "Where [a worm writer's] 1.0 might have checked for version numbers, 2.0 just tests [a website's] capabilities...". Meanwhile, the advantage of a version number is that it can help alert site owners when they need to update their site. In the end, we tend to think that including a version number can do more good than harm.

We plan to begin sending out the first of these messages soon and hope that webmasters find them useful! If you have any questions or feedback, feel free to comment here.


from web contents: Structured Data Testing Tool 2013

Webmaster level: All

Today we’re excited to share the launch of a shiny new version of the rich snippet testing tool, now called the structured data testing tool. The major improvements are:
  • We’ve improved how we display rich snippets in the testing tool to better match how they appear in search results.
  • The brand new visual design makes it clearer what structured data we can extract from the page, and how that may be shown in our search results.
  • The tool is now available in languages other than English to help webmasters from around the world build structured-data-enabled websites.
Here’s what it looks like:
The new structured data testing tool works with all supported rich snippets and authorship markup, including applications, products, recipes, reviews, and others.
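As an illustration of the kind of markup the tool can check, here is a small made-up product review using schema.org microdata (all names and values are placeholders):

    <div itemscope itemtype="http://schema.org/Review">
      <span itemprop="itemReviewed" itemscope itemtype="http://schema.org/Product">
        <span itemprop="name">Example Widget</span>
      </span>
      Rating:
      <span itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
        <span itemprop="ratingValue">4</span> out of
        <span itemprop="bestRating">5</span>
      </span>
      by <span itemprop="author">Jane Doe</span>
    </div>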

Try it yourself and, as always, if you have any questions or feedback, please tell us in the Webmaster Help Forum.

Written by Yong Zhu on behalf of the rich snippets testing tool team




from web contents: Make the most of Search Queries in Webmaster Tools 2013

Level: Beginner to Intermediate

If you’re intrigued by the Search Queries feature in Webmaster Tools but aren’t sure how to make it actionable, we have a video that we hope will help!


Maile shares her approach to Search Queries in Webmaster Tools

This video explains the vocabulary of Search Queries, such as:
  • Impressions
  • Average position (only the top-ranking URL for the user’s query is factored in our calculation)
  • Click
  • CTR
The video also reviews an approach to investigating Top queries and Top pages:
  1. Prepare by understanding your website’s goals and your target audience (then using Search Queries “filters” to support your knowledge)
  2. Sort by clicks in Top queries to understand the top queries bringing searchers to your site (for the given time period)
  3. Sort by CTR to notice any missed opportunities
  4. Categorize queries into logical buckets that simplify tracking your progress and staying in touch with users’ needs
  5. Sort Top pages by clicks to find the URLs on your site most visited by searchers (for the given time period)
  6. Sort Top pages by impressions to find valuable pages that can be used to help feature your related, high-quality, but lower-ranking pages
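If you download the Search Queries data as CSV, steps 2 and 3 can be roughly scripted. The sketch below assumes a hypothetical export named queries.csv with "Query", "Impressions" and "Clicks" columns (your export's column names may differ) and uses arbitrary thresholds for "missed opportunities":

    import csv

    with open("queries.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    for row in rows:
        clicks = int(row["Clicks"])
        impressions = int(row["Impressions"])
        row["CTR"] = clicks / impressions if impressions else 0.0

    # Step 2: top queries by clicks.
    by_clicks = sorted(rows, key=lambda r: int(r["Clicks"]), reverse=True)

    # Step 3: lots of impressions but a low CTR may be a missed opportunity.
    missed = [r for r in rows if int(r["Impressions"]) > 1000 and r["CTR"] < 0.01]

    for r in missed[:10]:
        print(r["Query"], r["Impressions"], format(r["CTR"], ".2%"))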
After you’ve watched the video and applied the knowledge of your site with the findings from Search Queries, you’ll likely have several improvement ideas to help searchers find your site. If you’re up for it, let us know in the comments what Search Queries information you find useful (and why!), and of course, as always, feel free to share any tips or feedback.


from web contents: Where's my data? 2013

Today we're going back to basics. We'll be answering the question: What is a website?

...Okay, not exactly. But we will be looking into what a "website" means in the context of Webmaster Tools, what kind of sites you can add to your Webmaster Tools account, and what data you can get from different types of sites.

Why should you care? Well, the following are all questions that we've gotten from webmasters recently:
  • "I know my site has lots of incoming links; why don't I see any in my Webmaster Tools account?"
  • "I see sitelinks for my site in Google's search results, but when I look in Webmaster Tools it says 'No sitelinks have been generated for your site.'"
  • "Why does my Top search queries report still say 'Data is not available at this time'? My site has been verified for months."
In each of these cases, the answer was the same: the data was there, but the webmaster was looking at the wrong "version" of their domain in Webmaster Tools.


A little background
The majority of tools and settings in Webmaster Tools operate on a per-site basis. This means that when you're looking at, say, the Top search queries report, you're only seeing the top search queries for a particular site. Looking at the top queries for www.example.com will show you different data than looking at the top queries for www.example.org. Makes sense, right?

Not all websites have URLs in the form www.example.com, though. Your root URL may not include the www subdomain (example.com); it may include a custom subdomain (rollergirl.example.com); or your site may live in a subfolder, for example if it's hosted on a free hosting site (www.example.com/rollergirl/). Since we want webmasters to be able to access our tools regardless of how their site is hosted, you can add any combination of domain, subdomain(s), and/or subfolder(s) as a "site" on your Webmaster Tools dashboard. Once you've verified your ownership of that site, we'll show you the information we have for that particular piece of the web, however big or small it may be. If you've verified your domain at the root level, we'll show you data for that whole domain; if you've only verified a particular subfolder or subdomain, we'll only show you data for that subfolder or subdomain. Take Blogger as an example—someone who blogs with Blogger should only be able to have access to the data for their own subdomain (e.g. yourblog.blogspot.com), not the entire blogspot.com domain.

What some people overlook is the fact that www is actually a subdomain. It's a very, very common subdomain, and many sites serve the same content whether you access them with or without the www; but the fact remains that example.com and www.example.com are two different URLs and have the potential to serve different content. For this reason, they're considered different sites in Webmaster Tools. Since they're different sites—just like www.example.com and www.example.org—they can have different data. When you're looking at the data for www.example.com (with the www subdomain) you're not seeing the data for example.com (without the subdomain), and vice versa.

What can I do to make sure I'm seeing all my data?
  • If you feel like you're missing some data, add both the www and the non-www version of your domain to your Webmaster Tools account. Take a look at the data for both sites.
  • Do a site: search for your domain without the www (e.g. [site:example.com]). This should return pages from your domain and any of your indexed subdomains (www.example.com, rollergirl.example.com, etc.). You should be able to tell from the results whether your site is mainly indexed with or without the www subdomain. The version that's indexed is likely to be the version that shows the most data in your Webmaster Tools account.
  • Tell us whether you prefer for your site to be indexed with or without the www by setting your preferred domain.
  • Let everyone else know which version you prefer by doing a site-wide 301 redirect.
Even though example.com and www.example.com may look like identical twins, any twins will be quick to tell you that they're not actually the same person. :-) Now that you know, we urge you to give both your www and non-www sites some love in Webmaster Tools, and—as usual—to post any follow-up questions in our Webmaster Help Group.
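As an example, on an Apache server a site-wide 301 from the non-www to the www version might be sketched in .htaccess like this (assuming mod_rewrite is available; swap in your own domain and preferred version):

    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]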


from web contents: Debugging blocked URLs 2013

Vanessa's been posting a lot lately, and I'm starting to feel left out. So here's my tidbit of wisdom for you: I've noticed a couple of webmasters confused by "blocked by robots.txt" errors, and I wanted to share the steps I take when debugging robots.txt problems:

A handy checklist for debugging a blocked URL

Let's assume you are looking at crawl errors for your website and notice a URL restricted by robots.txt that you weren't intending to block:
http://www.example.com/amanda.html URL restricted by robots.txt Sep 3, 2006

Check the robots.txt analysis tool
The first thing you should do is go to the robots.txt analysis tool for that site. Make sure you are looking at the correct site for that URL, paying attention to the protocol and subdomain. (Subdomains and protocols may have their own robots.txt file, so https://www.example.com/robots.txt may be different from http://example.com/robots.txt and may be different from http://amanda.example.com/robots.txt.) Paste the blocked URL into the "Test URLs against this robots.txt file" box. If the tool reports that it is blocked, you've found your problem. If the tool reports that it's allowed, we need to investigate further.

At the top of the robots.txt analysis tool, take a look at the HTTP status code. If we are reporting anything other than a 200 (Success) or a 404 (Not found) then we may not be able to reach your robots.txt file, which stops our crawling process. (Note that you can see the last time we downloaded your robots.txt file at the top of this tool. If you make changes to your file, check this date and time to see if your changes were made after our last download.)
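If you want to reproduce this check outside of Webmaster Tools, Python's standard library can fetch a robots.txt file and test a URL against it; here is a minimal sketch using the example URL above (example.com is a placeholder):

    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser("http://www.example.com/robots.txt")
    parser.read()  # fetches and parses the robots.txt file

    url = "http://www.example.com/amanda.html"
    if parser.can_fetch("Googlebot", url):
        print("robots.txt allows Googlebot to fetch", url)
    else:
        print("robots.txt blocks Googlebot from fetching", url)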

Check for changes in your robots.txt file
If these look fine, you may want to check and see if your robots.txt file has changed since the error occurred by checking the date to see when your robots.txt file was last modified. If it was modified after the date given for the error in the crawl errors, it might be that someone has changed the file so that the new version no longer blocks this URL.

Check for redirects of the URL
If you can be certain that this URL isn't blocked, check to see if the URL redirects to another page. When Googlebot fetches a URL, it checks the robots.txt file to make sure it is allowed to access the URL. If the robots.txt file allows access to the URL, but the URL returns a redirect, Googlebot checks the robots.txt file again to see if the destination URL is accessible. If at any point Googlebot is redirected to a blocked URL, it reports that it could not get the content of the original URL because it was blocked by robots.txt.

Sometimes this behavior is easy to spot because a particular URL always redirects to another one. But sometimes this can be tricky to figure out. For instance:
  • Your site may not have a robots.txt file at all (and therefore, allows access to all pages), but a URL on the site may redirect to a different site, which does have a robots.txt file. In this case, you may see URLs blocked by robots.txt for your site (even though you don't have a robots.txt file).
  • Your site may prompt for registration after a certain number of page views. You may have the registration page blocked by a robots.txt file. In this case, the URL itself may not redirect, but if Googlebot triggers the registration prompt when accessing the URL, it will be redirected to the blocked registration page, and the original URL will be listed in the crawl errors page as blocked by robots.txt.

Ask for help
Finally, if you still can't pinpoint the problem, you might want to post on our forum for help. Be sure to include the URL that is blocked in your message. Sometimes it's easier for other people to notice oversights you may have missed.

Good luck debugging! And by the way -- unrelated to robots.txt -- make sure that you don't have "noindex" meta tags at the top of your web pages; those also result in Google not showing a web site in our index.