News and Tutorials from Votre Codeur | SEO | Website Creation | Software Creation

Learn more about Googlebot's crawl of your site and more!

We've added a few new features to webmaster tools and invite you to check them out.

Googlebot activity reports
Check out these cool charts! We show you the number of pages Googlebot's crawled from your site per day, the number of kilobytes of data Googlebot's downloaded per day, and the average time it took Googlebot to download pages. Webmaster tools show each of these for the last 90 days. Stay tuned for more information about this data and how you can use it to pinpoint issues with your site.

Crawl rate control
Googlebot uses sophisticated algorithms that determine how much to crawl each site. Our goal is to crawl as many pages from your site as we can on each visit without overwhelming your server's bandwidth.

We've been conducting a limited test of a new feature that enables you to provide us information about how we crawl your site. Today, we're making this tool available to everyone. You can access this tool from the Diagnostic tab. If you'd like Googlebot to slow down the crawl of your site, simply choose the Slower option.

If we feel your server could handle the additional bandwidth, and we can crawl your site more, we'll let you know and offer the option for a faster crawl.

If you request a changed crawl rate, the change will last for 90 days. If you'd like to keep the changed rate beyond that, simply return to webmaster tools and make the change again.


Enhanced image search
You can now opt into enhanced image search for the images on your site, which enables our tools such as Google Image Labeler to associate the images included in your site with labels that will improve indexing and search quality of those images. After you've opted in, you can opt out at any time.

Number of URLs submitted
Recently at SES San Jose, a webmaster asked me if we could show the number of URLs we find in a Sitemap. He said that he generates his Sitemaps automatically and he'd like confirmation that the number he thinks he generated is the same number we received. We thought this was a great idea. Simply access the Sitemaps tab to see the number of URLs we found in each Sitemap you've submitted.

As always, we hope you find these updates useful and look forward to hearing what you think.

Pros and cons of watermarked images

Webmaster Level: All

What's our take on watermarked images for Image Search? It's a complicated topic. I talked with Peter Linsley—my friend at the 'plex, video star, and Product Manager for Image Search—to hear his thoughts.

Maile: So, Peter... "watermarked images". Can you break it down for us?
Peter: It's understandable that webmasters find watermarking images beneficial.
Pros of watermarked images
  • Photographers can claim credit/be recognized for their art.
  • Unknown usage of the image is deterred.
If search traffic is important to a webmaster, then he/she may also want to consider some of our findings:
Findings relevant to watermarked images
  • Users prefer large, high-quality images (high-resolution, in-focus).
  • Users are more likely to click on quality thumbnails in search results. Quality pictures (again, high-res and in-focus) often look better at thumbnail size.
  • Distracting features such as loud watermarks, text over the image, and borders are likely to make the image look cluttered when reduced to thumbnail size.
In summary, if a feature such as watermarking reduces the user-perceived quality of your image or your image's thumbnail, then searchers may select it less often. Preview your images at thumbnail size to get an idea of how the user might perceive it.
Maile: Ahh, I see: Webmasters concerned with search traffic likely want to balance the positives of watermarking with the preferences of their users -- keeping in mind that sites that use clean images without distracting artifacts tend to be more popular, and that this can also impact rankings. Will Google rank an image differently just because it's watermarked?
Peter: Nope. The presence of a watermark doesn't itself cause an image to be ranked higher or lower.

Do you have questions or opinions on the topic? Let's chat in the webmaster forum.


Easier URL removals for site owners

Webmaster Level: All

We recently made a change to the Remove URL tool in Webmaster Tools to eliminate the requirement that the webpage's URL must first be blocked by a site owner before the page can be removed from Google's search results. Because you've already verified ownership of the site, we can eliminate this requirement to make it easier for you, as the site owner, to remove unwanted pages (e.g. pages accidentally made public) from Google's search results.

Removals persist for at least 90 days
When a page’s URL is requested for removal, the request is temporary and persists for at least 90 days. We may continue to crawl the page during the 90-day period but we will not display it in the search results. You can still revoke the removal request at any time during those 90 days. After the 90-day period, the page can reappear in our search results, assuming you haven’t made any other changes that could impact the page’s availability.

Permanent removal
In order to permanently remove a URL, you must ensure that one of the following page blocking methods is implemented for the URL of the page that you want removed:
  • the page returns an HTTP 404 (Not Found) or 410 (Gone) status code,
  • the page is blocked from crawling in your robots.txt file, or
  • the page contains a noindex robots meta tag.
This will ensure that the page is permanently removed from Google's search results for as long as the page is blocked. If at any time in the future you remove the previously implemented page blocking method, we may potentially re-crawl and index the page. For immediate and permanent removal, you can request that a page be removed using the Remove URL tool and then permanently block the page’s URL before the 90-day expiration of the removal request.
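For instance, the noindex method is a single robots meta tag placed in the <head> of the page you want kept out of the results (a minimal illustration):

<meta name="robots" content="noindex">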



For more information about URL removals, see our “URL removal explained” blog series covering this topic. If you still have questions about this change or about URL removal requests in general, please post in our Webmaster Help Forum.


Back from SES San Jose

Thanks to everyone who stopped by to say hi at the Search Engine Strategies conference in San Jose last week!

I had a great time meeting people and talking about our new webmaster tools. I got to hear a lot of feedback about what webmasters liked, didn't like, and wanted to see in our Webmaster Central site. For those of you who couldn't make it or didn't find me at the conference, please feel free to post your comments and suggestions in our discussion group. I do want to hear about what you don't understand or what you want changed so I can make our webmaster tools as useful as possible.

Some of the highlights from the week:

This year, Danny Sullivan invited some of us from the team to "chat and chew" during a lunch hour panel discussion. Anyone interested in hearing about Google's webmaster tools was welcome to come and many did -- thanks for joining us! I loved showing off our product, answering questions, and getting feedback about what to work on next. Many people had already tried Sitemaps, but hadn't seen the new features like Preferred domain and full crawling errors.

One of the questions I heard more than once at the lunch was about how big a Sitemap can be, and how to use Sitemaps with very large websites. Since Google can handle all of your URLs, the goal of Sitemaps is to tell us about all of them. A Sitemap file can contain up to 50,000 URLs and should be no larger than 10MB when uncompressed. But if you have more URLs than this, simply break them up into several smaller Sitemaps and tell us about them all. You can create a Sitemap Index file, which is just a list of all your Sitemaps, to make managing several Sitemaps a little easier.
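For reference, a Sitemap Index file is itself a small XML file that simply lists the locations of your individual Sitemaps; you then submit the index file instead of each Sitemap separately. A minimal sketch (the file names are made up for illustration):

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.example.com/sitemap1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://www.example.com/sitemap2.xml</loc>
  </sitemap>
</sitemapindex>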

While hanging out at the Google booth I got another interesting question: One site owner told me that his site is listed in Google, but its description in the search results wasn't exactly what he wanted. (We were using the description of his site listed in the Open Directory Project.) He asked how to remove this description from Google's search results. Vanessa Fox knew the answer! To specifically prevent Google from using the Open Directory for a page's title and description, use the following meta tag:
<meta name="GOOGLEBOT" content="NOODP">

My favorite panel of the week was definitely Pimp My Site. The whole group was dressed to match the theme as they gave some great advice to webmasters. Dax Herrera, the coolest "pimp" up there (and a fantastic piano player), mentioned that a lot of sites don't explain their product clearly on each page. For instance, when pimping Flutter Fetti, there were many instances when all the site had to do was add the word "confetti" to the product description to make it clear to search engines and to users reaching the page exactly what a Flutter Fetti stick is.

Another site pimped was a Yahoo! Stores web site. Someone from the audience asked if the webmaster could set up a Google Sitemap for their store. As Rob Snell pointed out, it's very simple: Yahoo! Stores will create a Google Sitemap for your website automatically, and even verify your ownership of the site in our webmaster tools.

Finally, if you didn't attend the Google dance, you missed out! There were Googlers dancing, eating, and having a great time with all the conference attendees. Vanessa Fox represented my team at the Meet the Google Engineers hour that we held during the dance, and I heard Matt Cutts even starred in a music video! While demo-ing Webmaster Central over in the labs area, someone asked me about the ability to share site information across multiple accounts. We associate your site verification with your Google Account, and allow multiple accounts to verify ownership of a site independently. Each account has its own verification file or meta tag, and you can remove them at any time and re-verify your site to revoke verification of a user. This means that your marketing person, your techie, and your SEO consultant can each verify the same site with their own Google Account. And if you start managing a site that someone else used to manage, all you have to do is add that site to your account and verify site ownership. You don't need to transfer the account information from the person who previously managed it.

Thanks to everyone who visited and gave us feedback. It was great to meet you!

Best practices when moving your site

Planning on moving your site to a new domain? Lots of webmasters find this a scary process. How do you do it without hurting your site's performance in Google search results?


Your aim is to make the transition invisible and seamless to the user, and to make sure that Google knows that your new pages should get the same quality signals as the pages on your old site. When you're moving your site, pesky 404 (File Not Found) errors can harm the user experience and negatively impact your site's performance in Google search results.

Let's cover moving your site to a new domain (for instance, changing from www.example.com to www.example.org). This is different from moving to a new IP address; read this post for more information on that.

Here are the main points:

  • Test the move process by moving the contents of one directory or subdomain first. Then use a 301 Redirect to permanently redirect those pages on your old site to your new site (a sample server configuration is sketched after this list). This tells Google and other search engines that your site has permanently moved.
  • Once this is complete, check to see that the pages on your new site are appearing in Google's search results. When you're satisfied that the move is working correctly, you can move your entire site. Don't do a blanket redirect directing all traffic from your old site to your new home page. This will avoid 404 errors, but it's not a good user experience. A page-to-page redirect (where each page on the old site gets redirected to the corresponding page on the new site) is more work, but gives your users a consistent and transparent experience. If there won't be a 1:1 match between pages on your old and new site, try to make sure that every page on your old site is at least redirected to a new page with similar content.
  • If you're changing your domain because of site rebranding or redesign, you might want to think about doing this in two phases: first, move your site; and second, launch your redesign. This manages the amount of change your users see at any stage in the process, and can make the process seem smoother. Keeping the variables to a minimum also makes it easier to troubleshoot unexpected behavior.
  • Check both external and internal links to pages on your site. Ideally, you should contact the webmaster of each site that links to yours and ask them to update the links to point to the page on your new domain. If this isn't practical, make sure that all pages with incoming links are redirected to your new site. You should also check internal links within your old site, and update them to point to your new domain. Once your content is in place on your new server, use a link checker like Xenu to make sure you don't have broken legacy links on your site. This is especially important if your original content included absolute links (like www.example.com/cooking/recipes/chocolatecake.html) instead of relative links (like .../recipes/chocolatecake.html).
  • To prevent confusion, it's best to make sure you retain control of your old site domain for at least 180 days.
  • Finally, keep both your new and old site verified in Webmaster Tools, and review crawl errors regularly to make sure that the 301s from the old site are working properly, and that the new site isn't showing unwanted 404 errors.
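As a rough illustration of the page-to-page 301 redirect described above, on an Apache server with mod_rewrite enabled, the old domain's .htaccess could redirect every path to the same path on the new domain (the domains match the example above; adapt the rule to your own setup):

RewriteEngine On
# Permanently (301) redirect every URL on the old host to the same path on the new host
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.org/$1 [R=301,L]

If the page structure changed during the move, you would instead list individual Redirect 301 rules mapping each old URL to its new counterpart.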
We'll admit it, moving is never easy - but these steps should help ensure that none of your good web reputation falls off the truck in the process.

Get a more complete picture about how other sites link to you

For quite a while, you've been able to see a list of the most common words used in anchor text to your site. This information is useful, because it helps you know what others think your site is about. How sites link to you has an impact on your traffic from those links, because it describes your site to potential visitors. In addition, anchor text influences the queries your site ranks for in the search results.

Now we've enhanced the information we provide and will show you the complete phrases sites use to link to you, not just individual words. And we've expanded the number we show to 100. To make this information as useful as possible, we're aggregating the phrases by eliminating capitalization and punctuation. For instance, if several sites have linked to your site using the following anchor text:

Site 1 "Buffy, blonde girl, pointy stick"
Site 2 "Buffy blonde girl pointy stick"
Site 3 "buffy: Blonde girl; Pointy stick."

We would aggregate that anchor text and show it as one phrase, as follows:

"buffy blonde girl pointy stick"

You can find this list of phrases by logging into webmaster tools, accessing your site, then going to Statistics > Page analysis. You can view this data in a table and can download it as a CSV file.

And as we told you last month, you can see the individual links to pages of your site by going to Links > External links. We hope these details give you additional insight into your site traffic.

New robots.txt feature and REP Meta Tags

We've improved Webmaster Central's robots.txt analysis tool to recognize Sitemap declarations and relative URLs. Earlier versions weren't aware of Sitemaps at all, and understood only absolute URLs; anything else was reported as Syntax not understood. The improved version now tells you whether your Sitemap's URL and scope are valid. You can also test against relative URLs with a lot less typing.

Reporting is better, too. You'll now be told of multiple problems per line if they exist, unlike earlier versions which only reported the first problem encountered. And we've made other general improvements to analysis and validation.

Imagine that you're responsible for the domain www.example.com and you want search engines to index everything on your site, except for your /images folder. You also want to make sure your Sitemap gets noticed, so you save the following as your robots.txt file:

disalow images

user-agent: *
Disallow:

sitemap: http://www.example.com/sitemap.xml
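As an aside, the file above contains mistakes, most obviously the misspelled "disalow" line, which is exactly the kind of problem the analysis tool is meant to catch. A robots.txt that actually blocks crawling of the /images folder, allows everything else, and declares the Sitemap would look roughly like this:

User-agent: *
Disallow: /images/

Sitemap: http://www.example.com/sitemap.xml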

You visit Webmaster Central to test your site against the robots.txt analysis tool using these two test URLs:

http://www.example.com
/archives

Earlier versions of the tool would have reported this:

[screenshot of the earlier tool's report]

The improved version tells you more about that robots.txt file:

[screenshot of the improved tool's report]

We also want to make sure you've heard about the new unavailable_after meta tag announced by Dan Crow on the Official Google Blog a few weeks ago. This allows for a more dynamic relationship between your site and Googlebot. Just think: any time www.example.com has a temporarily available news story or a limited-time sale or promotion page, you can specify the exact date and time you want specific pages to stop being crawled and indexed.

Let's assume you're running a promotion that expires at the end of 2007. In the headers of page www.example.com/2007promotion.html, you would use the following:

<META NAME="GOOGLEBOT"
CONTENT="unavailable_after: 31-Dec-2007 23:59:59 EST">


The second piece of exciting news: the new X-Robots-Tag directive, which adds Robots Exclusion Protocol (REP) META tag support for non-HTML pages! Finally, you can have the same control over your videos, spreadsheets, and other indexed file types. Using the example above, let's say your promotion page is in PDF format. For www.example.com/2007promotion.pdf, you would use the following:

X-Robots-Tag: unavailable_after: 31 Dec 2007 23:59:59 EST
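Because X-Robots-Tag is an HTTP response header rather than markup inside the document, your web server has to send it. As a rough sketch, on Apache with mod_headers enabled you could attach it to that one PDF like this (using the same file name as the example above):

<Files "2007promotion.pdf">
  Header set X-Robots-Tag "unavailable_after: 31 Dec 2007 23:59:59 EST"
</Files>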


Remember, REP meta tags can be useful for implementing noarchive, nosnippet, and now unavailable_after tags for page-level instruction, as opposed to robots.txt, which is controlled at the domain root. We get requests from bloggers and webmasters for these features, so enjoy. If you have other suggestions, keep them coming. Any questions? Please ask them in the Webmaster Help Group.

Handling legitimate cross-domain content duplication

Webmaster level: Intermediate

We've recently discussed several ways of handling duplicate content on a single website; today we'll look at ways of handling similar duplication across different websites, across different domains. For some sites, there are legitimate reasons to duplicate content across different websites — for instance, to migrate to a new domain name using a web server that cannot create server-side redirects. To help with issues that arise on such sites, we're announcing our support of the cross-domain rel="canonical" link element.



Ways of handling cross-domain content duplication:
  • Choose your preferred domain
    When confronted with duplicate content, search engines will generally take one version and filter the others out. This can also happen when multiple domain names are involved, so while search engines are generally pretty good at choosing something reasonable, many webmasters prefer to make that decision themselves.
  • Enable crawling and use 301 (permanent) redirects where possible
    Where possible, the most important step is often to use appropriate 301 redirects. These redirects send visitors and search engine crawlers to your preferred domain and make it very clear which URL should be indexed. This is generally the preferred method as it gives clear guidance to everyone who accesses the content. Keep in mind that in order for search engine crawlers to discover these redirects, none of the URLs in the redirect chain can be disallowed via a robots.txt file. Don't forget to handle your www / non-www preference with appropriate redirects and in Webmaster Tools.
  • Use the cross-domain rel="canonical" link element
    There are situations where it's not easily possible to set up redirects. This could be the case when you need to move your website from a server that does not feature server-side redirects. In a situation like this, you can use the rel="canonical" link element across domains to specify the exact URL of whichever domain is preferred for indexing. While the rel="canonical" link element is seen as a hint and not an absolute directive, we do try to follow it where possible.
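As a concrete illustration (the domains and path here are hypothetical), if http://www.example.com/product.html has moved to http://www.example.org/product.html but the old server cannot issue redirects, the old page would include the following in its <head> to point search engines at the preferred URL:

<link rel="canonical" href="http://www.example.org/product.html" />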


Still have questions?

Q: Do the pages have to be identical?
A: No, but they should be similar. Slight differences are fine.

Q: For technical reasons I can't include a 1:1 mapping for the URLs on my sites. Can I just point the rel="canonical" at the homepage of my preferred site?
A: No; this could result in problems. A mapping from old URL to new URL for each URL on the old site is the best way to use rel="canonical".

Q: I'm offering my content / product descriptions for syndication. Do my publishers need to use rel="canonical"?
A: We leave this up to you and your publishers. If the content is similar enough, it might make sense to use rel="canonical", if both parties agree.

Q: My server can't do a 301 (permanent) redirect. Can I use rel="canonical" to move my site?
A: If it's at all possible, you should work with your webhost or web server to do a 301 redirect. Keep in mind that we treat rel="canonical" as a hint, and other search engines may handle it differently. But if a 301 redirect is impossible for some reason, then a rel="canonical" may work for you. For more information, see our guidelines on moving your site.

Q: Should I use a noindex robots meta tag on pages with a rel="canonical" link element?
A: No, since those pages would not be equivalent with regards to indexing - one would be allowed while the other would be blocked. Additionally, it's important that these pages are not disallowed from crawling through a robots.txt file, otherwise search engine crawlers will not be able to discover the rel="canonical" link element.

We hope this makes it easier for you to handle duplicate content in a user-friendly way. Are there still places where you feel that duplicate content is causing your sites problems? Let us know in the Webmaster Help Forum!



Managing your reputation through search results

(Cross-posted on the Official Google Blog)

A few years ago I couldn't wait to get married. Because I was in love, yeah; but more importantly, so that I could take my husband's name and people would stop getting that ridiculous picture from college as a top result when they searched for me on Google.

After a few years of working here, though, I've learned that you don't have to change your name just because it brings up some embarrassing search results. Below are some tips for "reputation management": influencing how you're perceived online, and what information is available relating to you.

Think twice

The first step in reputation management is preemptive: Think twice before putting your personal information online. Remember that although something might be appropriate for the context in which you're publishing it, search engines can make it very easy to find that information later, out of context, including by people who don't normally visit the site where you originally posted it. Translation: don't assume that just because your mom doesn't read your blog, she'll never see that post about the new tattoo you're hiding from her.

Tackle it at the source

If something you dislike has already been published, the next step is to try to remove it from the site where it's appearing. Rather than immediately contacting Google, it's important to first remove it from the site where it's being published. Google doesn't own the Internet; our search results simply reflect what's already out there on the web. Whether or not the content appears in Google's search results, people are still going to be able to access it — on the original site, through other search engines, through social networking sites, etc. — if you don't remove it from the original site. You need to tackle this at the source.
  • If the content in question is on a site you own, easy — just remove it. It will naturally drop out of search results after we recrawl the page and discover the change.
  • It's also often easy to remove content from sites you don't own if you put it there, such as photos you've uploaded, or content on your profile page.
  • If you can't remove something yourself, you can contact the site's webmaster and ask them to remove the content or the page in question.
After you or the site's webmaster has removed or edited the page, you can expedite the removal of that content from Google using our URL removal tool.

Proactively publish information

Sometimes, however, you may not be able to get in touch with a site's webmaster, or they may refuse to take down the content in question. For example, if someone posts a negative review of your business on a restaurant review or consumer complaint site, that site might not be willing to remove the review. If you can't get the content removed from the original site, you probably won't be able to completely remove it from Google's search results, either. Instead, you can try to reduce its visibility in the search results by proactively publishing useful, positive information about yourself or your business. If you can get stuff that you want people to see to outperform the stuff you don't want them to see, you'll be able to reduce the amount of harm that that negative or embarrassing content can do to your reputation.

You can publish or encourage positive content in a variety of ways:
  • Create a Google profile. When people search for your name, Google can display a link to your Google profile in our search results and people can click through to see whatever information you choose to publish in your profile.
  • If a customer writes a negative review of your business, you could ask some of your other customers who are happy with your company to give a fuller picture of your business.
  • If a blogger is publishing unflattering photos of you, take some pictures you prefer and publish them in a blog post or two.
  • If a newspaper wrote an article about a court case that put you in a negative light, but which was subsequently ruled in your favor, you can ask them to update the article or publish a follow-up article about your exoneration. (This last one may seem far-fetched, but believe it or not, we've gotten multiple requests from people in this situation.)
Hope these tips have been helpful! Feel free to stop by our Web Search Forum and share your own advice or stories about how you manage your reputation online.


More on 404

Now that we've bid farewell to soft 404s, in this post for 404 week we'll answer your burning 404 questions.

How do you treat the response code 410 "Gone"?
Just like a 404.

Do you index content or follow links from a page with a 404 response code?
We aim to understand as much as possible about your site and its content. So while we wouldn't want to show a hard 404 to users in search results, we may utilize a 404's content or links if it's detected as a signal to help us better understand your site.

Keep in mind that if you want links crawled or content indexed, it's far more beneficial to include them in a non-404 page.

What about 404s with a 10-second meta refresh?
Yahoo! currently utilizes this method on their 404s. They respond with a 404, but the 404 content also shows:

<meta http-equiv="refresh" content="10;url=http://www.yahoo.com/?xxx">

We feel this technique is fine because it reduces confusion by giving users 10 seconds to make a new selection, only offering the homepage after 10 seconds without the user's input.

Should I 301-redirect misspelled 404s to the correct URL?
Redirecting/301-ing 404s is a good idea when it's helpful to users (i.e. not confusing like soft 404s). For instance, if you notice that the Crawl Errors of Webmaster Tools shows a 404 for a misspelled version of your URL, feel free to 301 the misspelled version of the URL to the correct version.

For example, if we saw this 404 in Crawl Errors:
http://www.google.com/webmsters  <-- typo for "webmasters"

we may first correct the typo if it exists on our own site, then 301 the URL to the correct version (as the broken link may occur elsewhere on the web):
http://www.google.com/webmasters

Have you guys seen any good 404s?
Yes, we have! (Confession: no one asked us this question, but few things are as fun to discuss as response codes. :) We've put together a list of some of our favorite 404 pages. If you have more 404-related questions, let us know, and thanks for joining us for 404 week!
http://www.metrokitchen.com/nice-404-page
"If you're looking for an item that's no longer stocked (as I was), this makes it really easy to find an alternative."
-Riona, domestigeek

http://www.comedycentral.com/another-404
"Blame the robot monkeys"
-Reid, tells really bad jokes

http://www.splicemusic.com/and-another
"Boost your 'Time on site' metrics with a 404 page like this."
-Susan, dabbler in music and Analytics

http://www.treachery.net/wow-more-404s
"It's not reassuring, but it's definitive."
-Jonathan, has trained actual spiders to build websites, ants handle the 404s

http://www.apple.com/iPhone4g
"Good with respect to usability."
http://thcnet.net/lost-in-a-forest
"At least there's a mailbox."
-JohnMu, adventurous

http://lookitsme.co.uk/404
"It's pretty cute. :)"
-Jessica, likes cute things

http://www.orangecoat.com/a-404-page.html
"Flow charts rule."
-Sahala, internet traveller

http://icanhascheezburger.com/iz-404-page
"I can has useful links and even e-mail address for questions! But they could have added 'OH NOES! IZ MISSING PAGE! MAYBE TIPO OR BROKN LINKZ?' so folks'd know what's up."
-Adam, lindy hop geek


Video Tutorial: Google for Webmasters
We're always looking for new ways to help educate our fellow webmasters. While you may already be familiar with Webmaster Tools, the Webmaster Help Discussion Groups, this blog, and our Help Center, we've added another resource to help you understand how Google works: a video of a soon-to-come presentation titled "Google for Webmasters." The video introduces how Google discovers, crawls, and indexes your site's pages, and how Google displays them in search results. It also touches lightly upon challenges webmasters and search engines face, such as duplicate content and the effective indexing of Flash and AJAX content. Lastly, it talks about the benefits of offerings like Webmaster Central and other useful Google products.


Take a look for yourself.

Discoverability:



Accessibility - Crawling and Indexing:


Ranking:


Webmaster Central Overview:


Other Resources:



Google Presentations Version:
http://docs.google.com/Presentation?id=dc5x7mrn_245gf8kjwfx

Important links from this presentation as they chronologically appear in the video:
Add your URL to Google
Help Center: Sitemaps
Sitemaps.org
Robots.txt
Meta tags
Best uses of Flash
Best uses of Ajax
Duplicate content
Google's Technology
Google's History
PigeonRank
Help Center: Link Schemes
Help Center: Cloaking
Webmaster Guidelines
Webmaster Central
Google Analytics
Google Website Optimizer
Google Trends
Google Reader
Google Alerts
More Google Products


Special thanks to Wysz, Chark, and Alissa for the voices.


Tips for making information universally accessible



Many people talk about the effect the Internet has on democratizing access to information, but as someone who has been visually impaired since my teenage years, I can certainly speak to the profound impact it has had on my life.

In everyday life, things like a sheet of paper—and anything written on it—are completely inaccessible to a blind or visually impaired user. But with the Internet a new world has opened up for me and so many others. Thanks to modern technology like screen readers, web pages, books, and web applications are now at our fingertips.

In order to help the visually impaired find the most relevant, useful information on the web, and as quickly as possible, we developed Accessible Search. Google Accessible Search identifies and prioritizes search results that are more easily used by blind and visually impaired users – that means pages that are clean and simple (think of the Google homepage!) and that can load without images.

Why should you take the time to make your site more accessible? In addition to the service you'll be doing for the visually-impaired community, accessible sites are more easily crawled, which is a first step in your site's ability to appear in search results.

So what can you do to make your sites more accessible? Well first of all, think simple. In its current version, Google Accessible Search looks at a number of signals by examining the HTML markup found on a web page. It tends to favor pages that degrade gracefully: pages with few visual distractions and that are likely to render well with images turned off. Flashing banners and dancing animals are probably the worst thing you could put on your site if you want its content to be read by an adaptive technology like a screen reader.

Here are some basic tips:
  1. Keep web pages easy to read, avoiding visual clutter and ensuring that the primary purpose of the web page is immediately accessible with full keyboard navigation.

  2. There are many organizations and online resources that offer website owners and authors guidance on how to make websites and pages more accessible for the blind and visually impaired. The W3C publishes numerous guidelines, including the Web Content Accessibility Guidelines (WCAG), which are helpful for website owners and authors.

  3. As with regular search, the best thing you can do with respect to making your site rank highly is to create unique, compelling content. In fact, you can think of the Google crawler as the world's most influential blind user. The content that matters most to the Googlebot is the content that matters most to the blind user: good, quality text.

  4. It's also worth reviewing your content to see how accessible it is for other end users. For example, try browsing your site on a monochrome display or try using your site without a mouse. You may also consider your site's usability through a mobile device like a Blackberry or iPhone.

Fellow webmasters, thanks for taking the time to better understand principles of accessibility. In my next post I'll talk about how to make sure that critical site features, like site navigation, are accessible. Until then!

Introducing smartphone Googlebot-Mobile

Webmaster level: All

With the number of smartphone users rapidly rising, we’re seeing more and more websites providing content specifically designed to be browsed on smartphones. Today we are happy to announce that Googlebot-Mobile now crawls with a smartphone user-agent in addition to its previous feature phone user-agents. This is to increase our coverage of smartphone content and to provide a better search experience for smartphone users.

Here are the main user-agent strings that Googlebot-Mobile now uses:

  • Feature phones Googlebot-Mobile:

    • SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)
    • DoCoMo/2.0 N905i(c100;TB;W24H16) (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)
  • Smartphone Googlebot-Mobile:

    • Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_1 like Mac OS X; en-us) AppleWebKit/532.9 (KHTML, like Gecko) Version/4.0.5 Mobile/8B117 Safari/6531.22.7 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)

The content crawled by smartphone Googlebot-Mobile will be used primarily to improve the user experience on mobile search. For example, the new crawler may discover content specifically optimized to be browsed on smartphones as well as smartphone-specific redirects.

One new feature we’re also launching that uses these signals is Skip Redirect for Smartphone-Optimized Pages. When we discover a URL in our search results that redirects smartphone users to another URL serving smartphone-optimized content, we change the link target shown in the search results to point directly to the final destination URL. This removes the extra latency the redirect introduces, leading to a savings of 0.5 to 1 second on average when visiting the landing page for such search results.

Since all Googlebot-Mobile user-agents identify themselves as a specific kind of mobile device, please treat each Googlebot-Mobile request as you would a human user with the same phone user-agent. This and other guidelines are described in our previous blog post, and they still apply, except for those referring to smartphones, which we are updating today. If your site has treated Googlebot-Mobile specially based on the fact that it previously crawled only with feature phone user-agents, we strongly recommend reviewing this policy and serving the appropriate content based on Googlebot-Mobile's user-agent, so that both your feature phone and smartphone content will be indexed properly.
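As a rough sketch of serving smartphone-optimized content (the m.example.com host and the user-agent patterns are illustrative, not from the original post), an Apache server with mod_rewrite could redirect smartphone browsers, and therefore smartphone Googlebot-Mobile, to the smartphone-specific URLs:

RewriteEngine On
# Smartphone Googlebot-Mobile identifies itself as an iPhone, so it follows the same rule
RewriteCond %{HTTP_USER_AGENT} (iPhone|Android) [NC]
RewriteRule ^(.*)$ http://m.example.com/$1 [R=302,L]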

If you have more questions, please ask on our Webmaster Help forums.


View-all in search results

Webmaster level: Intermediate to Advanced

User testing has taught us that searchers much prefer the view-all, single-page version of content over a component page containing only a portion of the same information with arbitrary page breaks (which cause the user to click “next” and load another URL).


Searchers often prefer the view-all vs. paginated content with arbitrary page breaks and worse latency.

Therefore, to improve the user experience, when we detect that a content series (e.g. page-1.html, page-2.html, etc.) also contains a single-page version (e.g. page-all.html), we’re now making a larger effort to return the single-page version in search results. If your site has a view-all option, there’s nothing you need to do; we’ll work to do it on your behalf. Also, indexing properties, like links, will be consolidated from the component pages in the series to the view-all page.

However, high latency can make the view-all less preferred

Interestingly, the cases when users didn’t prefer the view-all page were correlated with high latency (e.g., when the view-all page took a while to load, say, because it contained many images). This makes sense because we know users are less satisfied with slow results. So while a view-all page is commonly desired, as a webmaster it’s important to balance this preference with the page’s load time and overall user experience.

Best practices for a series of content
  1. If your site includes view-all pages

    We aim to detect the view-all version of your content and, if available, its associated component pages. There’s nothing more you need to do! However, if you’d like to make it more explicit to us, you can include rel=”canonical” from your component pages to your view-all to increase the likelihood that we detect your series of pages appropriately.


    rel=”canonical” can specify the superset of content (i.e. the view-all page, in this case page-all.html) from the same information in a series of URLs.

    Why does this work?

    In the diagram, page-2.html of a series may specify the canonical target as page-all.html because page-all.html is a superset of page-2.html's content. When a user searches for a query term and page-all.html is selected in search results, even if the query is most closely related to page-2.html, we know the user will still see page-2.html’s relevant information within page-all.html.


    On the other hand, page-2.html shouldn’t designate page-1.html as the canonical because page-2.html’s content isn’t included on page-1.html. It’s possible that a user’s search query is relevant to content on page-2.html, but if page-2.html’s canonical is set to page-1.html, the user could then select page-1.html in search results and find herself in a position where she has to further navigate to a different page to arrive at the desired information. That’s a poor experience for the user, a suboptimal result from us, and it could also bring poorly targeted traffic to your site.


    However, if you strongly desire your view-all page not to appear in search results: 1) make sure the component pages in the series don’t include rel=”canonical” to the view-all page, and 2) mark the view-all page as “noindex” using any of the standard methods.
  2. If you’d like to surface individual, component pages (or there’s no view-all available)

    It may be the case that one or both of the situations below apply to your site:

    • The view-all page is undesirable as a search result (e.g., load time too high or too difficult for users to navigate).
    • Your users prefer the multi-page experience and to be directed to a component page in search results, rather than the view-all page.

    If so, you can use standard HTML rel=”next” and rel=”prev” elements to specify a relationship between the component pages in your series of content. If done correctly, Google will generally strive to:

    • Consolidate indexing properties, such as links, between the component pages/URLs.
    • Send users to the most relevant page/URL from the component pages. Typically, the most relevant page is the first page of your content, but our algorithms may point users to one of the component pages in the series.

It’s not uncommon for webmasters to incorrectly use rel=”canonical” from component pages to the first page of their series (e.g. page-2.html with rel=”canonical” to page-1.html). We recommend against this implementation because the component pages don’t actually contain duplicate content. Using rel=”next” and rel=”prev” is far more appropriate.
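To make option 1 above concrete, a component page that has a single-page counterpart declares the view-all URL as its canonical in the <head> (a minimal sketch using the page names from the diagrams above; the domain is made up for illustration):

<!-- in the <head> of page-2.html -->
<link rel="canonical" href="http://www.example.com/page-all.html" />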

Summary

Because users generally prefer the view-all option in search results, we’re making more of an effort to properly detect and serve this version to searchers. If you have a series of content, there’s nothing more you need to do. If you’d like to hint more to Google how best to serve users your information:
  1. To better optimize your view-all page, you can use rel=”canonical” from component pages to the single-page version; otherwise,
  2. If a view-all page doesn’t provide a good user experience for your site, you can use the rel=”next” and rel=”prev” attributes as a strong hint for Google to identify the series of pages and still surface a component page in results.

Questions?

As always, feel free to ask in our Webmaster Help Forum.


Pagination with rel=“next” and rel=“prev”

Webmaster level: Intermediate to Advanced

Much like rel=”canonical” acts as a strong hint for duplicate content, you can now use the HTML link elements rel=”next” and rel=”prev” to indicate the relationship between component URLs in a paginated series. Throughout the web, a paginated series of content may take many shapes—it can be an article divided into several component pages, or a product category with items spread across several pages, or a forum thread divided into a sequence of URLs. Now, if you choose to include rel=”next” and rel=”prev” markup on the component pages within a series, you’re giving Google a strong hint that you’d like us to:
  • Consolidate indexing properties, such as links, from the component pages/URLs to the series as a whole (i.e., links should not remain dispersed between page-1.html, page-2.html, etc., but be grouped with the sequence).
  • Send users to the most relevant page/URL—typically the first page of the series.


The relationship between component URLs in a series can now be indicated to Google through rel=”next” and rel=”prev”.

There’s an exception to the rel=”prev” and rel=”next” implementation: If, alongside your series of content, you also offer users a view-all page, or if you’re considering a view-all page, please see our post on View-all in search results for more information. Because view-all pages are most commonly preferred by searchers, we do our best to surface this version when appropriate in results rather than a component page (component pages are more likely to surface with rel=”next” and rel=”prev”).

If you don’t have a view-all page or you’d like to override Google returning a view-all page, you can use rel="next" and rel="prev" as described in this post.

For information on paginated configurations that include a view-all page, please see our post on View-all in search results.

Outlining your options

Here are three options for a series:
  1. Leave whatever you have exactly as-is. Paginated content exists throughout the web and we’ll continue to strive to give searchers the best result, regardless of the page’s rel=”next”/rel=”prev” HTML markup—or lack thereof.
  2. If you have a view-all page, or are considering a view-all page, see our post on View-all in search results.
  3. Hint to Google the relationship between the component URLs of your series with rel=”next” and rel=”prev”. This helps us more accurately index your content and serve to users the most relevant page (commonly the first page). Implementation details below.

Implementing rel=”next” and rel=”prev”

If you prefer option 3 (above) for your site, let’s get started! Let’s say you have content paginated into the URLs:

http://www.example.com/article?story=abc&page=1
http://www.example.com/article?story=abc&page=2
http://www.example.com/article?story=abc&page=3
http://www.example.com/article?story=abc&page=4

On the first page, http://www.example.com/article?story=abc&page=1, you’d include in the <head> section:
<link rel="next" href="http://www.example.com/article?story=abc&page=2" />

On the second page, http://www.example.com/article?story=abc&page=2:
<link rel="prev" href="http://www.example.com/article?story=abc&page=1" />
<link rel="next" href="http://www.example.com/article?story=abc&page=3" />

On the third page, http://www.example.com/article?story=abc&page=3:
<link rel="prev" href="http://www.example.com/article?story=abc&page=2" />
<link rel="next" href="http://www.example.com/article?story=abc&page=4" />

And on the last page, http://www.example.com/article?story=abc&page=4:
<link rel="prev" href="http://www.example.com/article?story=abc&page=3" />

A few points to mention:
  • The first page only contains rel=”next” and no rel=”prev” markup.
  • Pages two to the second-to-last page should be doubly-linked with both rel=”next” and rel=”prev” markup.
  • The last page only contains markup for rel=”prev”, not rel=”next”.
  • rel=”next” and rel=”prev” values can be either relative or absolute URLs (as allowed by the <link> tag). And, if you include a <base> link in your document, relative paths will resolve according to the base URL.
  • rel=”next” and rel=”prev” only need to be declared within the <head> section, not within the document <body>.
  • We allow rel=”previous” as a syntactic variant of rel=”prev” links.
  • rel="next" and rel="previous" on the one hand and rel="canonical" on the other constitute independent concepts. Both declarations can be included in the same page. For example, http://www.example.com/article?story=abc&page=2&sessionid=123 may contain:

    <link rel="canonical" href="http://www.example.com/article?story=abc&page=2”/>
    <link rel="prev" href="http://www.example.com/article?story=abc&page=1&sessionid=123" />
    <link rel="next" href="http://www.example.com/article?story=abc&page=3&sessionid=123" />

  • rel=”prev” and rel=”next” act as hints to Google, not absolute directives.
  • When implemented incorrectly, such as omitting an expected rel="prev" or rel="next" designation in the series, we'll continue to index the page(s), and rely on our own heuristics to understand your content.

Questions?
More information can be found in our Help Center, or join the conversation in our Webmaster Help Forum!


SES Chicago - Using Images
We all had a great time at SES Chicago last week, answering questions and getting feedback.

One of the sessions I participated in was Images and Search Engines, and the panelists had great information about using images on your site, as well as on optimizing for Google Image search.

Ensuring visitors and search engines know what your content is about
Images on a site are great -- but search engines can't read them, and not all visitors can. Make sure your site is accessible and can be understood by visitors viewing your site with images turned off in their browsers, on mobile devices, and with screen readers. If you do that, search engines won't have any trouble. Some things that you can do to ensure this:

  • Don't put the bulk of your text in images. It may sound simple, but the best thing you can do is to put your text into, well, text. Reserve images for graphical elements. If all of the text on your page is in an image, it becomes inaccessible.
  • Take advantage of the alt attribute for all of your images, and make sure the alt text is descriptive and unique (see the example after this list). For instance, alt text such as "picture1" or "logo" doesn't provide much information about the image. "Charting the path of stock x" and "Company Y" give more details.
  • Don't overload your alt text. Be descriptive, but don't stuff it with extra keywords.
  • It's important to use alt text for any image on your pages, but if your company name, navigation, or other major elements of your pages are in images, alt text becomes especially important. Consider moving vital details to text to ensure all visitors can view them.
  • Look at the image-to-text ratio on your page. How much text do you have? One way of looking at this is to look at your site with images turned off in your browser. What content can you see? Is the intent of your site obvious? Do the pages convey your message effectively?
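For example, descriptive alt text on an image of a stock chart (the file name is made up for illustration) looks like this:

<img src="stock-x-chart.png" alt="Charting the path of stock X">

compared with something uninformative like alt="picture1" on the same image.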

Taking advantage of Image search
The panelists pointed out that shoppers often use Image search to see the things they want to buy. If you have a retail site, make sure that you have images of your products (and that they can be easily identified with alt text, headings, and textual descriptions). Searchers can then find your images and get to your site.

One thing that can help your images be returned for results in Google Image search is opting in to enhanced image search in webmaster tools. This enables us to use your images in the Google Image Labeler, which harnesses the power of the community for adding metadata to your images.

Someone asked if we have a maximum number of images per site that we accept for the Image Labeler. We don't. You can opt in no matter how many, or how few, images your site has.

Update: More information on using images can be found in our Help Center. 

Supporting Facebook Share and RDFa for videos
Have you ever wondered how to increase the chances of your videos appearing in Google's results? Over the last year, the Video Search team has been working hard to improve our index of video on the Web. Today, we're beginning the first in a series of posts to explain some best practices for sites hosting video content.

We previously talked about the importance of submitting a Video Sitemap or mRSS feed to Google and following Google's webmaster guidelines. However, we wanted to offer webmasters an additional tool, so today we're taking a page from the rich snippets playbook and announcing support for Facebook Share and Yahoo! SearchMonkey RDFa. Both of these markup formats allow you to specify information essential to video indexing, such as a video's title and description, within the HTML of a video page. While we've become smarter at discovering this information on our own, we'd certainly appreciate some hints directly from webmasters. Also, to maximize the chances that we find the markup on your video pages, you should make sure it appears in the HTML without the execution of JavaScript or Flash.

So, check out Facebook Share and RDFa and help Google find your videos!

Facebook Share:
<meta name="title" content="Baroo? - cute puppies" />
<meta name="description" content="The cutest canine head tilts on the Internet!" />
<link rel="image_src" href="http://example.com/thumbnail_preview.jpg" />
<link rel="video_src" href="http://example.com/video_object.swf?id=12345"/>
<meta name="video_height" content="296" />
<meta name="video_width" content="512" />
<meta name="video_type" content="application/x-shockwave-flash" />
RDFa (Yahoo! SearchMonkey):
<object width="512" height="296" rel="media:video"
resource="http://example.com/video_object.swf?id=12345"
xmlns:media="http://search.yahoo.com/searchmonkey/media/"
xmlns:dc="http://purl.org/dc/terms/">
<param name="movie" value="http://example.com/video_object.swf?id=12345" />
<embed src="http://example.com/video_object.swf?id=12345"
type="application/x-shockwave-flash" width="512" height="296"></embed>
<a rel="media:thumbnail" href="http://example.com/thumbnail_preview.jpg" />
<a rel="dc:license" href="http://example.com/terms_of_service.html" />
<span property="dc:description" content="Cute Overload defines Baroo? as: Dogspeak for 'Whut the...?'
Frequently accompanied by the Canine Tilt and/or wrinkled brow for enhanced effect." />
<span property="media:title" content="Baroo? - cute puppies" />
<span property="media:width" content="512" />
<span property="media:height" content="296" />
<span property="media:type" content="application/x-shockwave-flash" />
<span property="media:region" content="us" />
<span property="media:region" content="uk" />
<span property="media:duration" content="63" />
</object>

from web contents: Webmaster tips for creating accessible, crawlable sites 2013

[Photo: Raman and Hubbell at home. Hubbell and I enjoying the day at our home in California.]
Please feel free to view my earlier post about accessibility for webmasters, as well as additional articles I've written for the Official Google blog.

One of the most frequently asked questions about Accessible Search is: What can I do to make my site rank well on Accessible Search? At the same time, webmasters often ask a similar but broader question: What can I do to rank high on Google Search?

Well, I'm pleased to tell you that you can kill two birds with one stone: critical site features such as site navigation can be created to work for all users, including our own Googlebot. Below are a few tips for you to consider.

Ensure that all critical content is reachable

To access content, it needs to be reachable. Users and web crawlers reach content by navigating through hyperlinks, so as a critical first step, ensure that all content on your site is reachable via plain HTML hyperlinks, and avoid hiding critical portions of your site behind technologies such as JavaScript or Flash.

Plain hyperlinks are hyperlinks created via an HTML anchor element <a>. Next, ensure that the target of each hyperlink, i.e. the href of the <a> element, is a real URL, rather than an empty placeholder that defers the navigation behavior to an onclick handler.

In short, avoid hyperlinks of the form:
<a href="#" onclick="javascript:void(...)">Product Catalog</a>

Instead, prefer simpler links, such as:
<a href="http://www.example.com/product-catalog.html">Product Catalog</a>

Ensure that content is readable

To be useful, content needs to be readable by everyone. Ensure that all important content on your site is present within the text of HTML documents. Content needs to be available without needing to evaluate scripts on a page. Content hidden behind Flash animations or text generated within the browser by executable JavaScript remains opaque to the Googlebot, as well as to most blind users.
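As a minimal sketch of the difference (the id and wording below are made up for this example), avoid generating critical text only at runtime:

<div id="product-description"></div>
<script>
// this description exists only after the script runs,
// so crawlers and most screen readers never see it
document.getElementById('product-description').innerHTML =
  'Our catalog ships worldwide within five business days.';
</script>

and instead place the same text directly in the HTML, where everyone can read it:

<div id="product-description">
  <p>Our catalog ships worldwide within five business days.</p>
</div>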

Ensure that content is available in reading order

Having discovered and arrived at your readable content, a user needs to be able to follow the content you've put together in its logical reading order. If you are using a complex, multi-column layout for most of the content on your site, you might wish to step back and analyze how you are achieving the desired effect. For example, using deeply-nested HTML tables makes it difficult to link together related pieces of text in a logical manner.

The same effect can often be achieved using CSS and logically organized <div> elements in HTML. As an added bonus, you will find that your site renders much faster as a result.
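As a rough sketch (the ids, widths, and float-based CSS below are assumptions chosen for illustration, not a prescribed layout), a two-column page can keep the main content first in the source so that it is read first, and let the stylesheet handle the visual arrangement:

<div id="main">
  <p>The main article text comes first in the HTML, so readers and crawlers reach it first.</p>
</div>
<div id="sidebar">
  <p>Related links and other secondary material.</p>
</div>

And in the stylesheet:

/* position the columns visually without changing the reading order of the HTML */
#main { float: left; width: 70%; }
#sidebar { float: right; width: 28%; }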

Supplement all visual content--don't be afraid of redundancy!

Making information accessible to all does not mean that you need to 'dumb down' your site to simple text. Providing the same information in more than one form is critical to ensuring that your content is useful to everyone. Here are a few simple tips:
  • Ensure that content communicated via images is available when those images are missing. This goes further than adding appropriate alt attributes to relevant images. Ensure that the text surrounding the image does an adequate job of setting the context for why the image is being used, as well as detailing the conclusions you expect a person seeing the image to draw. In short, if you want to make sure everyone knows it's a picture of a bridge, wrap that text around the image.

  • Add relevant summaries and captions to tables so that the reader can gain a high-level appreciation for the information being conveyed before delving into the details contained within (see the small sketch after this list).

  • Accompany visual animations such as data displays with a detailed textual summary.
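As a small sketch of the table tip (the caption, summary text, and figures below are invented), the caption and summary give the reader the gist before the detailed cells:

<table summary="Visits to the example site in 2009, broken down by traffic source; search accounts for roughly two thirds of the total.">
<caption>Site visits by traffic source, 2009</caption>
<tr><th>Source</th><th>Visits</th></tr>
<tr><td>Search</td><td>12,000</td></tr>
<tr><td>Direct</td><td>4,500</td></tr>
<tr><td>Referral</td><td>1,800</td></tr>
</table>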
Following these simple tips greatly increases the quality of your landing pages for everyone. As a positive side-effect, you'll most likely discover that your site gets better indexed!

from web contents: Configuring URL Parameters in Webmaster Tools 2013

Webmaster Level: Intermediate to Advanced

We recently filmed a video (with slides available) to provide more information about the URL Parameters feature in Webmaster Tools. The URL Parameters feature is designed for webmasters who want to help Google crawl their site more efficiently, and who manage a site with -- you guessed it -- URL parameters! To be eligible for this feature, the URL parameters must be configured in key/value pairs like item=swedish-fish or category=gummy-candy in the URL http://www.example.com/product.php?item=swedish-fish&category=gummy-candy.


Guidance for common cases when configuring URL Parameters. Music in the background masks the ongoing pounding of my neighbor’s construction!

URL Parameter settings are powerful. By telling us how your parameters behave and the recommended action for Googlebot, you can improve your site’s crawl efficiency. On the other hand, if configured incorrectly, you may accidentally recommend that Google ignore important pages, resulting in those pages no longer being available in search results. (There's an example provided in our improved Help Center article.) So please take care when adjusting URL Parameters settings, and be sure that the actions you recommend for Googlebot make sense across your entire site.


from web contents: We created a first steps cheat sheet for friends & family 2013


Webmaster level: beginner
Everyone knows someone who just set up their first blog on Blogger, installed WordPress for the first time, or who has had a web site for some time but never gave search much thought. We came up with a first steps cheat sheet for just these folks. It's a short how-to list with basic tips on search engine-friendly design that can help Google and others better understand your content and increase your site's visibility. We made sure it's available in thirteen languages. Please feel free to read it, print it, share it, copy and distribute it!

We hope this content will help those who are just about to start their webmaster adventure or who have so far not paid much attention to search engine-friendly design. Over time, as you gain experience, you may want to have a look at our more advanced Google SEO Starter Guide. As always, we welcome all webmasters and site owners, new and experienced, to join the discussions on our Google Webmaster Help Forum.


Posted by Kaspar Szymanski, Search Quality Strategist, Dublin

