News and Tutorials from Votre Codeur | SEO | Website Creation | Software Development

From the Google Webmaster Central blog:

Update on July 29, 2010: We've improved our Flash indexing capability and we also now support an AJAX crawling scheme! Please check out the posts (linked above) for more details.

Many webmasters have discovered the advantages of using Ajax to improve the user experience on their sites, creating dynamic pages that act as powerful web applications. But, like Flash, Ajax can make a site difficult for search engines to index if the technology is not implemented carefully. As promised in our post answering questions about server location, cross-linking, and Web 2.0 technology, we've compiled some tips for creating Ajax-enhanced websites that are also understood by search engines.

How will Google see my site?

One of the main issues with Ajax sites is that while Googlebot is great at following and understanding the structure of HTML links, it can have a difficult time finding its way around sites which use JavaScript for navigation. While we are working to better understand JavaScript, your best bet for creating a site that's crawlable by Google and other search engines is to provide HTML links to your content.
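As a minimal, made-up illustration (the loadSection() function and the file name are invented), the second link below is much easier for a crawler to follow than the first, because the destination URL sits in the href rather than only inside JavaScript:

<!-- Hard to crawl: the destination exists only inside JavaScript -->
<a href="#" onclick="loadSection('products'); return false">Products</a>

<!-- Crawlable: a plain HTML link that a script can still enhance -->
<a href="products.html" onclick="loadSection('products'); return false">Products</a>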

Design for accessibility

We encourage webmasters to create pages for users, not just search engines. When you're designing your Ajax site, think about the needs of your users, including those who may not be using a JavaScript-capable browser. There are plenty of such users on the web, including those using screen readers or mobile devices.

One of the easiest ways to test your site's accessibility to this type of user is to explore the site in your browser with JavaScript turned off, or by viewing it in a text-only browser such as Lynx. Viewing a site as text-only can also help you identify other content which may be hard for Googlebot to see, including images and Flash.

Develop with progressive enhancement

If you're starting from scratch, one good approach is to build your site's structure and navigation using only HTML. Then, once you have the site's pages, links, and content in place, you can spice up the appearance and interface with Ajax. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your Ajax bonuses.
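As a rough sketch of that order of work (the page names and element ids here are invented), you might first write plain HTML navigation and content, and only afterwards attach the Ajax layer in a separate script:

<!-- Step 1: plain HTML structure that works entirely on its own -->
<ul id="nav">
  <li><a href="hamsters.html">Hamsters</a></li>
  <li><a href="cages.html">Cages</a></li>
</ul>
<div id="content">
  <p>Welcome to the farm.</p>
</div>

<!-- Step 2: the Ajax bonus, added once the HTML version already works -->
<script>
  // JavaScript users get in-place loading; everyone else, including
  // Googlebot, simply follows the plain links above.
  document.querySelectorAll('#nav a').forEach(function (link) {
    link.addEventListener('click', function (event) {
      event.preventDefault();
      fetch(link.getAttribute('href'))
        .then(function (response) { return response.text(); })
        .then(function (html) {
          document.getElementById('content').innerHTML = html;
        });
    });
  });
</script>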

Of course you will likely have links requiring JavaScript for Ajax functionality, so here's a way to help Ajax and static links coexist:
When creating your links, format them so they'll offer a static link as well as calling a JavaScript function. That way you'll have the Ajax functionality for JavaScript users, while non-JavaScript users can ignore the script and follow the link. For example:

<a href="ajax.html?foo=32" onClick="navigate('ajax.html#foo=32'); return false">foo 32</a>

Note that the static link's URL has a parameter (?foo=32) instead of a fragment (#foo=32), which is used by the Ajax code. This is important, as search engines understand URL parameters but often ignore fragments. Web developer Jeremy Keith labeled this technique as Hijax. Since you now offer static links, users and search engines can link to the exact content they want to share or reference.
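The navigate() function in the example is not spelled out in the post, so the following is only a guess at a minimal implementation. It assumes the Ajax content is served from the same URL the static link points to and is inserted into an element with the id "content":

<script>
  // Hypothetical navigate() helper for the link above. It records the state
  // in the URL fragment (ajax.html#foo=32) so the page stays bookmarkable,
  // then loads the same content that the static URL (ajax.html?foo=32) serves.
  function navigate(target) {                  // e.g. 'ajax.html#foo=32'
    var parts = target.split('#');             // ['ajax.html', 'foo=32']
    location.hash = parts[1];
    fetch(parts[0] + '?' + parts[1])
      .then(function (response) { return response.text(); })
      .then(function (html) {
        document.getElementById('content').innerHTML = html;
      });
  }
</script>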

While we're constantly improving our crawling capability, using HTML links remains a strong way to help us (as well as other search engines, mobile devices and users) better understand your site's structure.

Follow the guidelines

In addition to the tips described here, we encourage you to also check out our Webmaster Guidelines for more information about what can make a site good for Google and your users. The guidelines also point out some practices to avoid, including sneaky JavaScript redirects. A general rule to follow is that while you can provide users different experiences based on their capabilities, the content should remain the same. For example, imagine we've created a page for Wysz's Hamster Farm. The top of the page has a heading of "Wysz's Hamster Farm," and below it is an Ajax-powered slideshow of the latest hamster arrivals. Turning JavaScript off on the same page shouldn't surprise a user with additional text reading:
Wysz's Hamster Farm -- hamsters, best hamsters, cheap hamsters, free hamsters, pets, farms, hamster farmers, dancing hamsters, rodents, hampsters, hamsers, best hamster resource, pet toys, dancing lessons, cute, hamster tricks, pet food, hamster habitat, hamster hotels, hamster birthday gift ideas and more!
A more ideal implementation would display the same text whether JavaScript was enabled or not, and in the best scenario, offer an HTML version of the slideshow to non-JavaScript users.
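One way to reach that best-case scenario, sketched here with invented image names, is to keep the slideshow content in plain HTML, so every visitor and Googlebot sees the same thing, and let a script animate it only when JavaScript is available:

<h1>Wysz's Hamster Farm</h1>
<div id="latest-hamsters">
  <!-- Plain HTML version: identical content for all visitors -->
  <img src="snowball.jpg" alt="Snowball, arrived Monday">
  <img src="biscuit.jpg" alt="Biscuit, arrived Tuesday">
  <img src="pepper.jpg" alt="Pepper, arrived Wednesday">
</div>
<script>
  // Enhancement: rotate the same images as a slideshow for JavaScript users.
  var images = document.querySelectorAll('#latest-hamsters img');
  var current = 0;
  for (var i = 1; i < images.length; i++) {
    images[i].style.display = 'none';
  }
  setInterval(function () {
    images[current].style.display = 'none';
    current = (current + 1) % images.length;
    images[current].style.display = '';
  }, 3000);
</script>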

This is a pretty advanced topic, so please continue the discussion by asking questions and sharing ideas over in the Webmaster Help Group. See you there!

From the Google Webmaster Central blog:

If you're planning to attend the Search Engine Strategies conference next week in New York, be sure to come by and say hi! A whole bunch of us from the Webmaster Central team will be there, looking to talk to you, get your feedback, and answer your questions. Be sure to join us for lunch on Tuesday, April 10th, where we'll spend an hour answering any questions you may have. Then come by our other sessions, or find us in the expo hall or at the bar.

Tuesday, April 10

11:00am - 12:30pm

Ads in a Quality Score World
Nick Fox, Group Business Product Manager, Ads Quality

12:45pm - 1:45pm

Lunch Q&A with Google Webmaster Central
Vanessa Fox, Product Manager, Webmaster Central
Trevor Foucher, Software Engineer
Jonathan Simon, Webmaster Trends Analyst
Maile Ohye, Sitemaps Developer Support Engineer
Nikhil Gore, Test Engineer
Amy Lanfear, Technical Writer
Susan Moskwa, International Test Engineer
Evan Roseman, Software Engineer

Wednesday, April 11

10:30am - 12:00pm

Web Analytics & Measuring Success
Brett Crosby, Product Marketing Manager, Google Analytics

Sitemaps & URL Submission
Maile Ohye, Sitemaps Developer Support Engineer

1:30pm - 2:45pm

Duplicate Content & Multiple Site Issues
Vanessa Fox, Product Manager, Webmaster Central

Meet the Search Ad Networks
Brian Schmidt, Online Sales and Operations Manager

3:15pm - 4:30pm

Earning Money from Contextual Ads
Gavin Bishop, GBS Sales Manager, AdSense

4:45pm - 6:00pm

Landing Page Testing & Tuning
Tom Leung, Product Manager, Google Website Optimizer

robots.txt Summit
Dan Crow, Product Manager

Thursday, April 12

9:00am - 10:15am

Meet the Crawlers
Evan Roseman, Software Engineer

Search Arbitrage Issues
Nick Fox, Group Business Product Manager, Ads Quality

11:00am - 12:15pm

Images & Search Engines
Vanessa Fox, Product Manager, Webmaster Central

4:00pm - 5:15pm

Auditing Paid Listings & Click Fraud Issues
Shuman Ghosemajumder, Business Product Manager, Trust and Safety

Friday, April 13

12:30pm - 1:45pm

Search Engine Q&A on Links
Evan Roseman, Software Engineer

CSS, Ajax, Web 2.0 and Search Engines
Dan Crow, Product Manager

From the Google Webmaster Central blog:

Handling duplicate content within your own website can be a big challenge. Websites grow; features get added, changed, and removed; content comes and content goes. Over time, many websites collect systematic cruft in the form of multiple URLs that return the same content. Having duplicate content on your website is generally not problematic, though it can make it harder for search engines to crawl and index the content. Also, PageRank and similar information found via incoming links can get diffused across pages we aren't currently recognizing as duplicates, potentially making your preferred version of the page rank lower in Google.

Steps for dealing with duplicate content within your website
  1. Recognize duplicate content on your website.
    The first and most important step is to recognize duplicate content on your website. A simple way to do this is to take a unique text snippet from a page and to search for it, limiting the results to pages from your own website by using a site: query in Google. Multiple results for the same content show duplication you can investigate.
  2. Determine your preferred URLs.
    Before fixing duplicate content issues, you'll have to determine your preferred URL structure. Which URL would you prefer to use for that piece of content?
  3. Be consistent within your website.
    Once you've chosen your preferred URLs, make sure to use them in all possible locations within your website (including in your Sitemap file).
  4. Apply 301 permanent redirects where necessary and possible.
    If you can, redirect duplicate URLs to your preferred URLs using a 301 response code. This helps users and search engines find your preferred URLs should they visit the duplicate URLs. If your site is available on several domain names, pick one and use the 301 redirect appropriately from the others, making sure to forward to the right specific page, not just the root of the domain. If you support both www and non-www host names, pick one, use the preferred domain setting in Webmaster Tools, and redirect appropriately.
  5. Implement the rel="canonical" link element on your pages where you can.
    Where 301 redirects are not possible, the rel="canonical" link element can give us a better understanding of your site and of your preferred URLs. The use of this link element is also supported by major search engines such as Ask.com, Bing, and Yahoo!; a short example is shown after this list.
  6. Use the URL parameter handling tool in Google Webmaster Tools where possible.
    If some or all of your website's duplicate content comes from URLs with query parameters, this tool can help you notify us which parameters within your URLs are important and which are irrelevant. More information about this tool can be found in our announcement blog post.
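
For example (the URLs below are invented for illustration), if http://www.example.com/hamsters is the preferred URL, each duplicate version of that page, such as one reached through a session-ID parameter, can point to it from its <head>:

<!-- On http://www.example.com/hamsters?sessionid=1234 and any other duplicate URL -->
<link rel="canonical" href="http://www.example.com/hamsters">

Where you control the server configuration, a 301 redirect from the duplicate URL to the preferred URL (step 4 above) consolidates the signals in the same way.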

What about the robots.txt file?

One item which is missing from this list is disallowing crawling of duplicate content with your robots.txt file. We now recommend not blocking access to duplicate content on your website, whether with a robots.txt file or other methods. Instead, use the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. If access to duplicate content is entirely blocked, search engines effectively have to treat those URLs as separate, unique pages since they cannot know that they're actually just different URLs for the same content. A better solution is to allow them to be crawled, but clearly mark them as duplicate using one of our recommended methods. If you allow us to crawl these URLs, Googlebot will learn rules to identify duplicates just by looking at the URL and should largely avoid unnecessary recrawls in any case. In cases where duplicate content still leads to us crawling too much of your website, you can also adjust the crawl rate setting in Webmaster Tools.

We hope these methods will help you to master the duplicate content on your website! Information about duplicate content in general can also be found in our Help Center. Should you have any questions, feel free to join the discussion in our Webmaster Help Forum.
