News and Tutorials from Votre Codeur | SEO | Website Creation | Software Development

From the Google Webmaster Central blog: If you're planning to attend the Search Engine Strategies conference next week in New York, be sure to come by and say hi! A whole bunch of us from the Webmaster Central team will be there, looking to talk to you, get your feedback, and answer your questions. Be sure to join us for lunch on Tuesday, April 10th, where we'll spend an hour answering any questions you may have. And then come by our other sessions, or find us in the expo hall or the bar.

Tuesday, April 10

11:00am - 12:30pm

Ads in a Quality Score World
Nick Fox, Group Business Product Manager, Ads Quality

12:45pm - 1:45pm

Lunch Q&A with Google Webmaster Central
Vanessa Fox, Product Manager, Webmaster Central
Trevor Foucher, Software Engineer
Jonathan Simon, Webmaster Trends Analyst
Maile Ohye, Sitemaps Developer Support Engineer
Nikhil Gore, Test Engineer
Amy Lanfear, Technical Writer
Susan Moskwa, International Test Engineer
Evan Roseman, Software Engineer

Wednesday, April 11

10:30am - 12:00pm

Web Analytics & Measuring Success
Brett Crosby, Product Marketing Manager, Google Analytics

Sitemaps & URL Submission
Maile Ohye, Sitemaps Developer Support Engineer

1:30pm - 2:45pm

Duplicate Content & Multiple Site Issues
Vanessa Fox, Product Manager, Webmaster Central

Meet the Search Ad Networks
Brian Schmidt, Online Sales and Operations Manager

3:15pm - 4:30pm

Earning Money from Contextual Ads
Gavin Bishop, GBS Sales Manager, AdSense

4:45pm - 6:00pm

Landing Page Testing & Tuning
Tom Leung, Product Manager, Google Website Optimizer

robots.txt Summit
Dan Crow, Product Manager

Thursday, April 12

9:00am - 10:15am

Meet the Crawlers
Evan Roseman, Software Engineer

Search Arbitrage Issues
Nick Fox, Group Business Product Manager, Ads Quality

11:00am - 12:15pm

Images & Search Engines
Vanessa Fox, Product Manager, Webmaster Central

4:00pm - 5:15pm

Auditing Paid Listings & Click Fraud Issues
Shuman Ghosemajumder, Business Product Manager, Trust and Safety

Friday, April 13

12:30pm - 1:45pm

Search Engine Q&A on Links
Evan Roseman, Software Engineer

CSS, Ajax, Web 2.0 and Search Engines
Dan Crow, Product Manager
From the Google Webmaster Central blog: Handling duplicate content within your own website can be a big challenge. Websites grow; features get added, changed, and removed; content comes and goes. Over time, many websites collect systematic cruft in the form of multiple URLs that return the same contents. Having duplicate content on your website is generally not problematic, though it can make it harder for search engines to crawl and index the content. Also, PageRank and similar information found via incoming links can get diffused across pages we aren't currently recognizing as duplicates, potentially making your preferred version of the page rank lower in Google.

Steps for dealing with duplicate content within your website
  1. Recognize duplicate content on your website.
    The first and most important step is to recognize duplicate content on your website. A simple way to do this is to take a unique text snippet from a page and to search for it, limiting the results to pages from your own website by using a site: query in Google (for example, "a unique snippet from the page" site:example.com). Multiple results for the same content show duplication you can investigate.
  2. Determine your preferred URLs.
    Before fixing duplicate content issues, you'll have to determine your preferred URL structure. Which URL would you prefer to use for that piece of content?
  3. Be consistent within your website.
    Once you've chosen your preferred URLs, make sure to use them in all possible locations within your website (including in your Sitemap file).
  4. Apply 301 permanent redirects where necessary and possible.
    If you can, redirect duplicate URLs to your preferred URLs using a 301 response code. This helps users and search engines find your preferred URLs should they visit the duplicate URLs. If your site is available on several domain names, pick one and use the 301 redirect appropriately from the others, making sure to forward to the right specific page, not just the root of the domain. If you support both www and non-www host names, pick one, use the preferred domain setting in Webmaster Tools, and redirect appropriately.
  5. Implement the rel="canonical" link element on your pages where you can.
    Where 301 redirects are not possible, the rel="canonical" link element can give us a better understanding of your site and of your preferred URLs. The use of this link element is also supported by major search engines such as Ask.com, Bing, and Yahoo!. (A short sketch covering steps 4 and 5 follows this list.)
  6. Use the URL parameter handling tool in Google Webmaster Tools where possible.
    If some or all of your website's duplicate content comes from URLs with query parameters, this tool can help you to notify us of important and irrelevant parameters within your URLs. More information about this tool can be found in our announcement blog post.
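
To make steps 4 and 5 a bit more concrete, here is a minimal sketch using Python's standard library. The host name www.example.com, the sessionid parameter, and the port are invented for illustration; in practice you would normally configure 301 redirects in your web server or CMS rather than in application code, so treat this only as an outline of the behaviour.

# Minimal sketch: duplicate URLs (here, anything carrying a sessionid
# parameter) answer with a 301 to the preferred URL; everything else is
# served with a rel="canonical" link element pointing at the preferred URL.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

PREFERRED_HOST = "www.example.com"  # hypothetical preferred host name

class RedirectAndCanonicalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse(self.path)
        preferred = f"https://{PREFERRED_HOST}{parsed.path}"
        # Step 4: send duplicate URLs to the preferred URL with a 301.
        if parse_qs(parsed.query).get("sessionid"):
            self.send_response(301)
            self.send_header("Location", preferred)
            self.end_headers()
            return
        # Step 5: where a redirect isn't possible, declare the preferred
        # URL with a rel="canonical" link element in the page head.
        body = (f'<html><head><link rel="canonical" href="{preferred}">'
                f'</head><body>Page content</body></html>')
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("", 8000), RedirectAndCanonicalHandler).serve_forever()

The idea is simply that duplicate URLs answer with a 301 pointing at the preferred URL, and pages that must stay reachable under several URLs declare the preferred one via the rel="canonical" link element.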

What about the robots.txt file?

One item which is missing from this list is disallowing crawling of duplicate content with your robots.txt file. We now recommend not blocking access to duplicate content on your website, whether with a robots.txt file or other methods. Instead, use the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. If access to duplicate content is entirely blocked, search engines effectively have to treat those URLs as separate, unique pages since they cannot know that they're actually just different URLs for the same content. A better solution is to allow them to be crawled, but clearly mark them as duplicate using one of our recommended methods. If you allow us to crawl these URLs, Googlebot will learn rules to identify duplicates just by looking at the URL and should largely avoid unnecessary recrawls in any case. In cases where duplicate content still leads to us crawling too much of your website, you can also adjust the crawl rate setting in Webmaster Tools.
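
If you want a rough way to confirm that a duplicate URL is crawlable and clearly marked, rather than blocked, a small check like the sketch below can help. It uses only Python's standard library, and the URL at the bottom is hypothetical; point it at a duplicate URL you found in step 1 and confirm that it either returns a 301 or carries a rel="canonical" link element.

# Rough self-check: does a known duplicate URL either redirect with a 301
# or declare a canonical URL in its HTML? (Sketch only; URL is made up.)
import http.client
import re
from urllib.parse import urlparse

def check_duplicate(url):
    parsed = urlparse(url)
    conn_cls = (http.client.HTTPSConnection if parsed.scheme == "https"
                else http.client.HTTPConnection)
    conn = conn_cls(parsed.netloc)
    path = parsed.path or "/"
    if parsed.query:
        path += "?" + parsed.query
    # Issue the request directly so a 301 is visible instead of being followed.
    conn.request("GET", path)
    resp = conn.getresponse()
    if resp.status == 301:
        print(url, "-> 301 redirect to", resp.getheader("Location"))
    else:
        html = resp.read().decode("utf-8", errors="replace")
        match = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]*>', html, re.I)
        print(url, "->", match.group(0) if match else "no canonical link element found")
    conn.close()

check_duplicate("https://www.example.com/page?sessionid=123")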

We hope these methods will help you to master the duplicate content on your website! Information about duplicate content in general can also be found in our Help Center. Should you have any questions, feel free to join the discussion in our Webmaster Help Forum.


If you have open source projects hosted on Google Code, you may have noticed that the SSL certificate changed for the googlecode.com domain. (The old certificate expired, and a new one was generated.) In particular, your Subversion client may have yelled about the certificate not being recognized:
Error validating server certificate for
'https://projectname.googlecode.com:443':
- The certificate is not issued by a trusted authority. Use the
fingerprint to validate the certificate manually!
Certificate information:
- Hostname: googlecode.com
- Valid: from Wed, 28 May 2008 16:48:13 GMT until Mon, 21 Jun 2010 14:09:43 GMT
- Issuer: Certification Services Division, Thawte Consulting cc, Cape
Town, Western Cape, ZA
- Fingerprint: b1:3a:d5:38:56:27:52:9f:ba:6c:70:1e:a9:ab:4a:1a:8b:da:ff:ec
(R)eject, accept (t)emporarily or accept (p)ermanently?
Just like a web browser, your Subversion client needs to know whether or not you trust particular SSL certificates coming from servers. You can verify the certificate using the fingerprint above, or you can choose to permanently accept the certificate, whichever makes you feel most comfortable. To permanently accept the certificate, you can simply choose the (p)ermanent option, and Subversion will trust it forever.
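
If you would rather recompute the fingerprint yourself than accept the certificate on faith, one rough way to do it is sketched below in Python (standard library only). It connects to the host named in the prompt above on port 443 and prints the certificate's SHA-1 fingerprint so you can compare it with the one your Subversion client shows.

# Sketch: fetch the server certificate and print its SHA-1 fingerprint
# for manual comparison with the fingerprint shown by the Subversion prompt.
import hashlib
import ssl

host = "googlecode.com"  # the host named in the certificate prompt above
pem = ssl.get_server_certificate((host, 443))
der = ssl.PEM_cert_to_DER_cert(pem)
print(":".join(f"{byte:02x}" for byte in hashlib.sha1(der).digest()))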

Thawte is a large certifying authority, and it's very likely that the OpenSSL libraries on your computer automatically trust any certificate signed by Thawte. However, if you want your Subversion client to inherit that same level of automatic trust, you'll need to set an option in your ~/.subversion/servers file:
[global]
ssl-trust-default-ca = true
If you set this option, then your client will never bug you again about any certificate signed by the "big" authorities.

Happy hacking!