News and Tutorials from Votre Codeur | SEO | Website Creation | Software Development

From the Google Webmaster Central blog. Webmaster level: Intermediate

It’s a moment any site owner both looks forward to, and dreads: a huge surge in traffic to your site (yay!) can often cause your site to crash (boo!). Maybe you’ll create a piece of viral content, or get Slashdotted, or maybe Larry Page will get a tattoo and your site on tech tattoos will be suddenly in vogue.

Many people go online immediately after a noteworthy event—a political debate, the death of a celebrity, or a natural disaster—to get news and information about that event. This can cause a rapid increase in traffic to websites that provide relevant information, and may even cause sites to crash at the moment they’re becoming most popular. While it’s not always possible to anticipate such events, you can prepare your site in a variety of ways so that you’ll be ready to handle a sudden surge in traffic if one should occur:
  • Prepare a lightweight version of your site.
    Consider maintaining a lightweight version of your website; you can then switch all of your traffic over to this lightweight version if you start to experience a spike in traffic. One good way to do this is to have a mobile version of your site, and to make the mobile site available to desktop/PC users during periods of high traffic. Another low-effort option is to just maintain a lightweight version of your homepage, since the homepage is often the most-requested page of a site as visitors start there and then navigate out to the specific area of the site that they’re interested in. If a particular article or picture on your site has gone viral, you could similarly create a lightweight version of just that page.
    A couple tips for creating lightweight pages:
    • Exclude decorative elements like images or Flash wherever possible; use text instead of images in the site navigation and chrome, and put most of the content in HTML.
    • Use static HTML pages rather than dynamic ones; the latter place more load on your servers. You can also cache the static output of dynamic pages to reduce server load (a caching sketch follows this list).
  • Take advantage of stable third-party services.
    Another alternative is to host a copy of your site on a third-party service that you know will be able to withstand a heavy stream of traffic. For example, you could create a copy of your site—or a pared-down version with a focus on information relevant to the spike—on a platform like Google Sites or Blogger; use services like Google Docs to host documents or forms; or use a content delivery network (CDN).
  • Use lightweight file formats.
    If you offer downloadable information, try to make the downloaded files as small as possible by using lightweight file formats. For example, offering the same data as a plain text file rather than a PDF can allow users to download the exact same content at a fraction of the filesize (thereby lightening the load on your servers). Also keep in mind that, if it’s not possible to use plain text files, PDFs generated from textual content are more lightweight than PDFs with images in them. Text-based PDFs are also easier for Google to understand and index fully.
  • Make tabular data available in CSV and XML formats.
    If you offer numerical or tabular data (data displayed in tables), we recommend also providing it in CSV and/or XML format. These file types are relatively lightweight, and they make it easy for external developers to reuse your data in other applications or services when you want it to reach as many people as possible, such as in the wake of a natural disaster (see the export sketch after this list).
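
As a rough illustration of the static-caching tip above, here is a minimal sketch in Python that pre-renders pages to flat HTML files a web server can serve directly during a spike. The page list and the `render_page` function are hypothetical stand-ins for whatever your CMS or framework actually uses.

```python
# Sketch: pre-render dynamic pages to static HTML so the web server can
# serve flat files during a traffic spike. render_page() is a hypothetical
# stand-in for your CMS or framework's template rendering.
from pathlib import Path

PAGES = ["index", "about", "contact"]     # hypothetical page list
OUTPUT_DIR = Path("static_cache")         # directory your web server points at

def render_page(name: str) -> str:
    """Placeholder for the dynamic rendering your site normally does."""
    return f"<html><body><h1>{name}</h1><p>Lightweight cached copy.</p></body></html>"

def build_static_cache() -> None:
    OUTPUT_DIR.mkdir(exist_ok=True)
    for name in PAGES:
        html = render_page(name)
        (OUTPUT_DIR / f"{name}.html").write_text(html, encoding="utf-8")
        # The web server (e.g. Apache or nginx) can now serve these files
        # directly, bypassing the application and database entirely.

if __name__ == "__main__":
    build_static_cache()
```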
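
For the tabular-data tip, here is a similar sketch, assuming a small set of made-up rows, that writes the same data as both CSV and XML using only the Python standard library.

```python
# Sketch: publish the same tabular data as lightweight CSV and XML files.
# The sample rows are hypothetical.
import csv
import xml.etree.ElementTree as ET

rows = [
    {"region": "North", "shelters_open": 12, "capacity": 3400},
    {"region": "South", "shelters_open": 8, "capacity": 2100},
]

# CSV: smallest and easiest for spreadsheets and quick downloads.
with open("shelters.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["region", "shelters_open", "capacity"])
    writer.writeheader()
    writer.writerows(rows)

# XML: still lightweight, convenient for external applications and feeds.
root = ET.Element("shelters")
for row in rows:
    item = ET.SubElement(root, "shelter")
    for key, value in row.items():
        ET.SubElement(item, key).text = str(value)
ET.ElementTree(root).write("shelters.xml", encoding="utf-8", xml_declaration=True)
```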
We’d love to hear your tips and tricks for weathering traffic spikes—come join us in our Webmaster Help Forum.

From the Google Webmaster Central blog:

Since duplicate content is a hot topic among webmasters, we thought it might be a good time to address common questions we get asked regularly at conferences and on the Google Webmaster Help Group.

Before diving in, I'd like to briefly touch on a concern webmasters often voice: in most cases a webmaster has no influence on third parties that scrape and redistribute content without the webmaster's consent. We realize this is not the fault of the affected webmaster, and identical content showing up on several sites is not, in itself, regarded as a violation of our webmaster guidelines. It simply triggers further processing to determine the original source of the content, something Google is quite good at: in most cases the original content can be correctly identified, with no negative effects for the site that originated it.

Generally, we can differentiate between two major scenarios for issues related to duplicate content:
  • Within-your-domain-duplicate-content, i.e. identical content which (often unintentionally) appears in more than one place on your site

  • Cross-domain-duplicate-content, i.e. identical content of your site which appears (again, often unintentionally) on different external sites
With the first scenario, you can take matters into your own hands to avoid Google indexing duplicate content on your site. Check out Adam Lasnik's post Deftly dealing with duplicate content and Vanessa Fox's Duplicate content summit at SMX Advanced, both of which give you some great tips on how to resolve duplicate content issues within your site. Here's one additional tip to help avoid content on your site being crawled as duplicate: include the preferred version of your URLs in your Sitemap file. When encountering different pages with the same content, this may help raise the likelihood of us serving the version you prefer. Some additional information on duplicate content can also be found in our comprehensive Help Center article discussing this topic.
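
As an illustration of that Sitemap tip, here is a minimal sketch, assuming a hypothetical list of preferred (canonical) URLs, that writes a basic Sitemap file with the standard library; the URL list is purely illustrative.

```python
# Sketch: generate a simple Sitemap listing only the preferred version of each
# URL (e.g. "https://www.example.com/page" rather than a parameterized variant).
import xml.etree.ElementTree as ET

PREFERRED_URLS = [  # hypothetical canonical URLs
    "https://www.example.com/",
    "https://www.example.com/articles/duplicate-content",
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for url in PREFERRED_URLS:
    ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = url

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```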

In the second scenario, you might have the case of someone scraping your content to put it on a different site, often to try to monetize it. It's also common for many web proxies to index parts of sites which have been accessed through the proxy. When encountering such duplicate content on different sites, we look at various signals to determine which site is the original one, which usually works very well. This also means that you shouldn't be very concerned about seeing negative effects on your site's presence on Google if you notice someone scraping your content.

In cases when you are syndicating your content but also want to make sure your site is identified as the original source, it's useful to ask your syndication partners to include a link back to your original content. You can find some additional tips on dealing with syndicated content in a recent post by Vanessa Fox, Ranking as the original source for content you syndicate.

Some webmasters have asked what could cause scraped content to rank higher than the original source. That should be a rare case, but if you do find yourself in this situation:
  • Check if your content is still accessible to our crawlers. You might unintentionally have blocked access to parts of your content in your robots.txt file (see the sketch after this list).

  • You can look in your Sitemap file to see if you made changes for the particular content which has been scraped.

  • Check if your site is in line with our webmaster guidelines.
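
On the first point, a quick way to sanity-check crawler access is Python's built-in robots.txt parser; the URLs below are placeholders.

```python
# Sketch: check whether Googlebot is allowed to fetch specific URLs
# according to your live robots.txt. URLs here are placeholders.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetches and parses the live robots.txt

for url in [
    "https://www.example.com/articles/original-story",
    "https://www.example.com/images/photo.jpg",
]:
    allowed = rp.can_fetch("Googlebot", url)
    print(f"{url} -> {'crawlable' if allowed else 'blocked by robots.txt'}")
```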
To conclude, I'd like to point out that in the majority of cases, having duplicate content does not have negative effects on your site's presence in the Google index; it simply gets filtered out. If you check out some of the tips mentioned in the resources above, you'll learn how to gain greater control over exactly what we're crawling and indexing and which versions are more likely to appear in the index. Only when there are signals pointing to deliberate and malicious intent might occurrences of duplicate content be considered a violation of the webmaster guidelines.

If you would like to further discuss this topic, feel free to visit our Webmaster Help Group.

For the German version of this post, go to "Duplicate Content aufgrund von Scraper-Sites".
Your URLs provide an opportunity to let search engines and people know what your page is about. Conversely, if you don't pay attention to your URLs, they may provide no value for your site's SEO (search engine optimization) or for your human visitors, either. Badly designed URLs may even trip up search engines or make them think you're spammy.
  • Include a few important keywords in your URLs.

A keyword-infused URL can:
  • Help visitors see that the page they're on is really what they're looking for. Would you rather see example.com/blog/219058 or example.com/blog/cute-puppies? People will see your URL in search results, at the top of their web browser while they're on your page, and any place where they may save the URL for themselves - like in bookmarks, or an email.
  • Give search engines one more indication of what your page is about, and what queries it should rank for. A URL without keywords won't hurt you, but it's a missed opportunity. A competitor who's placed relevant keywords in his URLs may rank higher than you for those keywords.

  • Keep your URLs to fewer than 115 characters.
  • Research shows that people click on short URLs in search results twice as often as long ones. Shorter URLs are also easier to share on social sites like Twitter and StumbleUpon.
  • Long URLs can look like spam. As the URL gets longer, the ranking weight given to each word in the URL gets spread thin, and becomes less valuable for any specific word.
You can manually check the character count of all your URLs to make sure they're not too long. The AboutUs Site Report can do it automatically, and point out any URLs that are longer than 115 characters.
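
If you'd rather script the check yourself, here is a minimal sketch, assuming your URLs are listed in a plain text file named urls.txt (one per line, a hypothetical input).

```python
# Sketch: flag URLs longer than 115 characters.
# Assumes a hypothetical "urls.txt" with one URL per line.
MAX_LENGTH = 115

with open("urls.txt", encoding="utf-8") as f:
    for url in (line.strip() for line in f):
        if url and len(url) > MAX_LENGTH:
            print(f"{len(url):4d} chars  {url}")
```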
  • Don't use more than a few query parameters in your URLs.

In a URL, a ? or & indicates that a parameter (like id=1234) will follow.
Here's an example of an okay URL (the kind you use to track your marketing in Google Analytics) with 1 query parameter:
http://www.example.com/page?source=facebook
Bad URL with too many query parameters:
http://www.example.com/product?id=1234567&foo=abc123def&color=yellow&sort=price
Too many query parameters can cause search engine robots to enter a loop and keep crawling the same pages over and over again. You could end up with search engines failing to index some of your most important pages.
  • Use hyphens instead of underscores in your URLs.
Search engines treat an underscore as part of the word, so keywords joined by underscores are read as one long keyword and lose any SEO benefit they could have provided. A hyphen, however, is treated as a space that separates words. Hyphens are better for SEO because they allow search engines to interpret your web page as relevant for more keyword phrases. That said, Wikipedia's links have underscores, and they seem to be doing okay in search results :-)
Also, people can't see underscores in a URL when the link is underlined, as many links on the Web are. So hyphens are friendlier for people, and make your site more usable.
So... example.com/adorable-kitten-pics is better than example.com/adorable_kitten_pics
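
If you generate URLs from titles, here is a small sketch of a slug function that lowercases the title, strips punctuation, and joins words with hyphens, a simplified version of what most content management systems do.

```python
# Sketch: turn a page title into a hyphenated, keyword-friendly URL slug.
import re

def slugify(title: str) -> str:
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # non-alphanumeric runs become hyphens
    return slug.strip("-")

print(slugify("Adorable Kitten Pics!"))   # adorable-kitten-pics
print(slugify("adorable_kitten_pics"))    # underscores become hyphens too
```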
  • Keep all of your important content less than 3 subfolders deep.
A subfolder is a folder that is visible in a URL between two slashes. For example, in http://www.example.com/articles/name-of-page, articles is a subfolder and name-of-page is an article in that subfolder.
When it comes to subfolders, search engines assume that content living many folders away from the root domain (like example.com) is less important. So it's best to organize all of your important content so each URL has no more than two subfolders.

Here's another way to think about it: Make sure your URLs have 3 or fewer slashes (/) after the domain name. Here is an example URL that is a web page that is two subfolders deep:
http://www.example.com/articles/foo/page-name.htm
Bonus: Using subfolders allows you to use "content drilldown" in Google Analytics to easily view data for all the pages in a given subfolder.
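
To script the slash-count rule, here is a short sketch using the standard library's URL parser; the sample URLs are made up.

```python
# Sketch: count how many slashes follow the domain name in each URL.
# Sample URLs are made up.
from urllib.parse import urlparse

MAX_SLASHES = 3

for url in [
    "http://www.example.com/articles/foo/page-name.htm",  # 3 slashes: OK
    "http://www.example.com/a/b/c/d/page-name.htm",       # 5 slashes: too deep
]:
    slashes = urlparse(url).path.count("/")
    verdict = "OK" if slashes <= MAX_SLASHES else "too deep"
    print(f"{slashes} slashes  {verdict:8s}  {url}")
```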
  • Don't have too many subdomains.

A subdomain is something that comes before the domain name in a URL, for example: http://blog.example.com. Technically speaking, www is itself a subdomain.
Too many subdomains can cause problems for search engine optimization. For more information, read Multiple Subdomains: Classic SEO Mistake.

How to Change Your URLs



Ideally, you'd set up a search-friendly URL structure when you first create your website. Then it just works for you without having to lift another finger.

Even if your website is already built, you should be able to change your URL structure - but it could be a pain in the neck. Most content management systems (CMS) allow you to change your default URL structure, or individual URLs for pages. You'll need to find that setting or option in your platform. If you use WordPress, see WordPress: Built for SEO for details on how to enable SEO-friendly URLs, or "permalinks," as WordPress calls them.

Warning! If you change existing URLs on your website, make sure to permanently redirect (using a 301 redirect) the old URL to the new one. You want to send people and search engines to the right place, not to a 404 error page. Keep in mind that while the 301 redirect will get people and search engine spiders to the right page, a small percentage of the PageRank or link juice from the linking page will be lost along the way.
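
The exact mechanism depends on your server or framework; as one illustration, here is a minimal sketch of a permanent redirect in Flask, reusing the example paths from earlier in this post (they are hypothetical).

```python
# Sketch: permanently redirect an old URL to its new location with a 301.
# Paths are hypothetical; with Apache or nginx you would configure the
# redirect in the server config instead of application code.
from flask import Flask, redirect

app = Flask(__name__)

@app.route("/blog/219058")
def old_post():
    # 301 tells browsers and search engines that the move is permanent.
    return redirect("/blog/cute-puppies", code=301)

if __name__ == "__main__":
    app.run()
```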
Is it worth the effort to change your URLs? The above best practices for URLs can help your site's SEO, but as with other changes on your website, it's good to consider both the potential costs and benefits of the change. For example, the folks at SEOByTheSea.com know what they're doing, but opted to leave their URL structure as is - without keywords - because it would take a huge amount of work for a 6-year-old website with hundreds or thousands of pages to make the change. The site would also lose some link juice and PageRank due to so many redirects. In other words, there are tradeoffs.
Beyond changing the URLs on your website, you can sometimes change the URLs for your listings on other websites. For example, social sites like Facebook allow you to set a "vanity URL" to change a page with a URL like Facebook.com/pages/Company-Name/123456789 to Facebook.com/CompanyName. See 7 Simple Facebook Tricks for instructions. Keep an eye out for this option on other websites if you want to make it easier for people to find you there.






