News and Tutorials from Votre Codeur | SEO | Website Creation | Software Development

from web contents: Changes in the Chrome user agent 2013

Webmaster Level: Intermediate to Advanced

The Chrome team is exploring a few changes to Chrome’s UA string. These changes are designed to provide additional details in the user-agent, remove redundancy, and increase compatibility with Internet Explorer. They’re also happening in conjunction with similar changes in Firefox 4.

We intend to ship Chrome 11 with these changes, assuming they don't cause major web compatibility problems. To test them out and ensure your website remains compatible with Chrome, we recommend trying the Chrome Dev and Beta channel builds. If you have any questions, please check out the blog post on the Chromium blog or drop us a line at our help forum.


from web contents: Introducing Page Speed Online, with mobile support 2013

Webmaster level: intermediate

At Google, we’re striving to make the whole web fast. As part of that effort, we’re launching a new web-based tool in Google Labs, Page Speed Online, which analyzes the performance of web pages and gives specific suggestions for making them faster. Page Speed Online is available from any browser, at any time. This allows website owners to get immediate access to Page Speed performance suggestions so they can make their pages faster.

In addition, we’ve added a new feature: the ability to get Page Speed suggestions customized for the mobile version of a page, specifically smartphones. Because of the relatively limited CPU capabilities of mobile devices, the high round-trip times of mobile networks, and the rapid growth of mobile usage, understanding and optimizing for mobile performance is even more critical than for the desktop. Page Speed Online therefore now allows you to easily analyze and optimize your site for mobile performance. The mobile recommendations are tuned to the unique characteristics of mobile devices and include several best practices that go beyond the recommendations for desktop browsers, in order to create a faster mobile experience. New mobile-targeted best practices include eliminating uncacheable landing page redirects and reducing the amount of JavaScript parsed during the page load, two common issues that slow down mobile pages today.
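One way to reduce the amount of JavaScript parsed during the page load, for example, is to load non-critical scripts only after the page has finished loading. Here is a minimal sketch of that idea (the script path is a placeholder, not something from the original post):

window.addEventListener("load", function () {
  // Inject a non-critical script after the initial load, so its JavaScript
  // is not parsed while the page is still loading.
  var script = document.createElement("script");
  script.src = "/js/non-critical.js"; // hypothetical path
  document.body.appendChild(script);
});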


Page Speed Online is powered by the same Page Speed SDK that powers the Chrome and Firefox extensions and webpagetest.org.

Please give Page Speed Online a try. We’re eager to hear your feedback on our mailing list and how you’re using it to optimize your site.


from web contents: 6 Quick Tips for International Websites 2013

Note from the editors: After previously looking into various ways to handle internationalization for Google’s web-search, here’s a post from Google Web Studio team members with tips for web developers.

Many websites exist in more than one language, and more and more websites are being made available in additional languages. Yet building a website for more than one language doesn’t simply mean translation, or localization (L10N), and that’s it. It requires a few more things, all of which are related to internationalization (I18N). In this post we share a few tips for international websites.

1. Make pages I18N-ready in the markup, not the style sheets

Language and directionality are inherent to the contents of the document. Where possible, you should therefore always use markup, not style sheets, for internationalization purposes. Use @lang and @dir, at least on the html element:

<html lang="ar" dir="rtl">

Avoid coming up with your own solutions like special classes or IDs.

As for I18N in style sheets, you can’t always rely on CSS: The CSS spec defines that conforming user agents may ignore properties like direction or unicode-bidi. (For XML, the situation changes again. XML doesn’t offer special internationalization markup, so here it’s advisable to use CSS.)

2. Use one style sheet for all locales

Instead of creating separate style sheets for LTR and RTL directionality, or even each language, bundle everything in one style sheet. That makes your internationalization rules much easier to understand and maintain.

So instead of embedding an alternative style sheet like

<link href="default.rtl.css" rel="stylesheet">

just use your existing

<link href="default.css" rel="stylesheet">

When taking this approach, you’ll need to complement existing CSS rules with their international counterparts:

3. Use the [dir='rtl'] attribute selector

Since we recommend sticking with the style sheet you have (tip #2), you need a different way of selecting the elements you need to style differently for the other directionality. As RTL contents require specific markup (tip #1), this should be easy: for most modern browsers, we can simply use [dir='rtl'].

Here’s an example:

aside {
 float: right;
 margin: 0 0 1em 1em;
}

[dir='rtl'] aside {
 float: left;
 margin: 0 1em 1em 0;
}

4. Use the :lang() pseudo class

To target documents of a particular language, use the :lang() pseudo class. (Note that we’re talking documents here, not text snippets, as targeting snippets of a particular language makes things a little more complex.)

For example, if you discover that bold formatting doesn’t work very well for Chinese documents (which indeed it does not), use the following:

:lang(zh) strong,
:lang(zh) b {
 font-weight: normal;
 color: #900;
}

5. Mirror left- and right-related values

When working with both LTR and RTL contents, it’s important to mirror all the values that change with directionality. Properties to watch out for include everything related to borders, margins, and padding, as well as position-related properties, float, and text-align.

For example, what’s text-align: left in LTR needs to be text-align: right in RTL.

There are tools to make it easy to “flip” directionality. One of them is CSSJanus, though it has been written for the “separate style sheet” realm, not the “same style sheet” one.

6. Keep an eye on the details

Watch out for the following items:
  • Images designed for left or right, like arrows or backgrounds, light sources in box-shadow and text-shadow values, and JavaScript positioning and animations: these may need to be swapped and adjusted for the opposite directionality (see the sketch after this list).
  • Font sizes and fonts, especially for non-Latin alphabets: Depending on the script and font, the default font size may be too small. Consider tweaking the size and, if necessary, the font.
  • CSS specificity: When using the [dir='rtl'] (or [dir='ltr']) hook (tip #3), you’re using a selector of higher specificity. This can lead to issues. Just keep an eye out, and adjust accordingly.
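To illustrate the JavaScript point above, here is a minimal sketch (the element ID is hypothetical) that mirrors a horizontal offset based on the document’s directionality:

var isRtl = document.documentElement.dir === "rtl";
var panel = document.getElementById("slide-in-panel"); // hypothetical element
if (panel) {
  // Offset from the "start" edge: left in LTR, right in RTL.
  panel.style.left = isRtl ? "" : "20px";
  panel.style.right = isRtl ? "20px" : "";
}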

If you have any questions or feedback, check the Internationalization Webmaster Help Forum, or leave your comments here.


from web contents: Beyond PageRank: Graduating to actionable metrics 2013

Webmaster level: Beginner

Like any curious netizen, I have a Google Alert set up to email me whenever my name is mentioned online. Usually I get a slow trickle of my forum posts, blog posts, and tweets. But by far the most popular topic of these alerts over the past couple years has been my off-handed mention that we removed PageRank distribution data from Webmaster Tools in one of our 2009 releases.

The fact that people are still writing about this almost two years later—usually in the context of “Startling news from Susan Moskwa: ...”—really drives home how much PageRank has become a go-to statistic for some webmasters. Even the most inexperienced site owners I talk with have often heard about, and want to know more about, PageRank (“PR”) and what it means for their site. However, as I said in my fateful forum post, the Webmaster Central team has been telling webmasters for years that they shouldn't focus so much on PageRank as a metric for representing the success of one’s website. Today I’d like to explain this position in more detail and give you some relevant, actionable options to fill your time once you stop tracking your PR!

Why PageRank?
In 2008 Udi Manber, VP of engineering at Google, wrote on the Official Google Blog:
“The most famous part of our ranking algorithm is PageRank, an algorithm developed by Larry Page and Sergey Brin, who founded Google. PageRank is still in use today, but it is now a part of a much larger system.”
PageRank may have distinguished Google as a search engine when it was founded in 1998; but given the rate of change Manber describes—launching “about 9 [improvements] per week on the average”—we’ve had a lot of opportunity to augment and refine our ranking systems over the last decade. PageRank is no longer—if it ever was—the be-all and end-all of ranking.

If you look at Google’s Technology Overview, you’ll notice that it calls out relevance as one of the top ingredients in our search results. So why hasn’t as much ink been spilled over relevance as has been over PageRank? I believe it’s because PageRank comes in a number, and relevance doesn’t. Both relevance and PageRank include a lot of complex factors—context, searcher intent, popularity, reliability—but it’s easy to graph your PageRank over time and present it to your CEO in five minutes; not so with relevance. I believe the succinctness of PageRank is why it’s become such a go-to metric for webmasters over the years; but just because something is easy to track doesn’t mean it accurately represents what’s going on on your website.

What do we really want?
I posit that none of us truly care about PageRank as an end goal. PageRank is just a stand-in for what we really want: for our websites to make more money, attract more readers, generate more leads, more newsletter sign-ups, etc. The focus on PageRank as a success metric only works if you assume that a higher PageRank results in better ranking, then assume that that will drive more traffic to your site, then assume that that will lead to more people doing-whatever-you-want-them-to-do on your site. On top of these assumptions, remember that we only update the PageRank displayed on the Google Toolbar a few times a year, and we may lower the PageRank displayed for some sites if we believe they’re engaging in spammy practices. So the PR you see publicly is different from the number our algorithm actually uses for ranking. Why bother with a number that’s at best three steps removed from your actual goal, when you could instead directly measure what you want to achieve? Finding metrics that are directly related to your business goals allows you to spend your time furthering those goals.

If I don’t track my PageRank, what should I be tracking?
Take a look at metrics that correspond directly to meaningful gains for your website or business, rather than just focusing on ranking signals. Also consider metrics that are updated daily or weekly, rather than numbers (like PageRank) that only change a few times a year; the latter is far too slow for you to reliably understand which of your changes resulted in the number going up or down (assuming you update your site more than a few times a year). Here are three suggestions to get you started, all of which you can track using services like Google Analytics or Webmaster Tools:
  1. Conversion rate
  2. Bounce rate
  3. Clickthrough rate (CTR)
Conversion rate
A “conversion” is when a visitor does what you want them to do on your website. A conversion might be completing a purchase, signing up for a mailing list, or downloading a white paper. Your conversion rate is the percentage of visitors to your site who convert (perform a conversion). This is a perfect example of a metric that, unlike PageRank, is directly tied to your business goals. When users convert they’re doing something that directly benefits your organization in a measurable way! Whereas your PageRank is both difficult to measure accurately (see above), and can go up or down without having any direct effect on your business.

Bounce rate
A “bounce” is when someone comes to your website and then leaves without visiting any other pages on your site. Your bounce rate is the percentage of visits to your site where the visitor bounces. A high bounce rate may indicate that users don’t find your site compelling, because they come, take a look, and leave directly. Looking at the bounce rates of different pages across your site can help you identify content that’s underperforming and point you to areas of your site that may need work. After all, it doesn’t matter how well your site ranks if most searchers are bouncing off of it as soon as they visit.

Clickthrough rate (CTR)
In the context of organic search results, your clickthrough rate is how often people click on your site out of all the times your site gets shown in search results. A low CTR means that, no matter how well your site is ranking, users aren’t clicking through to it. This may indicate that they don’t think your site will meet their needs, or that some other site looks better. One way to improve your CTR is to look at your site’s titles and snippets in our search results: are they compelling? Do they accurately represent the content of each URL? Do they give searchers a reason to click on them? Here’s some advice for improving your snippets; the HTML suggestions section of Webmaster Tools can also point you to pages that may need help. Again, remember that it doesn’t matter how well your site ranks if searchers don’t want to click on it.
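To make the three definitions concrete, here is a minimal sketch (not from the original post) of the underlying arithmetic; analytics tools compute these rates for you:

// All three metrics are simple ratios expressed as percentages.
function conversionRate(conversions, visitors) {
  return (conversions / visitors) * 100; // % of visitors who convert
}
function bounceRate(singlePageVisits, totalVisits) {
  return (singlePageVisits / totalVisits) * 100; // % of visits that bounce
}
function clickthroughRate(clicks, impressions) {
  return (clicks / impressions) * 100; // % of impressions that get a click
}

// For example, 25 conversions out of 1,000 visitors:
console.log(conversionRate(25, 1000)); // 2.5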

Entire blogs and books have been dedicated to explaining and exploring web metrics, so you’ll excuse me if my explanations just scrape the surface; analytics evangelist Avinash Kaushik’s site is a great place to start if you want to dig deeper into these topics. But hopefully I’ve at least convinced you that there are more direct, effective and controllable ways to measure your site’s success than PageRank.

One final note: Some site owners are interested in their site’s PR because people won’t buy links from their site unless they have a high PageRank. Buying or selling links for the purpose of passing PageRank violates our Webmaster Guidelines and is very likely to have negative consequences for your website, so a) I strongly recommend against it, and b) don’t be surprised if we aren’t interested in helping you raise your PageRank or improve your website when this is your stated goal.

We’d love to hear what metrics you’ve found useful and actionable for your website! Feel free to share your success stories with us in the comments here or in our Webmaster Help Forum.


from web contents: Open redirect URLs: Is your site being abused? 2013

No one wants malware or spammy URLs inserted onto their domain, which is why we all try to follow good security practices. But what if there were a way for spammers to take advantage of your site without ever setting a virtual foot on your server?

There is, by abusing open redirect URLs.

Webmasters face a number of situations where it's helpful to redirect users to another page. Unfortunately, redirects left open to any arbitrary destination can be abused. This is a particularly onerous form of abuse because it takes advantage of your site's functionality rather than exploiting a simple bug or security flaw. Spammers hope to use your domain as a temporary "landing page" to trick email users, searchers and search engines into following links which appear to be pointing to your site, but actually redirect to their spammy site.

We at Google are working hard to keep the abused URLs out of our index, but it's important for you to make sure your site is not being used in this way. Chances are you don't want users finding URLs on your domain that push them to a screen full of unwanted porn, nasty viruses and malware, or phishing attempts. Spammers will generate links to make the redirects appear in search results, and these links tend to come from bad neighborhoods you don't want to be associated with.

This sort of abuse has become relatively common lately so we wanted to get the word out to you and your fellow webmasters. First we'll give some examples of redirects that are actively being abused, then we'll talk about how to find out if your site is being abused and what to do about it.

Redirects being abused by spammers

We have noticed spammers going after a wide range of websites, from large well-known companies to small local government agencies. The list below is a sample of the kinds of redirects we have seen used. These are all perfectly legitimate techniques, but if they're used on your site you should watch out for abuse.

  • Scripts that redirect users to a file on the server—such as a PDF document—can sometimes be vulnerable. If you use a content management system (CMS) that allows you to upload files, you might want to make sure the links go straight to the file, rather than going through a redirect. This includes any redirects you might have in the downloads section of your site. Watch out for links like this:
example.com/go.php?url=
example.com/ie/ie40/download/?

  • Internal site search result pages sometimes have automatic redirect options that could be vulnerable. Look for patterns like this, where users are automatically sent to any page after the "url=" parameter:
example.com/search?q=user+search+keywords&url=

  • Systems to track clicks for affiliate programs, ad programs, or site statistics might be open as well. Some example URLs include:
example.com/coupon.jsp?code=ABCDEF&url=
example.com/cs.html?url=

  • Proxy sites, though not always technically redirects, are designed to send users through to other sites and therefore can be vulnerable to this abuse. This includes those used by schools and libraries. For example:
proxy.example.com/?url=

  • In some cases, login pages will redirect users back to the page they were trying to access. Look out for URL parameters like this:
example.com/login?url=

  • Scripts that put up an interstitial page when users leave a site can be abused. Lots of educational, government, and large corporate web sites do this to let users know that information found on outgoing links isn't under their control. Look for URLs following patterns like this:
example.com/redirect/
example.com/out?
example.com/cgi-bin/redirect.cgi?

Is my site being abused?

Even if none of the patterns above look familiar, your site may have open redirects to keep an eye on. There are a number of ways to see if you are vulnerable, even if you are not a developer yourself.

  • Check if abused URLs are showing up in Google. Try a site: search on your site to see if anything unfamiliar shows up in Google's results for your site. You can add words to the query that are unlikely to appear in your content, such as commercial terms or adult language. If the query [site:example.com viagra] isn't supposed to return any pages on your site and it does, that could be a problem. You can even automate these searches with Google Alerts.

  • You can also watch out for strange queries showing up in the Top search queries section of Webmaster Tools. If you have a site dedicated to the genealogy of the landed gentry, a large number of queries for porn, pills, or casinos might be a red flag. On the other hand, if you have a drug info site, you might not expect to see celebrities in your top queries. Keep an eye on the Message Center in Webmaster Tools for any messages from Google.

  • Check your server logs or web analytics package for unfamiliar URL parameters (like "=http:" or "=//") or spikes in traffic to redirect URLs on your site. You can also check the pages with external links in Webmaster Tools. (A small log-scanning sketch follows this list.)

  • Watch out for user complaints about content or malware that you know for sure can not be found on your site. Your users may have seen your domain in the URL before being redirected and assumed they were still on your site.
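Here is a minimal sketch of the log check mentioned above (Node.js; the log file path is a placeholder). It flags requests whose query string contains a full URL:

const fs = require("fs");
const readline = require("readline");

const rl = readline.createInterface({
  input: fs.createReadStream("access.log"), // hypothetical log file path
});

rl.on("line", (line) => {
  // A parameter whose value is a full URL ("=http://", "=https://", "=//")
  // is a common sign that a redirect script is being abused.
  if (/[?&][^=\s]*=(https?:\/\/|\/\/)/i.test(line)) {
    console.log("Possible redirect abuse:", line);
  }
});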


What you can do

Unfortunately there is no one easy way to make sure that your redirects aren't exploited. An open redirect isn't a bug or a security flaw in and of itself—for some uses they have to be left fairly open. But there are a few things you can do to prevent your redirects from being abused or at least to make them less attractive targets. Some of these aren't trivial; you may need to write some custom code or talk to your vendor about releasing a patch.

  • Change the redirect code to check the referer, since in most cases everyone coming to your redirect script legitimately should come from your site, not a search engine or elsewhere. You may need to be permissive, since some users' browsers may not report a referer, but if you know a user is coming from an external site you can stop or warn them.

  • If your script should only ever send users to an internal page or file (for example, on a page with file downloads), you should specifically disallow off-site redirects.

  • Consider using a whitelist of safe destinations. In this case your code would keep a record of all outgoing links, and then check to make sure the redirect is a legitimate destination before forwarding the user on.

  • Consider signing your redirects. If your website does have a genuine need to provide URL redirects, you can hash the destination URL and then include that cryptographic signature as another parameter when doing the redirect. That allows your own site to do URL redirection without opening your URL redirector to the general public (see the sketch after this list).

  • If your site is really not using it, just disable or remove the redirect. We have noticed a large number of sites where the only use of the redirect is by spammers—it's probably just a feature left turned on by default.

  • Use robots.txt to exclude search engines from the redirect scripts on your site. This won't solve the problem completely, as attackers could still use your domain in email spam. Your site will be less attractive to attackers, though, and users won't get tricked via web search results. If your redirect scripts reside in a subfolder with other scripts that don't need to appear in search results, excluding the entire subfolder may even make it harder for spammers to find redirect scripts in the first place.
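Here is a minimal sketch of the signed-redirect idea from the list above (Node.js; the secret and parameter names are assumptions, not a prescribed implementation):

const crypto = require("crypto");
const SECRET = process.env.REDIRECT_SECRET || "change-me"; // hypothetical secret

// Sign the destination when generating a redirect link, e.g.
// /redirect?url=<destination>&sig=<signUrl(destination)>
function signUrl(destination) {
  return crypto.createHmac("sha256", SECRET).update(destination).digest("hex");
}

// Verify the signature before redirecting; reject anything that doesn't match.
function isValidRedirect(destination, signature) {
  const expected = signUrl(destination);
  const provided = String(signature || "");
  if (provided.length !== expected.length) return false;
  return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(provided));
}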



Open redirect abuse is a big issue right now but we think that the more webmasters know about it, the harder it will be for the bad guys to take advantage of unwary sites. Please feel free to leave any helpful tips in the comments below or discuss in our Webmaster Help Forum.


from web contents: Mo’ better to also detect “mobile” user-agent 2013

Webmaster Level: Intermediate to Advanced

Here’s a trending User-Agent detection misstep we hope to help you prevent: While it seems completely reasonable to key off the string “android” in the User-Agent and then redirect users to your mobile version, there’s a small catch... Android tablets were just released! Similar to mobile, the User-Agent on Android tablets also contains “android,” yet tablet users usually prefer the full desktop version over the mobile equivalent. If your site matches “android” and then automatically redirects users, you may be forcing Android tablet users into a sub-optimal experience.

As a solution for mobile sites, our Android engineers recommend specifically detecting “mobile” in the User-Agent string, in addition to “android.” Let’s run through a few examples.

With a User-Agent like this:
Mozilla/5.0 (Linux; U; Android 3.0; en-us; Xoom Build/HRI39) AppleWebKit/534.13 (KHTML, like Gecko) Version/4.0 Safari/534.13
since there is no “mobile” string, serve this user the desktop version (or a version customized for Android large-screen touch devices). The User-Agent tells us they’re coming from a large-screen device, the XOOM tablet.

On the other hand, this User-Agent:
Mozilla/5.0 (Linux; U; Android 2.2.1; en-us; Nexus One Build/FRG83) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1
contains “mobile” and “android,” so serve the web surfer on this Nexus One the mobile experience!

You’ll notice that Android User-Agents have commonalities: they all contain the string “android,” but only the User-Agents of phone-sized devices also contain “mobile.”


While you may still want to detect “android” in the User-Agent to implement Android-specific features, such as touch-screen optimizations, our main message is: if your mobile site depends on UA sniffing, please detect the strings “mobile” and “android,” rather than just “android,” in the User-Agent. This helps properly serve both your mobile and tablet visitors.
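Here is a minimal sketch of that check (the function name is ours, not part of the original post); a request whose User-Agent passes it gets the mobile version, everything else gets the desktop or tablet-optimized version:

function isAndroidMobile(userAgent) {
  var ua = userAgent.toLowerCase();
  // Android phones contain both "android" and "mobile";
  // Android tablets contain "android" but not "mobile".
  return ua.indexOf("android") > -1 && ua.indexOf("mobile") > -1;
}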

For questions, please join our Android community in their developer forum.


from web contents: URL removal explained, Part I: URLs & directories 2013

Webmaster level: All

There's a lot of content on the Internet these days. At some point, something may turn up online that you would rather not have out there—anything from an inflammatory blog post you regret publishing, to confidential data that accidentally got exposed. In most cases, deleting or restricting access to this content will cause it to naturally drop out of search results after a while. However, if you urgently need to remove unwanted content that has gotten indexed by Google and you can't wait for it to naturally disappear, you can use our URL removal tool to expedite the removal of content from our search results as long as it meets certain criteria (which we'll discuss below).

We've got a series of blog posts lined up for you explaining how to successfully remove various types of content, and common mistakes to avoid. In this first post, I'm going to cover a few basic scenarios: removing a single URL, removing an entire directory or site, and reincluding removed content. I also strongly recommend our previous post on managing what information is available about you online.

Removing a single URL

In general, in order for your removal requests to be successful, the owner of the URL(s) in question—whether that's you, or someone else—must have indicated that it's okay to remove that content. For an individual URL, this can be indicated in any of three ways: by disallowing the URL in the site's robots.txt file, by adding a noindex meta tag to the page, or by returning a 404 or 410 status code for the URL.
Before submitting a removal request, you can check whether the URL is correctly blocked:
  • robots.txt: You can check whether the URL is correctly disallowed using either the Fetch as Googlebot or Test robots.txt features in Webmaster Tools.
  • noindex meta tag: You can use Fetch as Googlebot to make sure the meta tag appears somewhere between the <head> and </head> tags. If you want to check a page you can't verify in Webmaster Tools, you can open the URL in a browser, go to View > Page source, and make sure you see the meta tag between the <head> and </head> tags.
  • 404 / 410 status code: You can use Fetch as Googlebot, or tools like Live HTTP Headers or web-sniffer.net to verify whether the URL is actually returning the correct code. Sometimes "deleted" pages may say "404" or "Not found" on the page, but actually return a 200 status code in the page header; so it's good to use a proper header-checking tool to double-check.
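As an illustration (not part of the original post), here is a minimal sketch that checks the real status code of a supposedly deleted page from a modern browser console or Node 18+; the URL is a placeholder:

fetch("https://www.example.com/deleted-page.html", { method: "HEAD" })
  .then(function (response) {
    // For a removal request, you want to see 404 or 410 here, not 200.
    console.log("Status:", response.status);
  });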
If unwanted content has been removed from a page but the page hasn't been blocked in any of the above ways, you will not be able to completely remove that URL from our search results. This is most common when you don't own the site that's hosting that content. We cover what to do in this situation in Part II of our removals series.

If a URL meets one of the above criteria, you can remove it by going to http://www.google.com/webmasters/tools/removals, entering the URL that you want to remove, and selecting the "Webmaster has already blocked the page" option. Note that you should enter the URL where the content was hosted, not the URL of the Google search where it's appearing. For example, enter
   http://www.example.com/embarrassing-stuff.html
not
   http://www.google.com/search?q=embarrassing+stuff

This article has more details about making sure you're entering the proper URL. Remember that if you don't tell us the exact URL that's troubling you, we won't be able to remove the content you had in mind.

Removing an entire directory or site

In order for a directory or site-wide removal to be successful, the directory or site must be disallowed in the site's robots.txt file. For example, in order to remove the http://www.example.com/secret/ directory, your robots.txt file would need to include:
   User-agent: *
   Disallow: /secret/

It isn't enough for the root of the directory to return a 404 status code, because it's possible for a directory to return a 404 but still serve out files underneath it. Using robots.txt to block a directory (or an entire site) ensures that all the URLs under that directory (or site) are blocked as well. You can test whether a directory has been blocked correctly using either the Fetch as Googlebot or Test robots.txt features in Webmaster Tools.

Only verified owners of a site can request removal of an entire site or directory in Webmaster Tools. To request removal of a directory or site, click on the site in question, then go to Site configuration > Crawler access > Remove URL. If you enter the root of your site as the URL you want to remove, you'll be asked to confirm that you want to remove the entire site. If you enter a subdirectory, select the "Remove directory" option from the drop-down menu.

Reincluding content

You can cancel removal requests for any site you own at any time, including those submitted by other people. In order to do so, you must be a verified owner of this site in Webmaster Tools. Once you've verified ownership, you can go to Site configuration > Crawler access > Remove URL > Removed URLs (or > Made by others) and click "Cancel" next to any requests you wish to cancel.

Still have questions? Stay tuned for the rest of our series on removing content from Google's search results. If you can't wait, much has already been written about URL removals, and troubleshooting individual cases, in our Help Forum. If you still have questions after reading others' experiences, feel free to ask. Note that, in most cases, it's hard to give relevant advice about a particular removal without knowing the site or URL in question. We recommend sharing your URL by using a URL shortening service so that the URL you're concerned about doesn't get indexed as part of your post; some shortening services will even let you disable the shortcut later on, once your question has been resolved.

Edit: Read the rest of this series:
Part II: Removing & updating cached content
Part III: Removing content you don't own
Part IV: Tracking requests, what not to remove

Companion post: Managing what information is available about you online


from web contents: SEO Starter Guide updated 2013

Webmaster Level: Beginner

Update on October 3, 2010: We have fixed the issue causing the highlighted text to be obscured on Linux PDF readers.

About two years ago we published our first SEO Starter Guide, which we have since translated into 40 languages. Today, we’re very happy to share with you the new version of the guide with more content and examples.

Here’s what’s new:
  • Glossary to define terms throughout the guide
  • More example images to help you understand the content
  • Ways to optimize your site for mobile devices
  • Clearer wording for better readability
You may remember getting to see what Googlebot looks like in our “First date with Googlebot” post. In this version of the SEO Starter Guide, Googlebot is back to provide you with some more SEO tips.

You can download the new version here [PDF]. Entertain and impress your friends by leaving a printed copy on your coffee table.



from web contents: The Webmaster Academy goes international 2013

Webmaster level: All

Since we launched the Webmaster Academy in English back in May 2012, its educational content has been viewed well over 1 million times.

The Webmaster Academy was built to guide webmasters in creating great sites that perform well in Google search results. It is an ideal guide for beginner webmasters but also a recommended read for experienced users who wish to learn more about advanced topics.

To support webmasters across the globe, we’re happy to announce that we’re launching the Webmaster Academy in 20 languages. So whether you speak Japanese or Italian, we hope we can help you to make even better websites! You can easily access it through Webmaster Central.

We’d love to read your comments here and invite you to join the discussion in the help forums.



from web contents: Chrome Extensions for web development 2013

Webmaster Level: All

The Chrome Developer Tools are great for debugging HTML, JavaScript and CSS in Chrome. If you're writing a webpage or even a web app for the Chrome Web Store, you can inspect elements in the DOM, debug live JavaScript, and edit CSS styles directly in the current page. Extensions can make Google Chrome an even better web development environment by providing additional features that you can easily access in your browser. To help developers like you, we created a page that features extensions for web development. We hope you’ll find them useful in creating applications and sites for the web.


For example, Speed Tracer is an extension to help you identify and fix performance issues in your web applications. With Speed Tracer, you can get a better idea of where time is being spent in your application and troubleshoot problems in JavaScript parsing and execution, CSS style, and more.


Another useful extension is Resolution Test, which changes the size of the browser window so web developers can preview their websites in different screen resolutions. It also includes a list of commonly used resolutions, as well as a custom option to input your own.


With the Web Developer extension, you can access additional developer tools such as validation options, page resizing and a CSS elements viewer; all from an additional button in the toolbar.


Another extension you should check out is Chrome Editor, which allows you to easily write code within your browser, so you don’t have to flip between your browser and code editor. You can also save a code reference locally to your computer for later use.

These are just a few of the extensions you can find in our extensions for web development page. You can also look for more in the extensions gallery.


from web contents: Helping your site look great with Google Chrome 2013

Webmaster Level: Intermediate to Advanced
Since launching Google Chrome last September, we received a number of questions from webmasters and web developers about how to make their sites look great in Google Chrome. The questions were very insightful and illuminating for the Chrome team, and I want to respond with a few helpful tips for making your site look stellar in Google Chrome.

Detecting Google Chrome

Most sites will render the same in both Safari and Google Chrome, because they're both WebKit-based browsers. If your site looks right in Safari, then it should look right in Google Chrome, too.

Since Chrome is relatively new, many sites have confused Google Chrome with another browser. If your site doesn't look quite right in Chrome but works fine in Safari, it's possible your site may just not recognize Chrome's user-agent string.

As platforms and browsers adopt WebKit as their rendering engine, your site can detect and support them automatically with the right JavaScript checks. Commonly, sites use JavaScript to 'sniff' the navigator.userAgent property for "Chrome" or "Safari", but you should use proper object detection if possible. In fact, Gmail has been detecting WebKit properly in Chrome since day one!

If you must detect the user-agent type, you can use this simple JavaScript to detect WebKit:

var isWebkit =
  navigator.userAgent.indexOf("AppleWebKit") > -1;


Or, if you want to check that the version of WebKit is at least a certain version—say, if you want to use a spiffy new WebKit feature:

var webkitVersion =
  parseFloat(navigator.userAgent.split("AppleWebKit/")[1]) ||
  undefined;
if (webkitVersion && webkitVersion > 500 ) {
  // use spiffy WebKit feature here
}


For reference, here are a few browser releases and the version of WebKit they shipped:

Browser            Version of WebKit
Chrome 1.0         525.19
Chrome 2.0 beta    530.1
Safari 3.1         525.19
Safari 3.2         525.26.2
Safari 4.0 beta    528.16


We do not recommend adding "Google" or "Apple" to your navigator.vendor checks to detect WebKit or Google Chrome, because this will not detect other WebKit or Chromium-based browsers!

You can find more information about detecting WebKit at webkit.org.

Other helpful tips
  • Google Chrome doesn't support ActiveX plug-ins, but does support NPAPI plug-ins. This means you can show plug-in content like Flash and Java in Google Chrome the same way you do with Firefox and Safari.
  • If text on your site looks a bit off, make sure you provide the proper content type and character encoding information in the HTTP response headers, or at the beginning of your pages, preferably near the top of the <head> section.
  • Don't put block elements inside inline elements.
Wrong:   <a><div>This will look wrong.</div></a>

Right:     <div><a>This will look right!</a></div>
  • If your JavaScript isn't working in Google Chrome, you can debug using Chrome's built-in JavaScript debugger, under the "page" menu -> 'Developer' -> 'Debug JavaScript' menu option.
To help webmasters and web developers find more answers, we created a support center and forum specifically to answer your questions. Of course, if you find something you think is really a bug in Chrome, please report it to us!

Help us improve Google Chrome!

If you'd like to help even more, we're looking for sites that may be interested in allowing Google to use their site as a benchmark for our internal compatibility and performance measurements. If you're interested in having Google Chrome development optimized against a cached version of your site, please contact us about details at chrome-webmasters@google.com.

Please keep the feedback coming, and we'll keep working to improve Google Chrome!


from web contents: Keeping comment spam off your site and away from users 2013

So, you've set up a forum on your site for the first time, or enabled comments on your blog. You carefully craft a post or two, click the submit button, and wait with bated breath for comments to come in.

And they do come in. Perhaps you get a friendly note from a fellow blogger, a pressing update from an MMORPG guild member, or a reminder from your Aunt Millie about dinner on Thursday. But then you get something else. Something... disturbing. Offers for deals that are too good to be true, bizarre logorrhean gibberish, and explicit images you certainly don't want Aunt Millie to see. You are now buried in a deluge of dreaded comment spam.

Comment spam is bad stuff all around. It's bad for you, because it adds to your workload. It's bad for your users, who want to find information on your site and certainly aren't interested in dodgy links and unrelated content. It's bad for the web as a whole, since it discourages people from opening up their sites for user-contributed content and joining conversations on existing forums.

So what can you, as a webmaster, do about it?

A quick disclaimer: the list below is a good start, but not exhaustive. There are so many different blog, forum, and bulletin board systems out there that we can't possibly provide detailed instructions for each, so the points below are general enough to make sense on most systems.

Make sure your commenters are real people
  • Add a CAPTCHA. CAPTCHAs require users to read a bit of obfuscated text and type it back in to prove they're a human being and not an automated script. If your blog or forum system doesn't have CAPTCHAs built in you may be able to find a plugin like Recaptcha, a project which also helps digitize old books. CAPTCHAs are not foolproof but they make life a little more difficult for spammers. You can read more about the many different types of CAPTCHAS, but keep in mind that just adding a simple one can be fairly effective.

  • Block suspicious behavior. Many forums allow you to set time limits between posts, and you can often find plugins to look for excessive traffic from individual IP addresses or proxies and other activity more common to bots than human beings.
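Here is a minimal sketch of the time-limit idea from the previous point (in-memory only; real forum software would persist this server-side and combine it with other signals):

const MIN_SECONDS_BETWEEN_POSTS = 30; // hypothetical limit
const lastPostTime = new Map();       // IP address -> timestamp in ms

function mayPost(ip) {
  const now = Date.now();
  const last = lastPostTime.get(ip) || 0;
  if (now - last < MIN_SECONDS_BETWEEN_POSTS * 1000) {
    return false; // posting too fast: likely a bot or a flood
  }
  lastPostTime.set(ip, now);
  return true;
}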

Use automatic filtering systems
  • Block obviously inappropriate comments by adding words to a blacklist. Spammers obfuscate the words in their comments, so this isn't a very scalable solution, but it can keep blatant spam at bay (a minimal sketch follows this list).

  • Use built-in features or plugins that delete or mark comments as spam for you. Spammers use automated methods to besmirch your site, so why not use an automated system to defend yourself? Comprehensive systems like Akismet, which has plugins for many blog and forum systems, and TypePad AntiSpam, which is open-source and compatible with Akismet, are easy to install and do most of the work for you.

  • Try using Bayesian filtering options, if available. Training the system to recognize spam may require some effort on your part, but this technique has been used successfully to fight email spam.
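Returning to the blacklist idea above, here is a minimal sketch (the terms are placeholders); as noted, obfuscation limits how far this approach scales:

const BLACKLIST = ["viagra", "casino", "free money"]; // hypothetical terms

function looksLikeSpam(comment) {
  const text = comment.toLowerCase();
  // Flag the comment if it contains any blacklisted term.
  return BLACKLIST.some((term) => text.includes(term));
}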

Make your settings a bit stricter
  • Nofollow untrusted links. Many systems have a setting to add a rel="nofollow" attribute to the links in comments, or do so by default. This may discourage some types of spam, but it's definitely not the only measure you should take.

  • Consider requiring users to create accounts before they can post a comment. This adds steps to the user experience and may discourage some casual visitors from posting comments, but may keep the signal-to-noise ratio higher as well.

  • Change your settings so that comments need to be approved before they show up on your site. This is a great tactic if you want to hold comments to a high standard, don't expect a lot of comments, or have a small, personal site. You may be able to allow employees or trusted users to approve posts themselves, spreading the workload. 

  • Think about disabling some types of comments. For example, you may want to disable comments on very old posts that are unlikely to get legitimate comments. On blogs you can often disable trackbacks and pingbacks, which are very cool features but can be major avenues for automated spam.

Keep your site up-to-date
  • Take the time to keep your software up-to-date and pay special attention to important security updates. Some spammers take advantage of security holes in older versions of blogs, bulletin boards, and other content management systems. Check the Quick Security Checklist for additional measures.

You may need to strike a balance on which tactics you choose to implement depending on your blog or bulletin board software, your user base, and your level of experience. Opening up a site for comments without any protection is a big risk, whether you have a small personal blog or a huge site with thousands of users. Also, if your forum has been completely filled with thousands of spam posts and doesn't even show up in Google searches, you may want to submit a reconsideration request after you clear out the bad content and take measures to prevent further spam.

As a long-time blogger and web developer myself, I can tell you that a little time spent setting up measures like these up front can save you a ton of time and effort later. I'm new to the Webmaster Central team, originally from Cleveland. I'm very excited to help fellow webmasters, and have a passion for usability and search quality (I've even done a bit of academic research on the topic). Please share your tips on preventing comment and forum spam in the comments below, and as always you're welcome to ask questions in our discussion group.


from web contents: Spam2.0: Fake user accounts and spam profiles 2013

You're a good webmaster or web developer, and you've done everything you can to keep your site from being hacked and keep your forums and comment sections free of spam. You're now the proud owner of a buzzing web2.0 social community, filling the web with user-generated content, and probably getting lots of visitors from Google and other search engines.

Many of your site's visitors will create user profiles, and some will spend hours posting in forums, joining groups, and getting the sparkles exactly right on the rainbow-and-unicorn image for their BFF's birthday. This is all great.

Others, however, will create accounts and fill their profiles with gibberish, blatherskite and palaver. Even worse, they'll add a sneaky link, a bit of redirecting JavaScript code, or a big fake embedded video that takes your users off to the seediest corners of the web.

Welcome to the world of spam profiles. The social web is growing incredibly quickly and spammers look at every kind of user content on the web as an opportunity for traffic. I've spoken with a number of experienced webmasters who were surprised to find out this was even a problem, so I thought I would talk a little bit about spam profiles and what you might do to find and clean them out of your site.

Why is this important?

Imagine the following scenario:

"Hello there, welcome to our new web2.0 social networking site. Boy, have I got a new friend for you. His name is Mr. BuyMaleEnhancementRingtonesNow, and he'd love for you to check out his profile. He's a NaN-year-old from Pharmadelphia, PA and you can check out his exciting home page at http://example.com/obviousflimflam.


Not interested? Then let me introduce you to my dear friend PrettyGirlsWebCam1234, she says she's an old college friend of yours and has exciting photos and videos you might want to see."


You probably don't want your visitors' first impression of your site to include inappropriate images or bogus business offers. You definitely don't want your users hounded by fake invites to the point where they stop visiting altogether. If your site becomes filled with spammy content and links to bad parts of the web, search engines may lose trust in your otherwise fine site.

Why would anyone create spam profiles?

Spammers create fake profiles for a number of nefarious purposes. Sometimes they're just a way to reach users internally on a social networking site. This is somewhat similar to the way email spam works - the point is to send your users messages or friend invites and trick them into following a link, making a purchase, or downloading malware by sending a fake or low-quality proposition.

Spammers are also using spam profiles as yet another avenue to generate webspam on otherwise good domains. They scour the web for opportunities to get their links, redirects, and malware to users. They use your site because it's no cost to them and they hope to piggyback off your good reputation.

The latter case is becoming more and more common. Some fake profiles are obvious, using popular pharmaceuticals as the profile name, for example; but we've noticed an increase in savvier spammers that try to use real names and realistic data to sneak in their bad links. To make sure their newly-minted gibberish profile shows up in searches they will also generate links on hacked sites, comment spam, and yes, other spam profiles. This results in a lot of bad content on your domain, unwanted incoming links from spam sites, and annoyed users.

Which sites are being abused?

You may be thinking to yourself, "But my site isn't a huge social networking juggernaut; surely I don't need to worry." Unfortunately, we see spam profiles on everything from the largest social networking sites to the smallest forums and bulletin boards. Many popular bulletin boards and content management systems (CMS) such as vBulletin, phpBB, Moodle, Joomla, etc. generate member pages for every user that creates an account. In general CMSs are great because they make it easy for you to deploy content and interactive features to your site, but auto-generated pages can be abused if you're not aware.

For all of you out there who do work for huge social networking juggernauts, your site is a target as well. Spammers want access to your large userbase, hoping that users on social sites will be more trusting of incoming friend requests, leading to larger success rates.

What can you do?

This isn't an easy problem to solve - the bad guys are attacking a wide range of sites and seem to be able to adapt their scripts to get around countermeasures. Google is constantly under attack by spammers trying to create fake accounts and generate spam profiles on our sites, and despite all of our efforts some have managed to slip through. Here are some things you can do to make their lives more difficult and keep your site clean and useful:

  • Make sure you have standard security features in place, including CAPTCHAs, to make it harder for spammers to create accounts en masse. Watch out for unlikely behavior - thousands of new user accounts created from the same IP address, new users sending out thousands of friend requests, etc. There is no simple solution to this problem, but often some simple checks will catch most of the worst spam.
  • Use a blacklist to prevent repetitive spamming attempts. We often see large numbers of fake profiles on one innocent site all linking to the same domain, so once you find one, you should make it simple to remove all of them.
  • Watch out for cross-site scripting (XSS) vulnerabilities and other security holes that allow spammers to inject questionable code onto their profile pages. We've seen techniques such as JavaScript used to redirect users to other sites, iframes that attempt to give users malware, and custom CSS code used to cover over your page with spammy content.
  • Consider nofollowing the links on untrusted user profile pages. This makes your site less attractive to anyone trying to pass PageRank from your site to their spammy site. Spammers seem to go after the low-hanging fruit, so even just nofollowing new profiles with few signals of trustworthiness will go a long way toward mitigating the problem (see the sketch after this list). On the flip side, you could also consider manually or automatically lifting the nofollow attribute on links created by community members who are likely more trustworthy, such as those who have contributed substantive content over time.
  • Consider noindexing profile pages for new, not yet trustworthy users. You may even want to make initial profile pages completely private, especially if the bulk of the content on your site is in blogs, forums, or other types of pages.
  • Add a "report spam" feature to user profiles and friend invitations. Let your users help you solve the problem - they care about your community and are annoyed by spam too.
  • Monitor your site for spammy pages. One of the best tools for this is Google Alerts - set up a site: query along with commercial or adult keywords that you wouldn't expect to see on your site. This is also a great tool to help detect hacked pages. You can also check 'Keywords' data in Webmaster Tools for strange, volatile vocabulary.
  • Watch for spikes in traffic from suspicious queries. It's always great to see the line on your pageviews chart head upward, but pay attention to commercial or adult queries that don't fit your site's content. In cases like this where a spammer has abused your site, that traffic will provide little if any benefit while introducing users to your site as "the place that redirected me to that virus."
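Here is a minimal sketch of the nofollow suggestion above; the trust check is a stand-in for whatever signals your site actually uses:

function nofollowUntrustedLinks(profileElement, userIsTrusted) {
  if (userIsTrusted) return; // e.g. based on account age or contribution history
  var links = profileElement.querySelectorAll("a[href^='http']");
  for (var i = 0; i < links.length; i++) {
    links[i].setAttribute("rel", "nofollow");
  }
}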


Have any other tips to share? Please feel free to comment below. If you have any questions, you can always ask in our Webmaster Help Forum.


from web contents: SEO essentials for startups in under 10 minutes 2013

Webmaster Level: Beginner to Intermediate

Wondering how to be search-friendly but lacking time for SEO research? We’d like to help! Meta keywords tag? Google Search ignores it. Meta description? Good to include.
If you:
  • Work on a company website that’s under 50ish pages.
  • Hope to rank well for your company name and a handful of related terms (not lots of terms like a news agency or e-commerce site).
  • Want to be smart about search engines and attracting searchers, but haven’t kept up with the latest search news.
Then perhaps set aside ten minutes for this video (or just the slides) and gain SEO peace of mind.


Everything I’d tell a startup if I had ten minutes as their SEO consultant.

More tips at developers.google.com/startups. Best of luck!


from web contents: Hard facts about comment spam 2013

Webmaster Level: Beginner

It has probably happened to you: you're reading articles or watching videos on the web, and you come across some unrelated, gibberish comments. You may wonder what this is all about. Some webmasters abuse other sites by exploiting their comment fields, posting tons of links that point back to the poster's site in an attempt to boost their site's ranking. Others might tweak this approach a bit by posting a generic comment (like "Nice site!") with a commercial user name linking to their site.

Why is it bad?

FACT: Abusing comment fields of innocent sites is a bad and risky way of getting links to your site. If you choose to do so, you are tarnishing other people's hard work and lowering the quality of the web, transforming a potentially good resource of additional information into a list of nonsense keywords.

FACT: Comment spammers are often trying to improve their site's organic search ranking by creating dubious inbound links to their site. Google has an understanding of the link graph of the web, and has algorithmic ways of discovering those alterations and tackling them. At best, a link spammer might spend hours doing spammy linkdrops which would count for little or nothing because Google is pretty good at devaluing these types of links. Think of all the more productive things one could do with that time and energy that would provide much more value for one's site in the long run.


Promote your site without comment spam

If you want to improve your site's visibility in the search results, spamming comments is definitely not the way to go. Instead, think about whether your site offers what people are looking for, such as useful information and tools.

FACT: Having original and useful content and making your site search engine friendly is the best strategy for better ranking. With an appealing site, you'll be recognized by the web community as a reliable source and links to your site will build naturally.

Moreover, Google provides advice on improving the crawlability and indexability of your site: check out our Search Engine Optimization Starter Guide.

What can I do to avoid spam on my site?

Comments can be a really good source of information and an efficient way of engaging a site's users in discussions. This valuable content shouldn't be drowned out by gibberish keywords and links. Fortunately, there are many ways of securing your commenting system and deterring spammers:
  • Disallow anonymous posting.
  • Use CAPTCHAs and other methods to prevent automated comment spamming.
  • Turn on comment moderation.
  • Use the "nofollow" attribute for links in the comment field (see the markup sketch below).
  • Disallow hyperlinks in comments.
  • Block comment pages using robots.txt or meta tags.
For detailed information about these topics, check out our Help Center document on comment spam.
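As a small illustration of two of those measures (the URLs below are hypothetical), a user-submitted link can carry the "nofollow" attribute so it passes no ranking credit, and a comments-only page you'd rather keep out of the index can carry a robots meta tag:

<!-- A link posted in a comment, marked with rel="nofollow" (hypothetical URL): -->
<a href="http://example.com/commenters-site" rel="nofollow">commenter's link</a>

<!-- In the <head> of a standalone comment page you don't want indexed: -->
<meta name="robots" content="noindex">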

My site is full of comment spam, what should I do?

It's never too late! Don't let spammers ruin the experience for others. Adopt the security measures discussed above to stop the spam activity, then invest some time to clean up the spammy comments and ban the spammers from your site. Depending on your site's system, you may be able to save time by banning spammers and removing their comments all at once, rather than one by one.

If I spammed comment fields of third party sites, what should I do?

If you used this approach in the past and want to fix it, have a look at your incoming links in Webmaster Tools: go to the Your site on the web section and click Links to your site. If you see suspicious links coming from blogs or other platforms that allow comments, check those URLs. If you find a spammy link you created, try to delete it; otherwise, contact the webmaster of that site and ask them to remove the link. Once you've cleared the spammy inbound links you made, you can file a reconsideration request.

For more information about this topic and to discuss it with others, join us in the Webmaster Help Forum. (But don't leave spammy comments!)


from web contents: Taking advantage of universal search, part 2 2013


Universal search and personalized search were two of the hot topics at SMX West last month. Many webmasters wanted to know how these evolutions in search influence the way their content appears in search results, and how they can use these features to gain more relevant search traffic. We posted several recommendations on how to take advantage of universal search last year. Here are a few additional tips:
  1. Local search: Help nearby searchers find your business.
    Of the various search verticals, local search was the one we heard the most questions about. Here are a few tips to help business owners get the most out of local search:
  2. Video search: Enhance your video results.
    Several site owners asked whether they could specify a preferred thumbnail image for videos when they appear in search results. Good news: our Video Sitemaps protocol lets you suggest a thumbnail for each video.
  3. Personalized search basics
    A few observations from Googler Phil McDonnell:
    • Personalization of search results is usually accomplished through subtle ranking changes, rather than a drastic rearrangement of results. You shouldn't worry about personalization radically altering your site's ranking for a particular query.
    • Targeting a niche, or filling a very specific need, may be a good way to stand out in personalized results. For example, rather than creating a site about "music," you could create a site about the musical history of Haiti, or about musicians who recorded with Elton John between 1969 and 1979.
    • Some personalization is based on the geographic location of the searcher; for example, a user searching for [needle] in Seattle is more likely to get search results about the Space Needle than, say, a searcher in Florida. Take advantage of features like Local Business Center and geographic targeting to let us know whether your website is especially relevant to searchers in a particular location.
    • As always, create interesting, unique and compelling content or tools.
  4. Image search: Increase your visibility.
    One panelist presented a case study in which a client's images were being filtered out of search results by SafeSearch because they had been classified as explicit. If you find yourself in this situation and believe your site should not be filtered by SafeSearch, use this contact form to let us know. Select the Report a problem > Inappropriate or irrelevant search results option and describe your situation.
Feel free to leave a comment if you have other tips to share!

from web contents: 1000 Words About Images 2013

Webmaster level: All

Creativity is an important aspect of our lives and can enrich nearly everything we do. Say I'd like to make my teammate a cup of cool-looking coffee, but my creative batteries are empty; this would be (and is!) one of the many times when I look for inspiration on Google Images.


The images you see in our search results come from publishers of all sizes — bloggers, media outlets, stock photo sites — who have embedded these images in their HTML pages. Google can index images in the BMP, GIF, JPEG, PNG, WebP, and SVG formats.

But how does Google know that the images are about coffee and not about tea? When our algorithms index images, they look at the textual content on the page the image was found on to learn more about the image. We also look at the page's title and its body; we might also learn more from the image’s filename, anchor text that points to it, and its "alt text;" we may use computer vision to learn more about the image and may also use the caption provided in the Image Sitemap if that text also exists on the page.

 To help us index your images, make sure that:
  • we can crawl both the HTML page the image is embedded in, and the image itself;
  • the image is in one of our supported formats: BMP, GIF, JPEG, PNG, WebP or SVG.
Additionally, we recommend:
  • that the image filename is related to the image’s content;
  • that the alt attribute of the image describes the image in a human-friendly way;
  • and finally, that the textual content of the HTML page, as well as the text near the image, is related to the image (see the markup sketch below).
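Putting those recommendations together, here's a minimal markup sketch (the filename, alt text, and page text are hypothetical):

<!-- Descriptive filename and human-friendly alt text (hypothetical example): -->
<img src="/images/latte-art-rosetta.jpg" alt="Latte art: a rosetta poured into a white ceramic cup">
<p>How to pour a rosetta: start with a thin, steady stream of milk close to the surface...</p>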
Now some answers to questions we’ve seen many times:


Q: Why do I sometimes see Googlebot crawling my images, rather than Googlebot-Image?
A: Generally this happens when it’s not clear that a URL will lead to an image, so we crawl the URL with Googlebot first. If we find the URL leads to an image, we’ll usually revisit with Googlebot-Image. Because of this, it’s generally a good idea to allow crawling of your images and pages by both Googlebot and Googlebot-Image.
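For example, a minimal robots.txt sketch that keeps both crawlers unblocked (an empty Disallow value means nothing is disallowed for that group):

# Hypothetical robots.txt group: let both Googlebot and Googlebot-Image crawl everything.
User-agent: Googlebot
User-agent: Googlebot-Image
Disallow: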

Q: Is it true that there’s a maximum file size for the images?
A: We’re happy to index images of any size; there’s no file size restriction.

Q: What happens to the EXIF, XMP and other metadata my images contain?
A: We may use any information we find to help our users find what they’re looking for more easily. Additionally, information like EXIF data may be displayed in the right-hand sidebar of the interstitial page that appears when you click on an image.


Q: Should I really submit an Image Sitemap? What are the benefits?
A: Yes! Image Sitemaps help us learn about your new images and may also help us learn what the images are about.


Q: I’m using a CDN to host my images; how can I still use an Image Sitemap?
A: Cross-domain restrictions apply only to the Sitemap's <loc> tag. In Image Sitemaps, the <image:loc> tag is allowed to point to a URL on another domain, so using a CDN for your images is fine. We also encourage you to verify the CDN’s domain name in Webmaster Tools so that we can inform you of any crawl errors that we might find.
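To make that concrete, here's a minimal Image Sitemap sketch in which the page lives on your own domain while the image is served from a CDN hostname (both URLs are hypothetical):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <!-- The page on your own domain that embeds the image -->
    <loc>http://www.example.com/coffee/latte-art</loc>
    <image:image>
      <!-- The image file itself, served from a CDN hostname -->
      <image:loc>http://cdn.example.com/images/latte-art-rosetta.jpg</image:loc>
      <!-- Optional: a caption helps most when the same text also appears on the page -->
      <image:caption>Latte art: a rosetta poured into a white ceramic cup</image:caption>
    </image:image>
  </url>
</urlset>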


Q: Is it a problem if my images can be found on multiple domains or subdomains I own — for example, CDNs or related sites?
A: Generally, the best practice is to have only one copy of any type of content. If you’re duplicating your images across multiple hostnames, our algorithms may pick one copy as the canonical copy of the image, which may not be your preferred version. This can also lead to slower crawling and indexing of your images.


Q: We sometimes see the original source of an image ranked lower than other sources; why is this?
A: Keep in mind that we use the textual content of a page when determining the context of an image. For example, if the original source is a page from an image gallery that has very little text, it can happen that a page with more textual context is chosen to be shown in search. If you feel you've identified very bad search results for a particular query, feel free to use the feedback link below the search results or to share your example in our Webmaster Help Forum.

SafeSearch

Our algorithms use a great variety of signals to decide whether an image — or a whole page, if we’re talking about Web Search — should be filtered from the results when the user’s SafeSearch filter is turned on. In the case of images some of these signals are generated using computer vision, but the SafeSearch algorithms also look at simpler things such as where the image was used previously and the context in which the image was used. 
One of the strongest signals, however, is self-marked adult pages. We recommend that webmasters who publish adult content mark up their pages with one of the following meta tags:

<meta name="rating" content="adult" />
<meta name="rating" content="RTA-5042-1996-1400-1577-RTA" />

Many users prefer not to have adult content included in their search results (especially if kids use the same computer). When a webmaster provides one of these meta tags, it helps to provide a better user experience because users don't see results which they don't want to or expect to see. 

As with all algorithms, it may sometimes happen that SafeSearch filters content inadvertently. If you think your images or pages are mistakenly being filtered by SafeSearch, please let us know using the following form.

If you need more information about how we index images, please check out the section of our Help Center dedicated to images and read our SEO Starter Guide, which contains lots of useful information; if you have more questions, please post them in the Webmaster Help Forum.


from web contents: How to move your content to a new location 2013

Webmaster level: Intermediate

While maintaining a website, webmasters may decide to move the whole website or parts of it to a new location. For example, you might move content from a subdirectory to a subdomain, or to a completely new domain. Changing the location of your content can involve a bit of effort, but it’s worth doing it properly.

To help search engines understand your new site structure better and make your site more user-friendly, make sure to follow these guidelines:
  • It’s important to redirect all users and bots that visit your old content location to the new content location using 301 redirects. To highlight the relationship between the two locations, make sure that each old URL points to the new URL that hosts similar content. If you’re unable to use 301 redirects, you may want to consider using cross-domain canonicals for search engines instead (see the markup sketch after this list).
  • Check that you have both the new and the old location verified in the same Google Webmaster Tools account.
  • Make sure to check if the new location is crawlable by Googlebot using the Fetch as Googlebot feature. It’s important to make sure Google can actually access your content in the new location. Also make sure that the old URLs are not blocked by a robots.txt disallow directive, so that the redirect or rel=canonical can be found.
  • If you’re moving your content to an entirely new domain, use the Change of address option under Site configuration in Google Webmaster Tools to let us know about the change.
Screenshot: the Change of address option in Google Webmaster Tools, used to tell us about moving your content.
  • If you've also changed your site's URL structure, make sure that it's possible to navigate it without running into 404 error pages. Google Webmaster Tools may prove useful in investigating potentially broken links. Just look for Diagnostics > Crawl errors for your new site.
  • Check your Sitemap and verify that it’s up to date.
  • Once you've set up your 301 redirects, keep an eye on visits to your 404 error pages to check that users are being redirected to the new pages and not accidentally ending up on broken URLs. When a user lands on a 404 error page on your site, try to identify which URL they were trying to access and why they were not redirected to the new location of your content, then adjust your 301 redirect rules as appropriate.
  • Have a look at the Links to your site section in Google Webmaster Tools and inform the important sites that link to your content about your new location.
  • If your site’s content is specific to a particular region you may want to double check the geotargeting preferences for your new site structure in Google Webmaster Tools.
  • As a general rule of thumb, try to avoid running two crawlable sites with completely or largely identical content without a 301 redirect or a rel="canonical" annotation.
  • Lastly, we recommend not implementing other major changes when you’re moving your content to a new location, like large-scale content, URL structure, or navigational updates. Changing too much at once may confuse users and search engines.
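As a small illustration of the cross-domain canonical mentioned in the first point (the URLs are hypothetical): if an old page cannot return a 301 redirect, it can point search engines at the new location from its <head>:

<!-- On the old page, e.g. http://old.example.com/widgets/blue-widget,
     when a 301 redirect is not possible: -->
<link rel="canonical" href="http://www.example.com/widgets/blue-widget">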
We hope you find these suggestions useful. If you happen to have further questions on how to move your content to a new location we’d like to encourage you to drop by our Google Webmaster Help Forum and seek advice from expert webmasters.
