News and Tutorials from Votre Codeur | SEO | Website Creation | Software Development

Google now indexes SVG

Webmaster Level: All

You can now use Google search to find SVG documents. SVG is an open, XML-based format for vector graphics with support for interactive elements. We’re big fans of open standards, and our mission is to organize the world’s information, so indexing SVG is a natural step.

We index SVG content whether it is in a standalone file or embedded directly in HTML. The web is big, so it may take some time before we crawl and index most SVG files, but as of today you may start seeing them in your search results. If you want to see it for yourself, try searching for [sitemap site:fastsvg.com] or [HideShow site:svg-whiz.com].

If you host SVG files and you wish to exclude them from Google’s search results, you can use the “X-Robots-Tag: noindex” directive in the HTTP header.
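
For example, on an Apache server with mod_headers enabled, a configuration along these lines (a sketch only; adjust for your own setup) would add that header to every SVG file:

   <FilesMatch "\.svg$">
     Header set X-Robots-Tag "noindex"
   </FilesMatch>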

Check out Webmaster Central for a full list of file types we support.


Target visitors or search engines?

Last Friday afternoon, I was able to catch the end of the Blog Business Summit in Seattle. At the session called "Blogging and SEO Strategies," John Battelle brought up a good point. He said that as a writer, he doesn't want to have to think about all of this search engine optimization stuff. Dave Taylor had just been talking about order of words in title tags and keyword density and using hyphens rather than underscores in URLs.

We agree, which is why you'll find that the main point in our webmaster guidelines is to make sites for visitors, not for search engines. Visitor-friendly design makes for search-engine-friendly design as well. The team at Google Webmaster Central talks with many site owners who care deeply about the details of how Google crawls and indexes sites (hyphens and underscores included), but many site owners out there are simply concerned with building great sites. The good news is that the guidelines and tips about how Google crawls and indexes sites come down to wanting great content for our search results.

In the spirit of John Battelle's point, here's a recap of some quick tips for ensuring your site is friendly for visitors.

Make good use of page titles
This is true of the main heading on the page itself, but is also true of the title that appears in the browser's title bar.


Whenever possible, ensure each page has a unique title that describes the page well. For instance, if your site is for your store "Buffy's House of Sofas", a visitor may want to bookmark your home page and the order page for your red, fluffy sofa. If all of your pages have the same title: "Welcome to my site!", then a visitor will have trouble finding your site again in the bookmarks. However, if your home page has the title "Buffy's House of Sofas" and your red sofa page has the title "Buffy's red fluffy sofa", then visitors can glance at the title to see what it's about and can easily find it in the bookmarks later. And if your visitors are anything like me, they may have several browser tabs open and appreciate descriptive titles for easier navigation.

This simple tip for visitors helps search engines too. Search engines index pages based on the words contained in them, and including descriptive titles helps search engines know what the pages are about. And search engines often use a page's title in the search results. "Welcome to my site" may not entice searchers to click on your site in the results quite so much as "Buffy's House of Sofas".
Write with words
Images, Flash, and other multimedia make for pretty web pages, but make sure your core messages are in text or use ALT text to provide textual descriptions of your multimedia. This is great for search engines, which are based on text: searchers enter search queries as words, after all. But it's also great for visitors, who may have images or Flash turned off in their browsers or might be using screen readers or mobile devices. You can also provide HTML versions of your multimedia-based pages (if you do that, be sure to block the multimedia versions from being indexed using a robots.txt file).
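
For instance, if the multimedia versions of your pages lived under a /flash/ directory (a hypothetical path), the robots.txt entry blocking them might look like this:

   User-agent: *
   Disallow: /flash/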

Make sure the text you're talking about is in your content
Visitors may not read your web site linearly like they would a newspaper article or book. Visitors may follow links from elsewhere on the web to any of your pages. Make sure that they have context for any page they're on. On your order page, don't just write "order now!" Write something like "Order your fluffy red sofa now!" But write it for people who will be reading your site. Don't try to cram as many words in as possible, thinking search engines can index more words that way. Think of your visitors. What are they going to be searching for? Is your site full of industry jargon when they'll be searching for you with more informal words?

As I wrote in that guest post on Matt Cutts' blog when I talked about hyphens and underscores:

You know what your site’s about, so it may seem completely obvious to you when you look at your home page. But ask someone else to take a look and don’t tell them anything about the site. What do they think your site is about?

Consider this text:

“We have hundreds of workshops and classes available. You can choose the workshop that is right for you. Spend an hour or a week in our relaxing facility.”

Will this site show up for searches for [cooking classes] or [wine tasting workshops] or even [classes in Seattle]? It may not be as obvious to visitors (and search engine bots) what your page is about as you think.

Along those same lines, does your content use words that people are searching for? Does your site text say “check out our homes for sale” when people are searching for [real estate in Boston]?

Make sure your pages are accessible
I know -- this post was supposed to be about writing content, not technical details. But visitors can't read your site if they can't access it. If the network is down or your server returns errors when someone tries to access the pages of your site, it's not just search engines who will have trouble. Fortunately, webmaster tools makes it easy. We'll let you know if we've had any trouble accessing any of the pages. We tell you the specific page we couldn't access and the exact error we got. These problems aren't always easy to fix, but we try to make them easy to find.

Better geographic choices for webmasters

Written by Amanda Camp, Webmaster Tools and Trystan Upstill, International Search Quality Team

Starting today Google Webmaster Tools helps you better control the country association of your content on a per-domain, per-subdomain, or per-directory level. The information you give us will help us determine how your site appears in our country-specific search results, and also improves our search results for geographic queries.

We currently only allow you to associate your site with a single country and location. If your site is relevant to an even more specific area, such as a particular state or region, feel free to tell us that. Or let us know if your site isn't relevant to any particular geographic location at all. If no information is entered in Webmaster Tools, we'll continue to make geographic associations largely based on the top-level domain (e.g. .co.uk or .ca) and the IP of the webserver from which the content was served.

For example, if we wanted to associate www.google.com with Hungary:


But you don't want www.google.com/webmasters/tools associated with any country...


This feature is restricted for sites with a country code top level domain, as we'll always associate that site with the country domain. (For example, google.ru will always be the version of Google associated with Russia.)


Note that in the same way that Google may show your business address if you register your brick-and-mortar business with the Google Local Business Center, we may show the information that you give us publicly.

This feature was largely initiated by your feedback, so thanks for the great suggestion. Google is committed to helping more sites and users get better, more relevant results. This is a new step as we continue to think about how to improve searches around the world.

We encourage you to tell us what you think in the Webmaster Tools section of our discussion group.

Improved Flash indexing


We've received numerous requests to improve our indexing of Adobe Flash files. Today, Ron Adler and Janis Stipins—software engineers on our indexing team—will provide us with more in-depth information about our recent announcement that we've greatly improved our ability to index Flash.

Q: Which Flash files can Google better index now?
We've improved our ability to index textual content in SWF files of all kinds. This includes Flash "gadgets" such as buttons or menus, self-contained Flash websites, and everything in between.

Q: What content can Google better index from these Flash files?
All of the text that users can see as they interact with your Flash file. If your website contains Flash, the textual content in your Flash files can be used when Google generates a snippet for your website. Also, the words that appear in your Flash files can be used to match query terms in Google searches.

In addition to finding and indexing the textual content in Flash files, we're also discovering URLs that appear in Flash files, and feeding them into our crawling pipeline—just like we do with URLs that appear in non-Flash webpages. For example, if your Flash application contains links to pages inside your website, Google may now be better able to discover and crawl more of your website.

Q: What about non-textual content, such as images?
At present, we are only discovering and indexing textual content in Flash files. If your Flash files only include images, we will not recognize or index any text that may appear in those images. Similarly, we do not generate any anchor text for Flash buttons which target some URL, but which have no associated text.

Also note that we do not index FLV files, such as the videos that play on YouTube, because these files contain no text elements.

Q: How does Google "see" the contents of a Flash file?
We've developed an algorithm that explores Flash files in the same way that a person would, by clicking buttons, entering input, and so on. Our algorithm remembers all of the text that it encounters along the way, and that content is then available to be indexed. We can't tell you all of the proprietary details, but we can tell you that the algorithm's effectiveness was improved by utilizing Adobe's new Searchable SWF library.

Q: What do I need to do to get Google to index the text in my Flash files?
Basically, you don't need to do anything. The improvements that we have made do not require any special action on the part of web designers or webmasters. If you have Flash content on your website, we will automatically begin to index it, up to the limits of our current technical ability (see next question).

That said, you should be aware that Google is now able to see the text that appears to visitors of your website. If you prefer Google to ignore your less informative content, such as a "copyright" or "loading" message, consider placing that text within an image, which will make it effectively invisible to us.

Q: What are the current technical limitations of Google's ability to index Flash?
There are three main limitations at present, and we are already working on resolving them:

1. Googlebot does not execute some types of JavaScript. So if your web page loads a Flash file via JavaScript, Google may not be aware of that Flash file, in which case it will not be indexed.
2. We currently do not attach content from external resources that are loaded by your Flash files. If your Flash file loads an HTML file, an XML file, another SWF file, etc., Google will separately index that resource, but it will not yet be considered to be part of the content in your Flash file.
3. While we are able to index Flash in almost all of the languages found on the web, currently there are difficulties with Flash content written in bidirectional languages. Until this is fixed, we will be unable to index Hebrew language or Arabic language content from Flash files.

We're already making progress on these issues, so stay tuned!



Update in July 2008: Everyone, thanks for your great questions and feedback. Our focus is to improve search quality for all users, and with better Flash indexing we create more meaningful search results. Listed below, we’ve also answered some of the most prevalent questions. Thanks again!

Flash site in search results before improvements


Flash site after improved indexing, querying [nasa deep impact animation]


Helping us access and index your Flash files
@fintan: We verified with Adobe that the textual content from legacy sites, such as those scripted with AS1 and AS2, can be indexed by our new algorithm.

@andrew, jonny m, erichazann, mike, ledge, stu, rex, blog, dis: For our July 1st launch, we didn't enable Flash indexing for Flash files embedded via SWFObject. We're now rolling out an update that enables support for common JavaScript techniques for embedding Flash, including SWFObject and SWFObject2.

@mike: At this time, content loaded dynamically from resource files is not indexed. We’ve noted this feature request from several webmasters -- look for this in a near future update.

Update on July 29, 2010: Please note that our ability to load external resources is live.

Interaction of HTML pages and Flash
@captain cuisine: The text found in Flash files is treated similarly to text found in other files, such as HTML, PDFs, etc. If the Flash file is embedded in HTML (as many of the Flash files we find are), its content is associated with the parent URL and indexed as a single entity.

@jeroen: Serving the same content in Flash and an alternate HTML version could cause us to find duplicate content. This won't cause a penalty -- we don’t lower a site in ranking because of duplicate content. Be aware, though, that search results will most likely only show one version, not both.

@All: We’re trying to serve users the most relevant results possible regardless of the file type. This means that standalone Flash, HTML with embedded Flash, HTML only, PDFs, etc., can all have the potential to be returned in search results.

Indexing large Flash files
@dsfdgsg: We’ve heard requests for deep linking (linking to specific content inside a file) not just for Flash results, but also for other large documents and presentations. In the case of Flash, the ability to deep link will require additional functionality in Flash with which we integrate.

@All: The majority of the existing Flash files on the web are fine in regard to filesize. It shouldn’t be too much of a concern.

More details about our Flash indexing algorithm
@brian, marcos, bharath: Regarding ActionScript, we’re able to find new links loaded through ActionScript. We explore Flash like a website visitor does; we do not decompile the SWF file. Unless you're making ActionScript visible to users, Google will not expose ActionScript code.

@dlocks: We respect rel="nofollow" wherever we encounter it in HTML.

URL removal explained, Part I: URLs & directories

Webmaster level: All

There's a lot of content on the Internet these days. At some point, something may turn up online that you would rather not have out there—anything from an inflammatory blog post you regret publishing, to confidential data that accidentally got exposed. In most cases, deleting or restricting access to this content will cause it to naturally drop out of search results after a while. However, if you urgently need to remove unwanted content that has gotten indexed by Google and you can't wait for it to naturally disappear, you can use our URL removal tool to expedite the removal of content from our search results as long as it meets certain criteria (which we'll discuss below).

We've got a series of blog posts lined up for you explaining how to successfully remove various types of content, and common mistakes to avoid. In this first post, I'm going to cover a few basic scenarios: removing a single URL, removing an entire directory or site, and reincluding removed content. I also strongly recommend our previous post on managing what information is available about you online.

Removing a single URL

In general, in order for your removal requests to be successful, the owner of the URL(s) in question—whether that's you, or someone else—must have indicated that it's okay to remove that content. For an individual URL, this can be indicated in any of three ways: by blocking the page via robots.txt, by adding a noindex meta tag to the page, or by returning a 404 or 410 status code for it.
Before submitting a removal request, you can check whether the URL is correctly blocked:
  • robots.txt: You can check whether the URL is correctly disallowed using either the Fetch as Googlebot or Test robots.txt features in Webmaster Tools.
  • noindex meta tag: You can use Fetch as Googlebot to make sure the meta tag appears somewhere between the <head> and </head> tags. If you want to check a page you can't verify in Webmaster Tools, you can open the URL in a browser, go to View > Page source, and make sure you see the meta tag between the <head> and </head> tags.
  • 404 / 410 status code: You can use Fetch as Googlebot, or tools like Live HTTP Headers or web-sniffer.net to verify whether the URL is actually returning the correct code. Sometimes "deleted" pages may say "404" or "Not found" on the page, but actually return a 200 status code in the page header; so it's good to use a proper header-checking tool to double-check. A minimal scripted check is sketched just after this list.
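Here is a rough sketch of such a check using only Python's standard library; the URL is a placeholder, and the noindex test is a simple string match rather than a full HTML parse:

   # Rough sketch: report a URL's HTTP status code and whether a robots
   # "noindex" meta tag appears in the page body.
   import urllib.error
   import urllib.request

   url = "http://www.example.com/embarrassing-stuff.html"

   try:
       with urllib.request.urlopen(url) as response:
           status = response.status                  # e.g. 200
           body = response.read().decode("utf-8", errors="replace").lower()
   except urllib.error.HTTPError as error:
       status = error.code                           # e.g. 404 or 410
       body = ""

   print("HTTP status code:", status)
   print("noindex meta tag found:", 'name="robots"' in body and "noindex" in body)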
If unwanted content has been removed from a page but the page hasn't been blocked in any of the above ways, you will not be able to completely remove that URL from our search results. This is most common when you don't own the site that's hosting that content. We cover what to do in this situation in Part II of our removals series.

If a URL meets one of the above criteria, you can remove it by going to http://www.google.com/webmasters/tools/removals, entering the URL that you want to remove, and selecting the "Webmaster has already blocked the page" option. Note that you should enter the URL where the content was hosted, not the URL of the Google search where it's appearing. For example, enter
   http://www.example.com/embarrassing-stuff.html
not
   http://www.google.com/search?q=embarrassing+stuff

This article has more details about making sure you're entering the proper URL. Remember that if you don't tell us the exact URL that's troubling you, we won't be able to remove the content you had in mind.

Removing an entire directory or site

In order for a directory or site-wide removal to be successful, the directory or site must be disallowed in the site's robots.txt file. For example, in order to remove the http://www.example.com/secret/ directory, your robots.txt file would need to include:
   User-agent: *
   Disallow: /secret/

It isn't enough for the root of the directory to return a 404 status code, because it's possible for a directory to return a 404 but still serve out files underneath it. Using robots.txt to block a directory (or an entire site) ensures that all the URLs under that directory (or site) are blocked as well. You can test whether a directory has been blocked correctly using either the Fetch as Googlebot or Test robots.txt features in Webmaster Tools.

Only verified owners of a site can request removal of an entire site or directory in Webmaster Tools. To request removal of a directory or site, click on the site in question, then go to Site configuration > Crawler access > Remove URL. If you enter the root of your site as the URL you want to remove, you'll be asked to confirm that you want to remove the entire site. If you enter a subdirectory, select the "Remove directory" option from the drop-down menu.

Reincluding content

You can cancel removal requests for any site you own at any time, including those submitted by other people. In order to do so, you must be a verified owner of this site in Webmaster Tools. Once you've verified ownership, you can go to Site configuration > Crawler access > Remove URL > Removed URLs (or > Made by others) and click "Cancel" next to any requests you wish to cancel.

Still have questions? Stay tuned for the rest of our series on removing content from Google's search results. If you can't wait, much has already been written about URL removals, and troubleshooting individual cases, in our Help Forum. If you still have questions after reading others' experiences, feel free to ask. Note that, in most cases, it's hard to give relevant advice about a particular removal without knowing the site or URL in question. We recommend sharing your URL by using a URL shortening service so that the URL you're concerned about doesn't get indexed as part of your post; some shortening services will even let you disable the shortcut later on, once your question has been resolved.

Edit: Read the rest of this series:
Part II: Removing & updating cached content
Part III: Removing content you don't own
Part IV: Tracking requests, what not to remove

Companion post: Managing what information is available about you online


BlogHer 2007: Building your audience

Last week, I spoke at BlogHer Business about search engine optimization issues. I presented with Elise Bauer, who talked about the power of community in blogging. She made great points about the linking patterns of blogs. Link out to sites that would be relevant and useful for your readers. Comment on blogs that you like to continue the conversation and provide a link back to your blog. Write useful content that other bloggers will want to link to. Blogging connects readers and writers and creates real communities where valuable content can be exchanged. I talked more generally about search and a few things you might consider when developing your site and blog.

Why is search important for a business?
With search, your potential customers are telling you exactly what they are looking for. Search can be a powerful tool to help you deliver content that is relevant and useful and meets your customers' needs. For instance, do keyword research to find out the most common types of searches that are relevant to your brand. Does your audience most often search for "houses for sale" or "real estate"? Check your referrer logs to see what searches are bringing visitors to your site (you can find a list of the most common searches that return your site in the results from the Query stats page of webmaster tools). Does your site include valuable content for those searches? A blog is a great way to add this content. You can write unique, targeted articles that provide exactly what the searcher wanted.

How do search engines index sites?
The first step in the indexing process is discovery. A search engine has to know the pages exist. Search engines generally learn about pages from following links, and this process works great. If you have new pages, ensure relevant sites link to them, and provide links to them from within your site. For instance, if you have a blog for your business, you could provide a link from your main site to the latest blog post. You can also let search engines know about the pages of your site by submitting a Sitemap file. Google, Yahoo!, and Microsoft all support the Sitemaps protocol and if you have a blog, it couldn't be easier! Simply submit your blog's RSS feed. Each time you update your blog and your RSS feed is updated, the search engines can extract the URL of the latest post. This ensures search engines know about the updates right away.
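
Besides submitting the feed in each search engine's webmaster console, one lightweight way to announce it is a Sitemap line in your robots.txt file (the feed URL below is a placeholder):

   Sitemap: http://www.example.com/blog/rss.xml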

Once a search engine knows about the pages, it has to be able to access those pages. You can use the crawl errors reports in webmaster tools to see if we're having any trouble crawling your site. These reports show you exactly what pages we couldn't crawl, when we tried to crawl them, and what the error was.

Once we access the pages, we extract the content. You want to make sure that what your page is about is represented by text. What does the page look like with JavaScript, Flash, and images turned off in the browser? Use ALT text and descriptive filenames for images. For instance, if your company name is in a graphic, the ALT text should be the company name rather than "logo". Put text in HTML rather than in Flash or images. This not only helps search engines index your content, but also makes your site more accessible to visitors with mobile browsers, screen readers, or older browsers.
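
For example, reusing the sofa store from earlier, a logo image might be marked up along these lines (the filename and text are hypothetical):

   <img src="/images/buffys-house-of-sofas-logo.png" alt="Buffy's House of Sofas">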

What is your site about?
Does each page have unique title and meta description tags that describe the content? Are the words that visitors search for represented in your content? Do a search of your pages for the queries you expect searchers to do most often and make sure that those words do indeed appear in your site. Which of the following tells visitors and search engines what your site is about?

Option 1
If you're plagued by the cliffs of insanity or the pits of despair, sign up for one of our online classes! Learn the meaning of the word inconceivable. Find out the secret to true love overcoming death. Become skilled in hiding your identity with only a mask. And once you graduate, you'll get a peanut. We mean it.

Option 2
See our class schedule here. We provide extensive instruction and valuable gifts upon graduation.

When you link to other pages in your site, ensure that the anchor text (the text used for the link) is descriptive of those pages. For instance, you might link to your products page with the text "Inigo Montoya's sword collection" or "Buttercup's dresses" rather than "products page" or the ever-popular "click here".

Why are links important?
Links are important for a number of reasons. They are a key way to drive traffic to your site: visitors of other sites can learn about your site through links to it, and you can use links to other sites to provide valuable information to your own visitors. And just as links let visitors know about your site, they also let search engines know about it. The anchor text describes what your site is about, and the number of relevant links to your pages is an indicator of how popular and useful those pages are. (You can find a list of the links to your site and the most common anchor text used in those links in webmaster tools.)

A blog is a great way to build links, because it enables you to create new content on a regular basis. The more useful content you have, the greater the chances someone else will find that content valuable to their readers and link to it. Several people at the BlogHer session asked about linking out to other sites. Won't this cause your readers to abandon your site? Won't this cause you to "leak out" your PageRank? No, and no. Readers will appreciate that you are letting them know about resources they might be interested in and will remember you as a valuable source of information (and keep coming back for more!). And PageRank isn't a set of scales, where incoming links are weighted against outgoing ones and cancel each other out. Links are content, just as your words are. You want your site to be as useful to your readers as possible, and providing relevant links is a way, just as writing content is, to do that.

The key is compelling content
Google's main goal is to provide the most useful and relevant search results possible. That's the key thing to keep in mind as you look at optimizing your site. How can you make your site the most useful and relevant result for the queries you care about? This won't just help you in the search results, which after all, are just the means to the end. What you are really interested in is keeping your visitors happy and coming back. And creating compelling and useful content is the best way to do that.

Using RSS/Atom feeds to discover new URLs

Webmaster Level: Intermediate

Google uses numerous sources to find new webpages, from links we find on the web to submitted URLs. We aim to discover new pages quickly so that users can find new content in Google search results soon after they go live. We recently launched a feature that uses RSS and Atom feeds for the discovery of new webpages.

RSS/Atom feeds have been very popular in recent years as a mechanism for content publication. They allow readers to check for new content from publishers. Using feeds for discovery allows us to get these new pages into our index more quickly than traditional crawling methods. We may use many potential sources to access updates from feeds including Reader, notification services, or direct crawls of feeds. Going forward, we might also explore mechanisms such as PubSubHubbub to identify updated items.

In order for us to use your RSS/Atom feeds for discovery, it's important that crawling these files is not disallowed by your robots.txt. To find out if Googlebot can crawl your feeds and find your pages as fast as possible, test your feed URLs with the robots.txt tester in Google Webmaster Tools.


Traffic drops and site architecture issues

Webmaster Level: Intermediate.

We hear lots of questions about site architecture issues and traffic drops, so it was a pleasure to talk about it in greater detail at SMX London and I'd like to highlight some key concepts from my presentation here. First off, let's gain a better understanding of drops in traffic, and then we'll take a look at site design and architecture issues.

Understanding drops in traffic

As you know, fluctuations in search results happen all the time; the web is constantly evolving and so is our index. Improvements in our ability to understand our users' interests and queries also often lead to differences in how our algorithms select and rank pages. We realize, however, that such changes might be confusing and sometimes foster misconceptions, so we'd like to address a couple of these myths head-on.

Myth number 1: Duplicate content causes drops in traffic!
Webmasters often wonder if the duplicates on their site can have a negative effect on their site's traffic. As mentioned in our guidelines, unless this duplication is intended to manipulate Google and/or users, the duplication is not a violation of our Webmaster Guidelines. The second part of my presentation illustrates in greater detail how to deal with duplicate content using canonicalization.

Myth number 2: Affiliate programs cause drops in traffic!
Original and compelling content is crucial for a good user experience. If your website participates in affiliate programs, it's essential to consider whether the same content is available in many other places on the web. Affiliate sites with little or no original and compelling content are not likely to rank well in Google search results, but including affiliate links within the context of original and compelling content isn't in itself the sort of thing that leads to traffic drops.

Having reviewed a few of the most common concerns, I'd like to highlight two important sections of the presentation. The first illustrates how malicious attacks -- such as an injection of hidden text and links -- might cause your site to be removed from Google's search results. On a happier note, it also covers how you can use the Google cache and Webmaster Tools to identify this issue. On a related note, if we've found a violation of the Webmaster Guidelines such as the use of hidden text or the presence of malware on your site, you will typically find a note regarding this in your Webmaster Tools Message center.
You may also find your site's traffic decreased if your users are being redirected to another site, for example due to a hacker-applied server- or page-level redirection triggered by referrals from search engines. A similar scenario, but with different results, is the case in which a hacker has instituted a redirection for crawlers only. While this will cause no immediate drop in traffic, since users and their visits are not affected, it might lead to a decrease in pages indexed over time.




Site design and architecture issues
Now that we've seen how malicious changes might affect your site and its traffic, let's examine some design and architecture issues. Specifically, you want to ensure that your site is able to be both effectively crawled and indexed, which is the prerequisite to being shown in our search results. What should you consider?

  • First off, check that your robots.txt file has the correct status code and is not returning an error.
  • Keep in mind some best practices when moving to a new site and the new "Change of address" feature recently added to Webmaster Tools.
  • Review the settings of the robots.txt file to make sure no pages -- particularly those rewritten and/or dynamic -- are blocked inappropriately.
  • Finally, make good use of the rel="canonical" attribute to reduce the indexing of duplicate content on your domain. The example in the presentation shows how using this attribute helps Google understand that a duplicate can be clustered with the canonical and that the original, or canonical, page should be indexed. A minimal markup example follows just after this list.
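
As a quick reference, the canonical link element is placed in the <head> of the duplicate page and points at the preferred URL (the URL below is a placeholder):

   <link rel="canonical" href="http://www.example.com/red-fluffy-sofa" />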


In conclusion, remember that fluctuations in search results are normal but there are steps that you can take to avoid malicious attacks or design and architecture issues that might cause your site to disappear or fluctuate unpredictably in search results. Start by learning more about attacks by hackers and spammers, make sure everything is running properly at crawling and indexing level by double-checking the HTML suggestions in Webmaster Tools, and finally, test your robots.txt file in case you are accidentally blocking Googlebot. And don't forget about those "robots.txt unreachable" errors!


Speaking the language of robots


We all know how friendly Googlebot is. And like all benevolent robots, he listens to us and respects our wishes about parts of our site that we don't want crawled. We can just give him a robots.txt file explaining what we want, and he'll happily comply. But what if you're intimidated by the idea of communicating directly with Googlebot? After all, not all of us are fluent in the language of robots.txt. This is why we're pleased to introduce you to your personal robot translator: the Robots.txt Generator in Webmaster Tools. It's designed to give you an easy and interactive way to build a robots.txt file. It can be as simple as entering the files and directories you don't want crawled by any robots.

Or, if you need to, you can create fine-grained rules for specific robots and areas of your site.
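A sketch of what a generated file might look like, with one rule for all robots and a finer-grained rule for a specific crawler (the paths are hypothetical):

   User-agent: *
   Disallow: /tmp/
   Disallow: /cgi-bin/

   User-agent: Googlebot-Image
   Disallow: /private-photos/
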
Once you're finished with the generator, feel free to test the effects of your new robots.txt file with our robots.txt analysis tool. When you're done, just save the generated file to the top level (root) directory of your site, and you're good to go. There are a couple of important things to keep in mind about robots.txt files:
  • Not every search engine will support every extension to robots.txt files
The Robots.txt Generator creates files that Googlebot will understand, and most other major robots will understand them too. But it's possible that some robots won't understand all of the robots.txt features that the generator uses.
  • Robots.txt is simply a request
Although it's highly unlikely from a major search engine, there are some unscrupulous robots that may ignore the contents of robots.txt and crawl blocked areas anyway. If you have sensitive content that you need to protect completely, you should put it behind password protection rather than relying on robots.txt.

We hope this new tool helps you communicate your wishes to Googlebot and other robots that visit your site. If you want to learn more about robots.txt files, check out our Help Center. And if you'd like to discuss robots.txt and robots with other webmasters, visit our Google Webmaster Help Group.

Date with Googlebot, Part II: HTTP status codes and If-Modified-Since

Our date with Googlebot was so wonderful, but it's hard to tell if we, the websites, said the right thing. We returned 301 permanent redirect, but should we have responded with 302 temporary redirect (so he knows we're playing hard to get)? If we sent a few new 404s, will he ever call our site again? Should we support the header "If-Modified-Since?" These questions can be confusing, just like young love. So without further ado, let's ask the expert, Googlebot, and find out how he judged our response (code).


Supporting the "If-Modified-Since" header and returning 304 can save bandwidth.


-----------
Dearest Googlebot,
  Recently, I did some spring cleaning on my site and deleted a couple of old, orphaned pages. They now return the 404 "Page not found" code. Is this ok, or have I confused you?
Frankie O'Fore

Dear Frankie,
  404s are the standard way of telling me that a page no longer exists. I won't be upset—it's normal that old pages are pruned from websites, or updated to fresher content. Most websites will show a handful of 404s in the Crawl Diagnostics over at Webmaster Tools. It's really not a big deal. As long as you have good site architecture with links to all your indexable content, I'll be happy, because it means I can find everything I need.

  But don't forget, it's not just me who comes to your website—there may be humans seeing these pages too. If you've only got a very simple '404 page not found' message, visitors who aren't as savvy could be baffled. There are lots of ways to make your 404 page more friendly; a quick one is our 404 widget over at Webmaster Tools, which will help direct people to content which does exist. For more information, you can read the blog post. Most web hosting companies, big and small, will let you customise your 404 page (and other return codes too).

Love and kisses,
Googlebot


Hey Googlebot,
  I was just reading your reply to Frankie above, and it raised a couple of questions.
What if I have someone linking to a page that no longer exists? How can I make sure my visitors still find what they're after? Also, what if I just move some pages around? I'd like to better organise my site, but I'm worried you'll get confused. How can I help you?
Yours hopefully,
Little Jimmy


Hello Jimmy,
   Let's pretend there are no anachronisms in your letter, and get to the meat of the matter. Firstly, let's look at links coming from other sites. Obviously, these can be a great source of traffic, and you don't want visitors presented with an unfriendly 'Page not found' message. So, you can harness the power of the mighty redirect.

   There are two types of redirect—301 and 302. Actually, there are lots more, but these are the two we'll concern ourselves with now. Just like 404, 301 and 302 are different types of response codes you can send to users and search engine crawlers. They're both redirects, but a 301 is permanent and a 302 is temporary. A 301 redirect tells me that whatever this page used to be, now it lives somewhere else. This is perfect for when you're re-organising your site, and also helps with links from offsite. Whenever I see a 301, I'll update all references to that old page with the new one you've told me about. Isn't that easy?

   If you don't know where to begin with redirects, let me get you started. It depends on your webserver, but here are some searches that may be helpful:
Apache: http://www.google.com/search?q=301+redirect+apache
IIS: http://www.google.com/search?q=301+redirect+iis
You can also check your manual, or the README files that came with your server.
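
As a small illustration, on Apache with mod_alias a single permanent redirect can be one line like the following (the paths are placeholders, and your server may need a different approach):

   Redirect 301 /old-page.html http://www.example.com/new-page.html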

   As an alternative to a redirect, you can email the webmaster of the site linking to you and ask them to update their link. Not sure what sites are linking to you? Don't despair - my human co-workers have made that easy to figure out. In the "Links" portion of Webmaster Tools, you can enter a specific URL on your site to determine who's linking to it.

  My human co-workers also just released a tool which shows URLs linking to non-existent pages on your site. You can read more about that here.

Yours informationally,
Googlebot



Darling Googlebot,
   I have a problem—I live in a very dynamic part of the web, and I keep changing my mind about things. When you ask me questions, I never respond the same way twice—my top threads change every hour, and I get new content all the time! You seem like a straightforward guy who wants straightforward answers. How can I tell you when things change without confusing you?
Temp O'Rary


Dear Temp,
   I just told little Jimmy that 301s are the best way to tell a Googlebot about your new address, but what you're looking for is a 302.
   Once you're indexed, it's the polite way to tell your visitors that your address is still the right one, but that the content can temporarily be found elsewhere. In these situations, a 302 (or the rarer '307 Temporary Redirect') would be better. For example, orkut redirects from http://orkut.com to http://google.com/accounts/login?service=orkut, which isn't a page that humans would find particularly useful when searching for Orkut***.
It's on a different domain, for starters. So, a 302 has been used to tell me that all the content and linking properties of the URL shouldn't be updated to the target - it's just a temporary page.

  That's why when you search for orkut, you see orkut.com and not that longer URL.

  Remember: simple communication is the key to any relationship.

Your friend,
Googlebot


*** Please note, I simplified the URL to make it easier to read. It's actually much more complex than that.

Captain Googlebot,
   I am the kind of site who likes to reinvent herself. I noticed that the links to me on my friends' sites are all to URLs I got rid of several redesigns ago! I had set up 301s to my new URLs for those pages, but after that I 301'ed the newer URLs to my next version. Now I'm afraid that if you follow their directions when you come to crawl, you'll end up following a string of 301s so long that by the end you won't come calling any more.
Ethel Binky


Dear Ethel,
   It sounds like you have set up some URLs that redirect to more redirects to... well, goodness! In small amounts, these "repeat redirects" are understandable, but it may be worth considering why you're using them in the first place. If you remove the 301s in the middle and send me straight to the final destination on all of them, you'll save both of us a bunch of time and HTTP requests. But don't just think of us. Other people get tired of seeing that same old 'contacting.... loading ... contacting...' game in their status bar.

   Put yourself in their shoes—if your string of redirects starts to look rather long, users might fear that you have set them off into an infinite loop! Bots and humans alike can get scared by that kind of "eternal commitment." Instead, try to get rid of those chained redirects, or at least keep 'em short. Think of the humans!

Yours thoughtfully,
Googlebot


Dear Googlebot,
   I know you must like me—you even ask me for unmodified files, like my college thesis that hasn't changed in 10 years. It's starting to be a real hassle! Is there anything I can do to prevent your taking up my lovely bandwidth?

Janet Crinklenose


Janet, Janet, Janet,
   It sounds like you might want to learn a new phrase—'304 Not Modified'. If I've seen a URL before, I insert an 'If-Modified-Since' in my request's header. This line also includes an HTTP-formatted date string. If you don't want to send me yet another copy of that file, stand up for yourself and send back a normal HTTP header with the status '304 Not Modified'! I like information, and this qualifies too. When you do that, there's no need to send me a copy of the file—which means you don't waste your bandwidth, and I don't feel like you're palming me off with the same old stuff.

   You'll probably notice that a lot of browsers and proxies will say 'If-Modified-Since' in their headers, too. You can be well on your way to curbing that pesky bandwidth bill.

Now go out there and save some bandwidth!
Good ol' Googlebot

-----------
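
As a footnote to Googlebot's advice, here is a rough sketch of the If-Modified-Since / 304 exchange from the client's side, using only Python's standard library (the URL and date are placeholders):

   import urllib.error
   import urllib.request

   url = "http://www.example.com/college-thesis.html"
   request = urllib.request.Request(
       url, headers={"If-Modified-Since": "Sat, 29 Oct 1994 19:43:31 GMT"})

   try:
       with urllib.request.urlopen(request) as response:
           # 200: the server sent a full copy of the file
           print("Status:", response.status)
   except urllib.error.HTTPError as error:
       if error.code == 304:
           print("304 Not Modified: no body sent, bandwidth saved")
       else:
           raise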

Googlebot has been so helpful! Now we know how to best respond to users and search engines. The next time we get together, though, it's time to sit down for a good long heart-to-heart with the guy (Date with Googlebot: Part III, is coming soon!).



UPDATE: Added a missing link. Thanks to Boris for pointing that out.

Help us make the web better: An update on Rich Snippets

Webmaster Level: All

In May this year we announced Rich Snippets, a feature that makes it possible to show structured data from your pages in Google's search results.


We're convinced that structured data makes the web better, and we've worked hard to expand Rich Snippets to more search results and collect your feedback along the way. If you have review or people/social networking content on your site, it's easier than ever to mark up your content using microformats or RDFa so that Google can better understand it and generate useful Rich Snippets. Here are a few helpful improvements on our end to enable you to mark up your content:

Testing tool. See what Google is able to extract, and preview how microformats or RDFa marked-up pages would look on Google search results. Test your URLs on the Rich Snippets Testing Tool.


Google Custom Search users can also use the Rich Snippets Testing Tool to test markup usable in their Custom Search engine.

Better documentation. We've extended our documentation to include a new section containing Tips & Tricks and Frequently Asked Questions. Here we have responded to common points of confusion and provided instructions on how to maximize the chances of getting Rich Snippets for your site.

Extended RDFa support. In addition to the Person RDFa format, we have added support for the corresponding fields from the FOAF and vCard vocabularies for all those of you who asked for it.

Videos. If you have videos on your page, you can now mark up your content to help Google find those videos.
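
Going back to the review markup mentioned above: as a rough illustration only (this sketch is not taken from Google's documentation, so check the current markup reference before using it), a review marked up with the hReview microformat might look something like this:

   <div class="hreview">
     <span class="item"><span class="fn">Blast 'Em Up</span></span>:
     rated <span class="rating">4.5</span> out of 5
     by <span class="reviewer vcard"><span class="fn">Bob Smith</span></span>.
   </div>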

As before, marking up your content does not guarantee that Rich Snippets will be shown for your site. We will continue to expand this feature gradually to ensure a great user experience whenever Rich Snippets are shown in search results.


GENERIC CIALIS on my website? I think my site has been hacked!

How to use "Fetch as Googlebot", part 1
Webmaster level: Intermediate

Has your site ever dropped suddenly from the index or disappeared mysteriously from search results? Have you ever received a notice that your site is using cloaking techniques? Unfortunately, sometimes a malicious party "hacks" a website: they penetrate the security of a site and insert undesirable content. Sophisticated attackers can camouflage this spammy or dangerous content so that it doesn't appear for normal users, and appears only to Googlebot, which could negatively impact your site in Google's results.

In such cases it used to be very difficult to detect the problem, because the site would appear normal in the eyes of the user. It may be possible that only requests with a User-agent: of Googlebot and coming from Googlebot's IP could see the hidden content. But that's over: with Fetch as Googlebot, the new Labs feature in Webmaster Tools, you can see exactly what Googlebot is seeing, and avoid any kind of cloaking problems. We'll show you how:
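
As a rough companion to Fetch as Googlebot, here is a sketch that fetches the same URL twice with different User-Agent strings and compares the responses. As noted above, attackers may also key off Googlebot's IP addresses, so only the Webmaster Tools feature shows what Googlebot truly sees; the URL below is a placeholder.

   import urllib.request

   URL = "http://www.example.com/"
   GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

   def fetch(user_agent):
       # Request the page with the given User-Agent and return the raw body.
       request = urllib.request.Request(URL, headers={"User-Agent": user_agent})
       with urllib.request.urlopen(request) as response:
           return response.read()

   if fetch("Mozilla/5.0") != fetch(GOOGLEBOT_UA):
       print("Responses differ: the page may be cloaking content for Googlebot.")
   else:
       print("No difference detected for these two User-Agents.")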

Let's imagine that Bob, the administrator of www.example.com, is searching for his site but he finds this instead:



That's strange, because when he looks at the source code of www.example.com, it looks fine:



Much to his surprise, Bob may receive a notice from Google warning him that his site is not complying with Google's quality guidelines. Fortunately, he has his site registered with Webmaster Tools; let's see how he can check what Googlebot sees:

First Bob logs into Webmaster Tools and selects www.example.com. The Fetch as Googlebot feature will be at the bottom of the navigation menu, in the Labs section:



The page will contain a field where you can insert the URL to fetch. It can also be left blank to fetch the homepage.



Bob can simply click Fetch and wait a few seconds. After refreshing the page, he can see the status of the fetch request. If it succeeds, he can click on the "Success" link...



...and that will show the details, with the content of the fetched page:



Aha! There's the spammy content! Now Bob can be certain that www.example.com has been hacked.

Confirming that the website has been hacked (and perhaps is still hacked) is an important step. It is, however, only the beginning. For more information, we strongly suggest getting help from your server administrator or hosting provider and reading our previous blog posts on the subject of hacked sites:


If you have any questions about how to use the Fetch as Googlebot feature, feel free to drop by the Webmaster Help Forum. If you feel that your website might be hacked but are having problems resolving it, you might want to ask the experts in our "Malware and Hacked sites" category.

PS Keep in mind that once you have removed hacked content from your site, it will generally still take time for us to update our search results accordingly. There are a number of factors that affect crawling and indexing of your content so it's impossible to give a time-frame for that.


A new opt-out tool


Webmasters have several ways to keep their sites' content out of Google's search results. Today, as promised, we're providing a way for websites to opt out of having their crawled content appear on Google Shopping, Advisor, Flights, Hotels, and Google+ Local search.

Webmasters can now choose this option through our Webmaster Tools, and crawled content currently being displayed on Shopping, Advisor, Flights, Hotels, or Google+ Local search pages will be removed within 30 days.


Google News now crawling with Googlebot

Webmaster Level: Intermediate

(Cross-posted on the Google News Blog)

Google News recently updated our infrastructure to crawl with Google’s primary user-agent, Googlebot. What does this mean? Very little to most publishers. Any news organizations that wish to opt out of Google News can continue to do so: Google News will still respect the robots.txt entry for Googlebot-News, our former user-agent, if it is more restrictive than the robots.txt entry for Googlebot.

Our Help Center provides detailed guidance on using the robots exclusion protocol for Google News, and publishers can contact the Google News Support Team if they have any questions, but we wanted to first clarify the following:
  • Although you’ll now only see the Googlebot user-agent in your site’s logs, no need to worry: the appearance of Googlebot instead of Googlebot-News is independent of our inclusion policies. (You can always check whether your site is included in Google News by searching with the “site:” operator. For instance, enter “site:yournewssite.com” in the search field for Google News, and if you see results then we are currently indexing your news site.)

  • Your analytics tool will still be able to differentiate user traffic coming to your website from Google Search and traffic coming from Google News, so you should see no changes there. The main difference is that you will no longer see occasional automated visits to your site from the Googlebot-news crawler.

  • If you’re currently respecting our guidelines for Googlebot, you will not need to make any code changes to your site. Sites that have implemented subscriptions using a metered model or who have implemented First Click Free will not experience any changes. For sites which require registration, payment or login prior to reading any full article, Google News will only be able to crawl and index the title and snippet that you show all users who visit your page. Our Webmaster Guidelines provide additional information about “cloaking” (i.e., showing a bot a different version than what users experience). Learn more about Google News and subscription publishers in this Help Center article.

  • Rest assured, your Sitemap will still be crawled. This change does not affect how we crawl News Sitemaps. If you are a News publisher who hasn’t yet set up a News Sitemap and are interested in getting started, please follow this link.

  • For any publishers that wish to opt out of Google News and stay in Google Search, you can simply disallow Googlebot-news and allow Googlebot. For more information on how to do this, consult our Help Center. A minimal robots.txt sketch for this case follows just after this list.
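
A minimal robots.txt sketch for that case might look like this (double-check against the Help Center before relying on it):

   User-agent: Googlebot-News
   Disallow: /

   User-agent: Googlebot
   Disallow: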


As with any website, from time to time we need to make updates to our infrastructure. At the same time, we want to continue to provide as much control as possible to news web sites. We hope we have answered any questions you might have about this update. If you have additional questions, please check out our Help Center.


Canonical Link Element: presentation from SMX West

A little while ago, Google and other search engines announced support for a canonical link element that can help site owners with duplicate content issues. I recreated my presentation from SMX West and you can watch it below:



You can access the slides directly or follow along here:



By the way, Ask just announced that they will support the canonical link element. Read all about it in the Ask.com blog entry.

Thanks again to Wysz for turning this into a great video.

In fact, you might not have seen it, but we recently created a webmaster videos channel on YouTube. If you're interested, you can watch the new webmaster channel. If you subscribe to that channel, you'll always find out about new webmaster-related videos from Google.


from web contents: How to deal with planned site downtime 2013

salam every one, this is a topic from google web master centrale blog:
Webmaster level: Intermediate to Advanced

Once in a while we get asked whether a site’s visibility in Google’s search results can be impacted in a negative way if it’s unavailable when Googlebot tries to crawl it. Sometimes downtime is unavoidable: a webmaster might decide to take a site down due to ongoing site maintenance, or legal or cultural requirements. Outages that are not clearly marked as such can negatively affect a site’s reputation. While we cannot guarantee any crawling, indexing or ranking, there are methods to deal with planned website downtime in a way that will generally not negatively affect your site’s visibility in the search results.

For example, instead of returning an HTTP result code 404 (Not Found) or showing an error page with the status code 200 (OK) when a page is requested, it’s better to return a 503 HTTP result code (Service Unavailable) which tells search engine crawlers that the downtime is temporary. Moreover, it allows webmasters to provide visitors and bots with an estimated time when the site will be up and running again. If known, the length of the downtime in seconds or the estimated date and time when the downtime will be complete can be specified in an optional Retry-After header, which Googlebot may use to determine when to recrawl the URL.
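
For illustration, a raw HTTP response using the delta-seconds form of Retry-After might look like the sketch below (the one-hour value is an assumption for the example, not a recommendation):

HTTP/1.1 503 Service Unavailable
Retry-After: 3600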

Returning a 503 HTTP result code can be a great solution for a number of other situations. We encounter a lot of problems with sites that return 200 (OK) result codes for server errors, downtime, bandwidth overruns, or temporary placeholder pages (“Under Construction”). The 503 HTTP result code is the webmaster’s solution of choice for all of these situations. As for planned server downtime like hardware maintenance, it’s a good idea to have a separate server available to actually return the 503 HTTP result code. It is important, however, not to treat 503 as a permanent solution: lasting 503s can eventually be seen as a sign that the server is now permanently unavailable, and can result in us removing URLs from Google’s index.

If you set up a 503 (Service Unavailable) response in PHP, the header information might look like this:

<?php
// Tell visitors and crawlers the downtime is temporary, and when to check back.
header('HTTP/1.1 503 Service Temporarily Unavailable');
header('Retry-After: Sat, 08 Oct 2011 18:27:00 GMT');

Similar to how you can make 404 pages more useful to users, it’s also a good idea to provide a customized 503 message explaining the situation to users and letting them know when the site will be available again. For further information regarding HTTP result codes, please see RFC 2616.


from web contents: 1000 Words About Images 2013

salam every one, this is a topic from google web master centrale blog: Webmaster level: All

Creativity is an important aspect of our lives and can enrich nearly everything we do. Say I'd like to make my teammate a cup of cool-looking coffee, but my creative batteries are empty; this would be (and is!) one of the many times when I look for inspiration on Google Images.


The images you see in our search results come from publishers of all sizes — bloggers, media outlets, stock photo sites — who have embedded these images in their HTML pages. Google can index image types formatted as BMP, GIF, JPEG, PNG and WebP, as well as SVG.

But how does Google know that the images are about coffee and not about tea? When our algorithms index images, they look at the textual content on the page the image was found on to learn more about the image. We also look at the page's title and its body; we might also learn more from the image’s filename, the anchor text of links pointing to it, and its “alt text”. We may use computer vision to learn more about the image, and may also use the caption provided in the Image Sitemap if that text also exists on the page.
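
As a small illustrative sketch (the filename, alt text, and surrounding sentence are invented placeholders), markup that provides several of these textual signals at once might look like this:

<img src="/images/french-press-coffee.jpg" alt="Coffee brewing in a French press" />
<p>A French press produces a rich cup of coffee in about four minutes.</p>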

 To help us index your images, make sure that:
  • we can crawl both the HTML page the image is embedded in, and the image itself;
  • the image is in one of our supported formats: BMP, GIF, JPEG, PNG, WebP or SVG.
Additionally, we recommend:
  • that the image filename is related to the image’s content;
  • that the alt attribute of the image describes the image in a human-friendly way;
  • and finally, it also helps if the HTML page’s textual contents as well as the text near the image are related to the image.
Now some answers to questions we’ve seen many times:


Q: Why do I sometimes see Googlebot crawling my images, rather than Googlebot-Image?
A: Generally this happens when it’s not clear that a URL will lead to an image, so we crawl the URL with Googlebot first. If we find the URL leads to an image, we’ll usually revisit with Googlebot-Image. Because of this, it’s generally a good idea to allow crawling of your images and pages by both Googlebot and Googlebot-Image.
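
For instance, a robots.txt that explicitly leaves both crawlers unrestricted could be sketched like this (the empty Disallow lines mean “nothing is blocked”; adapt the groups to your own site’s needs):

User-agent: Googlebot
Disallow:

User-agent: Googlebot-Image
Disallow: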

Q: Is it true that there’s a maximum file size for the images?
A: We’re happy to index images of any size; there’s no file size restriction.

Q: What happens to the EXIF, XMP and other metadata my images contain?
A: We may use any information we find to help our users find what they’re looking for more easily. Additionally, information like EXIF data may be displayed in the right-hand sidebar of the interstitial page that appears when you click on an image.


Q: Should I really submit an Image Sitemap? What are the benefits?
A: Yes! Image Sitemaps help us learn about your new images and may also help us learn what the images are about.
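
As a rough sketch (all URLs and the caption are placeholders), a minimal Image Sitemap entry looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>http://www.example.com/coffee.html</loc>
    <image:image>
      <image:loc>http://images.example.com/coffee-cup.jpg</image:loc>
      <image:caption>A cup of coffee with latte art</image:caption>
    </image:image>
  </url>
</urlset>

Note that the image URL in this sketch sits on a different hostname than the page, which, as the next answer explains, is allowed in Image Sitemaps.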


Q: I’m using a CDN to host my images; how can I still use an Image Sitemap?
A: Cross-domain restrictions apply only to the Sitemap’s <loc> tag. In Image Sitemaps, the <image:loc> tag is allowed to point to a URL on another domain, so using a CDN for your images is fine. We also encourage you to verify the CDN’s domain name in Webmaster Tools so that we can inform you of any crawl errors that we might find.


Q: Is it a problem if my images can be found on multiple domains or subdomains I own — for example, CDNs or related sites?
A: Generally, the best practice is to have only one copy of any type of content. If you’re duplicating your images across multiple hostnames, our algorithms may pick one copy as the canonical copy of the image, which may not be your preferred version. This can also lead to slower crawling and indexing of your images.


Q: We sometimes see the original source of an image ranked lower than other sources; why is this?
A: Keep in mind that we use the textual content of a page when determining the context of an image. For example, if the original source is a page from an image gallery that has very little text, it can happen that a page with more textual context is chosen to be shown in search. If you feel you've identified very bad search results for a particular query, feel free to use the feedback link below the search results or to share your example in our Webmaster Help Forum.

SafeSearch

Our algorithms use a great variety of signals to decide whether an image — or a whole page, if we’re talking about Web Search — should be filtered from the results when the user’s SafeSearch filter is turned on. In the case of images some of these signals are generated using computer vision, but the SafeSearch algorithms also look at simpler things such as where the image was used previously and the context in which the image was used. 
One of the strongest signals, however, is self-marked adult pages. We recommend that webmasters who publish adult content mark up their pages with one of the following meta tags:

<meta name="rating" content="adult" />
<meta name="rating" content="RTA-5042-1996-1400-1577-RTA" />

Many users prefer not to have adult content included in their search results (especially if kids use the same computer). When a webmaster provides one of these meta tags, it helps to provide a better user experience because users don't see results which they don't want to or expect to see. 

As with all algorithms, sometimes it may happen that SafeSearch filters content inadvertently. If you think your images or pages are mistakenly being filtered by SafeSearch, please let us know using the following form.

If you need more information about how we index images, please check out the section of our Help Center dedicated to images and read our SEO Starter Guide, which contains lots of useful information. If you have more questions, please post them in the Webmaster Help Forum.


from web contents: How to move your content to a new location 2013

salam every one, this is a topic from google web master centrale blog: Webmaster level: Intermediate

While maintaining a website, webmasters may decide to move the whole website or parts of it to a new location. For example, you might move content from a subdirectory to a subdomain, or to a completely new domain. Changing the location of your content can involve a bit of effort, but it’s worth doing it properly.

To help search engines understand your new site structure better and make your site more user-friendly, make sure to follow these guidelines:
  • It’s important to redirect all users and bots that visit your old content location to the new content location using 301 redirects (see the sketch after this list). To highlight the relationship between the two locations, make sure that each old URL points to the new URL that hosts similar content. If you’re unable to use 301 redirects, you may want to consider using cross-domain canonicals for search engines instead.
  • Check that you have both the new and the old location verified in the same Google Webmaster Tools account.
  • Make sure to check if the new location is crawlable by Googlebot using the Fetch as Googlebot feature. It’s important to make sure Google can actually access your content in the new location. Also make sure that the old URLs are not blocked by a robots.txt disallow directive, so that the redirect or rel=canonical can be found.
  • If you’re moving your content to an entirely new domain, use the Change of address option under Site configuration in Google Webmaster Tools to let us know about the change.
(Screenshot: the Change of address option in Google Webmaster Tools, used to tell us about moving your content.)
  • If you've also changed your site's URL structure, make sure that it's possible to navigate it without running into 404 error pages. Google Webmaster Tools may prove useful in investigating potentially broken links. Just look for Diagnostics > Crawl errors for your new site.
  • Check your Sitemap and verify that it’s up to date.
  • Once you've set up your 301 redirects, keep an eye on visits to your 404 error pages to check that users are being redirected to the new pages and not accidentally ending up on broken URLs. When a user reaches a 404 error page on your site, try to identify which URL they were trying to access and why they were not redirected to the new location of your content, then adjust your 301 redirect rules as appropriate.
  • Have a look at the Links to your site in Google Webmaster Tools and inform the important sites that link to your content about your new location.
  • If your site’s content is specific to a particular region you may want to double check the geotargeting preferences for your new site structure in Google Webmaster Tools.
  • As a general rule of thumb, avoid running two crawlable sites with completely or largely identical content without a 301 redirect or a rel=”canonical” annotation.
  • Lastly, we recommend not bundling other major changes, such as large-scale content, URL-structure, or navigation updates, with the move to a new location. Changing too much at once may confuse users and search engines.
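
As a minimal sketch of the first point above (the URLs are placeholders, and many sites would configure this at the web-server level rather than in application code), a permanent redirect from an old URL to its new location could look like this in PHP:

<?php
// Permanently redirect the old URL to the equivalent page at the new location.
header('HTTP/1.1 301 Moved Permanently');
header('Location: http://www.example.com/new-location/');
exit;
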
We hope you find these suggestions useful. If you happen to have further questions on how to move your content to a new location we’d like to encourage you to drop by our Google Webmaster Help Forum and seek advice from expert webmasters.
