From the Google Webmaster Central blog. Webmaster Level: All

The "Links to your site" feature in Webmaster Tools is now updated to show you which domains link the most to your site, in addition to other improvements. On the overview page you'll notice that there are three main sections: the domains linking most to your site, the pages on your site with the most links, and a sampling of the anchor text external sites are using when they link to your site.


Who links the most
Clicking the “More »” link under the “Who links the most” section will take you to a new view that shows a listing of all the domains that link to your site. Each domain in the list can be expanded to display a sample of pages from your site which are linked to by that domain.


The "More »" link under each specific domain lists all the pages linked to by that domain. At the top of the page there's a total count of links from that domain and a total count of your site's pages linked to from that domain.


Your most linked content
If you drill into the “Your most linked content” view from the overview page, you’ll see a listing of your site’s most linked pages. There's also a link count for each page as well as a count of domains linking to that page. Clicking any of the pages listed will expand the view to show you examples of the leading domains linking to that page and the number of links to the given page from each domain listed. The data used for link counts and throughout the "Links to your site" feature is more comprehensive now, including links redirected using 301 or 302 HTTP redirects.


Each page listed in the "All linked pages" view has an associated "More »" link which displays all the domains linking to that specific page on your site.


Each domain listed leads to a report of all the pages from that domain linking to your specific page.


We hope the updated “Links to your site” feature in Webmaster Tools will help you better understand where the links to your site are coming from and improve your ability to track changes to your site’s link profile. Please post any comments you have about this updated feature or post your questions in the Webmaster Help Forum. We appreciate your feedback since it helps us to continue to improve the functionality of Webmaster Tools.

From the Google Webmaster Central blog: Last week, I participated in the duplicate content summit at SMX Advanced. I couldn't resist the opportunity to show how Buffy is applicable to the everyday search marketing world, but mostly I was there to get input from you on the duplicate content issues you face and to brainstorm how search engines can help.

A few months ago, Adam wrote a great post on dealing with duplicate content. The most important things to know about duplicate content are:
  • Google wants to serve up unique results and does a great job of picking a version of your content to show if your site includes duplication. If you don't want to worry about sorting through duplication on your site, you can let us worry about it instead.
  • Duplicate content doesn't cause your site to be penalized. If duplicate pages are detected, one version will be returned in the search results to ensure variety for searchers.
  • Duplicate content doesn't cause your site to be placed in the supplemental index. Duplication may indirectly influence this, however, if links to your pages are split among the various versions, causing lower per-page PageRank.
At the summit at SMX Advanced, we asked what duplicate content issues were most worrisome. Those in the audience were concerned about scraper sites, syndication, and internal duplication. We discussed lots of potential solutions to these issues and we'll definitely consider these options along with others as we continue to evolve our toolset. Here's the list of some of the potential solutions we discussed so that those of you who couldn't attend can get in on the conversation.

Specifying the preferred version of a URL in the site's Sitemap file
One thing we discussed was the possibility of specifying the preferred version of a URL in a Sitemap file, with the suggestion that if we encountered multiple URLs that point to the same content, we could consolidate links to that page and could index the preferred version.
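To make that concrete, here is a rough sketch of the idea (the URLs are placeholders, and this mechanism was only a proposal at the time): if http://www.example.com/product?id=5 and http://www.example.com/product?id=5&sessionid=123 served the same content, the Sitemap file would list only the preferred version, and search engines could consolidate indexing and links onto it.

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/product?id=5</loc>
      </url>
    </urlset>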

Providing a method for indicating parameters that should be stripped from a URL during indexing
We discussed providing this either in an interface such as Webmaster Tools or in the site's robots.txt file. For instance, if a URL contains session IDs, the webmaster could indicate the variable for the session ID, which would help search engines index the clean version of the URL and consolidate links to it. The audience leaned towards an addition to robots.txt for this.
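Neither option existed at the time of this discussion, so the following is only an illustrative sketch of the normalization a search engine could apply once a webmaster declared, say, sessionid as an ignorable parameter (the function name and parameter are made up for the example):

    from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

    def strip_ignored_params(url, ignored=("sessionid",)):
        # Drop the query parameters the webmaster declared irrelevant for indexing.
        parts = urlparse(url)
        kept = [(name, value)
                for name, value in parse_qsl(parts.query, keep_blank_values=True)
                if name not in ignored]
        return urlunparse(parts._replace(query=urlencode(kept)))

    print(strip_ignored_params("http://www.example.com/page?id=7&sessionid=abc123"))
    # prints: http://www.example.com/page?id=7

Every URL that differs only in the ignored parameter would then map to the same clean URL, which is exactly the consolidation the audience was asking for.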

Providing a way to authenticate ownership of content
This would provide search engines with extra information to help ensure we index the original version of an article, rather than a scraped or syndicated version. Note that we do a pretty good job of this now, and not many people in the audience mentioned this as a primary issue. However, the audience was interested in a way of authenticating content as an extra protection. Some suggested using the page with the earliest date, but creation dates aren't always reliable. Someone also suggested allowing site owners to register content, although that could raise issues as well, as non-savvy site owners wouldn't know to register content and someone else could take the content and register it instead. We currently rely on a number of factors, such as the site's authority and the number of links to the page. If you syndicate content, we suggest that you ask the sites that are using your content to block their version with a robots.txt file as part of the syndication arrangement, to help ensure your version is served in results.
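For example, if a syndication partner publishes your articles under a /syndicated/ directory (a hypothetical path used here only for illustration), their robots.txt file could block that copy like this:

    User-agent: *
    Disallow: /syndicated/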

Making a duplicate content report available for site owners
There was great support for the idea of a duplicate content report that would list pages within a site that search engines see as duplicate, as well as pages that are seen as duplicates of pages on other sites. In addition, we discussed the possibility of adding an alert system to this report so site owners could be notified via email or RSS of new duplication issues (particularly external duplication).

Working with blogging software and content management systems to address duplicate content issues
Some duplicate content issues within a site are due to how the software powering the site structures URLs. For instance, a blog may have the same content on the home page, a permalink page, a category page, and an archive page. We are definitely open to talking with software makers about the best way to provide easy solutions for content creators.

In addition to discussing potential solutions to duplicate content issues, the audience had a few questions.

Q: If I nofollow a substantial number of my internal links to reduce duplicate content issues, will this raise a red flag with the search engines?
The number of nofollow links on a site won't raise any red flags, but that is probably not the best method of blocking the search engines from crawling duplicate pages, as other sites may link to those pages. A better method may be to block pages you don't want crawled with a robots.txt file.
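For instance, if the duplicate pages are printer-friendly copies served under a /print/ directory (again, a hypothetical layout), a robots.txt file at the root of the site could keep them from being crawled at all:

    User-agent: *
    Disallow: /print/

Unlike nofollow on your internal links, this also covers the case where other sites link directly to those duplicate URLs.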

Q: Are the search engines continuing the Sitemaps alliance?
We launched sitemaps.org in November of last year and have continued to meet regularly since then. In April, we added the ability for you to let us know about your Sitemap in your robots.txt file. We plan to continue to work together on initiatives such as this to make the lives of webmasters easier.
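That robots.txt addition is a single line pointing to your Sitemap file, for example (with a placeholder URL):

    Sitemap: http://www.example.com/sitemap.xml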

Q: Many pages on my site primarily consist of graphs. Although the graphs are different on each page, how can I ensure that search engines don't see these pages as duplicate since they don't read images?
To ensure that search engines see these pages as unique, include unique text on each page (for instance, a different title, caption, and description for each graph) and include unique alt text for each image (for instance, rather than alt="graph", use something like alt="graph that shows Willow's evil trending over time").

Q: I've syndicated my content to many affiliates and now some of those sites are ranking for this content rather than my site. What can I do?
If you've freely distributed your content, you may need to enhance and expand the content on your site to make it unique.

Q: As a searcher, I want to see duplicates in search results. Can you add this as an option?
We've found that most searchers prefer not to have duplicate results. The audience member commented that she may not want to get information from a particular site and would like other choices, but in that case other sites will likely not have identical information and will therefore still show up in the results. Bear in mind that you can add the "&filter=0" parameter to the end of a Google web search URL to see additional results that might be similar.

I've brought all of the issues and potential solutions that we discussed at the summit back to my team and others within Google, and we'll continue to work on providing the best search results and expanding our partnership with you, the webmaster. If you have additional thoughts, we'd love to hear about them!
From the Google Webmaster Central blog:
It's been well over three years since we initially announced the Python Sitemap generator in June 2005. In this time, we've seen lots of people create great third-party Sitemap generators to help webmasters create better Sitemap files. While most Sitemap generators either crawl websites or list the files on a server, we have created a different kind of Sitemap generator that uses several ways to find URLs on your website and then allows you to automatically create and maintain different kinds of Sitemap files.

[Screenshot: the Google Sitemap Generator admin console]

About Google Sitemap Generator


Our new open-source Google Sitemap Generator finds new and modified URLs based on your web server's traffic, its log files, or the files found on the server. By combining these methods, Google Sitemap Generator can be very fast in finding these URLs and calculating relevant metadata, thereby making your Sitemap files as effective as possible. Once Google Sitemap Generator has collected the URLs, it can create and maintain the appropriate Sitemap files for you.
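The tool itself is a server plug-in (see "Getting started" below), but the log-based approach can be illustrated with a simplified sketch in Python; the log path, base URL, and regular expression here are assumptions for the example and are not part of Google Sitemap Generator:

    import re
    from xml.sax.saxutils import escape

    LOG_PATTERN = re.compile(r'"GET (\S+) HTTP/[\d.]+" 200 ')  # successfully served GET requests

    def urls_from_access_log(log_path, base="http://www.example.com"):
        # Collect the distinct paths that real visitors requested and that returned 200.
        paths = set()
        with open(log_path) as log_file:
            for line in log_file:
                match = LOG_PATTERN.search(line)
                if match:
                    paths.add(match.group(1))
        return sorted(base + path for path in paths)

    def write_sitemap(urls, out_path="sitemap.xml"):
        # Write a minimal sitemaps.org-style Sitemap file listing the collected URLs.
        with open(out_path, "w") as out:
            out.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            out.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
            for url in urls:
                out.write("  <url><loc>%s</loc></url>\n" % escape(url))
            out.write("</urlset>\n")

    write_sitemap(urls_from_access_log("/var/log/apache2/access.log"))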

In addition, Google Sitemap Generator can send a ping to Google Blog Search for all of your new or modified URLs. You can optionally include the URLs of the Sitemap files in your robots.txt file as well as "ping" the other search engines that support the sitemaps.org standard.
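As a rough sketch of what such a ping looks like, here is a minimal example; the Google endpoint shown is the one documented for sitemaps.org-style pings at the time, and you should check each search engine's documentation for its current address:

    from urllib.parse import quote
    from urllib.request import urlopen

    def ping_sitemap(ping_endpoint, sitemap_url):
        # Tell a search engine that the Sitemap at sitemap_url was created or updated.
        with urlopen(ping_endpoint + quote(sitemap_url, safe="")) as response:
            return response.getcode()  # 200 means the ping was received

    ping_sitemap("http://www.google.com/ping?sitemap=",
                 "http://www.example.com/sitemap.xml")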

Sending the URLs to the right Sitemap files is simple thanks to the web-based administration console. This console gives you access to various features that make administration a piece of cake while maintaining a high level of security by default.

Getting started


Google Sitemap Generator is a server plug-in that can be installed on both Linux servers running Apache and Windows servers running Microsoft IIS. As with other server-side plug-ins, you will need administrative access to the server to install it. You can find detailed installation instructions in the Google Sitemap Generator documentation.

We're excited to release Google Sitemap Generator with the source code and hope that this will encourage more web hosters to include this or similar tools in their hosting packages!

Do you have any questions? Feel free to drop by our Help Group for Google Sitemap Generator or ask general Sitemaps questions in our Webmaster Help Forum.
