This latest algorithm update, which is being rolled out now, is predicted to impact around 3% of search queries; to put that into perspective, the original Panda algorithm was said to affect around 12% of all queries. However, we SEOs have learned to take Google's percentage predictions with a pinch of salt after Matt Cutts stated that the "(not provided)" keyword would account for less than 10% of all website traffic.



Before releasing any details on the algorithm update itself, Google kindly gave us some background information on how they feel about search engine optimisation. This is likely to counter the speculation that arises in some SEO circles whenever Google make an announcement; the most recent example is the speculation that followed misreporting of the "over-optimization" penalty, which Matt Cutts discussed at SXSW. There was a suggestion that his speech was perhaps 'anti-SEO'; however, those who have read the transcript or listened to the talk in full will know that this couldn't be further from the truth.

In this latest blog post, Google have left no room for debate as they emphatically state that: "SEO can be positive and constructive", "search engine optimization can make a site more crawlable and make individual pages more accessible" and "'White hat' search engine optimizers often improve the usability of a site, help create great content, or make sites faster, which is good for both users and search engines." These are only a few examples of the positive endorsement that ethical, organic, white-hat SEO received from Google in this post. The problem Google have is with those who manipulate and game the system and rob search users of the quality experience that they expect; I refer, of course, to the propagators of black-hat SEO. As mentioned, it is sites that use black-hat SEO tactics and violate the Webmaster Guidelines that will be hit by this algorithm update, in an attempt by Google to return higher-quality search results.

Google aren't able to reveal specifics about the changes to how they handle certain signals, as this would leave the door open to those wanting to game the system. However, from the examples given in the blog post, it seems there is a real focus on on-page aspects of webspam such as keyword stuffing and excessive exact-match outbound links. SEO Consult will also be conducting a review in an attempt to identify other metrics that this algorithm update targets.

The second screenshot in the blog seems to indicate that this is another step by Google to clamp down on the blog networks favoured by spammers to acquire anchor-text rich links. It identifies a piece of blatantly spun content with three links semantically connected to the search query 'loans', which are completely irrelevant to the content in which they are placed. This is the kind of spam that would be found on low-quality blog networks such as BuildMyRank, which was recently de-indexed by Google.



As I alluded to in the second paragraph, Matt Cutts recently spoke about an "over-optimization" penalty that is expected to be rolled out imminently. We've cleared up that this wasn't a criticism of SEO in general, but rather of those who abuse the system and lower the quality of the results that are returned to users. We don't think that this announcement is directly linked to the over-optimisation penalty, but we expect to see that released soon, most likely with a US launch followed by a global launch, similar to how Panda was launched.

While we haven't seen any dramatic changes in the SERPs just yet (and we're not expecting to see any change for clients), we will be closely monitoring the social networks and SEO blogs for a better understanding of the initial impact of this algorithm update. We have already seen numerous people complaining in the Google Webmaster Forums and in other blog comments about their site incurring a penalty. This seems to indicate that the update has already begun rolling out, but the full impact won't be known until later this week when the update is fully rolled out.

Robots.txt is a text (not HTML) file you put on your site to tell search robots which pages you would like them not to visit. Robots.txt is by no means mandatory for search engines, but generally search engines obey what they are asked not to do.

The robots.txt file tells search engines which directories to crawl and which not to. You can use it to block crawlers from looking at your image directory if you don't want your images showing up in Google search. Be careful not to use it to try to block people from directories you want to keep secret: anyone can view your robots.txt file. Make sure you password-protect directories that need to be secured.


The location of robots.txt is very important. It must be in the main directory because otherwise user agents (search engines) will not be able to find it – they do not search the whole site for a file named robots.txt. Instead, they look first in the main directory (i.e. http://mydomain.com/robots.txt) and if they don't find it there, they simply assume that this site does not have a robots.txt file and therefore they index everything they find along the way. So, if you don't put robots.txt in the right place, do not be surprised that search engines index your whole site.
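
If you want to check how a crawler is likely to interpret your robots.txt file, Python's standard library includes a parser you can experiment with. The sketch below is my own illustration rather than anything Google provides; it reuses the hypothetical http://mydomain.com address from above, and the image and cgi-bin URLs are just placeholders.

# Minimal sketch of how a well-behaved crawler consults robots.txt before fetching a page.
# Assumes the file lives at http://mydomain.com/robots.txt (a placeholder domain).
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("http://mydomain.com/robots.txt")   # crawlers only look in the top-level directory
parser.read()                                      # download and parse the file

# Ask whether a given user agent may fetch a given URL.
print(parser.can_fetch("Googlebot-Image", "http://mydomain.com/images/photo.jpg"))
print(parser.can_fetch("*", "http://mydomain.com/cgi-bin/script.cgi"))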


Creating the robots.txt file

Robots.txt should be put in the top-level directory of your web server.

Take the following robots.txt files as examples:


1) Here's a basic "robots.txt":
User-agent: *
Disallow: /
With the above declared, all robots (indicated by "*") are instructed to not index any of your pages (indicated by "/"). Most likely not what you want, but you get the idea.

2) Let's get a little more discriminating now. While every webmaster loves Google, you may not want Google's image bot crawling your site's images and making them searchable online, if only to save bandwidth. The declaration below will do the trick:
User-agent: Googlebot-Image
Disallow: /

3) The following disallows all search engines and robots from crawling select directories and pages:
User-agent: *
Disallow: /cgi-bin/
Disallow: /privatedir/
Disallow: /tutorials/blank.htm

4) You can conditionally target multiple robots in "robots.txt." Take a look at the below:
User-agent: *
Disallow: /
User-agent: Googlebot
Disallow: /cgi-bin/
Disallow: /privatedir/
This is interesting: here we declare that crawlers in general should not crawl any part of our site, EXCEPT for Googlebot, which is allowed to crawl the entire site apart from /cgi-bin/ and /privatedir/. So the rules of specificity apply, not inheritance.
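
If you want to confirm this specificity behaviour yourself, the rules from example 4 can be fed to Python's urllib.robotparser, as in the sketch below; the "SomeOtherBot" name and the URLs are hypothetical placeholders of my own.

# Checking the specificity rule from example 4 with Python's robots.txt parser.
from urllib import robotparser

rules = """
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow: /cgi-bin/
Disallow: /privatedir/
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("SomeOtherBot", "http://mydomain.com/page.html"))  # False: caught by the catch-all group
print(parser.can_fetch("Googlebot", "http://mydomain.com/page.html"))     # True: Googlebot's own group applies
print(parser.can_fetch("Googlebot", "http://mydomain.com/cgi-bin/run"))   # False: explicitly disallowed for Googlebot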

5) There is a way to use Disallow: to essentially turn it into "Allow all", and that is by not entering a value after the colon (:):
User-agent: *
Disallow: /
User-agent: ia_archiver
Disallow:
Here I'm saying that all crawlers should be prohibited from crawling our site, except for ia_archiver (Alexa's crawler), which is allowed.

6) Finally, some crawlers now support an additional field called "Allow:", most notably, Google. As its name implies, "Allow:" lets you explicitly dictate what files/folders can be crawled. However, this field is currently not part of the "robots.txt" protocol, so my recommendation is to use it only if absolutely needed, as it might confuse some less intelligent crawlers.

Per Google's FAQs for webmasters, the below is the preferred way to disallow all crawlers from your site EXCEPT Google:
User-agent: *
Disallow: /
User-agent: Googlebot
Allow: /
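
Python's parser also understands the non-standard Allow: field, so you can sanity-check this last example the same way. Again, this is just an illustrative sketch of mine, with placeholder bot names and URLs.

# Checking example 6: everything disallowed except Googlebot, which is explicitly allowed.
from urllib import robotparser

rules = """
User-agent: *
Disallow: /

User-agent: Googlebot
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Googlebot", "http://mydomain.com/anything.html"))     # True: Allow: / wins for Googlebot
print(parser.can_fetch("SomeOtherBot", "http://mydomain.com/anything.html"))  # False: the blanket Disallow: / applies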


Google Tag Manager, also known as GTM, is a free container tag system from Google. A container tag helps you manage the different kinds of tags that you may have on your site. These include web analytics tags, advertising conversion tags, general JavaScript, etc.

Users can add and update their own tags anytime. It’s not limited to Google-specific tags. It includes asynchronous tag loading, so “tags can fire faster without getting in each other’s way,” as Google puts it. It comes with tag templates for marketers to quickly add tags with Google’s interface, and supports custom tags. It also has error prevention tools like Preview Mode, a Debug Console, and Version History “to ensure new tags won’t break your site.”