Creation of business management software, creation and search engine optimization of websites, networks and maintenance, design
SEO Factor | Brief SEO Checklist Information
---|---
Keyword in Domain Name/URL | Main keyword(s) should always appear in the URL and, if possible, in the domain name.
Keyword in Title | Main keyword should be as close to the beginning as possible. Don't stuff your title, though, and don't use special characters. (See the HTML sketch after this table.)
Keyword in Meta Description Tag | Use 1 to 2 reasonable sentences in your meta description and use your keywords at least once.
Keywords Meta Tag | Main keyword should be as close to the beginning as possible. Don't use more than about 10 keywords, and don't repeat any single word more than twice!
Keyword Density in the Body | There is no recommended or perfect keyword density, but don't use more than 15% to 20% per main keyword.
Keywords in Header Tags | Most important is H1, followed by H2 and H3. The others are probably not that important.
Keyword Proximity | The closer the keywords are to each other, the better they might rank.
Order of Key Phrases | Try to order your keywords so that they form key phrases that are likely to be queried in search engines.
Keyword Frequency | The most important main keyword should be repeated more often than the others. However, don't spam the keyword in nonsensical places.
Keyword Prominence | The earlier the main keyword appears on the page, the higher its relevance.
Keyword in ALT and TITLE Tags | Describe the image, if possible with one of the main keywords. However, never spam the ALT tag; this is a common SEO mistake!
Keyword Anchor Text | Try to have keywords in your inbound anchor text, i.e. the links that point to subpages of your site.
Keyword Stemming | Stem your keywords: use singular, plural, past forms, etc.
Keyword Semantics | Make use of synonyms and don't spam your site with one and the same keyword.
No Excessive Deep Linking | Check that all your pages can be reached in no more than 3 or 4 clicks!
Page-to-Page Linking | Try to link to appropriate subpages from a related subpage.
Domain Name Extension | .EDU and .ORG seem to be very important. For any .COM domain it is a little more difficult to prove trustworthiness and relevance due to the huge amount of spamming sites.
File Sizes | Never exceed 100 KB per page; try to stay below 40 KB per page.
Hyphenate File Names | Never use more than 4 words in a file name, as this can indicate spam.
Fresh and New Content | Google loves new and regularly updated content, and so do your visitors!
Total Length of URL | Keep it to a minimum; as a rough guideline, try not to go much beyond about 60 characters.
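To make several of the on-page factors above concrete, here is a minimal HTML sketch. The keyword "blue widgets", the file names and the URL are invented for illustration; the point is where the keywords appear, not the specific markup.

```html
<!-- Hypothetical page: http://www.example.com/blue-widgets/buy-blue-widgets.html -->
<!DOCTYPE html>
<html lang="en">
<head>
  <!-- Main keyword near the beginning of the title, no stuffing, no special characters -->
  <title>Blue Widgets - Buy Durable Blue Widgets Online</title>
  <!-- 1 to 2 sentences in the meta description, keyword used once -->
  <meta name="description" content="Blue widgets for home and industry. Compare our durable widget models and order online.">
  <!-- Keywords meta tag: no more than about 10 keywords, no word repeated more than twice -->
  <meta name="keywords" content="blue widgets, widget, buy widgets online, durable widgets">
</head>
<body>
  <!-- H1 is the most important header tag; the main keyword appears early on the page (prominence) -->
  <h1>Blue Widgets</h1>
  <h2>Why choose our widgets?</h2>
  <!-- ALT text describes the image and uses a main keyword, without spamming -->
  <img src="blue-widget-large.jpg" alt="Large blue widget, side view">
  <!-- Keyword-rich anchor text linking to a related subpage -->
  <p>See our <a href="blue-widget-prices.html">blue widget prices</a> for current offers.</p>
</body>
</html>
```

Note that the file names in this sketch are hyphenated and stay well under four words, in line with the checklist above.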
Search engine | Submission URL | Help page |
---|---|---|
Google | http://www.google.com/webmasters/tools/ping?sitemap= | How do I resubmit my Sitemap once it has changed? (see the ping example after this table) |
Yahoo! | — | Does Yahoo! support Sitemaps? |
Ask.com | http://submissions.ask.com/ping?sitemap= | Q: Does Ask.com support sitemaps? |
Live Search | http://webmaster.live.com/ping.aspx?siteMap= | Webmaster Tools (beta) |
Yandex | — | Sitemaps files |
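To use one of these submission URLs, append the full, URL-encoded address of your sitemap to the ping URL and fetch the result with any HTTP client. A minimal sketch, assuming a hypothetical sitemap at http://www.example.com/sitemap.xml (curl is used here only as an example client):

```
# Ping Google with the hypothetical sitemap (the sitemap address is URL-encoded)
curl "http://www.google.com/webmasters/tools/ping?sitemap=http%3A%2F%2Fwww.example.com%2Fsitemap.xml"

# The same sitemap submitted to Ask.com
curl "http://submissions.ask.com/ping?sitemap=http%3A%2F%2Fwww.example.com%2Fsitemap.xml"
```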
Entry | Meaning |
---|---|
User-agent: * Disallow: | Because nothing is disallowed, everything is allowed for every robot. |
User-agent: mybot Disallow: / | mybot robot may not index anything, because the root path (/) is disallowed. |
User-agent: * Allow: / | For all user agents, everything is allowed.
User-agent: BadBot Allow: /About/robot-policy.html Disallow: / | The BadBot robot can see the robot policy document, but nothing else. All other user agents are by default allowed to see everything. This only protects a site if "BadBot" follows the directives in robots.txt.
User-agent: * Disallow: /cgi-bin/ Disallow: /tmp/ Disallow: /private | In this example, all robots can visit the whole site, with the exception of the two directories mentioned and any path that starts with private at the host root directory, including items in privatedir/mystuff and the file privateer.html.
User-agent: BadBot Disallow: / User-agent: * Disallow: /*/private/* | The blank line indicates a new "record", i.e. a new user agent command. All other robots can see everything except any subdirectory named "private" (using the wildcard character).
User-agent: WeirdBot Disallow: /links/listing.html Disallow: /tmp/ Disallow: /private/ User-agent: * Allow: / Disallow: /temp* Allow: *temperature* Disallow: /private/ | This keeps WeirdBot from visiting the listing page in the links directory, the tmp directory and the private directory. All other robots can see everything except the temp directories or files, but should crawl files and directories named "temperature", and should not crawl private directories. Note that robots will use the longest matching string, so temps and temporary will match the Disallow, while temperatures will match the Allow.
Bad Examples - Common Wrong Entries | |
Use one of the robots.txt checkers to see if your file is malformed. |
User-agent: googlebot Disallow / | NO! This entry is missing the colon after the disallow. |
User-agent: sidewiner Disallow: /tmp/ | NO! Robots will ignore misspelled User Agent names (it should be "sidewinder"). Check your server logs for the exact User Agent names and consult published lists of User Agent names.
User-agent: MSNbot Disallow: /PRIVATE | WARNING! Many robots and web servers are case-sensitive, so this path will not match any root-level folders named private or Private.
User-agent: * Disallow: /tmp/ User-agent: Weirdbot Disallow: /links/listing.html Disallow: /tmp/ | Robots generally read from top to bottom and stop when they reach something that applies to them. So Weirdbot would probably stop at the first record, *. If there is a record for a specific User Agent, robots don't check the * (all user agents) block, so any general directives should be repeated in the special blocks. A corrected version is sketched after this table.
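Following that last warning, here is a minimal sketch of a corrected robots.txt, reusing the bot name and paths from the examples above: the general directives are repeated inside the bot-specific record, and a blank line separates the two records.

```
# Record for WeirdBot: repeat the general rules here, because a robot
# that finds its own record will not read the * record below.
User-agent: WeirdBot
Disallow: /links/listing.html
Disallow: /tmp/
Disallow: /private/

# Record for all other robots
User-agent: *
Disallow: /tmp/
Disallow: /private/
```

Running the result through one of the robots.txt checkers mentioned above will catch missing colons, misspelled directives and similar mistakes before a crawler ever sees the file.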