News and Tutorials from Votre Codeur | SEO | Website Creation | Software Creation

If your blog provides useful information, many readers will want to receive its latest updates. An email subscription form lets interested visitors sign up for those updates directly from your blog.


Adding an email or feed subscription form to your Blogger (Blogspot) blog is easy.
Just follow these steps:

1. Log in to Blogger, then go to Layout and click the "Add a Gadget" link:


2. In the pop-up window, scroll down and click the "HTML/JavaScript" gadget:


3. Paste the following code inside the empty content box:
<style>
.hl-email{
background:url(http://www.matrixar.com/-u3UaeUufpmI/T8lFuelsg8I/AAAAAAAACQY/tOWbHsgTYKc/s1600/mail.png) no-repeat 0px 12px ;
width:300px;
padding:10px 0 0 55px;
float:left;
font-size:1.4em;
font-weight:bold;
margin:0 0 10px 0;
color:#686B6C;
}
.hl-emailsubmit{
background:#9B9895;
cursor:pointer;
color:#fff;
border:none;
padding:3px;
text-shadow:0 -1px 1px rgba(0,0,0,0.25);
-moz-border-radius:6px;
-webkit-border-radius:6px;
border-radius:6px;
font:12px sans-serif;
}
.hl-emailsubmit:hover{
background:#E98313;
}
.textarea{
padding:2px;
margin:6px 2px 6px 2px;
background:#f9f9f9;
border:1px solid #ccc;
resize:none;
box-shadow:inset 1px 1px 1px rgba(0,0,0,0.1);
-moz-box-shadow:inset 1px 1px 1px rgba(0,0,0,0.1);
-webkit-box-shadow:inset 1px 1px 1px rgba(0,0,0,0.1);
font-size:13px;
width:130px;
color:#666;
}
</style>
<div class="hl-email">
Subscribe via Email
<form action="http://feedburner.google.com/fb/a/mailverify" id="feedform" method="post" target="popupwindow" onsubmit="window.open('http://feedburner.google.com/fb/a/mailverify?uri=helplogger', 'popupwindow', 'scrollbars=yes,width=550,height=520');return true">
<input class="textarea" name="email" onblur="if (this.value == &quot;&quot;) {this.value = &quot;Enter email address here&quot;;}" onfocus="if (this.value == &quot;Enter email address here&quot;) {this.value = &quot;&quot;;}" value="Enter email address here" type="text" />
<input type="hidden" value="helplogger" name="uri"/><input type="hidden" name="loc" value="en_US"/>
<input class="hl-emailsubmit" value="Submit" type="submit" />
</form>
</div>

Settings
  • Replace the image URL in the .hl-email rule (the mail.png address) to change the email icon
  • Increase or decrease the 130px width value in the .textarea rule for a wider or narrower text field
  • Replace http://feedburner.google.com/fb/a/mailverify?uri=helplogger with your own FeedBurner email feed link. You can get it from your FeedBurner account by navigating to Publicize and then to Email Subscriptions.
  • Replace helplogger with your feed title, which appears at the end of your feed link; in this example it is http://feedburner.google.com/fb/a/mailverify?uri=helplogger. A customization sketch follows this list.
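
For example, if your feed title were mycoolblog (a made-up name used purely for illustration), the two places in the widget code that mention helplogger would change roughly like this:

<form action="http://feedburner.google.com/fb/a/mailverify" id="feedform" method="post" target="popupwindow" onsubmit="window.open('http://feedburner.google.com/fb/a/mailverify?uri=mycoolblog', 'popupwindow', 'scrollbars=yes,width=550,height=520');return true">
...
<input type="hidden" value="mycoolblog" name="uri"/>
...
</form>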

4. Now Save your widget and check your blog. Enjoy!

The following post is from the Google Webmaster Central blog. Webmaster Level: Intermediate to Advanced

As the web evolves, Google’s crawling and indexing capabilities also need to progress. We improved our indexing of Flash, built a more robust infrastructure called Caffeine, and we even started crawling forms where it makes sense. Now, especially with the growing popularity of JavaScript and, with it, AJAX, we’re finding more web pages requiring POST requests -- either for the entire content of the page or because the pages are missing information and/or look completely broken without the resources returned from POST. For Google Search this is less than ideal, because when we’re not properly discovering and indexing content, searchers may not have access to the most comprehensive and relevant results.

We generally advise to use GET for fetching resources a page needs, and this is by far our preferred method of crawling. We’ve started experiments to rewrite POST requests to GET, and while this remains a valid strategy in some cases, often the contents returned by a web server for GET vs. POST are completely different. Additionally, there are legitimate reasons to use POST (e.g., you can attach more data to a POST request than a GET). So, while GET requests remain far more common, to surface more content on the web, Googlebot may now perform POST requests when we believe it’s safe and appropriate.
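
To see why a blanket POST-to-GET rewrite can break, imagine a server that answers the two methods differently for the same URL. The following is only an illustrative sketch using a hypothetical Node.js/Express handler (not part of the original post):

// Hypothetical server: the same URL returns different content for GET and POST.
var express = require('express');
var app = express();

// A GET might return only a short placeholder...
app.get('/details', function (req, res) {
  res.send('<p>Loading details...</p>');
});

// ...while the POST the page actually issues returns the full content.
app.post('/details', function (req, res) {
  res.send('<p>Here is the complete product description.</p>');
});

app.listen(3000);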

We take precautions to avoid performing any task on a site that could result in executing an unintended user action. Our POSTs are primarily for crawling resources that a page requests automatically, mimicking what a typical user would see when they open the URL in their browser. This will evolve over time as we find better heuristics, but that’s our current approach.

Let’s run through a few POST request scenarios that demonstrate how we’re improving our crawling and indexing to evolve with the web.

Examples of Googlebot’s POST requests
  • Crawling a page via a POST redirect
    <html>
      <body onload="document.foo.submit();">
        <form name="foo" action="request.php" method="post">       <input type="hidden" name="bar" value="234"/>
        </form>
      </body>
    </html>
  • Crawling a resource via a POST XMLHttpRequest
    In this step-by-step example, we improve both the indexing of a page and its Instant Preview by following the automatic XMLHttpRequest generated as the page renders.

    1. Google crawls the URL, yummy-sundae.html.
    2. Google begins indexing yummy-sundae.html and, as a part of this process, decides to attempt to render the page to better understand its content and/or generate the Instant Preview.
    3. During the render, yummy-sundae.html automatically sends an XMLHttpRequest for a resource, hot-fudge-info.html, using the POST method.
      <html>
        <head>
          <title>Yummy Sundae</title>
          <script src="jquery.js"></script>
        </head>
        <body>
          This page is about a yummy sundae.
          <div id="content"></div>
          <script type="text/javascript">
            $(document).ready(function() {
              $.post('hot-fudge-info.html', function(data) {
                $('#content').html(data);
              });
            });
          </script>
        </body>
      </html>
    4. The URL requested through POST, hot-fudge-info.html, along with its data payload, is added to Googlebot’s crawl queue.
    5. Googlebot performs a POST request to crawl hot-fudge-info.html.
    6. Google now has an accurate representation of yummy-sundae.html for Instant Previews. In certain cases, we may also incorporate the contents of hot-fudge-info.html into yummy-sundae.html.
    7. Google completes the indexing of yummy-sundae.html.
    8. User searches for [hot fudge sundae].
    9. Google’s algorithms can now better determine how yummy-sundae.html is relevant for this query, and we can properly display a snapshot of the page for Instant Previews.
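
Note that if a resource such as hot-fudge-info.html does not actually need POST semantics, the page could fetch it with GET instead, which is easier for crawlers (see the advice below). A minimal sketch, assuming the server returns the same content for a GET request:

      <script type="text/javascript">
        $(document).ready(function() {
          // Same effect as the POST example above, but using GET.
          $.get('hot-fudge-info.html', function(data) {
            $('#content').html(data);
          });
        });
      </script>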
Improving your site’s crawlability and indexability

General advice for creating crawlable sites is found in our Help Center. For webmasters who want to help Google crawl and index their content and/or generate the Instant Preview, here are a few simple reminders:
  • Prefer GET for fetching resources, unless there’s a specific reason to use POST.
  • Verify that we're allowed to crawl the resources needed to render your page. In the example above, if hot-fudge-info.html is disallowed by robots.txt, Googlebot won't fetch it. More subtly, if the JavaScript code that issues the XMLHttpRequest is located in an external .js file disallowed by robots.txt, we won't see the connection between yummy-sundae.html and hot-fudge-info.html, so even if the latter is not disallowed itself, that may not help us much. We've seen even more complicated chains of dependencies in the wild. To help Google better understand your site it's almost always better to allow Googlebot to crawl all resources.

    You can test whether resources are blocked through Webmaster Tools “Labs -> Instant Previews.”
  • Make sure to return the same content to Googlebot as is returned to users’ web browsers. Cloaking (sending different content to Googlebot than to users) is a violation of our Webmaster Guidelines because, among other things, it may cause us to provide a searcher with an irrelevant result -- the content the user views in their browser may be a complete mismatch from what we crawled and indexed. We’ve seen numerous POST-request examples where a webmaster non-maliciously cloaked (which is still a violation), and their cloaking -- on even the smallest of changes -- then caused JavaScript errors that prevented accurate indexing and completely defeated their reason for cloaking in the first place. Summarizing, if you want your site to be search-friendly, cloaking is an all-around sticky situation that’s best to avoid.

    To verify that you're not accidentally cloaking, you can use Instant Previews within Webmaster Tools, or try setting the User-Agent string in your browser to something like:

    Mozilla/5.0 (compatible; Googlebot/2.1;
      +http://www.google.com/bot.html)

    Your site shouldn't look any different after such a change. If you see a blank page, a JavaScript error, or if parts of the page are missing or different, that means that something's wrong.
  • Remember to include important content (i.e., the content you’d like indexed) as text, visible directly on the page and without requiring user-action to display. Most search engines are text-based and generally work best with text-based content. We’re always improving our ability to crawl and index content published in a variety of ways, but it remains a good practice to use text for important information.
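
One way to apply that last point is to keep the essential text in the HTML itself and use scripts only for optional extras. A rough sketch (extras.html is a made-up file name used for illustration):

    <html>
      <head>
        <script src="jquery.js"></script>
      </head>
      <body>
        <!-- The important content is plain text, indexable without running any script. -->
        <p>Hot fudge is a warm chocolate sauce served over ice cream sundaes.</p>
        <div id="extras"></div>
        <script type="text/javascript">
          // Optional enhancements can still be loaded afterwards.
          $(document).ready(function() {
            $.get('extras.html', function(data) {
              $('#extras').html(data);
            });
          });
        </script>
      </body>
    </html>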
Controlling your content

If you’d like to prevent content from being crawled or indexed for Google Web Search, traditional robots.txt directives remain the best method. To prevent the Instant Preview for your page(s), please see our Instant Previews FAQ which describes the “Google Web Preview” User-Agent and the nosnippet meta tag.
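
As a quick reference (the FAQ remains the authoritative source), the nosnippet directive is a robots meta tag placed in the page's head, for example:

    <head>
      <!-- Asks search engines not to show a snippet or preview for this page. -->
      <meta name="robots" content="nosnippet">
    </head>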

Moving forward

We’ll continue striving to increase the comprehensiveness of our index so searchers can find more relevant information. And we expect our crawling and indexing capability to improve and evolve over time, just like the web itself. Please let us know if you have questions or concerns.

From the Google Webmaster Central blog:
In February, we launched the Webmaster Central YouTube channel, and since then we've been busy keeping it fresh with new content. We're not yet uploading 20 hours every minute, but did you know that we've been releasing a new video almost every weekday? You can keep up with the latest updates by subscribing to our channel in YouTube, or by adding our feed to your RSS reader.

If you're more of a weekly digest kind of person, we're here for you too. For every week that we have new videos available, we'll let you know here on the blog. In the videos uploaded in the past week, you'll find answers to these questions from Matt's Grab Bag:

Can product descriptions be considered duplicate content?
How can new pages get indexed quickly?
What are your views on PageRank sculpting?
Will you ever switch to a Mac?

Feel free to leave comments letting us know how you liked the videos, and if you have any specific questions, ask the experts in the Webmaster Help Forum.
