News and Tutorials from Votre Codeur | SEO | Website Creation | Software Development

from web contents: How Google defines IP delivery, geolocation, and cloaking 2013

Many of you have asked for more information regarding webserving techniques (especially related to Googlebot), so we made a short glossary of some of the more unusual methods.
  • Geolocation: Serving targeted/different content to users based on their location. As a webmaster, you may be able to determine a user's location from preferences you've stored in their cookie, information pertaining to their login, or their IP address. For example, if your site is about baseball, you may use geolocation techniques to highlight the Yankees to your users in New York.

    The key is to treat Googlebot as you would a typical user from a similar location, IP range, etc. (i.e. don't treat Googlebot as if it came from its own separate country—that's cloaking).

  • IP delivery: Serving targeted/different content to users based on their IP address, often because the IP address provides geographic information. Because IP delivery can be viewed as a specific type of geolocation, similar rules apply. Googlebot should see the same content a typical user from the same IP address would see.

    (Author's warning: This 7.5-minute video may cause drowsiness. Even if you're really interested in IP delivery or multi-language sites, it's a bit uneventful.)

  • Cloaking: Serving different content to users than to Googlebot. This is a violation of our webmaster guidelines. If the file that Googlebot sees is not identical to the file that a typical user sees, then you're in a high-risk category. A checksum program such as md5sum can verify that two files are identical, and diff can show exactly where they differ (see the sketch after this glossary).

  • First click free: Implementing Google News' First click free policy for your content allows you to include your premium or subscription-based content in Google's websearch index without violating our quality guidelines. You allow all users who find your page using Google search to see the full text of the document, even if they have not registered or subscribed. The user's first click to your content area is free. However, you can block the user with a login or payment request when he clicks away from that page to another section of your site.

    If you're using First click free, the page displayed to users who visit from Google must be identical to the content that is shown to the Googlebot.
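
One practical way to apply the hash comparison mentioned under cloaking above is to fetch the same URL with two different User-Agent strings and compare digests. Here's a minimal sketch, assuming Node.js 18+ (where fetch is built in); the URL is a placeholder, and note that legitimately dynamic pages (timestamps, rotating ads) can also differ between fetches, so a mismatch is a prompt to diff the two bodies, not proof of cloaking:

// Fetch a URL as a regular browser and as Googlebot, then compare hashes.
const crypto = require('node:crypto');

const AGENTS = {
  browser: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
  googlebot: 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
};

async function digest(url, userAgent) {
  const res = await fetch(url, { headers: { 'User-Agent': userAgent } });
  const body = await res.text();
  return crypto.createHash('md5').update(body).digest('hex');
}

(async () => {
  const url = 'https://example.com/';  // placeholder: your own page
  const a = await digest(url, AGENTS.browser);
  const b = await digest(url, AGENTS.googlebot);
  console.log(a === b ? 'Identical content' : 'Content differs -- diff the two responses');
})();
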
Still have questions? We'll see you at the related thread in our Webmaster Help Group.

from web contents: Domain verification using CNAME records 2013

Webmaster Level: all

In order to use Google services like Webmaster Tools and Google Apps you must verify that you own the site or domain. One way you can do this is by creating a DNS TXT record to prove your ownership of the domain. Now you can also use DNS CNAME records to verify ownership of your domains. This is a new domain verification option for users that are not able to create DNS TXT records for their domains.

For example, if you own the domain example.com, you can verify your ownership of the domain by creating a DNS CNAME record as follows.

  1. Add the domain example.com to your account either in Webmaster Tools or directly on the Verification Home page.

  2. Select the Domain Name Provider method of verification, then select your domain name provider that manages your DNS records or "Other" if your provider is not on this list.

  3. Based on your selection you may either see the instructions to set a CNAME record or see a link to the option Add a CNAME record. Follow the instructions to add the specified CNAME record to your domain’s DNS configuration.

  4. Click the Verify button.

When you click Verify, Google will check for the CNAME record and if everything works you will be added as a verified owner of the domain. Using this method automatically verifies you as the owner of all websites on this domain. For example, when you verify your ownership of example.com, you are automatically verified as an owner of www.example.com as well as subdomains such as blog.example.com.

Sometimes DNS records take a while to make their way across the Internet. If we don't find the record immediately, we'll check for it periodically and when we find the record we'll make you a verified owner. To maintain your verification status don’t remove the record, even after verification succeeds.
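
If you'd like to confirm that the record is visible before clicking Verify, you can query DNS yourself. Here's a minimal sketch using Node.js's built-in dns module; the record name below is hypothetical, so use the exact host and target that the verification instructions give you:

// Check whether a CNAME record has become visible in DNS yet.
const dns = require('dns').promises;

dns.resolveCname('google12345.example.com')  // hypothetical verification host
  .then((targets) => console.log('CNAME found, pointing to:', targets))
  .catch((err) => console.log('Not visible yet (' + err.code + '); try again later.'));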

If you don’t have access to your DNS configuration at your domain name provider you can continue to use any of the other verification methods, such as the HTML file, the meta tag or Google Analytics tag in order to verify that you own a site.

If you have any questions please let us know via our Webmaster Help forum.

seo Free Download KEmulator Lite v0.9.8 (Latest Version) 2013

KEmulator Lite v0.9.8 is software for running Java programs on a PC, laptop, netbook, or notebook. With KEmulator Lite v0.9.8, Java files (with the JAR extension) can be run on your computer easily, whether they are games or other applications.


The purpose of KEmulator itself is to test whether a Java application or game in JAR form is corrupt or not before installing it on a mobile phone.
Here's a short tutorial so you won't have any headaches using the KEmulator software:





These screenshots were taken directly from my PC:

1. Run the KEmulator software, then choose View >> Options.
Feel free to adjust the settings yourself so they fit best (it's entirely up to you). If you're not sure, you can look at the picture of my settings above.
2. Once you're done with your settings, click on:
Midlet, then choose Load jar
and pick the application you want to run.
3. As an example, here I'm running the Opera Mini v6 JAR application.
See how it looks below:
NB: Remember, only software with the JAR extension can be run in KEmulator!

Wow, easy, right, Seo-xp friends? Have fun, and I hope you find it useful.

KEmulator download link:
Download link for games and applications

from web contents: Region Tags in Google Search Results 2013

Webmaster Level: All

Country-code top-level domains (or ccTLDs) can provide people with a quick and valuable clue about the location of a website—for example, ".fr" for France or ".co.jp" for Japan. However, for certain top level domains like .com, .info and .org, it's not as easy to figure out the location. That's why today we're adding region information supplied by webmasters to the green address line on some Google search results.

This feature is easiest to explain through an example. Let's say you've heard about a boxing club in Canada called "Capital City Boxing." You try a search for [capital city boxing] to find out more, but it's hard to tell which result is the one you're looking for. Here's a screen shot:


None of the results provide any location information in the title or snippet, nor do they have a regional TLD (such as .ca for Canada). The only way to find the result you're looking for is to refine your search ([capital city boxing canada] works) or click through the various links to figure it out. Clicking through the first result reveals that there's apparently another "Capital City Boxing" club in Alabama.

Region tags improve search results by providing valuable information about website location right in the green URL line. Continuing our prior example, here's a screen shot of the new region tag (circled in red):



As you can see, the fourth result now includes the region name "Canada" after the green URL, so you can immediately tell that this result relates to the boxing club in Canada. With the new display, you no longer need to refine your search or click through the results to figure out which page is the one you're looking for. In general, our hope is that these region tags will help searchers more quickly identify which results are most relevant to their queries.

As a webmaster, you can control how this feature works by adjusting your Geographic Targeting settings. Log in to Webmaster Tools and choose Site configuration > Settings > Geographic Target. From here you can associate a particular country/region with your site. These settings will determine the name that appears as a region tag. You can learn more about using the Geographic Target tool in a prior blog post and in our Help Center.

We currently show region tags only for certain domains such as .com and .net where the location information would otherwise be unclear. We don't show region tags for results on domains like .br for Brazil, because the location is already implied by the green URL line in our default display. In addition, we only display region tags when the region supplied by the site owner is different from the domain where the search was entered. For example, if you do a search from the Singapore Google domain (google.com.sg), we won't show you region tags for all the websites webmasters have targeted to Singapore because we'd end up tagging too many results, and the tag is really most relevant for foreign regions. For the initial release, we anticipate roughly 1% of search results pages will include webpages with a region tag.

We hope you'll find this new feature useful, and we welcome your feedback.

from web contents: Our SEO Guide — now available in ten more languages 2013

Webmaster Level: Beginner

We’re very glad to announce that our recently updated SEO guide is now available in ten more languages: Spanish, French, German, Russian, Turkish, Finnish, Swedish, Hungarian, Traditional Chinese and Simplified Chinese.

For this new version, we’ve thoroughly reviewed and updated the content; we’ve also added a brand new section on best practices for mobile websites.



You can download each PDF file in its full 32-page glory from goo.gl/seoguide and print it to keep as a handy resource.

from web contents: Is your site hacked? New Message Center notifications for hacking and abuse 2013

Webmaster Level: All

As we crawl the web, we see bad content inserted on to thousands of hacked sites each day. The number of sites attacked is staggering and the problem is only getting worse. Hackers and spammers target and successfully compromise any sites they can - small personal sites, schools and universities, even multinational corporations. Spam attacks against forums and user content sections of sites, though not as shocking, are even more widespread.

You may have read in an earlier post that we've begun notifying webmasters about new software versions via Webmaster Tools to help protect their sites. Continuing with our effort to provide more useful information to webmasters, we're happy to announce that we'll soon be sending even more notifications to the Message Center.

Starting this month, we will notify more webmasters of more potential issues we've detected on their websites, including suspected hacking and other forms of abuse, such as spam in forums or user-generated content sections. These notifications are meant to alert webmasters of potential issues and provide next steps on how to get their sites fixed and back into Google's search results. If it pertains to a hacking or abuse issue, the notification will point to example URLs exhibiting this type of behavior. These notifications will run in parallel with our existing malware notifications.

A notice of suspected hacking, for example, will look like this:


We've been notifying webmasters of suspected hacking for years, but a recent upgrade to our systems will allow us to notify many more site owners that have been hacked. We hope webmasters will find these notifications useful in making sure their sites are clean and secure, ultimately providing a better user experience for their visitors. In the future, we may extend this effort even further to include other types of vulnerabilities or abuse issues.

Just as before, webmasters who have not already signed up for Webmaster Tools may still do so and retrieve previously sent messages within one year of their send date. And if you don't want to miss out on any important messages, remember to use the email forwarding feature to receive these alerts in your inbox.

If you have any questions, please feel free to ask in our Webmaster Help Forum or leave your comments below.

seo How to add “Email Subscription Form” to Blogger Blogspot 2013

When you provide useful information on your blog, many readers will want to receive the latest updates. For that purpose, you need an email subscription form on your blog, so that interested visitors can easily get the latest posts.


Adding an email or feed subscription form to your Blogger blog (Blogspot) is very easy.
Just follow these steps:

1. Log in to Blogger, then go to Layout > click on "Add a Gadget" link:


 2. From the pop-up window, scroll down and click on the "HTML/JavaScript" gadget:


 3. Paste the following code inside the empty box:
<style>
.hl-email{
background:url(http://www.matrixar.com/-u3UaeUufpmI/T8lFuelsg8I/AAAAAAAACQY/tOWbHsgTYKc/s1600/mail.png) no-repeat 0px 12px ;
width:300px;
padding:10px 0 0 55px;
float:left;
font-size:1.4em;
font-weight:bold;
margin:0 0 10px 0;
color:#686B6C;
}
.hl-emailsubmit{
background:#9B9895;
cursor:pointer;
color:#fff;
border:none;
padding:3px;
text-shadow:0 -1px 1px rgba(0,0,0,0.25);
-moz-border-radius:6px;
-webkit-border-radius:6px;
border-radius:6px;
font:12px sans-serif;
}
.hl-emailsubmit:hover{
background:#E98313;
}
.textarea{
padding:2px;
margin:6px 2px 6px 2px;
background:#f9f9f9;
border:1px solid #ccc;
resize:none;
box-shadow:inset 1px 1px 1px rgba(0,0,0,0.1);
-moz-box-shadow:inset 1px 1px 1px rgba(0,0,0,0.1);
-webkit-box-shadow:inset 1px 1px 1px rgba(0,0,0,0.1); font-size:13px;
width:130px;
color:#666;}
</style>
<div class="hl-email">
Subscribe via Email <form action="http://feedburner.google.com/fb/a/mailverify" id="feedform" method="post" target="popupwindow" onsubmit="window.open('http://feedburner.google.com/fb/a/mailverify?uri=helplogger', 'popupwindow', 'scrollbars=yes,width=550,height=520');return true">
<input gtbfieldid="3" class="textarea" name="email" onblur="if (this.value == &quot;&quot;) {this.value = &quot;Enter email address here&quot;;}" onfocus="if (this.value == &quot;Enter email address here&quot;) {this.value = &quot;&quot;;}" value="Enter email address here" type="text" />
<input type="hidden" value="helplogger" name="uri"/><input type="hidden" name="loc" value="en_US"/>
<input class="hl-emailsubmit" value="Submit" type="submit" />
</form>
</div>

Settings 
  • Replace the background image URL (the mail.png address in the .hl-email rule) to change the email icon
  • Increase/Decrease the 130 width value for a wider text area
  • Replace http://feedburner.google.com/fb/a/mailverify?uri=helplogger with your Feedburner Email Feed link. You can get it by visiting your feedburner account then navigate to Publicize and then to Email Subscriptions.
  • Replace helplogger with your feed title. It appears at the end of your feed link. In my case it is http://feedburner.google.com/fb/a/mailverify?uri=helplogger

4. Now Save your widget and check your blog. Enjoy!

from web contents: GET, POST, and safely surfacing more of the web 2013

Webmaster Level: Intermediate to Advanced

As the web evolves, Google’s crawling and indexing capabilities also need to progress. We improved our indexing of Flash, built a more robust infrastructure called Caffeine, and we even started crawling forms where it makes sense. Now, especially with the growing popularity of JavaScript and, with it, AJAX, we’re finding more web pages requiring POST requests -- either for the entire content of the page or because the pages are missing information and/or look completely broken without the resources returned from POST. For Google Search this is less than ideal, because when we’re not properly discovering and indexing content, searchers may not have access to the most comprehensive and relevant results.

We generally advise to use GET for fetching resources a page needs, and this is by far our preferred method of crawling. We’ve started experiments to rewrite POST requests to GET, and while this remains a valid strategy in some cases, often the contents returned by a web server for GET vs. POST are completely different. Additionally, there are legitimate reasons to use POST (e.g., you can attach more data to a POST request than a GET). So, while GET requests remain far more common, to surface more content on the web, Googlebot may now perform POST requests when we believe it’s safe and appropriate.

We take precautions to avoid performing any task on a site that could result in executing an unintended user action. Our POSTs are primarily for crawling resources that a page requests automatically, mimicking what a typical user would see when they open the URL in their browser. This will evolve over time as we find better heuristics, but that’s our current approach.

Let’s run through a few POST request scenarios that demonstrate how we’re improving our crawling and indexing to evolve with the web.

Examples of Googlebot’s POST requests
  • Crawling a page via a POST redirect
    <html>
      <body onload="document.foo.submit();">
        <form name="foo" action="request.php" method="post">       <input type="hidden" name="bar" value="234"/>
        </form>
      </body>
    </html>
  • Crawling a resource via a POST XMLHttpRequest
    In this step-by-step example, we improve both the indexing of a page and its Instant Preview by following the automatic XMLHttpRequest generated as the page renders.

    1. Google crawls the URL, yummy-sundae.html.
    2. Google begins indexing yummy-sundae.html and, as a part of this process, decides to attempt to render the page to better understand its content and/or generate the Instant Preview.
    3. During the render, yummy-sundae.html automatically sends an XMLHttpRequest for a resource, hot-fudge-info.html, using the POST method.
      <html>
        <head>
          <title>Yummy Sundae</title>
          <script src="jquery.js"></script>
        </head>
        <body>
          This page is about a yummy sundae.
          <div id="content"></div>
          <script type="text/javascript">
            $(document).ready(function() {
              $.post('hot-fudge-info.html', function(data)
                {$('#content').html(data);});
            });
          </script>
        </body>
      </html>
    4. The URL requested through POST, hot-fudge-info.html, along with its data payload, is added to Googlebot’s crawl queue.
    5. Googlebot performs a POST request to crawl hot-fudge-info.html.
    6. Google now has an accurate representation of yummy-sundae.html for Instant Previews. In certain cases, we may also incorporate the contents of hot-fudge-info.html into yummy-sundae.html.
    7. Google completes the indexing of yummy-sundae.html.
    8. User searches for [hot fudge sundae].
    9. Google’s algorithms can now better determine how yummy-sundae.html is relevant for this query, and we can properly display a snapshot of the page for Instant Previews.
Improving your site’s crawlability and indexability

General advice for creating crawlable sites is found in our Help Center. For webmasters who want to help Google crawl and index their content and/or generate the Instant Preview, here are a few simple reminders:
  • Prefer GET for fetching resources, unless there’s a specific reason to use POST.
  • Verify that we're allowed to crawl the resources needed to render your page. In the example above, if hot-fudge-info.html is disallowed by robots.txt, Googlebot won't fetch it. More subtly, if the JavaScript code that issues the XMLHttpRequest is located in an external .js file disallowed by robots.txt, we won't see the connection between yummy-sundae.html and hot-fudge-info.html, so even if the latter is not disallowed itself, that may not help us much. We've seen even more complicated chains of dependencies in the wild. To help Google better understand your site it's almost always better to allow Googlebot to crawl all resources (see the sketch after this list for a quick way to check a specific resource).

    You can test whether resources are blocked through Webmaster Tools “Labs -> Instant Previews.”
  • Make sure to return the same content to Googlebot as is returned to users’ web browsers. Cloaking (sending different content to Googlebot than to users) is a violation of our Webmaster Guidelines because, among other things, it may cause us to provide a searcher with an irrelevant result -- the content the user views in their browser may be a complete mismatch from what we crawled and indexed. We’ve seen numerous POST-request examples where a webmaster non-maliciously cloaked (which is still a violation), and their cloaking -- on even the smallest of changes -- then caused JavaScript errors that prevented accurate indexing and completely defeated their reason for cloaking in the first place. Summarizing, if you want your site to be search-friendly, cloaking is an all-around sticky situation that’s best to avoid.

    To verify that you're not accidentally cloaking, you can use Instant Previews within Webmaster Tools, or try setting the User-Agent string in your browser to something like:

    Mozilla/5.0 (compatible; Googlebot/2.1;
      +http://www.google.com/bot.html)

    Your site shouldn't look any different after such a change. If you see a blank page, a JavaScript error, or if parts of the page are missing or different, that means that something's wrong.
  • Remember to include important content (i.e., the content you’d like indexed) as text, visible directly on the page and without requiring user-action to display. Most search engines are text-based and generally work best with text-based content. We’re always improving our ability to crawl and index content published in a variety of ways, but it remains a good practice to use text for important information.
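
To follow up on the robots.txt point in the list above, the sketch below fetches a site's robots.txt and checks whether a path such as hot-fudge-info.html is covered by a Disallow rule for Googlebot. This is a deliberately simplified parser under stated assumptions (it ignores Allow rules, wildcards, and grouped user-agent records, and it assumes Node.js 18+ for the global fetch), so treat it as an illustration rather than what Googlebot actually does:

// Simplified robots.txt check: is a path covered by a Disallow rule
// in a section that applies to Googlebot (or to *)?
async function fetchDisallows(origin, agent) {
  const text = await (await fetch(origin + '/robots.txt')).text();
  const disallows = [];
  let applies = false;
  for (const raw of text.split('\n')) {
    const line = raw.split('#')[0].trim();  // strip comments
    const sep = line.indexOf(':');
    if (sep < 0) continue;
    const key = line.slice(0, sep).trim().toLowerCase();
    const value = line.slice(sep + 1).trim();
    if (key === 'user-agent') {
      applies = value === '*' || agent.toLowerCase().includes(value.toLowerCase());
    } else if (key === 'disallow' && applies && value) {
      disallows.push(value);
    }
  }
  return disallows;
}

(async () => {
  const disallows = await fetchDisallows('https://example.com', 'Googlebot');  // placeholder site
  const blocked = disallows.some((rule) => '/hot-fudge-info.html'.startsWith(rule));
  console.log(blocked ? 'Blocked for Googlebot' : 'Crawlable');
})();
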
Controlling your content

If you’d like to prevent content from being crawled or indexed for Google Web Search, traditional robots.txt directives remain the best method. To prevent the Instant Preview for your page(s), please see our Instant Previews FAQ which describes the “Google Web Preview” User-Agent and the nosnippet meta tag.

Moving forward

We’ll continue striving to increase the comprehensiveness of our index so searchers can find more relevant information. And we expect our crawling and indexing capability to improve and evolve over time, just like the web itself. Please let us know if you have questions or concerns.

from web contents: Webmaster Central YouTube update for May 26-29 2013

In February, we launched the Webmaster Central YouTube channel, and since then we've been busy keeping it fresh with new content. We're not yet uploading 20 hours every minute, but did you know that we've been releasing a new video almost every weekday? You can keep up with the latest updates by subscribing to our channel in YouTube, or by adding our feed to your RSS reader.

If you're more of a weekly digest kind of person, we're here for you too. For every week that we have new videos available, we'll let you know here on the blog. In the videos uploaded in the past week, you'll find answers to these questions from Matt's Grab Bag:

Can product descriptions be considered duplicate content?
How can new pages get indexed quickly?
What are your views on PageRank sculpting?
Will you ever switch to a Mac?

Feel free to leave comments letting us know how you liked the videos, and if you have any specific questions, ask the experts in the Webmaster Help Forum.

seo GWT at Google I/O 2010 2013

This year's Google I/O was one to remember, with demos and presentations that showcased the power of HTML5 for consumers and businesses, as well as a complete proliferation of Android apps and devices (some of which ended up in the hands of attendees).

Day one included a keynote presentation by our own Bruce Johnson and SpringSource's Ben Alex. Here they announced the Google and VMware partnership which makes it easy for developers to harness the power of Spring Roo and GWT to build rich enterprise web apps that are cloud portable. As part of this announcement, the GWT team released GWT 2.1 M1, which not only includes VMware integration, but also Data Presentation Widgets and an MVP Framework.

Along with the great keynotes, there were plenty of in-depth GWT sessions. In the event that you missed them, here's a recap:
  • Measure in milliseconds redux: Meet Speed Tracer - Kelly Norton is back for round two to demonstrate what milliseconds of latency means to end-users, as well as how to identify the sources of latency within your app using Speed Tracer.

  • Faster apps faster: Optimizing apps with the GWT Compiler - Have you ever wondered how you can speed up your GWT compiles? If so, follow along with Ray Cromwell as he delves into this topic, as well as other tips and tricks that you can use to streamline development with GWT.

  • Architecting for performance with GWT - Last year we announced Google Wave, a cutting edge web app that introduces a new way of collaborating and communicating. This year Wave team lead Adam Shuck, and GWT UI guru Joel Webber share with everyone the optimizations both teams use when building GWT-based web apps.

  • GWT Linkers target HTML5 Web Workers, Chrome Extensions, and more - GWT has some extremely interesting technology under the hood, and Matt Mastracci, CTO of dotspots, knows this as well as anyone else. For this year's I/O he provides an overview of GWT linkers, as well as how they created one that turns a GWT module into an HTML5 Web Worker, and another that generates an HTML App Cache manifest automatically.

  • GWT's UI overhaul: UiBinder, ClientBundle, and Layout Panels - GWT 2.0 shipped with some major UI enhancements that make it very easy to speed up your app, decrease load time, and control layout. In this session, Ray Ryan and Joel Webber show you how these new features interact with one another, and how you can use them to create the most optimal web app.

  • GWT + HTML5 can do what?! - If you missed the YouTube video of Quake II running in the browser, this session not only replays it, but goes into great detail as to how the three Googlers actually made it happen using HTML5 features such as WebGL and WebSockets.

  • GWT testing best practices - In 2009, Ray Ryan gave a talk on how to architect a GWT app using the MVP design pattern. This year, Wave's Daniel Danilatos follows-up on Ray's talk, with a detailed overview of how to remove the pain of testing GWT apps using the MVP architecture.

  • Architecting GWT applications for production at Google - If you haven't noticed, it's required that at least one of Ray's talks has the word "Architecting" in it. The good news is that his talks live up to their titles, and this session is no exception. Not only does Ray evolve the concepts discussed in his 2009 talk, "Best Practices for Architecting GWT Apps", he dives into some of the upcoming GWT 2.1 features, and invites Ben Alex, from VMware, on stage to talk about the integration between Spring Roo and GWT.
In addition to the linked session titles where you'll find the videos and slides, you can also find all videos in this YouTube playlist for GWT I/O 2010 sessions.

It was fantastic meeting everyone out at I/O, and we hope that it was as exciting and educational for you as it was for us. As always, stay on top of the latest GWT 2.1 release progress on the GWT Blog, and be on the lookout for posts from other I/O tracks coming soon!

seo The App Engine Team’s trip to I/O 2010 (Recap & Videos) 2013

This year’s Google I/O included a flurry of announcements and presentations for the App Engine team. Thanks to everyone who attended our sessions, stopped by the Sandbox, or came to meet the team at our office hours. It was great to meet all of you. For the App Engine developers out there that weren't able to make it out this year, we wanted to give you a quick recap on what you missed.

We opened up the first day’s keynote with App Engine’s very own Kevin Gibbs announcing App Engine for Business and doing a demo of the new Business Admin Console. There are lots of great new features coming with App Engine for Business, so if you missed the announcement, please read more about it and sign up to be a part of the preview. We also announced our work with VMware to connect our development tools in order to allow developers to use SpringSource tools and Google Web Toolkit to build applications and deploy them on App Engine.

If you were watching the keynote, you might have missed the announcement that we released version 1.3.4 of the App Engine SDK which included a brand new bulkloader and experimental support for OpenID and OAuth. The Blobstore API is also no longer experimental and supports files up to 2GB in size.

In addition to all the high profile announcements in the keynote, we also hosted a number of great sessions about App Engine development for the rest of the conference. Thanks to the dedicated I/O organizers, videos of all the App Engine sessions are now available so anyone can watch them (with more to come in the next few days):
  • Appstats - RPC instrumentation and optimizations for App Engine - Guido van Rossum went into detail on how to use Appstats, a new tool for App Engine developers which provides deep insight into why requests are slow and what they’re doing under the covers.

  • Run corporate applications on Google App Engine? Yes we do - Ben Fried (Google’s CIO) and his team joined us to give an update on their progress of moving Google’s corporate applications to App Engine, the problems they ran into, and the success they had. They also announced that two of their apps are now being open sourced for anyone to use.

  • Batch data processing with App Engine - Mike Aizatsky introduced Mapper, a new tool that makes it simple for App Engine developers to write code that runs over large datasets such as Blobstore files or Datastore entities.

  • Data migration in App Engine - Matthew Blain gave a complete introduction to the brand new Bulk Loader which shipped as part of App Engine’s 1.3.4 SDK. The session also provided a look into how to use the Bulk Loader with Java applications and ways to import complex data models from a number of different sources.

  • What's hot in Java for App Engine - The same duo from last year’s introduction of the Java SDK, Don Schwarz and Toby Reyelts, were back again this year to give an update on the progress of the Java SDK. Performance optimizations, compatibility, and new APIs are all covered, giving a peek under the hood for Java developers.

  • Building high-throughput data pipelines with Google App Engine - Brett Slatkin reviewed the Task Queue and introduced a number of strategies used to improve the performance of applications doing very high volumes of task queue work. This session is based on lessons learned by Brett while building PubSubHubbub on App Engine.

  • Testing techniques for Google App Engine - Max Ross argued the virtues of proper software testing and then went into detail on how to test your App Engine code properly and how to use App Engine to test all the rest of your code.

  • Next gen queries - Alfred Fuller closed out the conference with a great overview of recent improvements to the Datastore query planner and the new types of queries that are possible, as well as a look at a few features on the horizon.
In addition to the linked session titles where you'll find the videos and slides, you can also find all videos in this YouTube playlist for App Engine I/O 2010 sessions.

There’s plenty of great information in all the presentations, so for those of you who missed them, we highly recommend you watch the videos and read the slides. For everyone else who made it to I/O this year, thank you for making this year’s I/O a complete success. It’s incredibly energizing for us to see all your hard work, thoughtful questions, and great ideas on App Engine. We’re already excited to see what you all surprise us with at next year’s I/O!

from web contents: Best practices for running multiple sites 2013

Webmaster Level: All

Running a single compelling, high quality site can be time- and resource-consuming, not to mention the creativity it requires to make the site a great one. At times–particularly when it comes to rather commercial topics like foreign currency exchange or online gambling–we see that some webmasters try to compete for visibility in Google search results with a large number of sites on the same topic. There are a few things to keep in mind when considering a strategy like this for sites that you want to have listed in our search results.

Some less creative webmasters, or those short on time but with substantial resources on their hands, might be tempted to create a multitude of similar sites without necessarily adding unique information to any of these. From a user’s perspective, these sorts of repetitive sites can constitute a poor user experience when visible in search results. Luckily, over time our algorithms have gotten pretty good at recognizing similar content so as to serve users with a diverse range of information. We don’t recommend creating similar sites like that; it’s not a good use of your time and resources.


If all of your sites offer essentially the same content, additional sites are not contributing much to the Internet.

While you’re free to run as many sites as you want, keep in mind that users prefer to see unique and compelling content. It is a good idea to give each site its own content, personality and function. This is true of any website, regardless of whether it’s a single-page hobby-site or part of a large portfolio. When you create a website, try to add something new or some value to the Internet; make something your users have never seen before, something that inspires and fascinates them, something they can’t wait to recommend to their friends.

When coming up with an idea for a website, scan the web first. There are many websites dealing with common and popular services like holiday planning, price comparisons or foreign exchange currency trading. It frequently doesn’t make sense to reinvent the wheel and compete with existing broad topic sites. It’s often more practical and rewarding to focus on smaller or niche topics where your expertise is best and where competition for user attention might be less fierce.

A few webmasters choose to focus their resources on one domain but make use of their domain portfolio by creating a multitude of smaller sites linking to it. In some situations these sites may be perceived as doorways. Without value of their own, these doorway sites are unlikely to stand the test of time in our search results. If you registered several domains but only want to focus on one topic, we recommend you create unique and compelling content on each domain or simply 301 redirect all users to your preferred domain. Think of your web endeavour as if it were a restaurant: You want each dish to reflect the high quality of the service you provide; repeat the same item over and over on your menu and your restaurant might not do so well. Identify and promote your strength or uniqueness. Ask yourself the following questions: What makes you better than the competition? What new service do you provide that others don’t? What makes your sites unique and compelling enough to make users want to revisit them, link to them or even recommend them to their friends?
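
To make the 301 redirect option mentioned above concrete, here is a minimal Node.js sketch of a server for a secondary domain that permanently redirects every request to the preferred domain. The domain name is a placeholder, and in practice this is usually configured at the web server or hosting level rather than in application code:

// Permanently redirect all traffic on a secondary domain to the main site.
const http = require('http');

http.createServer((req, res) => {
  // 301 = moved permanently; preserve the requested path and query string.
  res.writeHead(301, { Location: 'https://www.example.com' + req.url });  // preferred domain (placeholder)
  res.end();
}).listen(8080);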

We suggest not spreading out your efforts too broadly, though. It can be difficult to maintain multiple sites while keeping the content fresh and engaging. It’s better to have one or a few good sites than a multitude of shallow, low value-add sites. As always, we encourage you to share your thoughts via comments as well as by contributing to the Google Webmaster community.

from web contents: Keeping you informed of critical website issues 2013

Webmaster level: All

Having a healthy and well-performing website is important, both to you as the webmaster and to your users. When we discover critical issues with a website, Webmaster Tools will now let you know by automatically sending an email with more information.

We’ll only notify you about issues that we think have significant impact on your site’s health or search performance and which have clear actions that you can take to address the issue. For example, we’ll email you if we detect malware on your site or see a significant increase in errors while crawling your site.

For most sites these kinds of issues will occur rarely. If your site does happen to have an issue, we cap the number of emails we send over a certain period of time to avoid flooding your inbox.  If you don’t want to receive any email from Webmaster Tools you can change your email delivery preferences.

We hope that you find this change a useful way to stay up-to-date on critical and important issues regarding your site’s health. If you have any questions, please let us know via our Webmaster Help Forum.

from web contents: Information about buying and selling links that pass PageRank 2013

Our goal is to provide users the best search experience by presenting equitable and accurate results. We enjoy working with webmasters, and an added benefit of our working together is that when you make better and more accessible content, the internet, as well as our index, improves. This in turn allows us to deliver more relevant search results to users.

If, however, a webmaster chooses to buy or sell links for the purpose of manipulating search engine rankings, we reserve the right to protect the quality of our index. Buying or selling links that pass PageRank violates our webmaster guidelines. Such links can hurt relevance by causing:

- Inaccuracies: False popularity and links that are not fundamentally based on merit, relevance, or authority
- Inequities: Unfair advantage in our organic search results to websites with the biggest pocketbooks

In order to stay within Google's quality guidelines, paid links should be disclosed, for example through a rel="nofollow" attribute or by redirecting them through an intermediate page that is blocked by robots.txt. Here's more information explaining our stance on buying and selling links that pass PageRank:

February 2003: Google's official quality guidelines have advised "Don't participate in link schemes designed to increase your site's ranking or PageRank" for several years.

September 2005: I posted on my blog about text links and PageRank.

December 2005: Another post on my blog discussed this issue, and said

Many people who work on ranking at search engines think that selling links can lower the quality of links on the web. If you want to buy or sell a link purely for visitors or traffic and not for search engines, a simple method exists to do so (the nofollow attribute). Google’s stance on selling links is pretty clear and we’re pretty accurate at spotting them, both algorithmically and manually. Sites that sell links can lose their trust in search engines.

September 2006: In an interview with John Battelle, I noted that "Google does consider it a violation of our quality guidelines to sell links that affect search engines."

January 2007: I posted on my blog to remind people that "links in those paid-for posts should be made in a way that doesn’t affect search engines."

April 2007: We provided a mechanism for people to report paid links to Google.

June 2007: I addressed paid links in my keynote discussion during the Search Marketing Expo (SMX) conference in Seattle. Here's a video excerpt from the keynote discussion. It's less than a minute long, but highlights that Google is willing to use both algorithmic and manual detection of paid links that violate our quality guidelines, and that we are willing to take stronger action on such links in the future.

June 2007: A post on the official Google Webmaster Blog noted that "Buying or selling links to manipulate results and deceive search engines violates our guidelines." The post also introduced a new official form in Google's webmaster console so that people could report buying or selling of links.

June 2007: Google added more specific guidance to our official webmaster documentation about how to report buying or selling links and what sort of link schemes violate our quality guidelines.

August 2007: I described Google's official position on buying and selling links in a panel dedicated to paid links at the Search Engine Strategies (SES) conference in San Jose.

September 2007: In a post on my blog recapping the SES San Jose conference, I also made my presentation available to the general public (PowerPoint link).

October 2007: Google provided comments for a Forbes article titled "Google Purges the Payola".

October 2007: Google officially confirmed to Search Engine Land that we were taking stronger action on this issue, including decreasing the toolbar PageRank of sites selling links that pass PageRank.

October 2007: An email that I sent to Search Engine Journal also made it clear that Google was taking stronger action on buying/selling links that pass PageRank.

We appreciate the feedback that we've received on this issue. A few of the more prevalent questions:

Q: Is buying or selling links that pass PageRank a violation of Google's guidelines? Why?
A: Yes, it is, for the reasons we mentioned above. I also recently did a post on my personal blog that walks through an example of why search engines wouldn't want to count such links. On a serious medical subject (brain tumors), we highlighted people being paid to write about a brain tumor treatment when they hadn't been aware of the treatment before, and we saw several cases where people didn't do basic research (or even spellchecking!) before writing paid posts.

Q: Is this a Google-only issue?
A: No. All the major search engines have opposed buying and selling links that affect search engines. For the Forbes article Google Purges The Payola, Andy Greenberg asked other search engines about their policies, and the results were unanimous. From the story:

Search engines hate this kind of paid-for popularity. Google's Webmaster guidelines ban buying links just to pump search rankings. Other search engines including Ask, MSN, and Yahoo!, which mimic Google's link-based search rankings, also discourage buying and selling links.

Other engines have also commented about this individually, e.g. a search engine representative from Microsoft commented in a recent interview and said

The reality is that most paid links are a.) obviously not objective and b.) very often irrelevant. If you are asking about those then the answer is absolutely there is a risk. We will not tolerate bogus links that add little value to the user experience and are effectively trying to game the system.

Q: Is that why we've seen some sites that sell links receive lower PageRank in the Google toolbar?
A: Yes. If a site is selling links, that can affect our opinion about the value of that site or cause us to lose trust in that site.

Q: What recourse does a site owner have if their site was selling links that pass PageRank, and the site's PageRank in the Google toolbar was lowered?
A: The site owner can address the violations of the webmaster guidelines and submit a reconsideration request in Google's Webmaster Central console. Before doing a reconsideration request, please make sure that all sold links either do not pass PageRank or are removed.

Q: Is Google trying to tell webmasters how to run their own site?
A: No. We're giving advice to webmasters who want to do well in Google. As I said in this video from my keynote discussion in June 2007, webmasters are welcome to make their sites however they like, but Google in turn reserves the right to protect the quality and relevance of our index. To the best of our knowledge, all the major search engines have adopted similar positions.

Q: Is Google trying to crack down on other forms of advertisements used to drive traffic?
A: No, not at all. Our webmaster guidelines clearly state that you can use links as means to get targeted traffic. In fact, in the presentation I did in August 2007, I specifically called out several examples of non-Google advertising that are completely within our guidelines. We just want disclosure to search engines of paid links so that the paid links won't affect search engines.
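
As a concrete illustration of the disclosure described in the answer above, a paid link carrying the nofollow attribute looks like this (the advertiser URL and anchor text are placeholders):

<!-- A paid link disclosed so that it does not pass PageRank -->
<a href="http://advertiser-example.com/" rel="nofollow">Advertiser name</a>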

Q: I'm aware of a site that appears to be buying/selling links. How can I get that information to Google?
A: Read our official blog post about how to report paid links from earlier in 2007. We've received thousands and thousands of reports in just a few months, but we welcome more reports. We appreciate the feedback, because it helps us take direct action as well as improve our existing algorithmic detection. We also use that data to train new algorithms for paid links that violate our quality guidelines.

Q: Can I get more information?
A: Sure. I wrote more answers about paid links earlier this year if you'd like to read them. And if you still have questions, you can join the discussion in our Webmaster Help Group.

from web contents: Grab bag videos are back! 2013

We’re kicking off June with the start of a new round of webmaster Q&A on the Webmaster Central YouTube channel. You submitted and voted on questions for Matt Cutts to answer, and Matt sat in the studio for a full day sharing advice for webmasters.

For those of you who watch each video (and who doesn’t?), we’ve worked hard to keep things interesting. Not only did Matt wear different colored shirts, we changed the backgrounds as well! Just don’t submit any screen grabs to We Have Lasers, okay?

To get you started, here’s the first video, which addresses a question about geographic targeting in Webmaster Tools:

We’ll be posting links to new videos as they’re posted on our Twitter account, so follow us there or subscribe to our YouTube channel to be notified of new answers.

seo Fridaygram: world wonders, fruit freshness, stunning sky 2013

By Scott Knaster, Google Developers Blog Editor

If you write code, you’re stuck at your desk or laptop for long hours at a time. There’s no substitute for getting out into the world, but when you just can’t travel, you can use our new World Wonders Project to virtually visit amazing sites around the world. World Wonders has used really cool tricycles equipped with Street View cameras to film Stonehenge, the Trulli of Alberobello, the Ogasawara Islands, Shark Bay, Český Krumlov, and many more places. And when you visit, you don’t even get jet lag.


Cesky Krumlov page in World Wonders

When you do leave your home or office, you might go to the market occasionally for fresh fruit and vegetables. Your fresh food experience might improve thanks to new sensors made at the Massachusetts Institute of Technology. These sensors can figure out when fruits and veggies are getting too ripe. The sensors work by detecting ethylene, a gas that helps plants ripen.

Finally, when you’re not at the market or virtually visiting world heritage sites, take a look at this photo of lightning over a rainbow during a storm in China. You can relax and check it out while you’re munching on your fresh fruit and veggies.


Each week we publish Fridaygram, featuring stuff from Google and beyond that you might have missed during the week. Fridaygram items aren't necessarily related to developer topics; they’re just interesting to us nerds. This week we’re giving a special shout out to HyperCard on its (approximate) 25th anniversary. HyperCard, you were awesome.

from web contents: Survey says... 2013

Webmaster Level: All

Many thanks to the more than 1,600 people who filled out our survey in February. You gave us your feedback on the Webmaster Central Blog, Google Webmaster Tools, the Webmaster Help Forum, and our Webmaster Central videos on YouTube.

You told us what you like and want to see more of:
  • Webmaster Central gives users insight into Google: "[I like] being able to access, communicate, and see how my sites relate to Google."
  • Webmaster Central provides high quality information: "What I have enjoyed most of all, is reading Google's guidelines for webmasters, which is on-point with what I have been telling customers about SEO."
  • Webmaster Central collects several useful tools in one place: "It's an innovative central hub for all the tools supported and provided by the industry leader Google, for free."
We also learned about what you don't like and where we could be doing better. Our top finding is that beginner webmasters (about 20% of the survey respondents) are less satisfied than intermediate or advanced webmasters with Webmaster Central. Open-ended comments suggested that new webmasters want basic, less technical information from us. Common feedback we received: "Many users like myself are not of the hi-tech, IT-savvy variety and prefer simplicity, whether we create a website for information or to generate revenue." Based on your responses, we've planned some new resources like a series of how-to videos especially for new webmasters (coming soon to YouTube).

We take your feedback seriously and will continue improving Webmaster Central and our other webmaster sites. Again, thanks for your participation in the survey. We want Webmaster Central to continue being a useful resource for you.

seo Top 5 CPM ad networks 2013

CPM stands for cost per mille: you are paid per thousand impressions. CPM ad networks are best suited to bloggers with medium to high traffic and a high number of pages per visit. You can try them individually and figure out which one works best for you.

1. Burst Media
Site requirements: a minimum of 25,000 page views per month.
Payout methods are check, PayPal, and EFT in certain countries. Minimum payout is $50.

2. Requirements: a top-level domain and a minimum of 500,000 unique visitors per month. (Yeah, that's a lot!)

3. Requirement: a minimum of 3,000 page views per month.

4. Requirement: 2 million page views per month, which means only established sites are eligible; not for small or medium publishers.

5. Site requirements: a minimum of 30,000 unique visitors per month.
Minimum payout is $100 through check or PayPal.


Reviews: Detailed reviews of these advertising networks will be published in coming posts. You may subscribe to the blog to receive those updates.

from web contents: Tips for News Search 2013

Webmaster Level: All

During my stint on the "How Google Works Tour: Seattle", I heard plenty of questions regarding News Search from esteemed members of the press, such as The Stranger, The Seattle Times and Seattle Weekly. After careful note-taking throughout our conversations, the News team and I compiled this presentation to provide background and FAQs for all publishers interested in Google News Search.



Along with the FAQs about News Sitemaps and PageRank in the video above, here's additional Q&A to get you started:

Would adding a city name to my paper—for example, changing our name from "The Times" to "The San Francisco Bay Area Times"—help me target my local audience in News Search?
No, this won't help News rankings. We extract geography and location information from the article itself (see video). Changing your name to include relevant keywords or adding a local address in your footer won't help you target a specific audience in our News rankings.
What happens if I accidentally include URLs in my News Sitemap that are older than 72 hours?
We want only the most recently added URLs in your News Sitemap, as it directs Googlebot to your breaking information. If you include older URLs, no worries (there's no penalty unless you're perceived as maliciously spamming -- this case would be rare, so again, no worries); we just won't include those URLs in our next News crawl.
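
If you generate your News Sitemap from a content database, one way to follow this guidance is to filter out anything older than 72 hours before writing the file. A minimal sketch (the article records and field names below are invented for illustration):

// Keep only URLs published within the last 72 hours for the News Sitemap.
const THRESHOLD_MS = 72 * 60 * 60 * 1000;

const articles = [  // hypothetical records from your CMS
  { url: 'http://example.com/news/fresh-story', publishedMs: Date.now() - 2 * 3600 * 1000 },
  { url: 'http://example.com/news/old-story', publishedMs: Date.now() - 100 * 3600 * 1000 },
];

const fresh = articles.filter((a) => Date.now() - a.publishedMs < THRESHOLD_MS);
console.log(fresh.map((a) => a.url));  // only the fresh story survives
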
To get the full scoop, check out the video!

seo Open Source Developers @ Google Speaker Series: Bob Lee 2013

Bob Lee will be joining us on Tuesday, June 5th, to discuss Java on Guice: Dependency Injection, the Java Way. Guice, an open-source dependency-injection framework for Java 5, is already in use in several Google projects. Come listen to the framework's creator explain how Guice can help make your applications simpler and easier to test!

As with all sessions of the Open Source Developers @ Google Speaker Series, Bob's presentation will be open to the public. Doors open at 6:30 PM at our Mountain View campus; guests should plan to sign in at Building 43 reception upon arrival. Refreshments will be served and all are welcome and encouraged to attend. Bob's presentation will also be taped and published along with all of the public Google Tech Talks.

For those of you who were unable to attend our last session, you can watch the video of Amit Singh's recent presentation on MacFuse.

seo Using the :before and :after Pseudo Elements on Sidebar Titles 2013

This is another method of using the :after and :before pseudo-elements, and it will work without too many problems in any browser, including IE8. What this trick does is divide the header bar into left and right sections, where the left contains an explanatory title and the right a related link.

The idea of generating Adobe-like Arrow Headers was actually discussed by css-tricks and adapted to Blogger.

How to Add Adobe-like Headers to Blogger


Step 1. Log in to your Blogger dashboard > go to Template > Edit HTML, then click anywhere inside the code area to search - using the CTRL + F keys - for the following tag:
</head>
Step 2. Just above it, copy and paste this code:
 <style>
.module h2 {
  background-color: #D5D5D5;
  border-radius: 20px 0 0 20px;
  color: #FFFFFF;
  font-family: Verdana;
  font-size: 14px;
  line-height: 32px;
  margin: 0;
  padding: 0 0 0 20px;
  text-shadow: 2px 1px 1px #222;
}

.module h2 a {
    border-left: 5px solid #ffffff;
    color: #101921;
    float: right;
    font-size: 14px;
    text-decoration: none;
    text-shadow: none;
    padding: 0 10px;
    position: relative;
   -moz-transition: padding 0.1s linear;
   -webkit-transition: padding 0.1s linear;
   -ms-transition: padding 0.1s linear;
   -o-transition: padding 0.1s linear;
}
.module h2 a:hover {
  padding: 0 32px;
}

.module h2 a:before, .module h2 a:after {
    content: &quot;&quot;;
    height: 0;
    position: absolute;
    top: 50%;
    width: 0;
}
.module h2 a:before {
    border-bottom: 8px solid transparent;
    border-right: 8px solid #ffffff;
    border-top: 8px solid transparent;
    left: -12px;
    margin-top: -8px;
}
.module h2 a:after {
    border-bottom: 6px solid transparent;
    border-top: 6px solid transparent;
    left: -6px;
    margin-top: -6px;
}

.module.blue h2 a {background-color: #A2D5EC;}
.module.blue h2 a:hover {background-color: #C5F0FF;}
.module.blue h2 a:after {border-right: 6px solid #A2D5EC;}
.module.blue h2 a:hover:after {border-right-color: #C5F0FF;}

.module.yellow h2 a {background-color: #FCE98D;}
.module.yellow h2 a:hover {background-color: #FFD700;}
.module.yellow h2 a:after {border-right: 6px solid #FCE98D;}
.module.yellow h2 a:hover:after {border-right-color: #FFD700;}

.module.green h2 a {background-color: #bada55;}
.module.green h2 a:hover {background: #C7E176;}
.module.green h2 a:after {border-right: 6px solid #bada55;}
.module.green h2 a:hover:after {border-right-color: #C7E176;}

.module.red h2 a {background-color: #F0A5B5;}
.module.red h2 a:hover {background-color: #FFC7D2;}
.module.red h2 a:after {border-right: 6px solid #F0A5B5;}
.module.red h2 a:hover:after {border-right-color: #FFC7D2;}
</style>
Step 3. Save the Template.

Screenshot:


Step 4. Now go to Layout and Add a new HTML/JavaScript Gadget with one of the codes below for each of the widget title:

Background in blue:
<div class="module blue">
<h2>Title in <a href="Link URL">Blue</a></h2>
</div>
Background in yellow:
<div class="module yellow">
<h2>Title in <a href="Link URL">Yellow</a></h2>
</div>
Background in green:
<div class="module green">
<h2>Title in <a href="Link URL">Green</a></h2>
</div>
Background in red:
<div class="module red">
<h2>Title in <a href="Link URL">Red</a></h2>
</div>

Note: Change "Title in" text with your widget's title and Blue, Yellow, Green and Red with the text on the right, then add a Link URL to it.

Step 5. After you save the HTML/JavaScript gadgets containing the codes above, drag and drop each one just above the widget it should label... and Save the Arrangement.

DEMO

You can see how the sidebar titles have been replaced with some cool header bars on this demo blog.

seo Bringing more context to Gmail contextual gadgets 2013

As part of the launch of Gmail contextual gadgets, Google released a set of predefined extractors that developers could use. These extractors allow developers to match content within a single part of an email message, such as the subject, and use that content to display relevant information to the current user.

Many Gmail contextual gadget developers have expressed a desire to match on more complex patterns than is possible with the predefined extractors. Today, with the launch of the Google Apps extensions console, these complex patterns, known as custom extractors, are now available to drive contextual gadgets.

Custom extractors allow developers to trigger their gadget when a series of conditions are met. For example, a developer could write an extractor that triggered a gadget only when “Hello world” appeared in the subject and “john@example.com” was the sender of the email. This allows developers to more finely tune their gadgets, and provide even more relevant contextual information.
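
The actual manifest syntax for custom extractors is covered in the documentation linked below; conceptually, though, the trigger is just a conjunction of per-field conditions. Here's an illustrative sketch of that matching logic (this is not the extractor syntax itself; the fields and tests are made up to mirror the example above):

// Illustrative only: a gadget triggers when every field condition matches.
const conditions = [
  { field: 'subject', test: (v) => v.includes('Hello world') },
  { field: 'from', test: (v) => v === 'john@example.com' },
];

const message = { subject: 'Hello world, again', from: 'john@example.com' };
const triggers = conditions.every((c) => c.test(message[c.field] || ''));
console.log(triggers);  // true -> the gadget would be displayed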

If you’re interested in writing a custom extractor you can get started by reading our documentation. If you have questions, please post them in the forum.
