
Which three letters could you hear proudly proclaimed again and again throughout the sessions of Google I/O 2010? ...that’s right, A-P-I! Google APIs form the foundation of many of our developer products, and across the board, APIs made a significant impact at Google I/O.

This year, I/O saw the launch of a number of new APIs -- including the read-write Google Buzz API, the Google Font API, version two of the Google Feed API (with push!), the Google BigQuery API, the Google Latitude API, the Google Moderator API, and the Google Prediction API. Additionally, many of the sessions this year focused on how to better use existing Google APIs like the Google Analytics APIs and the YouTube APIs. And of course, we discussed many great topics during Office Hours and Fireside Chats, and after each session -- a big thank you to everyone who attended!

If you missed any of this excitement, today we’re pleased to announce that the following videos for the official Google API track are now available:

  • Bringing Google to your site - Google’s DeWitt Clinton and Jeff Scudder discuss a number of ways to integrate Google products with a site, including the Google Custom Search Engine, the Feed API with push, the Google Checkout Element, AdSense, Buzz Buttons, and more. DeWitt and Jeff also show how to “make the web beautiful” by announcing the new Google Font API.

  • Knowledge is (less) power: Exploring the Google PowerMeter API - Google’s Srikanth Rajagopalan and Rus Heywood discuss the concept behind, the design of, and how to use the Google PowerMeter API.

  • Google Chart Tools: Google's new unified approach for creating dynamic charts on the web - Google’s Michael Fink and Amit Weinstein announce several new charts and features and unveil the new look of the Google Chart Tools gallery. They also present the relative advantages of the Interactive Chart API (based on JavaScript) vs. the Image Charts API (based on server-side rendering), and show how the two can work together to enhance the user experience.

  • Google Analytics APIs: End to end - Google's Nick Mihailovski delivers an unprecedented sneak peek at how Google Analytics processes and calculates the data in reports. He also discusses the vision for Google Analytics integration tools and takes a look at how to integrate web analytics data with business data using the Google Analytics Platform.

  • Building real-time web apps with App Engine and the Feed API - Google’s Brett Bavar and Moishe Lettvin introduce two new tools to power the real-time web: the App Engine Channel API and the Feed API v2 with push updates. In a technical deep dive, they discuss how the Channel API pushes data from App Engine to a browser and how the new version of the Feed API subscribes to PubSubHubbub feeds and receives updates pushed to the browser.

  • YouTube API uploads: Tools, tips, and best practices - Google’s Jeffrey Posnick, Gareth McSorley, and Kuan Yong start off by discussing Android and iPhone upload best practices and how to resume interrupted uploads. They conclude by demonstrating the YouTube Direct embeddable iframe for soliciting uploads on existing web pages.

  • How Google builds APIs - Google’s Zach Maier and Mark Stahl discuss the foundations of Google’s API infrastructure, as well as a collection of different issues that shaped how APIs exist today. In addition, Google’s Yaniv Inbar demonstrates the new Java client library for Google APIs on Android, and Google’s Joey Schorr offers a sneak peek at Google’s internal API-building tool.

  • Analyzing and monetizing your Android & iPhone apps - Google’s Chrix Finne and Jim Kelm discuss how to build, launch, grow, monetize, and manage your Android app using AdSense for Mobile Apps. In a quick demo, Jim shows how to quickly implement Analytics in a mobile application.

We hope that you enjoy watching (or re-watching) these sessions as much as we enjoyed preparing and presenting them. Videos and slides for all of the individual presentations can be found on the pages linked above, but if you’d prefer to embark on an API-watching marathon, you should check out our YouTube playlist and watch away.

As always, it’s exciting to see the great and powerful products that you’re building with Google’s suite of APIs. We look forward to coding and innovating with you over the next year, and can’t wait to see you at I/O 2011!

Webmaster level: Advanced

Today we're excited to propose a new standard for making AJAX-based websites crawlable. This will benefit webmasters and users by making content from rich and interactive AJAX-based websites universally accessible through search results on any search engine that chooses to take part. We believe that making this content available for crawling and indexing could significantly improve the web.

While AJAX-based websites are popular with users, search engines traditionally have not been able to access any of their content. The last time we checked, almost 70% of the websites we know about use JavaScript in some form or another. Of course, most of that JavaScript is not AJAX, but the better search engines can crawl and index AJAX, the more developers can add richer features to their websites and still show up in search engines.

Some of the goals that we wanted to achieve with this proposal were:
  • Minimal changes are required as the website grows
  • Users and search engines see the same content (no cloaking)
  • Search engines can send users directly to the AJAX URL (not to a static copy)
  • Site owners have a way of verifying that their AJAX website is rendered correctly and thus that the crawler has access to all the content


Here's how search engines would crawl and index AJAX in our initial proposal:
  • Slightly modify the URL fragments for stateful AJAX pages
    Stateful AJAX pages display the same content whenever they are accessed directly. These are the pages that could be referred to in search results. Instead of a URL like http://example.com/page?query#state, we propose adding a token that makes these URLs recognizable: http://example.com/page?query#[FRAGMENTTOKEN]state. Based on a review of current URLs on the web, we propose using "!" (an exclamation point) as the token. The URL shown in search results would then be http://example.com/page?query#!state (see the sketch after this list for the full mapping).
  • Use a headless browser that outputs an HTML snapshot on your web server
    The headless browser is used to access the AJAX page and generates HTML code based on the final state in the browser. Only specially tagged URLs are passed to the headless browser for processing. By doing this on the server side, the website owner stays in control of the HTML code that is generated and can easily verify that all JavaScript is executed correctly. An example of such a browser is HtmlUnit, an open-source "GUI-less browser for Java programs."
  • Allow search engine crawlers to access these URLs by escaping the state
    As URL fragments are never sent with requests to servers, it's necessary to slightly modify the URL used to access the page. At the same time, this tells the server to use the headless browser to generate HTML code instead of returning a page with JavaScript. Other existing URLs, such as those used by users, would be processed normally, bypassing the headless browser. We propose escaping the state information and adding it to the query parameters with a token. Using the previous example, one such URL would be http://example.com/page?query&[QUERYTOKEN]=state. Based on our analysis of current URLs on the web, we propose using "_escaped_fragment_" as the token. The proposed URL would then become http://example.com/page?query&_escaped_fragment_=state.
  • Show the original URL to users in the search results
    To improve the user experience, it makes sense to refer users directly to the AJAX-based pages. This can be achieved by showing the original URL (such as http://example.com/page?query#!state from our example above) in the search results. Search engines can check that the indexable text returned to Googlebot is the same or a subset of the text that is returned to users.
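
To make the two rewrites above concrete, here is a minimal Python sketch of the mapping between the user-facing "#!" URL and the crawler-facing "_escaped_fragment_" URL. The function names are our own illustration, not part of the proposal:

```python
from urllib.parse import quote, unquote

FRAGMENT_TOKEN = "!"                # proposed marker for stateful AJAX URLs
QUERY_TOKEN = "_escaped_fragment_"  # proposed crawler-side query parameter

def pretty_to_crawlable(url: str) -> str:
    """Rewrite http://example.com/page?query#!state into the escaped
    form a crawler would fetch:
    http://example.com/page?query&_escaped_fragment_=state"""
    base, _, fragment = url.partition("#")
    if not fragment.startswith(FRAGMENT_TOKEN):
        return url  # no "!" token: not a stateful AJAX URL, fetch as-is
    state = fragment[len(FRAGMENT_TOKEN):]
    separator = "&" if "?" in base else "?"
    return f"{base}{separator}{QUERY_TOKEN}={quote(state)}"

def crawlable_to_pretty(url: str) -> str:
    """Invert the rewrite so search results can show users the
    original #! URL."""
    base, _, state = url.partition(f"{QUERY_TOKEN}=")
    return f"{base.rstrip('?&')}#{FRAGMENT_TOKEN}{unquote(state)}"

assert pretty_to_crawlable("http://example.com/page?query#!state") \
    == "http://example.com/page?query&_escaped_fragment_=state"
assert crawlable_to_pretty("http://example.com/page?query&_escaped_fragment_=state") \
    == "http://example.com/page?query#!state"
```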



(Graphic by Katharina Probst)

In summary, starting with a stateful URL such as http://example.com/dictionary.html#AJAX, the page could be made available to both crawlers and users as http://example.com/dictionary.html#!AJAX, which could be crawled as http://example.com/dictionary.html?_escaped_fragment_=AJAX, which in turn would be shown to users and accessed as http://example.com/dictionary.html#!AJAX.
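
On the server side, the only special routing needed is to hand escaped-fragment requests to the snapshot generator. Below is a hypothetical WSGI sketch in Python; render_snapshot() is a stand-in for whatever headless browser (such as HtmlUnit) the site actually uses:

```python
from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

def render_snapshot(path, state):
    # Placeholder: a real implementation would drive a headless browser
    # (e.g. HtmlUnit) to execute the page's JavaScript for this state
    # and return the final HTML.
    return f"<html><body>Snapshot of {path} in state {state!r}</body></html>"

def app(environ, start_response):
    params = parse_qs(environ.get("QUERY_STRING", ""), keep_blank_values=True)
    if "_escaped_fragment_" in params:
        # Crawler request: serve the pre-rendered HTML snapshot.
        body = render_snapshot(environ["PATH_INFO"],
                               params["_escaped_fragment_"][0])
    else:
        # Normal user request: serve the AJAX page unchanged.
        body = "<html><body><script>/* AJAX application */</script></body></html>"
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [body.encode("utf-8")]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```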

View the presentation

We're currently working on a proposal and a prototype implementation. Feedback is very welcome — please add your comments below or in our Webmaster Help Forum. Thank you for your interest in making the AJAX-based web accessible and useful through search engines!

Webmaster Level: All

Today we’re releasing a feature to help you discover if your site serves undesirable "soft” or “crypto” 404s. A "soft 404" occurs when a webserver responds with a 200 OK HTTP response code for a page that doesn't exist rather than the appropriate 404 Not Found. Soft 404s can limit a site's crawl coverage by search engines because these duplicate URLs may be crawled instead of pages with unique content.

The web is infinite, but the time search engines spend crawling your site is limited. Properly reporting non-existent pages with a 404 or 410 response code can improve the crawl coverage of your site’s best content. Additionally, soft 404s can potentially be confusing for your site's visitors as described in our past blog post, Farewell to Soft 404s.    
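
A quick way to spot-check your own server, independent of the report, is to request a URL that cannot exist and look at the status code it returns. This short Python sketch is our own illustration of the idea (the probe path is generated on the fly):

```python
import urllib.error
import urllib.request
import uuid

def probe_for_soft_404(site):
    """Request a URL that cannot exist; a 200 OK reply means the
    server is serving soft 404s instead of real ones."""
    bogus = f"{site}/{uuid.uuid4().hex}-does-not-exist"
    try:
        with urllib.request.urlopen(bogus) as resp:
            print(f"{bogus} -> {resp.status}: likely a soft 404")
    except urllib.error.HTTPError as err:
        verdict = "correct" if err.code in (404, 410) else "unexpected"
        print(f"{bogus} -> {err.code}: {verdict}")

probe_for_soft_404("http://example.com")
```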

You can find the new soft 404s reporting feature under the Crawl errors section in Webmaster Tools.



Here’s a list of steps to correct soft 404s to help both Google and your users:
  1. Check whether you have soft 404s listed in Webmaster Tools
  2. For the soft 404s, determine whether the URL:
    1. Contains the correct content and properly returns a 200 response (not actually a soft 404)
    2. Should 301 redirect to a more accurate URL
    3. Doesn’t exist and should return a 404 or 410 response
  3. Confirm that you’ve configured the proper HTTP Response by using Fetch as Googlebot in Webmaster Tools
  4. If you now return 404s, you may want to customize your 404 page to aid your users. Our custom 404 widget can help; a minimal server sketch follows this list.
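
As a rough illustration of steps 3 and 4, the following sketch uses Python's built-in http.server to return a real 404 status together with a helpful custom page; the page content and routing are hypothetical:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical site content; any other path should 404.
KNOWN_PAGES = {"/": b"<html><body>Home</body></html>"}

CUSTOM_404 = (b"<html><body><h1>Page not found</h1>"
              b'<p>Try the <a href="/">home page</a> or the site search.</p>'
              b"</body></html>")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in KNOWN_PAGES:
            self.send_response(200)
            body = KNOWN_PAGES[self.path]
        else:
            # The crucial part: send a real 404 status with the
            # helpful custom page, never 200 OK.
            self.send_response(404)
            body = CUSTOM_404
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()
```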

We hope that you’re now better enabled to find and correct soft 404s on your site. If you have feedback or questions about the new "soft 404s" reporting feature or any other Webmaster Tools feature, please share your thoughts with us in the Webmaster Help Forum.
