
Fridaygram: goodbye to 2011

By Scott Knaster, Google Code Blog Editor

This is the last Fridaygram of 2011, and like most everybody else, we’re in a reflective mood. It’s also the 208th post on Google Code Blog this year, which means we’ve averaged more than one post every two days, so that’s plenty of stuff for you to read. What did we write about?

At Google, we love to launch. Many of our posts were about new APIs and client libraries. We also posted a bunch of times about HTML5 and Chrome and about making the web faster. And we posted about Android, Google+, and Google Apps developer news.

Many of our 2011 posts were about the steady progress of App Engine, Cloud Storage, and other cloud topics for developers. We also published several times about commerce and in-app payments.

2011 was a stellar year for Google I/O and other developer events around the world. Some of our most popular posts provided announcements, details, and recaps of these events. And we welcomed a couple dozen guest posts during Google I/O from developers with cool stories to tell.

The two most popular Code Blog posts of the year were both launches: the Dart preview in October, and the Swiffy launch in June.

Last, and surely least, I posted 26 Fridaygrams in an attempt to amuse and enlighten you. Thank you for reading those, and thanks for dropping by and reading all the posts we’ve thrown your way this year. See you in 2012!

And finally, please enjoy one more Easter egg.


A look back on 2009

2009 was a remarkable year for developers. Vic Gundotra, VP of our developer team, declared at Google I/O, "The web has won!" and this year was full of launches and announcements that remind us how the web has become the platform of our day. We found lots of inspiration from the developers at Google I/O in San Francisco and at our Google Developer Days in Japan, China, Brazil, Russia and the Czech Republic.



Here's a look back at some of our favorite highlights from 2009. It is a very exciting time to be a developer...we are just starting to see what is possible with the web as the platform. It will be a lot of fun to see where all of us, together, can take the web in 2010!

Happy Holidays from the Google Developer Team!


Speed metrics in Google Analytics

By Satish Kambala, Staff Software Engineer

At Google we believe that speed matters and a faster web is better for everyone. That’s why we started the Make The Web Faster initiative. To improve the speed of a website, we need to measure how fast web pages load. The Site Speed report, which is now available by default to all users of Google Analytics, provides just that: it enables website owners to measure page load time for their web pages.

You can use the Site Speed report to correlate speed with other metrics in Google Analytics, such as page views and conversions. This enables website owners to identify and optimize those pages that drive these metrics. Page load times can be analyzed by browser type or user location to understand if specific optimizations are required. Recently, we enhanced the Site Speed report by adding a new section called Technical (see screenshot below) which displays network and server time components of page load time.


(Screenshot: the Site Speed report, including the new Technical section)

You can learn more about the Site Speed report here. This report, along with powerful page speed analysis tools such as Page Speed Online, will help website owners delight their users by building fast and responsive websites.
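If you use Google Analytics’ asynchronous ga.js snippet, you can also control what fraction of pageviews contributes timing data. Here’s a minimal sketch assuming the classic _gaq command queue; the property ID is a placeholder:

<script type="text/javascript">
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXX-1']);    // placeholder property ID
  _gaq.push(['_setSiteSpeedSampleRate', 10]);  // sample 10% of pageviews
  _gaq.push(['_trackPageview']);
</script>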

Have ideas on how to make your website faster or ways to speed up the entire Web? Send us your thoughts.


Satish Kambala works at Google on stuff that helps in making the web faster. In his free time, apart from watching cricket and movies, Satish likes exploring places with his wife.

Posted by Scott Knaster, Editor

New mod_pagespeed: cache advances, progressive JPEGs

By Joshua Marantz, Jan-Willem Maessen, and Bharath Bhushan, PageSpeed Team

When mod_pagespeed launched in November 2010, one of its benefits was to help websites better exploit browser caching by signing URLs with the resource content hash. This improves the user experience coming back to the same site, and navigating within a site.

In mod_pagespeed 1.2 we have released two new features that improve the caching experience for users coming to a site for the first time: canonicalize_javascript_libraries and insert_dns_prefetch. For additional speedups, converting jpegs to progressive format has been added to the Core Filter Set, and the scope of optimization has been extended to include resources served by external servers, even if they are not running mod_pagespeed.

Your web page loads faster when jQuery is preloaded in users' browsers

Numerous web sites use common JavaScript libraries such as jQuery and jQuery UI. But when one library is stored on many sites, browsers end up re-downloading that library for each new site – a waste of time and bandwidth. The new canonicalize_javascript_libraries filter in mod_pagespeed finds such libraries on your site and replaces them with links to the equivalent libraries on ajax.googleapis.com. With the optimization, a browser will notice that your site is requesting the library from the same shared library provider as a previous site it visited, and will use the copy in its cache.

It’s possible to do this by hand, but there are a number of reasons why you might prefer to automate the process. Most important is that you may be using third-party code on your web sites that includes some of these libraries. Using canonicalize_javascript_libraries lets you replace these with hosted versions without having to touch third-party code. It also lets you use local, un-minified JavaScript source code for these libraries while you are debugging your site, and then transition automatically to using minified hosted code when you deploy. The filter spots external libraries using a hash signature; we’ve added a new configuration file, pagespeed_libraries.conf, that stores these signatures, so that you can upgrade the signature configuration without disrupting the rest of your Apache installation.
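Because canonicalize_javascript_libraries is not part of the core filter set, it must be enabled explicitly. A minimal configuration sketch, assuming the directive names from the mod_pagespeed documentation:

# Enable the library-canonicalization filter.
ModPagespeedEnableFilters canonicalize_javascript_libraries
# The library hash signatures live in their own file so they can be
# updated independently of the rest of the configuration.
Include pagespeed_libraries.conf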

Resolving DNS entries early for critical assets saves hundreds of milliseconds

DNS resolution time varies from <1ms for locally cached results, to hundreds of milliseconds due to the cascading nature of DNS. This can contribute significantly to total page load time. Below is a WebPagetest waterfall showing how DNS lookup time can affect page load time.


The new insert_dns_prefetch filter inserts <link rel="dns-prefetch"> tags to allow the browser to pre-resolve DNS for resources on the page. The waterfall below shows the improvement after inserting the hints.


<link rel="dns-prefetch"> is supported on Chrome, Firefox and Internet Explorer.
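For reference, the hint the filter emits is ordinary markup that you could also add by hand; the hostname here is hypothetical:

<link rel="dns-prefetch" href="//static.example.com">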

Improved performance by optimizing external resources and progressive JPEGs

In addition to these new capabilities, mod_pagespeed 1.2 can proxy and optimize resources from trusted domains. This feature enables you to optimize resources even from servers that don't run mod_pagespeed. Beyond compressing and cache-extending such resources, this can improve performance of sites running SPDY where the best practices for performance are to serve all resources from the same domain (see mod_spdy).
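As a sketch of what such a mapping might look like in Apache configuration (treat the directive and argument order as assumptions to verify against the mod_pagespeed documentation; the domains are hypothetical):

# Fetch and optimize static.example.com's resources, serving them
# from our own domain.
ModPagespeedMapProxyDomain http://www.example.com/static http://static.example.com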

Further, convert_jpeg_to_progressive is now a ‘core’ filter. Large JPEG images are now transcoded to progressive. This both improves the browser experience and makes such files smaller.

To see more details about the release, check out the release notes and mod_pagespeed download page.


Joshua Marantz runs Google’s PageSpeed team in Cambridge, MA, which is dedicated to making the web faster for everyone. Josh has been working on making software run fast for several decades, at Google and before that on accelerated chip simulation.

Jan Maessen wrote the earliest version of the image and JavaScript filters in mod_pagespeed and has been with the team ever since. Before joining Google, he was a co-designer and library implementer for the Fortress programming language.

Bharath Bhushan works on making website performance better. He has a Masters in CS from IIT Madras, India.


Posted by Scott Knaster, Editor

Mobile Web performance challenges and strategies

By Ramki Krishnan, Technical Program Manager

Consumers are increasingly relying on their mobile devices to access the Web, thrusting mobile web performance into the limelight. Mobile users expect web pages to display on their mobile devices as fast as or faster than on their desktops.

As part of Google’s effort to Make The Web Faster, we invited Guy Podjarny, CTO of Blaze.io, to talk about some of the major performance concerns in the mobile web and ways to alleviate these issues. Guy’s talk focused on Front-End Optimization and highlighted 3 areas: mobile network, software, and hardware. Each of these impacts performance in myriad ways. The full video is available here, and runs just under an hour. If you don’t have time to watch this enlightening talk, this post discusses some key takeaways.

Mobile networks have high latency, and reducing the number of requests and the size of downloads are well-known optimization strategies. Guy also mentions using on-demand image displays such as loading above-the-fold images by default and other images only as they scroll into view. To handle network unreliability, he recommends non-blocking requests that eliminate single points of failure, along with selective aggregation of the files needed for content display. Periodic pinging of the cell tower by the client can also reduce latency associated with dropped connections, but judicious timeouts and battery drain on the mobile device need to be factored in.

Modern mobile browsers are built to be mobile-friendly, and they can be helped further by exploiting localStorage to cache CSS and JavaScript files. Pipelining multiple requests on a connection is an option, but developers need to work around head-of-line blocking by using techniques such as splitting dynamic and static resource requests on different domains.
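A minimal sketch of the localStorage technique for scripts; the URL, cache key, and error handling are illustrative only:

function loadScriptCached(url, key) {
  var cached = window.localStorage && localStorage.getItem(key);
  if (cached) {
    eval(cached);  // parse the locally stored copy; no network needed
    return;
  }
  var req = new XMLHttpRequest();
  req.onreadystatechange = function() {
    if (req.readyState == 4 && req.status == 200) {
      try {
        localStorage.setItem(key, req.responseText);  // may throw if quota is full
      } catch (e) {}
      eval(req.responseText);
    }
  };
  req.open('GET', url, true);
  req.send();
}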

Mobile hardware CPUs are weaker than their desktop counterparts. Guy points out the need to minimize JavaScript when designing mobile-friendly web pages and avoid reflows or defer JavaScript until after page loads. Clever image rendering techniques such as automatically resizing images to devices and loading full resolution only on zoom can also help.

Guy’s presentation makes clear that mobile web optimizations need to mitigate latencies introduced by mobile networks, software, and hardware. Rapidly changing OSes and browsers add to the challenges facing publishers. New and evolved tools and technologies will help ensure an optimal web browsing experience for mobile users.


Ramki Krishnan works at Google on the "Make The Web Faster" team. When not at work, he dreams of being a tennis pro, a humorist, and a rock drummer all rolled into one.

Posted by Scott Knaster, Editor


Google Web Toolkit 2.0 - now with Speed Tracer

Tonight at a Google Campfire One we released Google Web Toolkit 2.0, aiming to do two main things for developers:
  • Make it easier to build faster apps
  • Speed up the overall development cycle
This is a very exciting release because it's the culmination of a year and a half of working with teams like Google Wave, AdWords, and Orkut (among many others inside and outside of Google) to evolve GWT to meet the needs of today's web applications. There are many features and improvements, but let me call out three which we're especially excited about.

Faster Apps

Introducing: Performance profiling with Speed Tracer
The first thing you'll notice in 2.0 is that we've added a new tool called Speed Tracer. Speed Tracer is a performance profiler for Google Chrome that allows developers to see what's going on in a way which hasn't been possible before. We've worked closely with the WebKit community to add instrumentation in the browser to enable developers to gain deep insights into how code behaves, uncovering problems which have been hidden until now.

Introducing: Incremental app download with code splitting
Another feature we've added into Google Web Toolkit is developer-guided code splitting. Code splitting allows a developer to split up their application for much, much faster startup times. Imagine if you have a settings page that users go to once a week. Why download that JavaScript when the application starts up? With code splitting, your users download just the JavaScript they need to get started.
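In code, a split point is expressed with GWT.runAsync. A minimal Java sketch (the SettingsView class is hypothetical):

GWT.runAsync(new RunAsyncCallback() {
  public void onFailure(Throwable reason) {
    Window.alert("Couldn't load the settings page.");
  }
  public void onSuccess() {
    // This code, and whatever only it depends on, is downloaded
    // the first time the split point is reached.
    new SettingsView().show();
  }
});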

Faster Development

Introducing: Declarative UI with UiBinder
UiBinder is a new declarative UI framework in Google Web Toolkit which enables rapid design iteration and a clean separation between presentation layer and application logic.

Dive into the details and more features in GWT 2.0.




Introducing Google Public DNS: A new DNS resolver from Google

Today, as part of our efforts to make the web faster, we are announcing Google Public DNS, a new experimental public DNS resolver.

The DNS protocol is an important part of the web's infrastructure, serving as the Internet's "phone book". Every time you visit a website, your computer performs a DNS lookup. Complex pages often require multiple DNS lookups before they complete loading. As a result, the average Internet user performs hundreds of DNS lookups each day, which collectively can slow down his or her browsing experience.

We believe that a faster DNS infrastructure could significantly improve the browsing experience for all web users. To enhance DNS speed while also improving the security and validity of results, Google Public DNS is trying a few different approaches that we are sharing with the broader web community through our documentation:
  • Speed: Resolver-side cache misses are one of the primary contributors to sluggish DNS responses. Clever caching techniques can help increase the speed of these responses. Google Public DNS implements prefetching: before the TTL on a record expires, we refresh the record continuously, asynchronously and independently of user requests for a large number of popular domains. This allows Google Public DNS to serve many DNS requests in the round trip time it takes a packet to travel to our servers and back.

  • Security: DNS is vulnerable to spoofing attacks that can poison the cache of a nameserver and can route all its users to a malicious website. Until new protocols like DNSSEC get widely adopted, resolvers need to take additional measures to keep their caches secure. Google Public DNS makes it more difficult for attackers to spoof valid responses by randomizing the case of query names and including additional data in its DNS messages.

  • Validity: Google Public DNS complies with the DNS standards and gives the user the exact response his or her computer expects without performing any blocking, filtering, or redirection that may hamper a user's browsing experience.
We hope that you will help us test these improvements by using the Google Public DNS service today, from wherever you are in the world. We plan to share what we learn from this experimental rollout of Google Public DNS with the broader web community and other DNS providers, to improve the browsing experience for Internet users globally.
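Trying it out takes a one-line change to your resolver settings. On a typical Linux machine, using the announced Google Public DNS addresses:

# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4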

To get more information on Google Public DNS you can visit our site, read our documentation, and our logging policies. We also look forward to receiving your feedback in our discussion group.


Edmunds partners with Google to make the web faster

Note: This is a guest post from Ismail Elshareef, who is the Principal Architect at Edmunds.com. Thanks for the post and for making the web faster, Ismail!

In the Fall of 2008, we embarked on a complete redesign of our car enthusiast site, insideline.com. One of the main redesign objectives was to deliver the fastest page load possible to our consumers. Leading up to that point, we had been closely following and implementing the performance best practices championed by Google's Make the Web Faster team and others. We understood the impact performance has on user experience and the bottom line.

Some of the many performance-enhancing features that have been implemented on insideline.com (and now on our beta.edmunds.com) are:
  1. Reducing the number of HTTP requests: We combined CSS and JavaScript files as necessary, and used sprites and data URIs when appropriate. We have also reduced the number of blocking requests as much as possible to make the pages "feel" faster
  2. Serving static content from different domains: This helped maximize the browser parallel download capacity and made the request payload faster since no cookies were sent over the wire to those domains
  3. Using Expires headers: Caching static files in the client's browser to eliminate unnecessary, redundant requests to our servers (see the configuration sketch after this list)
  4. Lazy-loading Page Modules: Render the bare minimum page components first so that the user sees something on the page, and then go through the modules and load them in order of priority. We developed a JavaScript Loader component to help us accomplish that, which you can read more about on the Edmunds technology blog.
  5. Managing 3rd-party components: iFrame components could be lazy-loaded without a problem. JavaScript components, on the other hand, need to be loaded onto the page before the onLoad event fires. That had the potential of slowing down our pages. The solution we devised was to delay the calling of those components until we initiate the lazy-loading of modules and right before the onLoad event fires
  6. Using non-blocking calls: With the browser being a single thread process, we optimized ways of including resources on the page without affecting page rendering so that the page is perceived to be fast by the user.
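For item 3, here is a minimal sketch of far-future Expires headers using Apache's mod_expires; the lifetimes are illustrative, not Edmunds' actual settings:

ExpiresActive On
ExpiresByType image/png "access plus 1 year"
ExpiresByType text/css "access plus 1 month"
ExpiresByType application/x-javascript "access plus 1 month"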

The results on insideline.com have been incredible. Page load time went from 9 seconds on average on the old site to 1.5 seconds on average on the new one, and that's while loading much richer content onto the page (measured with WebPageTest). We have also seen a 3% increase in ad revenue. On the beta.edmunds.com, which will replace our legacy site fully in December 2010, we have seen a 17% increase in page views and a 2% reduction in the bounce rate for our landing pages in a controlled experiment.

Although we have a long way to go in making our pages and services faster, we are very pleased with the progress we’ve made so far. Working with Google to make the web faster has been an exciting adventure that will continue with more improvements and innovations for both our sites and the web as a whole. Get more details on the Edmunds technology blog and try these enhancements on your site today.


Instant Previews: Under the hood


If you’ve used Google Search recently, you may have noticed a new feature that we’re calling Instant Previews. By clicking on the (sprited) magnifying glass icon next to a search result you see a preview of that page, often with the relevant content highlighted. Once activated, you can mouse over the rest of the results and quickly (instantly!) see previews of those search results, too.

Adding this feature to Google Search involved a lot of client-side JavaScript. Being Google, we had to make sure we could deliver this feature without slowing down the page. We know our users want their results fast. So we thought we’d share some techniques involved in making this new feature fast.

JavaScript compilation

This is nothing new for Google Search: all our JavaScript is compiled to make it as small as possible. We use the open-sourced Closure Compiler. In addition to minimizing the JavaScript code, it also rewrites expressions, reuses variables, and prunes out code that is not being used. The JavaScript on the search results page is deferred, and also cached very aggressively on the client side so that it’s not downloaded more than once per version.

On-demand JSONP

When you activate Instant Previews, the result previews are requested by your web browser. There are several ways to fetch the data we need using JavaScript. The most popular techniques are XmlHttpRequest (XHR) and JSONP. XHR generally gives you better control and error-handling, but it has two drawbacks: browser caching tends to be less reliable, and only same-origin requests are permitted (this is starting to change with modern browsers and cross-origin resource sharing, though). With JSONP, on the other hand, the requested script returns the desired data as a JSON object wrapped in a JavaScript callback function, which in our case looks something like

google.vs.r({"dim":[302,585],"url":"http://example.com",ssegs:[...]}).

Although error handling with JSONP is a bit harder to do compared to XHR (not all browsers support onerror events), JSONP can be cached aggressively by the browser, and is not subject to same-origin restrictions. This last point is important for Instant Previews because web browsers restrict the number of concurrent requests that they send to any one host. Using a different host for the preview requests means that we don’t block other requests in the page.

There are a couple of tricks when using JSONP that are worth noting:

  • If you insert the script tag directly, e.g. using document.createElement, some browsers will show the page as still “loading” until all script requests are finished. To avoid that, make your DOM call to insert the script tag inside a window.setTimeout call.
  • After your requests come back and your callbacks are done, it’s a good idea to set your script src to null, and remove the tag. On some browsers, allowing too many script tags to accumulate over time may slow everything down.
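Putting both tips together, here is a minimal sketch of the JSONP pattern; the endpoint and callback names are hypothetical, not Google's actual preview API:

var pendingScript = null;

function requestPreview(url) {
  // Insert the tag inside a setTimeout so some browsers don't keep
  // reporting the page as "loading".
  window.setTimeout(function() {
    pendingScript = document.createElement('SCRIPT');
    pendingScript.src = url;
    document.getElementsByTagName('HEAD')[0].appendChild(pendingScript);
  }, 0);
}

// The server wraps its JSON payload in a call to this function.
function handlePreview(data) {
  // ... render data.dim, data.url, data.ssegs ...
  pendingScript.src = null;  // per the tip above
  pendingScript.parentNode.removeChild(pendingScript);
  pendingScript = null;
}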

Data URIs

At this point you are probably curious as to what we’re returning in our JSONP calls, and in particular, why we are using JSON and not just plain images. Perhaps you even used Firebug or your browser’s Developer Tools to examine the Instant Previews requests. If so, you will have noticed that we send back the image data as sets of data URIs. Data URIs are base64 encodings of image data, that modern browsers (IE8+, Chrome, Safari, Firefox, Opera, etc) can use to display images, instead of loading them from a server as usual.

To show previews, we need the image, and the relevant content of the page for the particular query, with bounding boxes that we draw on top of the image to show where that content appears on the page. If we used static images, we’d need to make one request for the content and one request for the image; using JSONP with data URIs, we make just one request. Data URIs are limited to 32K on IE8, so we send “slices” that are all under that limit, and then use JavaScript to generate the necessary image tags to display them. And even though base64 encoding adds about 33% to the size of the image, our tests showed that gzip-compressed data URIs are comparable in size to the original JPEGs.
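In markup, a data URI image is just an img tag with the encoded bytes inline. A truncated illustration ("/9j/" is how base64-encoded JPEG data begins):

<img width="302" height="585" src="data:image/jpeg;base64,/9j/4AAQSkZJRg...">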

We use caching throughout our implementation, but it’s important to not forget about client-side caching as well. By using JSONP and data URIs, we limit the number of requests made, and also make sure that the browser will cache the data, so that if you refresh a page or redo a query, you should get the previews, well... instantly!


Scaling automatic web optimization with mod_pagespeed and memcached

By Jud Porter, Software Engineer, PageSpeed Team

Making your website fast is crucial to creating a great user experience – but doing so can be complicated, with many factors to consider. That’s why we created mod_pagespeed, an open-source Apache module designed to optimize your web pages automatically and easily. We recently introduced our milestone 1.0 release, and today, we’re following it up with the release of mod_pagespeed 1.1.23.1 to our beta channel.

With this release we've reduced server load time and improved utilization for large, multi-server environments. We accomplished this by adding support for memcached (a popular, scalable cache), and improving logging and statistics reporting. With memcached, multiple Apache servers share and fetch the same resources optimized by mod_pagespeed. Logging and reporting have been improved to make it easier to keep track of resource consumption and optimization effectiveness across multiple sites hosted by a single Apache installation. These new features make mod_pagespeed even better for high-traffic sites and network providers hosting many individual websites on their infrastructure.
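A minimal sketch of the shared-cache configuration, assuming the ModPagespeedMemcachedServers directive described in the release notes; the hostnames are hypothetical:

# Every Apache server in the cluster points at the same memcached pool.
ModPagespeedMemcachedServers memcached1.example.com:11211,memcached2.example.com:11211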




We’ve also added a number of other new features and optimizations including:
  • Improved CSS optimization. CSS media queries are now supported, and the new fallback_rewrite_css_urls filter allows partial optimization of CSS containing unsupported or proprietary extensions.
  • The default set of optimizers now includes the flatten_css_imports filter, improving out-of-the-box performance.
  • Improved mod_spdy interaction with support for custom mod_pagespeed configuration and filters for SPDY enabled clients. This makes it easier to deploy SPDY on your site, which can significantly decrease page load times.
Check out the release notes for all the new features and improvements. For more information about mod_pagespeed, please see our documentation, and if you have any questions or issues let us know on our issue tracker or discussion group.


Jud Porter is a software engineer working on mod_pagespeed, an Apache module designed to automatically make websites faster. In his free time he enjoys experimenting with cocktails, brushing up on his foosball game, and discovering obscure music.

Posted by Scott Knaster, Editor

Use compression to make the web faster

Every day, more than 99 human years are wasted because of uncompressed content. Although support for compression is a standard feature of all modern browsers, there are still many cases in which users of these browsers do not receive compressed content. This wastes bandwidth and slows down users' interactions with web pages.

Uncompressed content hurts all users. For bandwidth-constrained users, it takes longer just to transfer the additional bits. For broadband connections, even though the bits are transferred quickly, it takes several round trips between client and server before the two can communicate at the highest possible speed.  For these users the number of round trips is the larger factor in determining the time required to load a web page. Even for well-connected users these round trips often take tens of milliseconds and sometimes well over one hundred milliseconds.

In Steve Souders' book Even Faster Web Sites, Tony Gentilcore presents data showing the page load time increase with compression disabled. We've reproduced the results for the three highest-ranked sites from the Alexa top 100 with permission here:

Data, with permission, from Steve Souders, "Chapter 9: Going Beyond Gzipping," in Even Faster Web Sites (Sebastopol, CA: O'Reilly, 2009), 122.


The data from Google's web search logs show that the average page load time for users getting uncompressed content is 25% higher compared to the time for users getting compressed content. In a randomized experiment where we forced compression for some users who would otherwise not get compressed content, we measured a latency improvement of 300ms.  While this experiment did not capture the full difference, that is probably because users getting forced compression have older computers and older software.

We have found that there are 4 major reasons why users do not get compressed content: anti-virus software, browser bugs, web proxies, and misconfigured web servers.  The first three modify the web request so that the web server does not know that the browser can uncompress content. Specifically, they remove or mangle the Accept-Encoding header that is normally sent with every request.

Anti-virus software may try to minimize CPU operations by intercepting and altering requests so that web servers send back uncompressed content.  But if the CPU is not the bottleneck, the software is not doing users any favors.  Some popular antivirus programs interfere with compression.  Users can check if their anti-virus software is interfering with compression by visiting the browser compression test page at Browserscope.org.

By default, Internet Explorer 6 downgrades to HTTP/1.0 when behind a proxy, and as a result does not send the Accept-Encoding request header. The table below, generated from Google's web search logs, shows that IE 6 represents 36% of all search results that are sent without compression.  This number is far higher than the percentage of people using IE 6.

Data from Google Web Search Logs

There are a handful of ISPs where the percentage of uncompressed content is over 95%. One likely hypothesis is that either an ISP or a corporate proxy removes or mangles the Accept-Encoding header. As with anti-virus software, a user who suspects an ISP is interfering with compression should visit the browser compression test page at Browserscope.org.

Finally, in many cases, users are not getting compressed content because the websites they visit are not compressing their content.  The following table shows a few popular websites that do not compress all of their content. If these websites were to compress their content, they could decrease the page load times by hundreds of milliseconds for the average user, and even more for users on modem connections.

Data generated using Page Speed

To reduce uncompressed content, we all need to work together.
  • Corporate IT departments and individual users can upgrade their browsers, especially if they are using IE 6 with a proxy. Using the latest version of Firefox, Internet Explorer, Opera, Safari, or Google Chrome will increase the chances of getting compressed content. A recent editorial in IEEE Spectrum lists additional reasons - besides compression - for upgrading from IE6.
  • Anti-virus software vendors can handle compression properly by no longer removing or mangling the Accept-Encoding header in upcoming releases of their software.
  • ISPs that use an HTTP proxy which strips or mangles the Accept-Encoding header can upgrade, reconfigure or install a better proxy which doesn't prevent their users from getting compressed content.
  • Webmasters can use Page Speed (or other similar tools) to check that the content of their pages is compressed (a configuration sketch follows this list).
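For that last item, enabling compression on Apache is often a single mod_deflate line; a minimal sketch (MIME types illustrative):

AddOutputFilterByType DEFLATE text/html text/css application/x-javascript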
For more articles on speeding up the web, check out http://code.google.com/speed/articles/.


Introducing Closure Tools

Millions of Google users worldwide use JavaScript-intensive applications such as Gmail, Google Docs, and Google Maps. Like developers everywhere, Googlers want great web apps to be easier to create, so we've built many tools to help us develop these (and many other) apps. We're happy to announce the open sourcing of these tools, and proud to make them available to the web development community.

Closure Compiler
Closure Compiler is a JavaScript optimizer that compiles web apps down into compact, high-performance JavaScript code. The compiler removes dead code, then rewrites and minimizes what's left so that it will run fast on browsers' JavaScript engines. The compiler also checks syntax, variable references, and types, and warns about other common JavaScript pitfalls. These checks and optimizations help you write apps that are less buggy and easier to maintain. You can use the compiler with Closure Inspector, a Firebug extension that makes debugging the obfuscated code almost as easy as debugging the human-readable source.

Because JavaScript developers are a diverse bunch, we've set up a number of ways to run the Closure Compiler. We've open-sourced a command-line tool. We've created a web application that accepts your code for compilation through a text box or a RESTful API. We are also offering a Firefox extension that you can use with Page Speed to conveniently see the performance benefits for your web pages.
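A minimal sketch of a command-line invocation, with hypothetical file names and flags as documented for the Closure Compiler:

java -jar compiler.jar --compilation_level SIMPLE_OPTIMIZATIONS \
  --js hello.js --js_output_file hello-compiled.js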

Closure Library
Closure Library is a broad, well-tested, modular, and cross-browser JavaScript library. Web developers can pull just what they need from a wide set of reusable UI widgets and controls, as well as lower-level utilities for the DOM, server communication, animation, data structures, unit testing, rich-text editing, and much, much more. (Seriously. Check the docs.)

JavaScript lacks a standard class library like the STL or JDK. At Google, Closure Library serves as our "standard JavaScript library" for creating large, complex web applications. It's purposely server-agnostic and intended for use with the Closure Compiler. You can make your project big and complex (with namespacing and type checking), yet small and fast over the wire (with compilation). The Closure Library provides clean utilities for common tasks so that you spend your time writing your app rather than writing utilities and browser abstractions.
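As a small taste of the library's DOM utilities, here is a sketch that assumes base.js is already loaded on the page:

goog.require('goog.dom');

function addNote() {
  // createDom builds the element and its children in a single call.
  var note = goog.dom.createDom('div', {'class': 'note'}, 'Hello, Closure!');
  goog.dom.appendChild(document.body, note);
}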

Closure Templates
Closure Templates grew out of a desire for web templates that are precompiled to efficient JavaScript.  Closure Templates have a simple syntax that is natural for programmers.  Unlike traditional templating systems, you can think of Closure Templates as small components that you compose to form your user interface, instead of having to create one big template per page.

Closure Templates are implemented for both JavaScript and Java, so you can use the same templates both on the server and client side.
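A minimal template sketch (the namespace and names are hypothetical):

{namespace example.templates}

/**
 * Greets a user.
 * @param name The user's name.
 */
{template .greeting}
  Hello {$name}!
{/template}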


Closure Compiler, Closure Library, Closure Templates, and Closure Inspector all started as 20% projects and hundreds of Googlers have contributed thousands of patches. Today, each Closure Tool has grown to be a key part of the JavaScript infrastructure behind web apps at Google.  That's why we're particularly excited (and humbled) to open source them to encourage and support web development outside Google. We want to hear what you think, but more importantly, we want to see what you make. So have at it and have fun!


Make your websites run faster, automatically -- try mod_pagespeed for Apache


Last year, as part of Google’s initiative to make the web faster, we introduced Page Speed, a tool that gives developers suggestions to speed up web pages. It’s usually pretty straightforward for developers and webmasters to implement these suggestions by updating their web server configuration, HTML, JavaScript, CSS and images. But we thought we could make it even easier -- ideally these optimizations should happen with minimal developer and webmaster effort.

So today, we’re introducing a module for the Apache HTTP Server called mod_pagespeed to perform many speed optimizations automatically. We’re starting with more than 15 on-the-fly optimizations that address various aspects of web performance, including optimizing caching, minimizing client-server round trips and minimizing payload size. We’ve seen mod_pagespeed reduce page load times by up to 50% (an average across a rough sample of sites we tried) -- in other words, essentially speeding up websites by about 2x, and sometimes even faster.

(Video comparison of the AdSense blog site with and without mod_pagespeed)

Here are a few simple optimizations that are a pain to do manually, but that mod_pagespeed excels at:

  • Making changes to pages built by a Content Management System (CMS) with no need to change the CMS itself,
  • Recompressing an image when its HTML context changes to serve only the bytes required (typically tedious to optimize manually), and
  • Extending the cache lifetime of the logo and images of your website to a year, while still allowing you to update these at any time.
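Once the module is installed, turning it on is short. A minimal sketch (the module path varies by platform):

LoadModule pagespeed_module /usr/lib/apache2/modules/mod_pagespeed.so
ModPagespeed on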

We’re working with Go Daddy to get mod_pagespeed running for many of its 8.5 million customers. Warren Adelman, President and COO of Go Daddy, says:

"Go Daddy is continually looking for ways to provide our customers the best user experience possible. That's the reason we partnered with Google on the 'Make the Web Faster' initiative. Go Daddy engineers are seeing a dramatic decrease in load times of customers' websites using mod_pagespeed and other technologies provided. We hope to provide the technology to our customers soon - not only for their benefit, but for their website visitors as well.”

We’re also working with Cotendo to integrate the core engine of mod_pagespeed as part of their Content Delivery Network (CDN) service.

mod_pagespeed integrates as a module for the Apache HTTP Server, and we’ve released it as open-source for Apache for many Linux distributions. Download mod_pagespeed for your platform and let us know what you think on the project’s mailing list. We hope to work with the hosting, developer and webmaster community to improve mod_pagespeed and make the web faster.


Let's make the mobile web faster

This week, we've been celebrating all things mobile across Google. Of course, this wouldn't be complete without a component for mobile web developers! Two months ago we asked you to make the web faster. Now, we've asked the Google Mobile team for some best practices, tips, and resources for mobile web development, and we've come up with a few things we wanted to share. "Go Mobile!" with our Make the mobile web faster article.


Make the web faster with mod_pagespeed, now out of Beta

By Joshua Marantz and Ilya Grigorik, Google PageSpeed Team

Cross-posted with Google Webmaster Central Blog and other Google blogs

If your page is on the web, speed matters. For developers and webmasters, making your page faster shouldn’t be a hassle, which is why we introduced mod_pagespeed in 2010. Since then the development team has been working to improve the functionality, quality and performance of this open-source Apache module that automatically optimizes web pages and their resources. Now, after almost two years and eighteen releases, we are announcing that we are taking off the Beta label.

We’re committed to working with the open-source community to continue evolving mod_pagespeed, including more, better and smarter optimizations and support for other web servers. Over 120,000 sites are already using mod_pagespeed to improve the performance of their web pages using the latest techniques and trends in optimization. The product is used worldwide by individual sites, and is also offered by hosting providers, such as DreamHost, Go Daddy and content delivery networks like EdgeCast. With the move out of beta we hope that even more sites will soon benefit from the web performance improvements offered through mod_pagespeed.

mod_pagespeed is a key part of our goal to help make the web faster for everyone. Users prefer faster sites and we have seen that faster pages lead to higher user engagement, conversions, and retention. In fact, page speed is one of the signals in search ranking and ad quality scores. Besides evangelizing for speed, we offer tools and technologies to help measure, quantify, and improve performance, such as Site Speed Reports in Google Analytics, PageSpeed Insights, and PageSpeed Optimization products. In fact, both mod_pagespeed and PageSpeed Service are based on our open-source PageSpeed Optimization Libraries project, and are important ways in which we help websites take advantage of the latest performance best practices.



To learn more about mod_pagespeed and how to incorporate it in your site, watch our recent Google Developers Live session or visit the mod_pagespeed product page.


Joshua Marantz is the technical lead for mod_pagespeed. Ilya Grigorik is the Developer Advocate for Make the Web Fast.

Posted by Scott Knaster, Editor

Measuring Speed The Slow Way


Let's say you figured out a wicked-cool way to speed up how quickly your website loads. You know with great certainty that the most popular web page will load much, much faster. Then, you remember that the page always loads much, much faster in your browser with the web server running on your development box. You need numbers that represent what your users will actually experience.

Depending on your development process, you may have several measuring opportunities such as after a live release, on a staging server, on a continuous build, or on your own development box.

Only a live release gives numbers that users experience--through different types of connections, different computers, different browsers. But, it comes with some challenges:
  • You must instrument your site (one example tool is jiffy-web).
  • You must isolate changes.
    • The most straight-forward way to do that is to release only one change at a time. That avoids having multiple changes altering performance numbers--for better or for worse--at the same time.
  • Consider releasing changes to a subset of users.
    • That helps safe-guard against real-world events like holidays, big news days, exploding hard drives, and, perhaps the worst possible fate of all: a slashdotting.
Measuring real traffic is important, but performance should also be measured earlier in the development process. Millions of users will not hit your development web server--they won't, right?! You need a way to measure your pages without that.

Today, Steve Souders has released Hammerhead, a Firefox plug-in that is just the ticket for measuring web page load times. It has a sweet feature that repeatedly loads pages both with and without the browser cache so you can understand different use cases. One thing Hammerhead will not do for you is slow down your web connection. The page load times that you measure on your development machine will likely be faster than your users' wildest dreams.

Measuring page load times with a real DSL or dial-up connection would be ideal, but if you cannot do that, all hope is not lost. You can try the following tools that simulate slower connection speeds on a single box: Firefox Throttle, Fiddler, and Charles. (Please share your own favorite tools in the comments.) Each of these tools is super easy to install and offers options to simulate slower connections. However, they each have some caveats that you need to keep in mind.

Firefox Throttle hooks into the WinSock API to limit bandwidth and avoids using proxy settings. (If you use it, be sure to disable "burst-mode".) Right now, Firefox Throttle only limits bandwidth. That means it controls how much data arrives in a given time period after the first bits arrive. It does not limit latency. Latency controls how long it takes packets to travel to and from the server. See Wikipedia's Relationship between latency and throughput for more details. For certain webpages, latency can make up a large part of the overall load time. The next Firefox Throttle release is expected to include latency delays and other webmaster friendly features to simulate slower, less-reliable connections. With these enhancements, Firefox Throttle will be an easy recommendation.

Fiddler and Charles act as proxies, and, as a result, they make browsers act rather differently. For instance, IE and Firefox drastically limit the maximum number of connections (IE8 from 60+ to 6 and FF3 from 30 to 8). If you happen to know that all your users go through a proxy anyway, then this will not matter to you. Otherwise, it can mean that web pages load substantially differently.

If you have more time and hardware with which to tinker, you may want to check out tools like dummynet (FreeBSD or Mac OS X), or netem (Linux). They have even more knobs and controls and can be put between the web browser hardware and the serving hardware.
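For example, a minimal netem sketch that adds latency on Linux (run as root; the interface name and delay are illustrative):

# Add 80 ms of delay to every packet leaving eth0.
tc qdisc add dev eth0 root netem delay 80ms
# Remove the delay when you're done measuring.
tc qdisc del dev eth0 root netem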

Measurements at each stage of web development can guide performance improvements. Hammerhead combined with a connection simulator like Firefox Throttle can be a great addition to your web development tool chest.

WebP, a new image format for the Web


Cross-posted from the Chromium Blog

As part of Google’s initiative to make the web faster, over the past few months we have released a number of tools to help site owners speed up their websites. We launched the Page Speed Firefox extension to evaluate the performance of web pages and to get suggestions on how to improve them, we introduced the Speed Tracer Chrome extension to help identify and fix performance problems in web applications, and we released a set of closure tools to help build rich web applications with fully optimized JavaScript code. While these tools have been incredibly successful in helping developers optimize their sites, as we’ve evaluated our progress, we continue to notice a single component of web pages is consistently responsible for the majority of the latency on pages across the web: images.

Most of the common image formats on the web today were established over a decade ago and are based on technology from around that time. Some engineers at Google decided to figure out if there was a way to further compress lossy images like JPEG to make them load faster, while still preserving quality and resolution. As part of this effort, we are releasing a developer preview of a new image format, WebP, that promises to significantly reduce the byte size of photos on the web, allowing web sites to load faster than before.

Images and photos make up about 65% of the bytes transmitted per web page today. They can significantly slow down a user’s web experience, especially on bandwidth-constrained networks such as a mobile network. Images on the web consist primarily of lossy formats such as JPEG, and to a lesser extent lossless formats such as PNG and GIF. Our team focused on improving compression of the lossy images, which constitute the larger percentage of images on the web today.

To improve on the compression that JPEG provides, we used an image compressor based on the VP8 codec that Google open-sourced in May 2010. We applied the techniques from VP8 video intra frame coding to push the envelope in still image coding. We also adapted a very lightweight container based on RIFF. While this container format contributes a minimal overhead of only 20 bytes per image, it is extensible to allow authors to save meta-data they would like to store.

While the benefits of a VP8 based image format were clear in theory, we needed to test them in the real world. In order to gauge the effectiveness of our efforts, we randomly picked about 1,000,000 images from the web (mostly JPEGs and some PNGs and GIFs) and re-encoded them to WebP without perceptibly compromising visual quality. This resulted in an average 39% reduction in file size. We expect that developers will achieve in practice even better file size reduction with WebP when starting from an uncompressed image.

To help you assess WebP’s performance with other formats, we have shared a selection of open-source and classic images along with file sizes so you can visually compare them on this site. We are also releasing a conversion tool that you can use to convert images to the WebP format. We’re looking forward to working with the browser and web developer community on the WebP spec and on adding native support for WebP. While WebP images can’t be viewed until browsers support the format, we are developing a patch for WebKit to provide native support for WebP in an upcoming release of Google Chrome. We plan to add support for a transparency layer, also known as alpha channel in a future update.
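As a sketch of what a conversion looks like with the cwebp tool that ships with libwebp (file names hypothetical; flags per its documentation):

cwebp -q 80 input.jpg -o output.webp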

We’re excited to hear feedback from the developer community on our discussion group, so download the conversion tool, try it out on your favorite set of images, and let us know what you think.


Gmail for Mobile HTML5 Series: Reducing Startup Latency

On April 7th, Google launched a new version of Gmail for mobile for iPhone and Android-powered devices. We shared the behind-the-scenes story through this blog and decided to share more of what we've learned in a brief series of follow-up blog posts. This week, I'll talk about how modularization can be used to greatly reduce the startup latency of a web app.

To a user, the startup latency of an HTML 5 based application is critical. It is their first impression of the application's performance. If it's really slow, they might not even bother to wait for the app to load before navigating away. Even if your application is blazing fast after it loads, the user may never get the chance to experience it.

There are several aspects of an HTML 5 based application that contribute to startup latency:
  1. Network time to fetch the application (JavaScript + HTML)
  2. JavaScript parse time
  3. Code execution time to fetch the data and render the home page of your application
The third issue is up to you! The first two issues, however, are directly correlated with the size of the application. This is a tricky problem since as your application matures, it will have more features and the code size will get bigger. So, what to do? Modularize your application! Split up your code into independent, standalone modules. Consider splitting each view/screen of your application and implement each new feature as its own module. This is only half the story. Now that you have your code modularized, you need to decide which subset of these modules are critical to load your application's home page. All the non-core modules should be downloaded and parsed at a later time. With a consistent code size for your startup code, you can maintain a consistent startup time. Now, let's go into some nitty gritty details of how we built an application with lazy-loaded modules.

How to Split Your Code into Modules

Splitting an application into individual modules might not be as simple as you think. Code that serves a common purpose/functionality should be grouped together and form a module (comparable to a library). As mentioned earlier, we selected which modules are critical to the home page of the app and which modules can be lazy-loaded at a later time. Let's use a Weather application as an example:

High Level Functionality:
  • A "Weather in my Favourite Cities" home page
  • Click on a city to view the city's entire week forecast
  • Weather data comes from an external web service
Possible Module Separation:
  • Weather data model
  • Weather web service API
  • Common UI widgets (buttons, toolbars, navigation, etc)
  • Favourite Cities page
  • City Weather Forecast page
Now let's say your users want a "breaking news" feature. No problem: just put the page, the news data API and the data model into a new module.

One thing to keep in mind is the dependency order of your modules. For modules that have many downstream dependencies, it might make sense to include them as part of the core modules.

How to Lazy Load the Modules

Option 1: Script as DOM

This method uses JavaScript to insert SCRIPT tags into the HEAD's DOM.
<script type="text/JavaScript">
  function loadFile(url) {
    var script = document.createElement('SCRIPT');
    script.src = url;
    document.getElementsByTagName('HEAD')[0].appendChild(script);
  }
</script>
Option 2: XmlHttpRequest (XHR)

This method sets up XmlHttpRequests to retrieve the JavaScript. The returned string should be evaluated in the XHR callbacks (using the eval(string) method). This method is a little more complicated but it gives you more control over error handling.
<script type="text/JavaScript">
  function loadFile(url) {
     function callback() {
      if (req.readyState == 4) { // 4 = Loaded
        if (req.status == 200) {
          eval(req.responseText);
        } else {
          // Error
        }
      }
    };
    var req = new XMLHttpRequest();
    req.onreadystatechange = callback;
    req.open("GET", url, true);
    req.send("");
  }
</script>
The next question is, when to lazy load the modules? One strategy is to lazy load the modules in the background once the home page has been loaded. This approach has some drawbacks. First, JavaScript execution in the browser is single threaded. So while you are loading the modules in the background, the rest of your app becomes non-responsive to user actions while the modules load. Second, it's very difficult to decide when, and in what order, to load the modules. What if a user tries to access a feature/page you have yet to lazy load in the background? A better strategy is to associate the loading of a module with a user's action. Typically, user actions are associated with an invocation of an asynchronous function (for example, an onclick handler). This is the perfect time for you to lazy load the module since the code will have to be fetched over the network. If mobile networks are slow, you can adopt a strategy where you prefetch the code of the modules in advance and keep them stored in the JavaScript heap, then parse and load the corresponding module only on user action. One word of caution is that you should make sure your prefetching strategy doesn't impact the user's experience - for example, don't prefetch all the modules while you are fetching user data. Remember, dividing up the latency is far better for users than bunching it all together during startup.

For an HTML 5 application that takes advantage of the application cache to reduce startup latency and to serve the application offline, there are a few caveats one should be aware of. Mobile networks have decent bandwidth, but poor round trip latency, so listing each module as a separate resource in the manifest incurs quite a bit of extra startup latency when the application cache is empty. Also, if one of the module resources fails to be downloaded by the application cache (e.g. disconnected from network), additional error handling code needs to be written to handle such a case. Finally, applications today have no control over when the application cache decides to download the resources in the manifest (such a feature is not defined in the current specification of the draft standard). Typically, resources are downloaded once the main page is loaded, but that's not an ideal time since that's when the application requests user data.

To work around these caveats, we found a trick that allows you to bundle all of your modules into a single resource without having to parse any of the JavaScript. Of course, with this strategy, there is greater latency with the initial download of the single resource (since it has all your JavaScript modules), but once the resource is stored in the browser's application cache, this issue becomes much less of a factor.

To combine all modules into a single resource, we wrote each module into a separate script tag and hid the code inside a comment block (/* */). When the resource first loads, none of the code is parsed since it is commented out. To load a module, find the DOM element for the corresponding script tag, strip out the comment block, and eval() the code. If the web app supports XHTML, this trick is even more elegant as the modules can be hidden inside a CDATA tag instead of a script tag. An added bonus is the ability to lazy load your modules synchronously since there's no longer a need to fetch the modules asynchronously over the network.

On an iPhone 2.2 device, 200k of JavaScript held within a block comment adds 240 ms during page load, whereas 200k of JavaScript that is parsed during page load adds 2600 ms. That's more than a 10x reduction in startup latency by eliminating 200k of unneeded JavaScript during page load! Take a look at the code sample below to see how this is done.
<html>
...
<script id="lazy">
// Make sure you strip out (or replace) comment blocks in your JavaScript first.
/*
JavaScript of lazy module
*/
</script>

<script>
  function lazyLoad() {
    var lazyElement = document.getElementById('lazy');
    var lazyElementBody = lazyElement.innerHTML;
    // stripOutCommentBlock (not shown) removes the /* */ wrapper.
    var jsCode = stripOutCommentBlock(lazyElementBody);
    eval(jsCode);
  }
</script>

<div onclick="lazyLoad()"> Lazy Load </div>
</html>
In the future, we hope that the HTML5 standard will allow more control over when the application cache should download resources in the manifest, since using comments to pass along code is not elegant but worked nicely for us. In addition, the snippets of code are not meant to be a reference implementation and one should consider many additional optimizations such as stripping white space and compiling the JavaScript to make its parsing and execution faster. To learn more about web performance, get tips and tricks to improve the speed of your web applications and to download tools, please visit http://code.google.com/speed.

Previous posts from Gmail for Mobile HTML5 Series

HTML5 and Webkit pave the way for mobile web applications
Using AppCache to launch offline - Part 1
Using AppCache to launch offline - Part 2
Using AppCache to launch offline - Part 3
A Common API for Web Storage
Suggestions for better performance
Cache pattern for offline HTML5 web application



Drupal 7 - faster than ever


This is a guest post by Owen Barton, partner and director of engineering at CivicActions. Owen has been working with Google's “Make the Web Faster” project team and the Drupal community to make improvements in Drupal 7 front-end performance. This is a condensed version of a more in-depth post over at the CivicActions blog.



Drupal is a popular free and open source publishing platform, powering high profile sites such as The White House, The New York Observer and Amnesty International. The Drupal community has long understood the importance of good front-end performance to successful web sites, being ahead of the game in many ways. This post highlights some of the improvements developed for the upcoming Drupal 7 release, several of which can save an additional second or more of page load times.



Drupal 7 has made its caching system more easily pluggable - to allow for easier memcache integration, for example. It has also enabled caching HTTP headers to be set so that logged-out users can cache entire pages locally, as well as improve compatibility with reverse proxies and content distribution networks (CDNs). There is also a patch waiting which reduces both the response size and the time taken to generate 404 responses for inlined page assets. Depending on the type of 404 (404s for CSS have a larger effect than those for images, for example), the slower 404s were adding 0.5 to 1 second to the calling page load times.



Drupal currently has the ability to aggregate multiple CSS and JavaScript files by concatenating them into a smaller number of files to reduce the number of HTTP requests. There is a patch in the queue for Drupal 7 that could allow aggregation to be enabled by default, which is great because the large number of individual files can add anything from 0-1.5 seconds to page loads.



One issue that has become apparent with the Drupal 6 aggregation system is that users can end up downloading aggregate files that include a large amount of duplicate code. On one page the aggregate may contain files a, b and c, whilst on a second page the aggregate may contain files a, b and d - the “c” and “d” files being added conditionally on specific pages. This breaks the benefits of browser caching and slows down subsequent page loads. Benchmarking on core alone shows that avoiding duplicate aggregates can save over a second across 5 page loads. A patch has already been committed that requires files to be explicitly added to the aggregate, and fixes Drupal core to add the appropriate files unconditionally.



Drupal has supported gzip compression of HTML output for a long time, however for CSS and JavaScript, the files are delivered directly by the webserver, so Drupal has less control. There are webserver based compressors such as Apache’s mod_deflate, but these are not always available. A patch is in the queue that stores compressed versions of aggregated files on write and uses rewrite and header directives in .htaccess that allow these files to be served correctly. Benchmarks show that this patch can make initial page views 20-60% faster, saving anything from 0.3 to 3 seconds total.
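The general shape of that .htaccess technique, sketched here for CSS with standard mod_rewrite and mod_headers directives (this is an illustration, not the exact patch):

# Serve style.css.gz when the browser accepts gzip and the file exists.
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond %{REQUEST_FILENAME}.gz -f
RewriteRule ^(.*)\.css$ $1.css.gz [QSA,L]
<FilesMatch "\.css\.gz$">
  ForceType text/css
  Header set Content-Encoding gzip
</FilesMatch>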



The Drupal 7 release promises some real improvements from a front-end performance point of view. Other performance optimizations will no doubt continue to appear and be refined in contributed modules and themes, as well as in site building best practices and documentation. In Drupal 8 we will hopefully see further improvements in the CSS/JS file aggregation system, increased high-level caching effectiveness and hopefully more tools to help site builders reduce file sizes. If you have yet to try Drupal, download it now and give it a try and tell us in the comments if your site performance improves!




Lossless and Transparency Modes in WebP

By Jyrki Alakuijala, WebP Team

At Google, we are constantly looking at ways to make web pages load faster. One way to do this is by making web images smaller. This is especially important for mobile devices where smaller images save both bandwidth and battery life. Earlier this month, we released version 0.2 of the WebP library that adds support for lossless and transparency modes to compress images. This version provides CPU and memory performance comparable to or better than PNG, yet results in 26% smaller files.

WebP’s improved compression comes from advanced techniques such as dedicated entropy codes for different color channels, exploiting 2D locality of backward reference distances and a color cache of recently used colors. This complements basic techniques such as dictionary coding, Huffman coding and color indexing transform. We think that we've only scratched the surface in improving compression. Our newly added support for alpha transparency with lossy images promises additional gains in this space, helping make WebP an efficient replacement for PNG.
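Trying the new modes from the command line is straightforward. A minimal sketch using cwebp from libwebp 0.2 (file names hypothetical):

cwebp -lossless input.png -o output-lossless.webp
cwebp -q 80 input.png -o output-alpha.webp   # lossy RGB with the alpha channel preserved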

The new WebP modes are supported natively in the latest Beta version of Chrome. The bit stream specification for these new WebP modes has been finalized and the container specification has been updated. We thank the community for their valuable feedback and for helping us evolve WebP as a new image compression format for the web. We encourage you to try these new compression methods on your favorite set of images, check out the code, and continue to provide feedback.

Dr. Jyrki Alakuijala is a Software Engineer with a special interest in data compression. He is a father of five daughters, and sings in the Finnish Choir in Zürich. Before joining Google, Jyrki worked in neurosurgical and radiotherapy development.

Posted by Ashleigh Rentz, Editor Emerita
