
(Cross-posted from the Google Analytics Blog)

Back in Episode 10 of Web Analytics TV (32:00), Lisa C from Melbourne asked how to pull a trending report from Google Analytics for the top organic search landing pages. This was such a great question that we wrote two articles and released sample code describing how you can automate retrieving this data from the Google Analytics Data Export API. But first, let's look at the results.

Here is a graph plotting traffic to the top 100 landing pages for organic search for all of June for www.googlestore.com.

Let’s Analyze:
This is the typical trend graph you can find across the Google Analytics web interface. By itself, all it tells you is that something happened during the spike. What you can't figure out is which page actually increased in traffic; doing so would require a lot more digging.

Now let’s try again:
Here is a stacked area graph of each of the top 100 landing pages for organic search.

Let’s Analyze:
Awesome, right? It's obvious why this is cooler, but let me explain anyway.

Lisa's original graph, above, presents significantly simplified insights. Notice how much more we can get from the stacked graph. We can see that the green page is what caused the big spike. We can also see that the blue and orange pages had interesting changes in traffic patterns, changes we couldn't identify from the first graph. Being able to break down the totals graph is indeed a gold mine for analysis.

Typical actions you, or Lisa (!), can take from this data are to work on getting organic search keywords to send more traffic to the blue page, then to identify the keywords sending traffic to the green and orange pages and see whether they can increase traffic to other pages as well.

Exporting the Data from the web interface:
Anybody can pull this data from the Google Analytics web interface. You simply create a custom report with landing pages and entrances, drill into each landing page, and export the data to a CSV file. Finally, you go through all the CSV files and compile them into a single file for analysis. Let's illustrate:
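The final compilation step above is easy to script. Here is a minimal sketch of merging the per-page CSV exports into one table; the column names (`date`, `entrances`) and the function name are illustrative assumptions, since the exact export format depends on your report:

```python
import csv
import io

def compile_landing_page_csvs(csv_texts):
    """Merge per-landing-page CSV exports into one table.

    csv_texts maps a landing page path to the CSV text exported for it,
    where each CSV has 'date' and 'entrances' columns (assumed names).
    Returns rows of (date, landing_page, entrances), sorted by date.
    """
    rows = []
    for page, text in csv_texts.items():
        reader = csv.DictReader(io.StringIO(text))
        for record in reader:
            rows.append((record["date"], page, int(record["entrances"])))
    return sorted(rows)
```

With the merged rows in hand, one stacked-area chart per landing page falls out directly in any plotting tool.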

Going through each report individually is a LOT of manual work, but we can automate all of this using the Data Export API, turning hours of work into a few minutes!

Using the Data Export API to Automate:
In part one of our series, we demonstrate how to use the Data Export API to automate the exact task above. A user specifies one query to determine the top landing pages. Then, for each landing page, a separate query is used to get the data over time.
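The per-page step can be sketched as building one time-series query per landing page. This is not the released sample code, just a hypothetical helper showing the shape of the n queries; the parameter names follow the Data Export API's query conventions (`dimensions`, `metrics`, `filters`, `start-date`, `end-date`):

```python
def build_trend_queries(top_pages, start_date, end_date):
    """Build one entrances-over-time query per top landing page.

    n pages -> n queries here, plus the initial query that found
    the top pages, for n + 1 queries total.
    """
    queries = []
    for page in top_pages:
        queries.append({
            "dimensions": "ga:date",
            "metrics": "ga:entrances",
            "filters": "ga:landingPagePath==%s" % page,
            "start-date": start_date,
            "end-date": end_date,
        })
    return queries
```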

This is great, and we built it to work with any query with a single dimension. But notice that the number of queries grows with the number of dimension values. In fact, this program requires n + 1 queries, so if you want data for 1,000 dimension values, it will take 1,001 queries.

This is bad because the Data Export API has a daily quota of 10,000 queries. So if you ran this program 10 times with 1,000 dimension values, it would require 10,010 queries, completely using up your quota. Ouch!

Optimizing Data Export API Requests:
To reduce the number of queries required, the second part of this series describes an alternate approach that retrieves the same data while minimizing the number of queries. In this second approach, we use Data Export API filter expressions to return data for multiple dimension values in each request.

This approach dramatically reduces the amount of quota required. In the best case, only 2 queries are required.
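The key trick is that Data Export API filter expressions support OR via a comma, so many landing pages can be packed into one filter. Here is a minimal sketch of that batching; the batch size and helper name are assumptions for illustration (the articles' sample code determines batch size from filter-length limits):

```python
def build_batched_queries(top_pages, start_date, end_date, pages_per_query=10):
    """Combine many landing pages into few queries using OR filters.

    A comma in a Data Export API filter expression means OR, so one
    query can return data for a whole batch of pages, cutting n + 1
    queries down to roughly n / pages_per_query + 1.
    """
    queries = []
    for i in range(0, len(top_pages), pages_per_query):
        batch = top_pages[i:i + pages_per_query]
        filters = ",".join("ga:landingPagePath==%s" % p for p in batch)
        queries.append({
            # Include the page dimension so rows can be split per page.
            "dimensions": "ga:date,ga:landingPagePath",
            "metrics": "ga:entrances",
            "filters": filters,
            "start-date": start_date,
            "end-date": end_date,
        })
    return queries
```

For 100 landing pages and 50 pages per query, this needs just 2 data queries (plus the initial top-pages query), instead of 101.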

Using this second approach, analysts can run this report to their hearts' content. They can do it for different time frames and different dimensions: comparing organic vs. paid traffic, trending keywords by search engine, even comparing traffic by geography.

As we mentioned, we wrote two articles describing both approaches and released the sample code for the application. Let us know what amazing insights you find using this tool.

Have fun!

By Scott Knaster, Google Developers Blog Editor

This summer Google ran an online course called Power Searching with Google. The course was so popular that Peter Norvig and the Research at Google people who created it decided to generalize the course code and framework, and make it into Course Builder, an open source project that’s now available. The Research team points out that Course Builder is an experiment, and there’s a lot of work still to be done, but if you’re interested in this approach, you can join a bunch of schools that are considering using Course Builder.

Speaking of research, you might think that we have little new to learn about the very basic task of boiling water, but of course that’s not true. Researchers at several schools around the world recently collaborated to produce a way to boil water without producing bubbles.



This discovery has many potential practical applications. It could be used to prevent vapor buildup that can cause explosions, or could even lead to discoveries of ways to reduce surface drag or prevent frost from forming. But most important, it’s really, really cool.

Finally, here’s a fun new Easter egg (or is it a valuable new search tool?). In a Google search box, enter the name of your favorite actor, followed by Bacon number. (If you’re unfamiliar with the Bacon number phenomenon, you can find out more.) Maybe you’ll get some ideas about movies to see over the weekend!


Each week our Fridaygram presents cool things from Google and elsewhere that you might not have heard about. Some Fridaygram items aren't related to developer topics, but all of them are interesting to us nerds. For extra credit this week, you can check into the small set of people who have a defined Erdős–Bacon number.

As you may know, Google allows its engineers to spend 20% of their time on projects independent of their regular day-to-day job. For my 20% time, I chose to continue and expand my work maintaining the Linux man-pages.

Since April, we've managed to ship 21 new releases, with a dozen or so new pages, and around 400 major and minor improvements to existing pages.

My work on the Linux man-pages project led me to talk about kernel-userland interface design, testing, and documentation at the recent LinuxConf Europe, where my Zurich colleague Roman Marxer also spoke about Google's recently open-sourced Ganeti virtual server management software.

I was lucky enough to be invited to the immediately following USENIX Linux Kernel Developers Summit, where I joined Google colleagues Andrew Morton, Paul Menage, and Martin Bligh to participate in the discussion of current topics related to kernel development, including the topic of kernel-userland API design, testing, and documentation.

You can read my talk and in-depth coverage of the Kernel Developers Summit at LWN.net. It's available to LWN.net subscribers only until the 20th of September, but you can already see the obligatory group photo.



Googlers Andrew Morton and Paul Menage relaxing at the end of the Linux Kernel Summit, Cambridge, England

(photo credit: Michael Kerrisk)

Ed. note: Post updated to fix typo.