News and Tutorials from Votre Codeur | SEO | Website Creation | Software Development

Python Library for Google Analytics Management API (2013)

It’s been only seven weeks since we launched the Google Analytics Management API, and we’ve already heard a lot of great feedback. Thanks!

Since Python is one of our more popular languages, we’ve updated the Google Analytics Python Client Library to access all 5 feeds of the Management API. Now it’s easier than ever to get your configuration data from the API.

To show you how simple it is to use the library, here is an example which returns all the goal names for a profile:
import gdata.analytics.client

APP_NAME = 'goal_names_demo'
my_client = gdata.analytics.client.AnalyticsClient(source=APP_NAME)

# Authorize with ClientLogin.
my_client.client_login(
    'INSERT_USER_NAME',
    'INSERT_PASSWORD',
    APP_NAME,
    service='analytics')

# Build a query for one profile's goal feed.
query = gdata.analytics.client.GoalQuery(
    acct_id='INSERT_ACCOUNT_ID',
    web_prop_id='INSERT_WEB_PROP_ID',
    profile_id='INSERT_PROFILE_ID')

# Get and print results.
results = my_client.GetManagementFeed(query)
for entry in results.entry:
  print 'Goal number = %s' % entry.goal.number
  print 'Goal name = %s' % entry.goal.name
  print 'Goal value = %s' % entry.goal.value

To get you started, we wrote a reference example which accesses all the important information for each feed. We also added links to the source and PyDoc from the Management API Libraries and Examples page. Have a look and let us know what you think!

2013, By: Seo Master

Got big JSON? BigQuery expands data import for large scale web apps (2013)

By Ryan Boyd, Developer Advocate

JSON is the data format of the web. JSON is used to power most modern websites, is a native format for many NoSQL databases hosting top web applications, and provides the primary data format in many REST APIs. Google BigQuery, our cloud service for ad-hoc analytics on big data, has now added support for JSON and the nested/repeated structure inherent in the data format.

JSON opens the door to a more object-oriented view of your data compared to CSV, the original data format supported by BigQuery. It removes the need for duplication of data required when you flatten records into CSV. Here are some examples of data you might find a JSON format useful for:
  • Log files, with multiple headers and other name-value pairs.
  • User session activities, with information about each activity occurring nested beneath the session record.
  • Sensor data, with variable attributes collected in each measurement.
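As a concrete illustration (with made-up field names), a user session and its activities can be written as a single newline-delimited JSON record, where CSV would force you to repeat the session fields on every activity row:

```python
import json

# One session with its activities nested beneath it. In flattened CSV,
# the session_id and user fields would be duplicated on every activity row.
session = {
    "session_id": "s-1001",
    "user": "alice",
    "activities": [  # a repeated, nested field
        {"page": "/home", "seconds": 12},
        {"page": "/products", "seconds": 45},
    ],
}

# BigQuery's JSON import expects one object per line (newline-delimited JSON).
print(json.dumps(session))
```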
Nested/repeated data support is one of our most requested features. And while BigQuery's underlying infrastructure supports it, we'd only enabled it in a limited fashion through M-Lab's test data. Today, however, developers can use JSON to get any nested/repeated data into and out of BigQuery.

For more information on importing JSON and nested/repeated data into BigQuery, check out the new guide in our documentation. You should also see the Dealing with Data section for details on the new querying syntax available for this type of data.

Improvements to Data Loading Pipeline

We’ve made it much easier to ingest data into BigQuery – up to 1TB of data per load job, with each file up to 100GB uncompressed JSON or CSV. We’ve also eliminated the 2 imports per minute rate limit, enabling you to submit all your ingestion jobs and let us handle the queuing as necessary. In a recent project I’ve been working on, import jobs for 3TB of data that previously took me 12 hours to run now take me only 36 minutes – a 20x improvement!
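One practical consequence of the per-file cap: a large newline-delimited JSON export may need to be split into several files before loading. Here is a minimal sketch of such a splitter (the helper name and the tiny limit in the test are ours, purely for illustration; it assumes no single record exceeds the limit):

```python
import os

def split_ndjson(path, max_bytes, out_prefix):
    """Split a newline-delimited JSON file into parts, each at most
    max_bytes, never breaking a record across files."""
    parts, out, written = [], None, 0
    with open(path) as src:
        for line in src:
            # Start a new part if this record would push us past the limit.
            if out is None or written + len(line) > max_bytes:
                if out:
                    out.close()
                name = "%s-%03d.json" % (out_prefix, len(parts))
                parts.append(name)
                out = open(name, "w")
                written = 0
            out.write(line)
            written += len(line)
    if out:
        out.close()
    return parts
```

Each resulting part can then be submitted as its own load job, and BigQuery queues them as needed.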

We’ve published a new Ingestion Cookbook that explains how to take advantage of these new limits.

We’re initiating a small trusted tester program aimed at making it easier to move your data from the App Engine Datastore to BigQuery for analysis. If you store a lot of data in Datastore and are also using BigQuery, we’d like to hear from you. Please sign up now to be considered for the trusted tester program.

Learn more this week

Michael Manoochehri, Siddartha Naidu and I are in London this week talking about BigQuery and these new features at the Strata big data conference. Ju-kay Kwek will also be talking about BigQuery at the Interop NYC conference tomorrow. Please stop by, say hi, and let us know what you’re doing with big data.

We’ll also be producing a Google Developers Live session from Campus London on Friday at 16:00 BST (15:00 GMT).


Ryan Boyd is a Developer Advocate, focused on big data. He's been at Google for 6 years and previously helped build out the Google Apps ISV ecosystem. He published his first book, "Getting Started with OAuth 2.0", with O'Reilly.

Posted by Scott Knaster, Editor

What's good for a Business Blog: A Sub-Domain, Sub-Folder or a New Domain (2013)

You have probably heard that hosting a business blog will help your internet marketing plan, and you have started planning one. Business blogging helps regardless of your website's goal: lead generation, running a shopping cart, or simply building your brand and educating potential customers. It will help your SEO as well.

So what’s the best practice to host a business blog, on a sub-domain, in a folder structure or on an entirely new domain? Here are the pros and cons of choosing between these options for business blogging.

Hosting a business blog on a sub-domain or in a sub-folder has the following benefits:


Measuring Speed the Slow Way (2013)


Let's say you figured out a wicked-cool way to speed up how quickly your website loads. You know with great certainty that the most popular web page will load much, much faster. Then, you remember that the page always loads much, much faster in your browser with the web server running on your development box. You need numbers that represent what your users will actually experience.

Depending on your development process, you may have several measuring opportunities such as after a live release, on a staging server, on a continuous build, or on your own development box.

Only a live release gives numbers that users experience--through different types of connections, different computers, different browsers. But, it comes with some challenges:
  • You must instrument your site (one example tool is jiffy-web).
  • You must isolate changes.
    • The most straightforward way to do that is to release only one change at a time. That avoids having multiple changes altering performance numbers--for better or for worse--at the same time.
  • Consider releasing changes to a subset of users.
    • That helps safeguard against real-world events like holidays, big news days, exploding hard drives, and, perhaps the worst possible fate of all: a slashdotting.
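One common way to release to a subset of users is deterministic bucketing: hash a stable user identifier into, say, 100 buckets and enable the change only for buckets below the rollout percentage. A sketch, not tied to any particular serving stack:

```python
import hashlib

def in_rollout(user_id, percent):
    """Return True if user_id falls inside the rollout percentage.

    Hashing gives every user a stable bucket from 0-99, so each user
    consistently sees either the old or the new version of the site.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent
```

Because the bucket depends only on the user id, you can dial `percent` up gradually while comparing performance numbers between the two groups.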
Measuring real traffic is important, but performance should also be measured earlier in the development process. Millions of users will not hit your development web server--they won't, right?! You need a way to measure your pages without that.

Today, Steve Souders has released Hammerhead, a Firefox plug-in that is just the ticket for measuring web page load times. It has a sweet feature that repeatedly loads pages both with and without the browser cache so you can understand different use cases. One thing Hammerhead will not do for you is slow down your web connection. The page load times that you measure on your development machine will likely be faster than your users' wildest dreams.
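If you want a rough, scriptable analogue of Hammerhead's repeated-load measurement, you can time fetches yourself. A minimal sketch (our own helper, not part of Hammerhead; it times only the raw fetch, not parsing or rendering, so the numbers will flatter your page):

```python
import time

def time_loads(fetch, runs=5):
    """Call fetch() several times and return per-run timings in seconds.

    fetch is any zero-argument callable that retrieves the page, e.g.
    lambda: urllib.request.urlopen(url).read().
    """
    timings = []
    for _ in range(runs):
        start = time.time()
        fetch()
        timings.append(time.time() - start)
    return timings

def summarize(timings):
    """Return (best, average); the best run roughly mimics a warm cache."""
    return min(timings), sum(timings) / len(timings)
```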

Measuring page load times with a real DSL or dial-up connection would be ideal, but if you cannot do that, all hope is not lost. You can try tools that simulate slower connection speeds on a single box (please share your own favorite tools in the comments). Each of the tools below is easy to install and offers options to simulate slower connections, but each has some caveats to keep in mind.

Firefox Throttle hooks into the WinSock API to limit bandwidth and avoids using proxy settings. (If you use it, be sure to disable "burst-mode".) Right now, Firefox Throttle only limits bandwidth. That means it controls how much data arrives in a given time period after the first bits arrive. It does not limit latency. Latency controls how long it takes packets to travel to and from the server. See Wikipedia's Relationship between latency and throughput for more details. For certain webpages, latency can make up a large part of the overall load time. The next Firefox Throttle release is expected to include latency delays and other webmaster-friendly features to simulate slower, less-reliable connections. With these enhancements, Firefox Throttle will be an easy recommendation.

Fiddler and Charles act as proxies, and, as a result, they make browsers behave rather differently. For instance, IE and Firefox drastically limit the maximum number of connections (IE8 from 60+ down to 6, and FF3 from 30 down to 8). If you happen to know that all your users go through a proxy anyway, then this will not matter to you. Otherwise, it can mean that web pages load substantially differently.

If you have more time and hardware with which to tinker, you may want to check out tools like dummynet (FreeBSD or Mac OS X), or netem (Linux). They have even more knobs and controls and can be put between the web browser hardware and the serving hardware.

Measurements at each stage of web development can guide performance improvements. Hammerhead combined with a connection simulator like Firefox Throttle can be a great addition to your web development tool chest.