By Dan Rubel, Productivity Enhancer, Dart Editor team

Cross-posted from the Chromium Blog

Today's release of the Dart SDK and Editor is the first beta release, and contains performance and productivity improvements across the platform. This latest release helps Dart developers automate code evolution, produce smaller JavaScript code and deploy Dart web apps.

The Editor's analysis engine, responsible for reporting warnings and errors, has been completely rewritten and is 20% faster at parsing and analyzing. Now there's no need to run all the unit tests just to discover a typo: the Dart Editor watches your back as you type.

In addition, Dart Editor makes it easier for developers to manage an evolving app. Some of the new features include:

  • "Rename Library" refactoring
  • "Convert Method to Getter" and "Convert Getter to Method" refactorings
  • "Import Library" quick fix
  • "Create Class" and "Create part" quick fixes

Code completion has also improved. For example, completion is now camel-case aware: type iE and Dart Editor finds isEmpty.

Compiling Dart to JavaScript now results in smaller code. For example, some Dart programs that use reflection and HTML can compile to JavaScript that is 3.7x smaller than the output of the previous release.
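
If you compile from the command line rather than from the Editor, the SDK's dart2js tool does the translation. An invocation along these lines produces minified output (check dart2js --help for the exact flags in your SDK version):

dart2js --minify --out=app.js app.dart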

Dart VM performance has also improved. Compared against the previous release of Dart, DeltaBlue is 33% faster and Tracer is 40% faster. This release also includes full SIMD acceleration in the Dart VM.
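
The SIMD support surfaces in dart:typed_data as value types such as Float32x4, which the VM can lower to hardware vector instructions. A minimal sketch:

import 'dart:typed_data';

void main() {
  // Each Float32x4 packs four 32-bit floats; the addition below is a
  // single vector operation rather than four scalar additions.
  var a = new Float32x4(1.0, 2.0, 3.0, 4.0);
  var b = new Float32x4(5.0, 6.0, 7.0, 8.0);
  var sum = a + b;
  print(sum.x); // 6.0
}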

Finally, deploying a Dart web app is now easier, with the beta pub deploy command. It creates a directory with your app's code and assets and prepares it for hosting on your favorite web server. You can use this command from Dart Editor or the pub command-line utility.

Those are just the highlights; there are more improvements across the platform. You can read the full release notes for more details and changes. You can download the latest version of Dart Editor, including everything you need for Dart development, from dartlang.org. We look forward to your feedback!



Written by Dan Rubel, Dart Editor Team

Posted by Scott Knaster, Editor
The webmaster tools team has a very exciting mission: we dig into our logs, find as much useful information as possible, and pass it on to you, the webmasters. Our reward is that you more easily understand what Google sees, and why some pages don't make it to the index.

The latest batch of information that we've put together for you is the amount of traffic between Google and a given site. We show you the number of requests, number of kilobytes (yes, yes, I know that tech-savvy webmasters can usually dig this out, but our new charts make it really easy to see at a glance), and the average document download time. You can see this information in chart form, as well as in hard numbers (the maximum, minimum, and average).

For instance, here's the number of pages Googlebot has crawled in the Webmaster Central blog over the last 90 days. The maximum number of pages Googlebot has crawled in one day is 24 and the minimum is 2. That makes sense, because the blog was launched less than 90 days ago, and the chart shows that the number of pages crawled per day has increased over time. The number of pages crawled is sometimes more than the total number of pages in the site -- especially if the same page can be accessed via several URLs. So http://www.matrixar.com/2006/10/learn-more-about-googlebots-crawl-of.html and http://www.matrixar.com/2006/10/learn-more-about-googlebots-crawl-of.html#links are different URLs, but point to the same page (the second points to an anchor within the page).
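
To see why those two count as one page, note that the fragment is the only difference between them, and fragments are resolved by the browser rather than sent to the server. A small sketch (in Dart purely for illustration, using the URLs above):

void main() {
  var a = Uri.parse('http://www.matrixar.com/2006/10/learn-more-about-googlebots-crawl-of.html');
  var b = Uri.parse('http://www.matrixar.com/2006/10/learn-more-about-googlebots-crawl-of.html#links');
  // The path is identical; the fragment never appears in the HTTP
  // request that a browser or Googlebot actually makes.
  print(a.path == b.path); // true
  print(b.fragment);       // links
}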


And here's the average number of kilobytes downloaded from this blog each day. As you can see, as the site has grown over the last two and a half months, the average number of kilobytes downloaded per day has increased as well.


The first two reports can help you diagnose the impact that changes in your site may have on its coverage. If you overhaul your site and dramatically reduce the number of pages, you'll likely notice a drop in the number of pages that Googlebot accesses.

The average document download time can help pinpoint subtle networking problems. If the average time spikes, you might have network slowdowns or bottlenecks that you should investigate. Here's the report for this blog that shows that we did have a short spike in early September (the maximum time was 1057 ms), but it quickly went back to a normal level, so things now look OK.

In general, the load time of a page doesn't affect its ranking, but we wanted to give this info because it can help you spot problems. We hope you will find this data as useful as we do!
I've seen a lot of questions lately about robots.txt files and Googlebot's behavior. Last week at SES, I spoke on a new panel called the Bot Obedience course. And a few days ago, some other Googlers and I fielded questions on the WebmasterWorld forums. Here are some of the questions we got:

If my site is down for maintenance, how can I tell Googlebot to come back later rather than to index the "down for maintenance" page?
You should configure your server to return a status of 503 (Service Unavailable) rather than 200 (successful). That lets Googlebot know to try the pages again later.
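
As a concrete sketch of the idea (shown here in Dart with dart:io purely for illustration; any server stack can set the same status and header):

import 'dart:io';

main() async {
  // During maintenance, answer every request with 503 instead of 200
  // so crawlers treat the outage as temporary and retry later.
  var server = await HttpServer.bind('localhost', 8080);
  await for (var request in server) {
    request.response
      ..statusCode = 503                   // Service Unavailable
      ..headers.set('Retry-After', '3600') // optional hint: retry in an hour
      ..write('Down for maintenance. Please check back soon.');
    await request.response.close();
  }
}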

What should I do if Googlebot is crawling my site too much?
You can contact us -- we'll work with you to make sure we don't overwhelm your server's bandwidth. We're experimenting with a feature in our webmaster tools for you to provide input on your crawl rate, and have gotten great feedback so far, so we hope to offer it to everyone soon.

Is it better to use the meta robots tag or a robots.txt file?
Googlebot obeys either, but meta tags apply to single pages only. If you have a number of pages you want to exclude from crawling, you can structure your site in such a way that you can easily use a robots.txt file to block those pages (for instance, put the pages into a single directory).
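
For example, to keep a single page out of the index, you would put this line in that page's <head> section:

<meta name="robots" content="noindex">

whereas a single robots.txt line such as Disallow: /private/ can cover an entire directory at once.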

If my robots.txt file contains a directive for all bots as well as a specific directive for Googlebot, how does Googlebot interpret the line addressed to all bots?
If your robots.txt file contains both a generic directive for all bots and a directive specifically for Googlebot, Googlebot obeys the lines specifically directed at it and ignores the generic directive.

For instance, for this robots.txt file:
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow: /cgi-bin/
Googlebot will crawl everything in the site other than pages in the cgi-bin directory.

For this robots.txt file:
User-agent: *
Disallow: /
Googlebot won't crawl any pages of the site.

If you're not sure how Googlebot will interpret your robots.txt file, you can use our robots.txt analysis tool to test it. You can also test how Googlebot will interpret changes to the file.

For complete information on how Googlebot and Google's other user agents treat robots.txt files, see our webmaster help center.