By Jeremy Glassenberg, Platform Manager, Box

This post is part of Who's at Google I/O, a series of guest blog posts written by developers who appeared in the Developer Sandbox at Google I/O 2011.


During the day 2 keynote of Google I/O, I was excited to see Box's integration with the Chromebook's file browser handler getting demoed on the big stage. The integration makes local files and files you encounter on the web easily accessible to cloud services inside Chrome OS.

Chrome's file browser handler utilizes the new HTML5 file system API, designed to enable web applications to interact with local files. This API lets web applications read files, edit files, and create new files within a designated local space on a user's machine. This includes creating binary files for application data, and in Box's case, accessing user-created files to let people easily move their content to the cloud.
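As a rough illustration (not Box's code), asking Chrome for a sandboxed file system and writing a file through the webkit-prefixed API looks something like this (assuming a Chrome build with Blob constructor support):

// Request a temporary, sandboxed file system and write a small file into it.
window.webkitRequestFileSystem(window.TEMPORARY, 1024 * 1024, function(fs) {
  fs.root.getFile('notes.txt', {create: true}, function(fileEntry) {
    fileEntry.createWriter(function(writer) {
      writer.onwriteend = function() { console.log('write complete'); };
      writer.write(new Blob(['hello from the sandboxed file system']));
    });
  });
}, function(err) {
  console.error('File system error:', err);
});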

As mentioned during the Google I/O keynote, the integration between Box and the Chrome OS file browser handler only took our team a weekend to build. We were able to build the integration quickly because of the simplicity of both Chrome's file browser platform and Box's API, both of which were designed to make content integrations like this easy for developers to implement.

In this case, the Quick Importer tool from the Box API made the entire development process just a few steps:

1. We created a Chrome extension manifest to work with Box.
{
  "name": "Box Uploader",
  ...
  "file_browser_handlers": [
    {
      "id": "upload",
      "default_title": "Save to Gallery",  // What the button will display
      "file_filters": [
      ]
    }
  ]
}
2. In the Chrome manifest, we specified the relevant file types to which the service applies. In our case, that's most file types, as seen below. Specialized services may just want certain types, such as images for Picasa.
"file_browser_handlers": [
{
"id": "upload",
"default_title": "Save to Box",
"file_filters": [
"filesystem:*.*"
]
}
],
3. With some JavaScript code connecting to the file browser handler, we set up a way to upload files through Box’s Quick Importer.
// Point the upload helper at Box's Quick Import endpoint.
var fm = new FileManager();
fm.uploadServer = 'https://www.box.net/<...>';

// Files chosen in the Chrome OS file browser are queued on the extension's
// background page; hand each one to the uploader.
if (bgPage && bgPage.filesToUpload.length) {
  var entry;
  while (entry = bgPage.filesToUpload.pop()) {
    entry.file(function(file) {
      fm.uploadFile(file);
    });
  }
}
That's actually all there was to the integration.

Once the file is uploaded to the Box API's Quick Import URL, our page is displayed to authenticate the user, to let the user select a Box folder to save the file, and then to upload the file.
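To give a sense of what that upload step involves, a helper like fm.uploadFile could be written roughly as follows (a hypothetical sketch; FileManager and uploadServer are our own extension code, not a published Box library):

// Hypothetical sketch: post the file to the configured Quick Import URL
// as multipart form data.
FileManager.prototype.uploadFile = function(file) {
  var form = new FormData();
  form.append('file', file, file.name);

  var xhr = new XMLHttpRequest();
  xhr.open('POST', this.uploadServer, true);
  xhr.onload = function() {
    console.log('Upload finished with status ' + xhr.status);
  };
  xhr.send(form);
};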


While such an integration can be customized through our API, Quick Import provided an easy and fast way to connect the two platforms. Developers who prefer more control can instead make direct calls to our API and take advantage of additional features, such as automatic sharing.

Thanks to the simplicity of Chrome's file browser handler and some extra tools in the Box API, our development time was very short (just a weekend), but it could have actually been even quicker. We had a couple of unusual complications that weekend:

1. The Google Chrome team was still experimenting with the file browser, so development from both sides was happening in parallel, which can be a bit tricky. Now that the file browser has been thoroughly tested, you should have an even easier time.

2. I took my girlfriend out a couple times, since her final exams were coming up soon afterward. I love you, Hayley!

Once the content has been uploaded to Box, it’s accessible to many Google services, including Gmail, Google Docs, and Google Calendar, through additional integrations on our site with Google Apps. Ah, the wonders of open platforms.


Jeremy Glassenberg is the Platform Manager at Box, where he oversees partner integrations, API and platform product management, and Box’s community of several thousand developers. In addition to managing Box's developer platform, Jeremy is a part-time blogger at ProgrammableWeb, and a contributor to several open-source projects.

Posted by Scott Knaster, Editor
This is a topic from the Google Webmaster Central blog:

We've improved Webmaster Central's robots.txt analysis tool to recognize Sitemap declarations and relative URLs. Earlier versions weren't aware of Sitemaps at all, and understood only absolute URLs; anything else was reported as Syntax not understood. The improved version now tells you whether your Sitemap's URL and scope are valid. You can also test against relative URLs with a lot less typing.

Reporting is better, too. You'll now be told of multiple problems per line if they exist, unlike earlier versions which only reported the first problem encountered. And we've made other general improvements to analysis and validation.

Imagine that you're responsible for the domain www.example.com and you want search engines to index everything on your site, except for your /images folder. You also want to make sure your Sitemap gets noticed, so you save the following as your robots.txt file:

disalow images

user-agent: *
Disallow:

sitemap: http://www.example.com/sitemap.xml

You visit Webmaster Central to test your site against the robots.txt analysis tool using these two test URLs:

http://www.example.com
/archives

Earlier versions of the tool would have reported this:

[screenshot: earlier robots.txt analysis results]

The improved version tells you more about that robots.txt file:

[screenshot: improved robots.txt analysis results]
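Once the tool has flagged those problems, a corrected robots.txt for this scenario, blocking only the /images folder and declaring the Sitemap, would look like this:

User-agent: *
Disallow: /images/

Sitemap: http://www.example.com/sitemap.xml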
We also want to make sure you've heard about the new unavailable_after meta tag announced by Dan Crow on the Official Google Blog a few weeks ago. This allows for a more dynamic relationship between your site and Googlebot. Just think: on www.example.com, any time you have a temporarily available news story, a limited-time sale, or a promotion page, you can specify the exact date and time you want specific pages to stop being crawled and indexed.

Let's assume you're running a promotion that expires at the end of 2007. In the head of the page www.example.com/2007promotion.html, you would use the following:

<META NAME="GOOGLEBOT"
CONTENT="unavailable_after: 31-Dec-2007 23:59:59 EST">


The second piece of exciting news: the new X-Robots-Tag directive, which adds Robots Exclusion Protocol (REP) META tag support for non-HTML pages! Finally, you can have the same control over your videos, spreadsheets, and other indexed file types. Using the example above, let's say your promotion page is in PDF format. For www.example.com/2007promotion.pdf, you would use the following:

X-Robots-Tag: unavailable_after: 31 Dec 2007 23:59:59 EST


Remember, REP meta tags can be useful for implementing noarchive, nosnippet, and now unavailable_after tags for page-level instruction, as opposed to robots.txt, which is controlled at the domain root. We get requests from bloggers and webmasters for these features, so enjoy. If you have other suggestions, keep them coming. Any questions? Please ask them in the Webmaster Help Group.

JavaScript libraries let developers do more with less code. But JavaScript libraries need to work on a variety of browsers, so using them often means shipping even more code. If jQuery has code to support XMLHttpRequest over ActiveX on an older browser like IE6, then you end up shipping that code even if your application doesn't support IE6. Not only that, but you ship that code to the other 90% of newer browsers that don't need it.
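To make the cost concrete, here is the kind of fallback a cross-browser library has to carry (an illustrative sketch, not jQuery's actual source):

// Use the native XMLHttpRequest where it exists; fall back to ActiveX on old IE.
function createXhr() {
  if (window.XMLHttpRequest) {
    return new XMLHttpRequest();
  }
  return new ActiveXObject('Microsoft.XMLHTTP');
}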


This problem is only going to get worse. Browsers are rushing to implement HTML5 and ECMAScript 5 features like JSON.parse that used to be provided only in library code, but libraries will likely have to keep that code for years, if not decades, to support older browsers.
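The usual library-side pattern looks like this sketch (illustrative, not any particular library's code): use the native JSON.parse where it exists, and ship a slower fallback for browsers that lack it.

// Prefer the native parser; fall back to eval-based parsing on old browsers.
var parseJson = (window.JSON && typeof JSON.parse === 'function')
    ? JSON.parse
    : function(text) { return eval('(' + text + ')'); };

var data = parseJson('{"answer": 42}');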


Lots of compilers (including JSMin, Dojo, YUI, Closure, and Caja) remove unnecessary code from JavaScript to make the code you ship smaller. They seem like a natural place to address this problem. Optimization is just taking into account the context that code is going to run in to improve it; giving compilers information about browsers will help them avoid shipping code that supports marginal browsers to modern browsers.

The JavaScript Knowledge Base (JSKB) on browserscope.org seeks to systematically capture this information in a way that compilers can use.

It collects facts about browsers using JavaScript snippets. For example, the snippet (!!window.JSON && typeof window.JSON.stringify === 'function') is true if JSON is defined. JSKB knows that this is true for Firefox 3.5 but not Netscape 2.0.
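Other facts are captured the same way, as small boolean expressions; for example (illustrative snippets in the same style, not necessarily the exact tests JSKB runs):

// Is a native JSON parser available?
(!!window.JSON && typeof window.JSON.parse === 'function')

// Does the browser support standard DOM event registration?
(typeof window.addEventListener === 'function')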

Caja Web Tools includes a code optimizer that uses these facts. If it sees code like

if (typeof JSON.stringify !== 'function') { /* lots of code */ }

it knows that the body will never be executed on Firefox 3.5, and can optimize it out. The key here is that the developer writes feature tests, not version tests, and as browsers roll out new features, JSKB captures that information, letting compilers produce smaller code for that browser.
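To illustrate the distinction, compare the two styles of test (a sketch; loadJsonPolyfill is a hypothetical helper that pulls in a JSON shim):

// Feature test: JSKB can prove this branch is dead code on browsers with
// native JSON, so an optimizer can strip it for them.
if (typeof JSON.stringify !== 'function') {
  loadJsonPolyfill();
}

// Version test: opaque to the optimizer and brittle as new browsers appear.
if (/MSIE [1-7]\./.test(navigator.userAgent)) {
  loadJsonPolyfill();
}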


The Caja team just released Caja Web Tools, which already uses JSKB to optimize code. We hope that other JavaScript compilers will adopt these techniques. If you're working on a JavaScript optimizer, take a look at our JSON APIs to get an idea of what the JSKB contains.


If you're writing JavaScript library code or application code, the JSKB documentation can suggest good feature tests. And the examples in the Caja Web Tools testbed are good starting places.

