News and Tutorials from Votre Codeur | SEO | Website Creation | Software Development

Google now indexes SVG

Webmaster Level: All

You can now use Google search to find SVG documents. SVG is an open, XML-based format for vector graphics with support for interactive elements. We’re big fans of open standards, and our mission is to organize the world’s information, so indexing SVG is a natural step.

We index SVG content whether it is in a standalone file or embedded directly in HTML. The web is big, so it may take some time before we crawl and index most SVG files, but as of today you may start seeing them in your search results. If you want to see it yourself, try searching for [sitemap site:fastsvg.com] or [HideShow site:svg-whiz.com].

If you host SVG files and you wish to exclude them from Google’s search results, you can use the “X-Robots-Tag: noindex” directive in the HTTP header.
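
For example, on an Apache server with the mod_headers module enabled, a configuration along the following lines (a sketch of one common approach, not the only way to do it) would attach that directive to every SVG file served:

<FilesMatch "\.svg$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>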

Check out Webmaster Central for a full list of file types we support.


Introducing the Structured Data Dashboard

Webmaster level: All

Structured data is becoming an increasingly important part of the web ecosystem. Google makes use of structured data in a number of ways including rich snippets which allow websites to highlight specific types of content in search results. Websites participate by marking up their content using industry-standard formats and schemas.

To provide webmasters with greater visibility into the structured data that Google knows about for their website, today we’re introducing a new feature in Webmaster Tools: the Structured Data Dashboard. The Structured Data Dashboard has three views: site, item type and page-level.

Site-level view
At the top level, the Structured Data Dashboard, which is under Optimization, aggregates this data (by root item type and vocabulary schema). Root item type means an item that is not an attribute of another item on the same page. For example, the site below has about 2 million schema.org annotations for Books (“http://schema.org/Book”).
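
As an illustration, each of those annotations comes from schema.org markup embedded in the site’s pages; a minimal microdata snippet for a Book (with made-up values) looks like this:

<div itemscope itemtype="http://schema.org/Book">
  <span itemprop="name">An Example Book Title</span>
  by <span itemprop="author">Jane Example</span>
</div>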


Itemtype-level view
It also provides per-page details for each item type, as seen below:


Google parses and stores a fixed number of pages for each site and item type, sorted in decreasing order by the time they were crawled. We also keep all their structured data markup. For certain item types we also provide specialized preview columns, as seen in the example below (e.g. “Name” is specific to schema.org Product).


The default sort order puts the most recently added structured data first, so it’s easy to inspect your newest markup.

Page-level view
Last but not least, we have a details page showing all attributes of every item type on the given page (as well as a link to the Rich Snippet testing tool for the page in question).


Webmasters can use the Structured Data Dashboard to verify that Google is picking up new markup, as well as to detect problems with existing markup, for example by monitoring changes in instance counts during site redesigns.


Changes in the Chrome user agent

Webmaster Level: Intermediate to Advanced

The Chrome team is exploring a few changes to Chrome’s UA string. These changes are designed to provide additional details in the user-agent, remove redundancy, and increase compatibility with Internet Explorer. They’re also happening in conjunction with similar changes in Firefox 4.

We intend to ship Chrome 11 with these changes, assuming they don't cause major web compatibility problems. To test them out and ensure your website remains compatible with Chrome, we recommend trying the Chrome Dev and Beta channel builds. If you have any questions, please check out the blog post on the Chromium blog or drop us a line at our help forum.


Reorganizing internal vs. external backlinks

Webmaster level: All

Today we’re making a change to the way we categorize link data in Webmaster Tools. As you know, Webmaster Tools lists links pointing to your site in two separate categories: links coming from other sites, and links from within your site. Today’s update won’t change your total number of links, but will hopefully present your backlinks in a way that more closely aligns with your idea of which links are actually from your site vs. from other sites.

You can manage many different types of sites in Webmaster Tools: a plain domain name (example.com), a subdomain (www.example.com or cats.example.com), or a domain with a subfolder path (www.example.com/cats/ or www.example.com/users/catlover/). Previously, only links that started with your site’s exact URL would be categorized as internal links: so if you entered www.example.com/users/catlover/ as your site, links from www.example.com/users/catlover/profile.html would be categorized as internal, but links from www.example.com/users/ or www.example.com would be categorized as external links. This also meant that if you entered www.example.com as your site, links from example.com would be considered external because they don’t start with the same URL as your site (they don’t contain www).

Most people think of example.com and www.example.com as the same site these days, so we’re changing it such that now, if you add either example.com or www.example.com as a site, links from both the www and non-www versions of the domain will be categorized as internal links. We’ve also extended this idea to include other subdomains, since many people who own a domain also own its subdomains—so links from cats.example.com or pets.example.com will also be categorized as internal links for www.example.com.

Links for www.google.com:

Previously categorized as...
  External links: www.example.com/, www.example.org/stuff.html, scholar.google.com/, sketchup.google.com/, google.com/
  Internal links: www.google.com/, www.google.com/stuff.html, www.google.com/support/webmasters/

Now categorized as...
  External links: www.example.com/, www.example.org/stuff.html
  Internal links: scholar.google.com/, sketchup.google.com/, google.com/, www.google.com/, www.google.com/stuff.html, www.google.com/support/webmasters/

If you own a site that’s on a subdomain (such as cats.example.com) or in a subfolder (www.google.com/support/webmasters/) and don’t own the root domain, you’ll still only see links from URLs starting with that subdomain or subfolder in your internal links, and all others will be categorized as external links. We’ve made a few backend changes so that these numbers should be even more accurate for you.

Note that, if you own a root domain like example.com or www.example.com, your number of external links may appear to go down with this change; this is because, as described above, some of the URLs we were previously classifying as external links will have moved into the internal links report. Your total number of links (internal + external) should not be affected by this change.

As always, drop us a comment or join our Webmaster Help Forum if you have questions!


6 Quick Tips for International Websites

Note from the editors: After previously looking into various ways to handle internationalization for Google’s web-search, here’s a post from Google Web Studio team members with tips for web developers.

Many websites exist in more than one language, and more and more websites are made available for more than one language. Yet, building a website for more than one language doesn’t simply mean translation, or localization (L10N), and that’s it. It requires a few more things, all of which are related to internationalization (I18N). In this post we share a few tips for international websites.

1. Make pages I18N-ready in the markup, not the style sheets

Language and directionality are inherent to the contents of the document. If possible, you should therefore always use markup, not style sheets, for internationalization purposes. Use @lang and @dir, at least on the html element:

<html lang="ar" dir="rtl">

Avoid coming up with your own solutions like special classes or IDs.

As for I18N in style sheets, you can’t always rely on CSS: The CSS spec defines that conforming user agents may ignore properties like direction or unicode-bidi. (For XML, the situation changes again. XML doesn’t offer special internationalization markup, so here it’s advisable to use CSS.)

2. Use one style sheet for all locales

Instead of creating separate style sheets for LTR and RTL directionality, or even each language, bundle everything in one style sheet. That makes your internationalization rules much easier to understand and maintain.

So instead of embedding an alternative style sheet like

<link href="default.rtl.css" rel="stylesheet">

just use your existing

<link href="default.css" rel="stylesheet">

When taking this approach you’ll need to complement existing CSS rules with their international counterparts:

3. Use the [dir='rtl'] attribute selector

Since we recommend sticking with the style sheet you already have (tip #2), you need a different way of selecting the elements that must be styled differently for the other directionality. As RTL contents require specific markup (tip #1), this should be easy: For most modern browsers, we can simply use [dir='rtl'].

Here’s an example:

aside {
 float: right;
 margin: 0 0 1em 1em;
}

[dir='rtl'] aside {
 float: left;
 margin: 0 1em 1em 0;
}

4. Use the :lang() pseudo class

To target documents of a particular language, use the :lang() pseudo class. (Note that we’re talking documents here, not text snippets, as targeting snippets of a particular language makes things a little more complex.)

For example, if you discover that bold formatting doesn’t work very well for Chinese documents (which indeed it does not), use the following:

:lang(zh) strong,
:lang(zh) b {
 font-weight: normal;
 color: #900;
}

5. Mirror left- and right-related values

When working with both LTR and RTL contents it’s important to mirror all the values that change directionality. Among the properties to watch out for is everything related to borders, margins, and paddings, but also position-related properties, float, or text-align.

For example, what’s text-align: left in LTR needs to be text-align: right in RTL.
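
As a small sketch (the .note class name is purely illustrative), a rule and its mirrored RTL counterpart might look like this:

.note {
 text-align: left;
 padding-left: 1em;
 border-left: 2px solid #ccc;
}

[dir='rtl'] .note {
 text-align: right;
 padding-left: 0;
 padding-right: 1em;
 border-left: none;
 border-right: 2px solid #ccc;
}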

There are tools to make it easy to “flip” directionality. One of them is CSSJanus, though it has been written for the “separate style sheet” realm, not the “same style sheet” one.

6. Keep an eye on the details

Watch out for the following items:
  • Images designed for left or right, like arrows or backgrounds, light sources in box-shadow and text-shadow values, and JavaScript positioning and animations: These may require being swapped and accommodated for in the opposite directionality.
  • Font sizes and fonts, especially for non-Latin alphabets: Depending on the script and font, the default font size may be too small. Consider tweaking the size and, if necessary, the font.
  • CSS specificity: When using the [dir='rtl'] (or [dir='ltr']) hook (tip #3), you’re using a selector of higher specificity. This can lead to issues. Just have an eye out, and adjust accordingly.

If you have any questions or feedback, check the Internationalization Webmaster Help Forum, or leave your comments here.


Responsive design – harnessing the power of media queries

Webmaster Level: Intermediate / Advanced

We love data, and spend a lot of time monitoring the analytics on our websites. Any web developer doing the same will have noticed the increase in traffic from mobile devices of late. Over the past year we’ve seen many key sites garner a significant percentage of pageviews from smartphones and tablets. These represent large numbers of visitors, with sophisticated browsers which support the latest HTML, CSS, and JavaScript, but which also have limited screen space with widths as narrow as 320 pixels.

Our commitment to accessibility means we strive to provide a good browsing experience for all our users. We faced a stark choice between creating mobile specific websites, or adapting existing sites and new launches to render well on both desktop and mobile. Creating two sites would allow us to better target specific hardware, but maintaining a single shared site preserves a canonical URL, avoiding any complicated redirects, and simplifies the sharing of web addresses. With a mind towards maintainability we leant towards using the same pages for both, and started thinking about how we could fulfill the following guidelines:
  1. Our pages should render legibly at any screen resolution
  2. We mark up one set of content, making it viewable on any device
  3. We should never show a horizontal scrollbar, whatever the window size


Stacked content, tweaked navigation and rescaled images – Chromebooks
Implementation

As a starting point, simple, semantic markup gives us pages which are more flexible and easier to reflow if the layout needs to be changed. By ensuring the stylesheet enables a liquid layout, we're already on the road to mobile-friendliness. Instead of specifying width for container elements, we started using max-width. In place of height we used min-height, so larger fonts or multi-line text don’t break the container’s boundaries. To prevent fixed width images “propping open” liquid columns, we apply the following CSS rule:

img {
max-width: 100%;
}


Liquid layout is a good start, but can lack a certain finesse. Thankfully media queries are now well-supported in modern browsers including IE9+ and most mobile devices. These can make the difference between a site that degrades well on a mobile browser, vs. one that is enhanced to take advantage of the streamlined UI. But first we have to take into account how smartphones represent themselves to web servers.

Viewports

When is a pixel not a pixel? When it’s on a smartphone. By default, smartphone browsers pretend to be high-resolution desktop browsers, and lay out a page as if you were viewing it on a desktop monitor. This is why you get a tiny-text “overview mode” that’s impossible to read before zooming in. The default viewport width for the default Android browser is 800px, and 980px for iOS, regardless of the number of actual physical pixels on the screen.

In order to trigger the browser to render your page at a more readable scale, you need to use the viewport meta element:

<meta name="viewport" content="width=device-width, initial-scale=1">


Mobile screen resolutions vary widely, but most modern smartphone browsers currently report a standard device-width in the region of 320px. If your mobile device actually has a width of 640 physical pixels, then a 320px wide image would be sized to the full width of the screen, using double the number of pixels in the process. This is also the reason why text looks so much crisper on the small screen – double the pixel density as compared to a standard desktop monitor.

The useful thing about setting the width to device-width in the viewport meta tag is that it updates when the user changes the orientation of their smartphone or tablet. Combining this with media queries allows you to tweak the layout as the user rotates their device:

@media screen and (min-width:480px) and (max-width:800px) {
  /* Target landscape smartphones, portrait tablets, narrow desktops */
}

@media screen and (max-width:479px) {
  /* Target portrait smartphones */
}


In reality you may find you need to use different breakpoints depending on how your site flows and looks on various devices. You can also use the orientation media query to target specific orientations without referencing pixel dimensions, where supported.


@media all and (orientation: landscape) {
  /* Target device in landscape mode */
}

@media all and (orientation: portrait) {
  /* Target device in portrait mode */
}



Stacked content, smaller images – Cultural Institute
A media queries example

We recently re-launched the About Google page. Apart from setting up a liquid layout, we added a few media queries to provide an improved experience on smaller screens, like those on a tablet or smartphone.

Instead of targeting specific device resolutions we went with a relatively broad set of breakpoints. For a screen resolution wider than 1024 pixels, we render the page as it was originally designed, according to our 12-column grid. Between 801px and 1024px, you get to see a slightly squished version thanks to the liquid layout.

Only if the screen resolution drops to 800 pixels or below will content that’s not considered core be sent to the bottom of the page:


@media screen and (max-width: 800px) {
/* specific CSS */
}


With a final media query we enter smartphone territory:


@media screen and (max-width: 479px) {
/* specific CSS */
}


At this point, we’re not loading the large image anymore and we stack the content blocks. We also added additional whitespace between the content items so they are more easily identified as different sections.

With these simple measures we made sure the site is usable on a wide range of devices.


Stacked content and the removal of large image – About Google
Conclusion

It’s worth bearing in mind that there’s no simple solution to making sites accessible on mobile devices and narrow viewports. Liquid layouts are a great starting point, but some design compromises may need to be made. Media queries are a useful way of adding polish for many devices, but remember that 25% of visits are made from those desktop browsers that do not currently support the technique and there are some performance implications. And if you have a fancy widget on your site, it might work beautifully with a mouse, but not so great on a touch device where fine control is more difficult.

The key is to test early and test often. Any time spent surfing your own sites with a smartphone or tablet will prove invaluable. When you can’t test on real devices, use the Android SDK or iOS Simulator. Ask friends and colleagues to view your sites on their devices, and watch how they interact too.

Mobile browsers are a great source of new traffic, and learning how best to support them is an exciting new area of professional development.


Now you can polish up Google’s translation of your website

Webmaster level: All
(Cross-posted on the Google Translate Blog)

Since we first launched the Website Translator plugin back in September 2009, more than a million websites have added the plugin. While we’ve kept improving our machine translation system since then, we may not reach perfection until someone invents full-blown Artificial Intelligence. In other words, you’ll still sometimes run into translations we didn’t get quite right.

So today, we’re launching a new experimental feature (in beta) that lets you customize and improve the way the Website Translator translates your site. Once you add the customization meta tag to a webpage, visitors will see your customized translations whenever they translate the page, even when they use the translation feature in Chrome and Google Toolbar. They’ll also now be able to ‘suggest a better translation’ when they notice a translation that’s not quite right, and later you can accept and use that suggestion on your site.
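
As a rough sketch, the embed code produced by the Website Translator setup looks something like the following; the meta tag’s content value is a placeholder for the customization ID you receive, and pageLanguage is your site’s source language:

<meta name="google-translate-customization" content="YOUR_CUSTOMIZATION_ID">
<div id="google_translate_element"></div>
<script type="text/javascript">
function googleTranslateElementInit() {
  new google.translate.TranslateElement({pageLanguage: 'en'}, 'google_translate_element');
}
</script>
<script type="text/javascript" src="//translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"></script>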

To get started:
  1. Add the Website Translator plugin and customization meta tag to your website
  2. Then translate a page into one of 60+ languages using the Website Translator
To tweak a translation:
  1. Hover over a translated sentence to display the original text
  2. Click on ‘Contribute a better translation’
  3. And finally, click on a phrase to choose an automatic alternative translation -- or just double-click to edit the translation directly.
For example, if you’re translating your site into Spanish and you want the word “Cat” to stay as “Cat” rather than be translated to “gato”, you can tweak it as follows:


If you’re signed in, the corrections made on your site will go live right away -- the next time a visitor translates a page on your website, they’ll see your correction. If one of your visitors contributes a better translation, the suggestion will wait until you approve it. You can also invite other editors to make corrections and add translation glossary entries. You can learn more about these new features in the Help Center.

This new experimental feature is currently free of charge. We hope this feature, along with Translator Toolkit and the Translate API, can provide a low cost way to expand your reach globally and help to break down language barriers.


Mo’ better to also detect “mobile” user-agent

Webmaster Level: Intermediate to Advanced

Here’s a trending User-Agent detection misstep we hope to help you prevent: While it seems completely reasonable to key off the string “android” in the User-Agent and then redirect users to your mobile version, there’s a small catch... Android tablets were just released! Similar to mobile, the User-Agent on Android tablets also contains “android,” yet tablet users usually prefer the full desktop version over the mobile equivalent. If your site matches “android” and then automatically redirects users, you may be forcing Android tablet users into a sub-optimal experience.

As a solution for mobile sites, our Android engineers recommend specifically detecting “mobile” in the User-Agent string as well as “android.” Let’s run through a few examples.

With a User-Agent like this:
Mozilla/5.0 (Linux; U; Android 3.0; en-us; Xoom Build/HRI39) AppleWebKit/534.13 (KHTML, like Gecko) Version/4.0 Safari/534.13
since there is no “mobile” string, serve this user the desktop version (or a version customized for Android large-screen touch devices). The User-Agent tells us they’re coming from a large-screen device, the XOOM tablet.

On the other hand, this User-Agent:
Mozilla/5.0 (Linux; U; Android 2.2.1; en-us; Nexus One Build/FRG83) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1
contains “mobile” and “android,” so serve the web surfer on this Nexus One the mobile experience!

You’ll notice that Android User-Agents have commonalities: both contain “Android” along with a WebKit browser token, but only the phone’s User-Agent includes the string “Mobile.”


While you may still want to detect “android” in the User-Agent to implement Android-specific features, such as touch-screen optimizations, our main message is: if your mobile site depends on UA sniffing, please detect the strings “mobile” and “android,” rather than just “android,” in the User-Agent. This helps properly serve both your mobile and tablet visitors.
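
A minimal client-side sketch of that check (server-side detection on the User-Agent header works the same way; m.example.com is only a placeholder for your mobile site):

<script type="text/javascript">
  var ua = navigator.userAgent.toLowerCase();
  if (ua.indexOf("android") > -1 && ua.indexOf("mobile") > -1) {
    // Android phone: send the visitor to the mobile version
    window.location.replace("http://m.example.com" + window.location.pathname);
  }
  // Android tablets (no "mobile" in the UA) fall through to the desktop version
</script>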

For questions, please join our Android community in their developer forum.


URL removal explained, Part I: URLs & directories

Webmaster level: All

There's a lot of content on the Internet these days. At some point, something may turn up online that you would rather not have out there—anything from an inflammatory blog post you regret publishing, to confidential data that accidentally got exposed. In most cases, deleting or restricting access to this content will cause it to naturally drop out of search results after a while. However, if you urgently need to remove unwanted content that has gotten indexed by Google and you can't wait for it to naturally disappear, you can use our URL removal tool to expedite the removal of content from our search results as long as it meets certain criteria (which we'll discuss below).

We've got a series of blog posts lined up for you explaining how to successfully remove various types of content, and common mistakes to avoid. In this first post, I'm going to cover a few basic scenarios: removing a single URL, removing an entire directory or site, and reincluding removed content. I also strongly recommend our previous post on managing what information is available about you online.

Removing a single URL

In general, in order for your removal requests to be successful, the owner of the URL(s) in question—whether that's you, or someone else—must have indicated that it's okay to remove that content. For an individual URL, this can be indicated in any of three ways:
  • blocking the URL with a disallow rule in the site's robots.txt file;
  • including a noindex robots meta tag (<meta name="robots" content="noindex">) in the page's <head>; or
  • having the page return a 404 or 410 HTTP status code.
Before submitting a removal request, you can check whether the URL is correctly blocked:
  • robots.txt: You can check whether the URL is correctly disallowed using either the Fetch as Googlebot or Test robots.txt features in Webmaster Tools.
  • noindex meta tag: You can use Fetch as Googlebot to make sure the meta tag appears somewhere between the <head> and </head> tags. If you want to check a page you can't verify in Webmaster Tools, you can open the URL in a browser, go to View > Page source, and make sure you see the meta tag between the <head> and </head> tags.
  • 404 / 410 status code: You can use Fetch as Googlebot, or tools like Live HTTP Headers or web-sniffer.net to verify whether the URL is actually returning the correct code. Sometimes "deleted" pages may say "404" or "Not found" on the page, but actually return a 200 status code in the page header; so it's good to use a proper header-checking tool to double-check.
If unwanted content has been removed from a page but the page hasn't been blocked in any of the above ways, you will not be able to completely remove that URL from our search results. This is most common when you don't own the site that's hosting that content. We cover what to do in this situation in Part II of our removals series.

If a URL meets one of the above criteria, you can remove it by going to http://www.google.com/webmasters/tools/removals, entering the URL that you want to remove, and selecting the "Webmaster has already blocked the page" option. Note that you should enter the URL where the content was hosted, not the URL of the Google search where it's appearing. For example, enter
   http://www.example.com/embarrassing-stuff.html
not
   http://www.google.com/search?q=embarrassing+stuff

This article has more details about making sure you're entering the proper URL. Remember that if you don't tell us the exact URL that's troubling you, we won't be able to remove the content you had in mind.

Removing an entire directory or site

In order for a directory or site-wide removal to be successful, the directory or site must be disallowed in the site's robots.txt file. For example, in order to remove the http://www.example.com/secret/ directory, your robots.txt file would need to include:
   User-agent: *
   Disallow: /secret/

It isn't enough for the root of the directory to return a 404 status code, because it's possible for a directory to return a 404 but still serve out files underneath it. Using robots.txt to block a directory (or an entire site) ensures that all the URLs under that directory (or site) are blocked as well. You can test whether a directory has been blocked correctly using either the Fetch as Googlebot or Test robots.txt features in Webmaster Tools.

Only verified owners of a site can request removal of an entire site or directory in Webmaster Tools. To request removal of a directory or site, click on the site in question, then go to Site configuration > Crawler access > Remove URL. If you enter the root of your site as the URL you want to remove, you'll be asked to confirm that you want to remove the entire site. If you enter a subdirectory, select the "Remove directory" option from the drop-down menu.

Reincluding content

You can cancel removal requests for any site you own at any time, including those submitted by other people. In order to do so, you must be a verified owner of this site in Webmaster Tools. Once you've verified ownership, you can go to Site configuration > Crawler access > Remove URL > Removed URLs (or > Made by others) and click "Cancel" next to any requests you wish to cancel.

Still have questions? Stay tuned for the rest of our series on removing content from Google's search results. If you can't wait, much has already been written about URL removals, and troubleshooting individual cases, in our Help Forum. If you still have questions after reading others' experiences, feel free to ask. Note that, in most cases, it's hard to give relevant advice about a particular removal without knowing the site or URL in question. We recommend sharing your URL by using a URL shortening service so that the URL you're concerned about doesn't get indexed as part of your post; some shortening services will even let you disable the shortcut later on, once your question has been resolved.

Edit: Read the rest of this series:
Part II: Removing & updating cached content
Part III: Removing content you don't own
Part IV: Tracking requests, what not to remove

Companion post: Managing what information is available about you online


Will the Real <Your Site Here> Please Stand Up?

Webmaster Level: Intermediate



In our recent post on the Google Online Security Blog, we described our system for identifying phishing pages. Of the millions of webpages that our scanners analyze for phishing, we successfully identify 9 out of 10 phishing pages. Our classification system only incorrectly flags a non-phishing site as a phishing site about 1 in 10,000 times, which is significantly better than similar systems. In our experience, these “false positive” sites are usually built to distribute spam or may be involved with other suspicious activity. If you find that your site has been added to our phishing page list (”Reported Web Forgery!”) by mistake, please report the error to us. On the other hand, if your site has been added to our malware list (”This site may harm your computer”), you should follow the instructions here. Our team tries to address all complaints within one day, and we usually respond within a few hours.

Unfortunately, sometimes when we try to follow up on your reports, we find that we are just as confused as our automated system. If you run a website, here are some simple guidelines that will allow us to quickly fix any mistakes and help keep your site off our phishing page list in the first place.

- Don’t ask for usernames and passwords that do not belong to your site. We consider this behavior phishing by definition, so don’t do it! If you want to provide an add-on service to another site, consider using a public API or OAuth instead.

- Avoid displaying logos that are not yours near login fields. Someone surfing the web might mistakenly believe that the logo represents your website, and they might be misled into entering personal information into your site that they intended for the other site. Furthermore, we can’t always be sure that you aren’t doing this intentionally, so we might block your site just to be safe. To prevent misunderstandings, we recommend exercising caution when displaying these logos.

- Minimize the number of domains used by your site, especially for logins. Asking for a username and password for Site X looks very suspicious on Site Y. Besides making it harder for us to evaluate your website, you may be inadvertently teaching your visitors to ignore suspicious URLs, making them more vulnerable to actual phishing attempts. If you must have your login page on a different domain from your main site, consider using a transparent proxy to enable users to access this page from your primary domain. If all else fails...

- Make it easy to find links to your pages. It is difficult for us (and for your users) to determine who controls an off-domain page in your site if the links to that page from your main site are hard to find. All it takes to clear this problem up is to have each off-domain page link back to an on-domain page which links to it. If you have not done this, and one of your pages ends up on our list by mistake, please mention in your error report how we can find the link from your main site to the wrongly blocked page. However, if you do nothing else...

- Don’t send strange links via email or IM. It’s all but impossible for us to verify unusual links that only appeared in your emails or instant messages. Worse, using these kinds of links conditions your users/customers/friends to click on strange links they receive through email or IM, which can put them at risk for other Internet crimes besides phishing.

While we hope you consider these recommendations to be common sense, we’ve seen major e-commerce and financial companies break these guidelines from time to time. Following them will not only improve your experience with our anti-phishing systems, but will also help provide your visitors with a better online experience.


Getting started with structured data

Webmaster level: All

If Google understands your website’s content in a structured way, we can present that content more accurately and more attractively to Google users. For example, our algorithms can enhance your search results with “rich snippets” when we understand that your page is a structured product listing, event, recipe, review, or similar. We can also feature your data in Knowledge Graph panels or in Google Now cards, helping to spread the word about your content.

Today we’re excited to announce two features that make it simpler than ever before to participate in structured data features. The first is an expansion of Data Highlighter to seven new types of structured data. The second is a brand new tool, the Structured Data Markup Helper.

Support for Products, Businesses, Reviews and more in Data Highlighter

Data Highlighter launched in December 2012 as a point-and-click tool for teaching Google the pattern of structured data about events on your website — without even having to edit your site’s HTML. Now, you can also use Data Highlighter to teach us about many other kinds of structured data on your site: products, local businesses, articles, software applications, movies, restaurants, and TV episodes.

To get started, visit Webmaster Tools, select your site, click the "Optimization" link in the left sidebar, and click "Data Highlighter". You’ll be prompted to enter the URL of a typically structured page on your site (for example, a product or event’s detail page) and “tag” its key fields with your mouse.

Google Structured Data Highlighter

The tagging process takes about 5 minutes for a single page, or about 15 minutes for a pattern of consistently formatted pages. At the end of the process, you’ll have the chance to verify Google’s understanding of your structured data and, if it’s correct, “publish” it to Google. Then, as your site is recrawled over time, your site will become eligible for enhanced displays of information like prices, reviews, and ratings right in the Google search results.

New Structured Data Markup Helper tool

While Data Highlighter is a great way to quickly teach Google about your site’s structured data without having to edit your HTML, it’s ultimately preferable to embed structured data markup directly into your web pages, so your structured content is available to everyone. To assist web authors with that task, we’re happy to announce a new tool: the Structured Data Markup Helper.

Like in Data Highlighter, you start by submitting a web page (URL or HTML source) and using your mouse to “tag” the key properties of the relevant data type. When you’re done, the Structured Data Markup Helper generates sample HTML code with microdata markup included. This code can be downloaded and used as a guide as you implement structured data on your website.
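
The generated code is plain schema.org microdata; for a product page it looks roughly like this (the names and values here are only illustrative):

<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Example Widget</span>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <span itemprop="price">19.99</span>
    <meta itemprop="priceCurrency" content="USD">
  </div>
</div>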

Structured Data Markup Helper

The Structured Data Markup Helper supports a subset of data types, including all the types supported by Data Highlighter as well as several types used for embedding structured data in Gmail. Consult schema.org for complete schema documentation.

We hope these two tools make it easier for all websites to participate in Google’s growing suite of structured data features! As always, please post in our forums if you have any questions or feedback.


Using RSS/Atom feeds to discover new URLs

Webmaster Level: Intermediate

Google uses numerous sources to find new webpages, from links we find on the web to submitted URLs. We aim to discover new pages quickly so that users can find new content in Google search results soon after they go live. We recently launched a feature that uses RSS and Atom feeds for the discovery of new webpages.

RSS/Atom feeds have been very popular in recent years as a mechanism for content publication. They allow readers to check for new content from publishers. Using feeds for discovery allows us to get these new pages into our index more quickly than traditional crawling methods. We may use many potential sources to access updates from feeds including Reader, notification services, or direct crawls of feeds. Going forward, we might also explore mechanisms such as PubSubHubbub to identify updated items.

In order for us to use your RSS/Atom feeds for discovery, it's important that crawling these files is not disallowed by your robots.txt. To find out if Googlebot can crawl your feeds and find your pages as fast as possible, test your feed URLs with the robots.txt tester in Google Webmaster Tools.
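
For instance, a robots.txt along these lines (the paths are hypothetical) keeps a private area off-limits while leaving feed URLs such as /feed/ crawlable, so they can be used for discovery:

User-agent: *
Disallow: /private/
# No rule matches /feed/, so feed URLs remain crawlable.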


Sitemaps: One file, many content types

Webmaster Level: All

Have you ever wanted to submit your various content types (video, images, etc.) in one Sitemap? Now you can! If your site contains videos, images, mobile URLs, code or geo information, you can now create—and submit—a Sitemap with all the information.

Site owners have been leveraging Sitemaps to let Google know about their sites’ content since Sitemaps were first introduced in 2005. Since that time additional specialized Sitemap formats have been introduced to better accommodate video, images, mobile, code or geographic content. With the increasing number of specialized formats, we’d like to make it easier for you by supporting Sitemaps that can include multiple content types in the same file.

The structure of a Sitemap with multiple content types is similar to a standard Sitemap, with the additional ability to contain URLs referencing different content types. Here's an example of a Sitemap that contains a reference to a standard web page for Web search, image content for Image search and a video reference to be included in Video search:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>http://example.com/foo.html</loc>
    <image:image>
      <image:loc>http://example.com/image.jpg</image:loc>
    </image:image>
    <video:video>
      <video:content_loc>http://example.com/videoABC.flv</video:content_loc>
      <video:title>Grilling tofu for summer</video:title>
    </video:video>
  </url>
</urlset>

Here's an example of what you'll see in Webmaster Tools when a Sitemap containing multiple content types is submitted:



We hope the capability to include multiple content types in one Sitemap simplifies your Sitemap submission. The rest of the Sitemap rules, like 50,000 max URLs in one file and the 10MB uncompressed file size limit, still apply. If you have questions or other feedback, please visit the Webmaster Help Forum.


Preview the latest +1 button changes

Webmaster level: All

Want to test the latest +1 features? Today we’re introducing a new option for webmasters who want to be the first to know about changes to the +1 button. Enroll in the Google+ Platform Preview, available globally, to test updates before they launch to all users on your site. When you’re logged into the account you’ve enrolled with and you visit a page with the +1 button, you’ll see the latest preview release.

If you join now, you’ll be able to test the first set of updates we’ve released to Platform Preview: hover and confirmation bubbles.

If you hover your mouse over a +1 button, you’ll see a bubble letting you know what will happen when you click:



After you click, you’ll receive confirmation that the +1 has been applied:



This will give your site’s users an extra reminder of the account they’re using to +1, as well as the fact that their +1 is public.

If you have any questions, please join us in the Webmaster forum. To receive updates about the +1 button, please subscribe to the Google Publisher Buttons Announce Group. And for advanced tips and tricks, check our Google Code site.


Work smarter, not harder, with site health

Webmaster level: All

We consistently hear from webmasters that they have to prioritize their time. Some manage dozens or hundreds of clients’ sites; others run their own business and may only have an hour to spend on website maintenance in between managing finances and inventory. To help you prioritize your efforts, Webmaster Tools is introducing the idea of “site health,” and we’ve redesigned the Webmaster Tools home page to highlight your sites with health problems. This should allow you to easily see what needs your attention the most, without having to click through all of the reports in Webmaster Tools for every site you manage.

Here’s what the new home page looks like:


You can see that sites with health problems are shown at the top of the list. (If you prefer, you can always switch back to listing your sites alphabetically.) To see the specific issues we detected on a site, click the site health icon or the “Check site health” link next to that site:


This new home page is currently only available if you have 100 or fewer sites in your Webmaster Tools account (either verified or unverified). We’re working on making it available to all accounts in the future. If you have more than 100 sites, you can see site health information at the top of the Dashboard for each of your sites.

Right now we include three issues in your site’s health check:
  1. Have we detected malware on the site?
  2. Have any important pages been removed via our URL removal tool?
  3. Are any of your important pages blocked from crawling in robots.txt?
You can click on any of these items to get more details about what we detected on your site. If the site health icon and the “Check site health” link don’t appear next to a site, it means that we didn’t detect any of these issues on that site (congratulations!).

A word about “important pages:” as you know, you can get a comprehensive list of all URLs that have been removed by going to Site configuration > Crawler access > Remove URL; and you can see all the URLs that we couldn’t crawl because of robots.txt by going to Diagnostics > Crawl errors > Restricted by robots.txt. But since webmasters often block or remove content on purpose, we only wanted to indicate a potential site health issue if we think you may have blocked or removed a page you didn’t mean to, which is why we’re focusing on “important pages.” Right now we’re looking at the number of clicks pages get (which you can see in Your site on the web > Search queries) to determine importance, and we may incorporate other factors in the future as our site health checks evolve.

Obviously these three issues—malware, removed URLs, and blocked URLs—aren’t the only things that can make a website “unhealthy;” in the future we’re hoping to expand the checks we use to determine a site’s health, and of course there’s no substitute for your own good judgment and knowledge of what’s going on with your site. But we hope that these changes make it easier for you to quickly spot major problems with your sites without having to dig down into all the data and reports.

After you’ve resolved any site health issues we’ve flagged, it will usually take several days for the warning to disappear from your Webmaster Tools account, since we have to recrawl the site, see the changes you’ve made, and then process that information through our Web Search and Webmaster Tools pipelines. If you continue to see a site health warning for that site after a week or so, the issue may not have been resolved. Feel free to ask for help tracking it down in our Webmaster Help Forum... and let us know what you think!


Adding associates to manage your YouTube presence

Webmaster level: All

Many organizations have multiple presences on the web. For example, Webmaster Tools lives at www.google.com/webmasters, but it also has a Twitter account and a YouTube channel. It's important that visitors to these other properties have confidence that they are actually associated with the Webmaster Tools site. However to date it has been challenging for webmasters to manage which users can take actions on behalf of their site in different services.

Today we're happy to announce a new feature in Webmaster Tools that allows webmasters to add "associates" -- trusted users who can act on behalf of your site in other Google products. Unlike site owners and users, associates can't view site data or take any site actions in Webmaster Tools, but they are authorized to perform specific tasks in other products.

For this initial launch, members of YouTube's partner program that have created a YouTube channel for their site can now link the two together. By doing this, your YouTube channel will be displayed as the "official channel" for your website.


Management within Webmaster Tools

To add or change associates:

  1. On the Webmaster Tools home page, click the site you want.
  2. Under Configuration, click Associates.
  3. Click Add a new associate.
  4. In the text box, type the email address of the person you want to add.
  5. Select the type of association you want.
  6. Click Add.

Management within YouTube

It’s also possible for users to request association from a site’s webmaster.
  1. Log in to your YouTube partner account.
  2. Click on the user menu and choose Settings > Associated Website.
  3. Fill in the page you would like to associate your channel with.
  4. Click Add. If you’re a verified owner of the site, you’re done. But if someone else in your organization manages the website, the association will be marked Pending. The owner receives a notification with an option to approve or deny the request.
  5. After approval is granted, navigate back to this page and click Refresh to complete the association.
Through associates, webmasters can easily and safely allow others to associate their website with YouTube channels. We plan to support integration with additional Google products in the future.

If you have more questions, please see the Help with Associates article or visit our webmaster help forum.


Request visitors' permission before installing software

(Cross-posted on the Google Korea Blog)

Webmaster Level: All

Legitimate websites may require that their visitors install software. These sites often do so to provide their users with additional functionality beyond what's available in standard web browsers, like viewing a special type of document. Please note, however, that if your site requires specific software for your visitors, the implementation of this software installation process is very important. An incorrect implementation can make it appear as though you're installing malware, which can trigger our malware detection filters and result in your site being labeled with a 'This site may harm your computer' malware warning in our search results.

If using your site requires a special software install, you need to first inform visitors why they need to install additional software. Here are two bad examples and one good example of how to handle the situation of a new visitor to such a site:

Bad: Install the required software without giving the visitor a chance to choose whether or not they want to install the software.

Bad: Pop up a confirmation dialog box that prompts the visitor to agree to install the software, without providing enough detail for the visitor to make an informed choice. (This includes the standard ActiveX control installation dialog box, since it doesn't contain enough meaningful information for a visitor to make an informed decision about that particular piece of software.)

Good: Redirect the new visitor to an information page which provides thorough details on why a special software installation is required to use the site. From this page the visitor can initiate the installation of the required software if they decide to proceed with installation.

Has your site been labeled with a malware warning in our search results due to a poorly implemented software installation requirement? Updating the installation process to ensure that visitors are fully informed on why the installation is necessary, and giving them a chance to opt out, should resolve this issue. Once you've got this in place, you can go to Webmaster Tools and request a malware review to expedite the process of removing any malware warnings associated with your site in Google's search results.


Introducing the +1 button

Webmaster level: All

We all know what it’s like to get a bit of help when you’re looking for it. Online, that advice can come from a number of places: a tweet, a shared video, or a blog post, to name a few. With Google Social Search we’ve been working to show that content when it’s useful, making search more personally relevant.

We think sharing on the web can be even better--that people might share more recommendations, more often, if they knew their advice would be used to help their friends and contacts right when they’re searching for relevant topics on Google. That’s why we’re introducing the +1 button, an easy way for Google users to recommend your content right from the search results pages (and, soon, from your site).



+1 is a simple idea. Let’s use Brian as an example. When Brian signs in to his Google Account and sees one of your pages in the organic search results on Google (or your search ads if you’re using AdWords), he can +1 it and recommend your page to the world.


The next time Brian’s friend Mary is signed in and searching on Google and your page appears, she might see a personalized annotation letting her know that Brian +1’d it. So Brian’s +1 helps Mary decide that your site is worth checking out.


We expect that these personalized annotations will help sites stand out by showing users which search results are personally relevant to them. As a result, +1’s could increase both the quality and quantity of traffic to the sites people care about.

But the +1 button isn’t just for search results. We’re working on a +1 button that you can put on your pages too, making it easy for people to recommend your content on Google search without leaving your site. If you want to be notified when the +1 button is available for your website, you can sign up for email updates at our +1 webmaster site.

Over the coming weeks, we’ll add +1 buttons to search results and ads on Google.com. We’ll also start to look at +1’s as one of the many signals we use to determine a page’s relevance and ranking, including social signals from other services. For +1's, as with any new ranking signal, we'll be starting carefully and learning how those signals affect search quality over time. At first the +1 button will appear for English searches only on Google.com, but we’re working to add more languages in the future.

We’re excited about using +1’s to make search more personal, relevant and compelling. We hope you’re excited too! If you have questions about the +1 button and how it affects search on Google.com, you can check the Google Webmaster Central Help Center.


+1 around the world

Webmaster Level: All

A few months ago we released the +1 button on English search results on google.com. More recently, we’ve made the +1 button available to sites across the web, making it easy for the people who love your content to recommend it on Google search.

Today, +1’s will start appearing on Google search pages globally. We'll be starting off with sites like google.co.uk, google.de, google.jp and google.fr, then expanding quickly to most other Google search sites soon after.

We’ve partnered with a few more sites where you’ll see +1 buttons over the coming days.


If you’re a publisher based outside of the US, and you’ve been waiting to put +1 buttons on your site, now’s a good time to get started. Visit the +1 button tool on Google Webmaster Central where the +1 button is already available in 44 languages.
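
For reference, the snippet generated by the +1 button tool at the time looked roughly like this (shown here with the button text localized to French via the lang setting):

<script type="text/javascript">
  window.___gcfg = {lang: 'fr'};
</script>
<script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
<g:plusone size="medium"></g:plusone>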

Adding the +1 button could help your site to stand out by putting personal recommendations right at the moment of decision, on Google search. So if you have users who are fans of your content, encourage them to add their voice with +1!


Chrome Extensions for web development

Webmaster Level: All

The Chrome Developer Tools are great for debugging HTML, JavaScript and CSS in Chrome. If you're writing a webpage or even a web app for the Chrome Web Store, you can inspect elements in the DOM, debug live JavaScript, and edit CSS styles directly in the current page. Extensions can make Google Chrome an even better web development environment by providing additional features that you can easily access in your browser. To help developers like you, we created a page that features extensions for web development. We hope you’ll find them useful in creating applications and sites for the web.


For example, Speed Tracer is an extension to help you identify and fix performance issues in your web applications. With Speed Tracer, you can get a better idea of where time is being spent in your application and troubleshoot problems in JavaScript parsing and execution, CSS style, and more.


Another useful extension is Resolution Test, which changes the size of the browser window so web developers can preview websites in different screen resolutions. It also includes a list of commonly used resolutions, as well as a custom option to input your own resolution.


With the Web Developer extension, you can access additional developer tools such as validation options, page resizing and a CSS elements viewer; all from an additional button in the toolbar.


Another extension you should check out is Chrome Editor, which allows you to code easily within your browser, so you don’t have to flip between your browser and code editor. You can also save a code reference locally to your computer for later use.

These are just a few of the extensions you can find in our extensions for web development page. You can also look for more in the extensions gallery.
