News and Tutorials from Votre Codeur | SEO | Website Creation | Software Creation

seo How to Make Money on the Internet by Selling Products Online 2013




These days, more and more people want to know how to make money on the Internet. There are many kinds of money-making activities online today, but one of the most popular is selling products online. You can now see a huge range of offers for things like gadgets, bags, accessories, phone credit, and much more. Perhaps some of you visiting this blog are among those interested

seo Marketing Skills: Writing a Good Meta Description 2013


Marketing Skills: Writing a Good Meta Description


Meta Description Basics

Taking the time to write a good meta description is like sending someone a handwritten thank-you note. It’s not widely done, but it’s a nice touch that can make you stand out. Unlike sending that thank-you note, however, you shouldn’t consider writing your description optional. Given its importance to your website being found (if not its SEO), ignoring this would be a huge mistake. How do you write a good meta description, though? Our guide will walk you through the steps.

Know What Their Purpose Is

Meta descriptions do not actually factor into Google’s search algorithm. What they do, however, is compel people to click on your content—particularly if you use a keyword in the meta description that matches the one they’re searching for. It’s a good place for keywords used strategically (i.e. keywords that aren’t jammed together in a nonsensical stew of words). In addition to appearing on Google, the meta descriptions will also appear when you share a link to your content on Facebook and Google+.

Think Calls-to-Action

Meta descriptions should contain their own kinds of calls-to-action. Just like your actual CTAs should invite people to your landing pages, the meta description should encourage your readers to click on the link and read your content. Things like “Learn the secrets of great inbound marketing,” or “Discover the top trends in blogging for 2013.” It’s not enough to describe the content inside. That, by itself, might not be compelling enough. You have to motivate your readers to want to click and read.
This can be done in more subtle ways than using commands like those listed above. For an example, take a look at the following meta description:
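As a hypothetical stand-in (the original screenshot isn't reproduced, and the page and copy below are invented purely for illustration), a subtler description might read something like:

<meta name="description" content="Our 2013 blogging report breaks down the ten trends we saw across hundreds of campaigns this year, and what they mean for your editorial calendar.">

It describes the content and gives searchers a reason to click, without relying on an explicit command.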


from web contents: Beyond PageRank: Graduating to actionable metrics 2013

Webmaster level: Beginner

Like any curious netizen, I have a Google Alert set up to email me whenever my name is mentioned online. Usually I get a slow trickle of my forum posts, blog posts, and tweets. But by far the most popular topic of these alerts over the past couple years has been my off-handed mention that we removed PageRank distribution data from Webmaster Tools in one of our 2009 releases.

The fact that people are still writing about this almost two years later—usually in the context of “Startling news from Susan Moskwa: ...”—really drives home how much PageRank has become a go-to statistic for some webmasters. Even the most inexperienced site owners I talk with have often heard about, and want to know more about, PageRank (“PR”) and what it means for their site. However, as I said in my fateful forum post, the Webmaster Central team has been telling webmasters for years that they shouldn't focus so much on PageRank as a metric for representing the success of one’s website. Today I’d like to explain this position in more detail and give you some relevant, actionable options to fill your time once you stop tracking your PR!

Why PageRank?
In 2008 Udi Manber, VP of engineering at Google, wrote on the Official Google Blog:
“The most famous part of our ranking algorithm is PageRank, an algorithm developed by Larry Page and Sergey Brin, who founded Google. PageRank is still in use today, but it is now a part of a much larger system.”
PageRank may have distinguished Google as a search engine when it was founded in 1998; but given the rate of change Manber describes—launching “about 9 [improvements] per week on the average”—we’ve had a lot of opportunity to augment and refine our ranking systems over the last decade. PageRank is no longer—if it ever was—the be-all and end-all of ranking.

If you look at Google’s Technology Overview, you’ll notice that it calls out relevance as one of the top ingredients in our search results. So why hasn’t as much ink been spilled over relevance as has been over PageRank? I believe it’s because PageRank comes in a number, and relevance doesn’t. Both relevance and PageRank include a lot of complex factors—context, searcher intent, popularity, reliability—but it’s easy to graph your PageRank over time and present it to your CEO in five minutes; not so with relevance. I believe the succinctness of PageRank is why it’s become such a go-to metric for webmasters over the years; but just because something is easy to track doesn’t mean it accurately represents what’s going on on your website.

What do we really want?
I posit that none of us truly care about PageRank as an end goal. PageRank is just a stand-in for what we really want: for our websites to make more money, attract more readers, generate more leads, more newsletter sign-ups, etc. The focus on PageRank as a success metric only works if you assume that a higher PageRank results in better ranking, then assume that that will drive more traffic to your site, then assume that that will lead to more people doing-whatever-you-want-them-to-do on your site. On top of these assumptions, remember that we only update the PageRank displayed on the Google Toolbar a few times a year, and we may lower the PageRank displayed for some sites if we believe they’re engaging in spammy practices. So the PR you see publicly is different from the number our algorithm actually uses for ranking. Why bother with a number that’s at best three steps removed from your actual goal, when you could instead directly measure what you want to achieve? Finding metrics that are directly related to your business goals allows you to spend your time furthering those goals.

If I don’t track my PageRank, what should I be tracking?
Take a look at metrics that correspond directly to meaningful gains for your website or business, rather than just focusing on ranking signals. Also consider metrics that are updated daily or weekly, rather than numbers (like PageRank) that only change a few times a year; the latter is far too slow for you to reliably understand which of your changes resulted in the number going up or down (assuming you update your site more than a few times a year). Here are three suggestions to get you started, all of which you can track using services like Google Analytics or Webmaster Tools:
  1. Conversion rate
  2. Bounce rate
  3. Clickthrough rate (CTR)
Conversion rate
A “conversion” is when a visitor does what you want them to do on your website. A conversion might be completing a purchase, signing up for a mailing list, or downloading a white paper. Your conversion rate is the percentage of visitors to your site who convert (perform a conversion). This is a perfect example of a metric that, unlike PageRank, is directly tied to your business goals. When users convert they’re doing something that directly benefits your organization in a measurable way, whereas your PageRank is both difficult to measure accurately (see above) and can go up or down without having any direct effect on your business.
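To make that concrete with invented numbers: if your site gets 20,000 visits in a month and 500 of those visitors complete a sign-up, your conversion rate is 500 / 20,000 = 2.5%.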

Bounce rate
A “bounce” is when someone comes to your website and then leaves without visiting any other pages on your site. Your bounce rate is the percentage of visits to your site where the visitor bounces. A high bounce rate may indicate that users don’t find your site compelling, because they come, take a look, and leave directly. Looking at the bounce rates of different pages across your site can help you identify content that’s underperforming and point you to areas of your site that may need work. After all, it doesn’t matter how well your site ranks if most searchers are bouncing off of it as soon as they visit.
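Again with hypothetical numbers: if a page receives 1,000 visits and 650 of those visitors leave without viewing any other page, that page's bounce rate is 650 / 1,000 = 65%.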

Clickthrough rate (CTR)
In the context of organic search results, your clickthrough rate is how often people click on your site out of all the times your site gets shown in search results. A low CTR means that, no matter how well your site is ranking, users aren’t clicking through to it. This may indicate that they don’t think your site will meet their needs, or that some other site looks better. One way to improve your CTR is to look at your site’s titles and snippets in our search results: are they compelling? Do they accurately represent the content of each URL? Do they give searchers a reason to click on them? Here’s some advice for improving your snippets; the HTML suggestions section of Webmaster Tools can also point you to pages that may need help. Again, remember that it doesn’t matter how well your site ranks if searchers don’t want to click on it.
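One more invented example: if your pages appear 50,000 times in search results during a month and searchers click through to your site 1,500 times, your CTR is 1,500 / 50,000 = 3%.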

Entire blogs and books have been dedicated to explaining and exploring web metrics, so you’ll excuse me if my explanations just scrape the surface; analytics evangelist Avinash Kaushik’s site is a great place to start if you want to dig deeper into these topics. But hopefully I’ve at least convinced you that there are more direct, effective and controllable ways to measure your site’s success than PageRank.

One final note: Some site owners are interested in their site’s PR because people won’t buy links from their site unless they have a high PageRank. Buying or selling links for the purpose of passing PageRank violates our Webmaster Guidelines and is very likely to have negative consequences for your website, so a) I strongly recommend against it, and b) don’t be surprised if we aren’t interested in helping you raise your PageRank or improve your website when this is your stated goal.

We’d love to hear what metrics you’ve found useful and actionable for your website! Feel free to share your success stories with us in the comments here or in our Webmaster Help Forum.


from web contents: Improved Flash indexing 2013


We've received numerous requests to improve our indexing of Adobe Flash files. Today, Ron Adler and Janis Stipins—software engineers on our indexing team—will provide us with more in-depth information about our recent announcement that we've greatly improved our ability to index Flash.

Q: Which Flash files can Google better index now?
We've improved our ability to index textual content in SWF files of all kinds. This includes Flash "gadgets" such as buttons or menus, self-contained Flash websites, and everything in between.

Q: What content can Google better index from these Flash files?
All of the text that users can see as they interact with your Flash file. If your website contains Flash, the textual content in your Flash files can be used when Google generates a snippet for your website. Also, the words that appear in your Flash files can be used to match query terms in Google searches.

In addition to finding and indexing the textual content in Flash files, we're also discovering URLs that appear in Flash files, and feeding them into our crawling pipeline—just like we do with URLs that appear in non-Flash webpages. For example, if your Flash application contains links to pages inside your website, Google may now be better able to discover and crawl more of your website.

Q: What about non-textual content, such as images?
At present, we are only discovering and indexing textual content in Flash files. If your Flash files only include images, we will not recognize or index any text that may appear in those images. Similarly, we do not generate any anchor text for Flash buttons which target some URL, but which have no associated text.

Also note that we do not index FLV files, such as the videos that play on YouTube, because these files contain no text elements.

Q: How does Google "see" the contents of a Flash file?
We've developed an algorithm that explores Flash files in the same way that a person would, by clicking buttons, entering input, and so on. Our algorithm remembers all of the text that it encounters along the way, and that content is then available to be indexed. We can't tell you all of the proprietary details, but we can tell you that the algorithm's effectiveness was improved by utilizing Adobe's new Searchable SWF library.

Q: What do I need to do to get Google to index the text in my Flash files?
Basically, you don't need to do anything. The improvements that we have made do not require any special action on the part of web designers or webmasters. If you have Flash content on your website, we will automatically begin to index it, up to the limits of our current technical ability (see next question).

That said, you should be aware that Google is now able to see the text that appears to visitors of your website. If you prefer Google to ignore your less informative content, such as a "copyright" or "loading" message, consider replacing the text with an image, which will make it effectively invisible to us.

Q: What are the current technical limitations of Google's ability to index Flash?
There are three main limitations at present, and we are already working on resolving them:

1. Googlebot does not execute some types of JavaScript. So if your web page loads a Flash file via JavaScript, Google may not be aware of that Flash file, in which case it will not be indexed.
2. We currently do not attach content from external resources that are loaded by your Flash files. If your Flash file loads an HTML file, an XML file, another SWF file, etc., Google will separately index that resource, but it will not yet be considered to be part of the content in your Flash file.
3. While we are able to index Flash in almost all of the languages found on the web, currently there are difficulties with Flash content written in bidirectional languages. Until this is fixed, we will be unable to index Hebrew language or Arabic language content from Flash files.

We're already making progress on these issues, so stay tuned!



Update in July 2008: Everyone, thanks for your great questions and feedback. Our focus is to improve search quality for all users, and with better Flash indexing we create more meaningful search results. Listed below, we’ve also answered some of the most prevalent questions. Thanks again!

Flash site in search results before improvements


Flash site after improved indexing, querying [nasa deep impact animation]


Helping us access and index your Flash files
@fintan: We verified with Adobe that the textual content from legacy sites, such as those scripted with AS1 and AS2, can be indexed by our new algorithm.

@andrew, jonny m, erichazann, mike, ledge, stu, rex, blog, dis: For our July 1st launch, we didn't enable Flash indexing for Flash files embedded via SWFObject. We're now rolling out an update that enables support for common JavaScript techniques for embedding Flash, including SWFObject and SWFObject2.

@mike: At this time, content loaded dynamically from resource files is not indexed. We’ve noted this feature request from several webmasters -- look for this in a near future update.

Update on July 29, 2010: Please note that our ability to load external resources is live.

Interaction of HTML pages and Flash
@captain cuisine: The text found in Flash files is treated similarly to text found in other files, such as HTML, PDFs, etc. If the Flash file is embedded in HTML (as many of the Flash files we find are), its content is associated with the parent URL and indexed as a single entity.

@jeroen: Serving the same content in Flash and an alternate HTML version could cause us to find duplicate content. This won't cause a penalty -- we don’t lower a site in ranking because of duplicate content. Be aware, though, that search results will most likely only show one version, not both.

@All: We’re trying to serve users the most relevant results possible regardless of the file type. This means that standalone Flash, HTML with embedded Flash, HTML only, PDFs, etc., can all have the potential to be returned in search results.

Indexing large Flash files
@dsfdgsg: We’ve heard requests for deep linking (linking to specific content inside file) not just for Flash results, but also for other large documents and presentations. In the case of Flash, the ability to deep link will require additional functionality in Flash with which we integrate.

@All: The majority of the existing Flash files on the web are fine in regard to filesize. It shouldn’t be too much of a concern.

More details about our Flash indexing algorithm
@brian, marcos, bharath: Regarding ActionScript, we’re able to find new links loaded through ActionScript. We explore Flash like a website visitor does; we do not decompile the SWF file. Unless you're making ActionScript visible to users, Google will not expose ActionScript code.

@dlocks: We respect rel="nofollow" wherever we encounter it in HTML.

from web contents: Open redirect URLs: Is your site being abused? 2013

No one wants malware or spammy URLs inserted onto their domain, which is why we all try to follow good security practices. But what if there were a way for spammers to take advantage of your site, without ever setting a virtual foot in your server?

There is, by abusing open redirect URLs.

Webmasters face a number of situations where it's helpful to redirect users to another page. Unfortunately, redirects left open to any arbitrary destination can be abused. This is a particularly onerous form of abuse because it takes advantage of your site's functionality rather than exploiting a simple bug or security flaw. Spammers hope to use your domain as a temporary "landing page" to trick email users, searchers and search engines into following links which appear to be pointing to your site, but actually redirect to their spammy site.

We at Google are working hard to keep the abused URLs out of our index, but it's important for you to make sure your site is not being used in this way. Chances are you don't want users finding URLs on your domain that push them to a screen full of unwanted porn, nasty viruses and malware, or phishing attempts. Spammers will generate links to make the redirects appear in search results, and these links tend to come from bad neighborhoods you don't want to be associated with.

This sort of abuse has become relatively common lately so we wanted to get the word out to you and your fellow webmasters. First we'll give some examples of redirects that are actively being abused, then we'll talk about how to find out if your site is being abused and what to do about it.

Redirects being abused by spammers

We have noticed spammers going after a wide range of websites, from large well-known companies to small local government agencies. The list below is a sample of the kinds of redirect we have seen used. These are all perfectly legitimate techniques, but if they're used on your site you should watch out for abuse.

  • Scripts that redirect users to a file on the server—such as a PDF document—can sometimes be vulnerable. If you use a content management system (CMS) that allows you to upload files, you might want to make sure the links go straight to the file, rather than going through a redirect. This includes any redirects you might have in the downloads section of your site. Watch out for links like this:
example.com/go.php?url=
example.com/ie/ie40/download/?

  • Internal site search result pages sometimes have automatic redirect options that could be vulnerable. Look for patterns like this, where users are automatically sent to any page after the "url=" parameter:
example.com/search?q=user+search+keywords&url=

  • Systems to track clicks for affiliate programs, ad programs, or site statistics might be open as well. Some example URLs include:
example.com/coupon.jsp?code=ABCDEF&url=
example.com/cs.html?url=

  • Proxy sites, though not always technically redirects, are designed to send users through to other sites and therefore can be vulnerable to this abuse. This includes those used by schools and libraries. For example:
proxy.example.com/?url=

  • In some cases, login pages will redirect users back to the page they were trying to access. Look out for URL parameters like this:
example.com/login?url=

  • Scripts that put up an interstitial page when users leave a site can be abused. Lots of educational, government, and large corporate web sites do this to let users know that information found on outgoing links isn't under their control. Look for URLs following patterns like this:
example.com/redirect/
example.com/out?
example.com/cgi-bin/redirect.cgi?

Is my site being abused?

Even if none of the patterns above look familiar, your site may have open redirects to keep an eye on. There are a number of ways to see if you are vulnerable, even if you are not a developer yourself.

  • Check if abused URLs are showing up in Google. Try a site: search on your site to see if anything unfamiliar shows up in Google's results for your site. You can add words to the query that are unlikely to appear in your content, such as commercial terms or adult language. If the query [site:example.com viagra] isn't supposed to return any pages on your site and it does, that could be a problem. You can even automate these searches with Google Alerts.

  • You can also watch out for strange queries showing up in the Top search queries section of Webmaster Tools. If you have a site dedicated to the genealogy of the landed gentry, a large number of queries for porn, pills, or casinos might be a red flag. On the other hand, if you have a drug info site, you might not expect to see celebrities in your top queries. Keep an eye on the Message Center in Webmaster Tools for any messages from Google.

  • Check your server logs or web analytics package for unfamiliar URL parameters (like "=http:" or "=//") or spikes in traffic to redirect URLs on your site. You can also check the pages with external links in Webmaster Tools.

  • Watch out for user complaints about content or malware that you know for sure can not be found on your site. Your users may have seen your domain in the URL before being redirected and assumed they were still on your site.


What you can do

Unfortunately there is no one easy way to make sure that your redirects aren't exploited. An open redirect isn't a bug or a security flaw in and of itself—for some uses they have to be left fairly open. But there are a few things you can do to prevent your redirects from being abused or at least to make them less attractive targets. Some of these aren't trivial; you may need to write some custom code or talk to your vendor about releasing a patch.

  • Change the redirect code to check the referer, since in most cases everyone coming to your redirect script legitimately should come from your site, not a search engine or elsewhere. You may need to be permissive, since some users' browsers may not report a referer, but if you know a user is coming from an external site you can stop or warn them.

  • If your script should only ever send users to an internal page or file (for example, on a page with file downloads), you should specifically disallow off-site redirects.

  • Consider using a whitelist of safe destinations. In this case your code would keep a record of all outgoing links, and then check to make sure the redirect is a legitimate destination before forwarding the user on.

  • Consider signing your redirects. If your website does have a genuine need to provide URL redirects, you can properly hash the destination URL and then include that cryptographic signature as another parameter when doing the redirect. That allows your own site to do URL redirection without opening your URL redirector to the general public.

  • If your site is really not using it, just disable or remove the redirect. We have noticed a large number of sites where the only use of the redirect is by spammers—it's probably just a feature left turned on by default.

  • Use robots.txt to exclude search engines from the redirect scripts on your site. This won't solve the problem completely, as attackers could still use your domain in email spam. Your site will be less attractive to attackers, though, and users won't get tricked via web search results. If your redirect scripts reside in a subfolder with other scripts that don't need to appear in search results, excluding the entire subfolder may even make it harder for spammers to find redirect scripts in the first place.
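To sketch that last suggestion, if your redirect script lived under /cgi-bin/ (a path invented here for illustration), the robots.txt rules keeping crawlers away from that whole directory could look like this:

User-agent: *
Disallow: /cgi-bin/

As noted above, this only keeps compliant crawlers out; it doesn't stop spammers from linking to the script directly.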



Open redirect abuse is a big issue right now but we think that the more webmasters know about it, the harder it will be for the bad guys to take advantage of unwary sites. Please feel free to leave any helpful tips in the comments below or discuss in our Webmaster Help Forum.


from web contents: Responsive design – harnessing the power of media queries 2013

Webmaster Level: Intermediate / Advanced

We love data, and spend a lot of time monitoring the analytics on our websites. Any web developer doing the same will have noticed the increase in traffic from mobile devices of late. Over the past year we’ve seen many key sites garner a significant percentage of pageviews from smartphones and tablets. These represent large numbers of visitors, with sophisticated browsers which support the latest HTML, CSS, and JavaScript, but which also have limited screen space with widths as narrow as 320 pixels.

Our commitment to accessibility means we strive to provide a good browsing experience for all our users. We faced a stark choice between creating mobile specific websites, or adapting existing sites and new launches to render well on both desktop and mobile. Creating two sites would allow us to better target specific hardware, but maintaining a single shared site preserves a canonical URL, avoiding any complicated redirects, and simplifies the sharing of web addresses. With a mind towards maintainability we leant towards using the same pages for both, and started thinking about how we could fulfill the following guidelines:
  1. Our pages should render legibly at any screen resolution
  2. We mark up one set of content, making it viewable on any device
  3. We should never show a horizontal scrollbar, whatever the window size


Stacked content, tweaked navigation and rescaled images – Chromebooks
Implementation

As a starting point, simple, semantic markup gives us pages which are more flexible and easier to reflow if the layout needs to be changed. By ensuring the stylesheet enables a liquid layout, we're already on the road to mobile-friendliness. Instead of specifying width for container elements, we started using max-width instead. In place of height we used min-height, so larger fonts or multi-line text don’t break the container’s boundaries. To prevent fixed width images “propping open” liquid columns, we apply the following CSS rule:

img {
max-width: 100%;
}


Liquid layout is a good start, but can lack a certain finesse. Thankfully media queries are now well-supported in modern browsers including IE9+ and most mobile devices. These can make the difference between a site that degrades well on a mobile browser, vs. one that is enhanced to take advantage of the streamlined UI. But first we have to take into account how smartphones represent themselves to web servers.

Viewports

When is a pixel not a pixel? When it’s on a smartphone. By default, smartphone browsers pretend to be high-resolution desktop browsers, and lay out a page as if you were viewing it on a desktop monitor. This is why you get a tiny-text “overview mode” that’s impossible to read before zooming in. The default viewport width for the default Android browser is 800px, and 980px for iOS, regardless of the number of actual physical pixels on the screen.

In order to trigger the browser to render your page at a more readable scale, you need to use the viewport meta element:

<meta name="viewport" content="width=device-width, initial-scale=1">


Mobile screen resolutions vary widely, but most modern smartphone browsers currently report a standard device-width in the region of 320px. If your mobile device actually has a width of 640 physical pixels, then a 320px wide image would be sized to the full width of the screen, using double the number of pixels in the process. This is also the reason why text looks so much crisper on the small screen – double the pixel density as compared to a standard desktop monitor.

The useful thing about setting the width to device-width in the viewport meta tag is that it updates when the user changes the orientation of their smartphone or tablet. Combining this with media queries allows you to tweak the layout as the user rotates their device:

@media screen and (min-width:480px) and (max-width:800px) {
  /* Target landscape smartphones, portrait tablets, narrow desktops */
}

@media screen and (max-width:479px) {
  /* Target portrait smartphones */
}


In reality you may find you need to use different breakpoints depending on how your site flows and looks on various devices. You can also use the orientation media query to target specific orientations without referencing pixel dimensions, where supported.


@media all and (orientation: landscape) {
  /* Target device in landscape mode */
}

@media all and (orientation: portrait) {
  /* Target device in portrait mode */
}



Stacked content, smaller images – Cultural Institute
A media queries example

We recently re-launched the About Google page. Apart from setting up a liquid layout, we added a few media queries to provide an improved experience on smaller screens, like those on a tablet or smartphone.

Instead of targeting specific device resolutions we went with a relatively broad set of breakpoints. For a screen resolution wider than 1024 pixels, we render the page as it was originally designed, according to our 12-column grid. Between 801px and 1024px, you get to see a slightly squished version thanks to the liquid layout.

Only if the screen resolution drops to 800 pixels will content that’s not considered core content be sent to the bottom of the page:


@media screen and (max-width: 800px) {
/* specific CSS */
}


With a final media query we enter smartphone territory:


@media screen and (max-width: 479px) {
/* specific CSS */
}


At this point, we’re not loading the large image anymore and we stack the content blocks. We also added additional whitespace between the content items so they are more easily identified as different sections.

With these simple measures we made sure the site is usable on a wide range of devices.


Stacked content and the removal of large image – About Google
Conclusion

It’s worth bearing in mind that there’s no simple solution to making sites accessible on mobile devices and narrow viewports. Liquid layouts are a great starting point, but some design compromises may need to be made. Media queries are a useful way of adding polish for many devices, but remember that 25% of visits are made from those desktop browsers that do not currently support the technique and there are some performance implications. And if you have a fancy widget on your site, it might work beautifully with a mouse, but not so great on a touch device where fine control is more difficult.

The key is to test early and test often. Any time spent surfing your own sites with a smartphone or tablet will prove invaluable. When you can’t test on real devices, use the Android SDK or iOS Simulator. Ask friends and colleagues to view your sites on their devices, and watch how they interact too.

Mobile browsers are a great source of new traffic, and learning how best to support them is an exciting new area of professional development.

Some more examples of responsive design at Google:



from web contents: A day in the life of webmaster support 2013

It was an ordinary Wednesday morning when I decided to film my day at the Googleplex...



A recap of the day's events

10AM Meeting with the webmaster support team
Our team shares a doc containing our current agenda and the previous meetings' agenda, minutes, and action items. In this meeting, we discussed:
  • Feedback from blog post on Duplicate content due to scrapers. Some webmasters suggested that we could improve our detection. In order to improve quality, it would help to get feedback with specific examples. Susan Moskwa, one of our Webmaster Trends Analysts based in Kirkland, Washington, volunteered to post a blog comment to solicit more information.

  • Recent and upcoming releases

  • JuneTune online chat agenda

  • Two recent spam techniques mentioned in the blogosphere. Brian White, who leads one of the Webspam-fighting groups at Google, explained that one technique is a new twist on an old idea; both are already handled.
11AM Meeting with Matt Cutts
Matt provided feedback on:
1PM Lunch with Shyam, a Crawl engineer, and Jason, an AdSense engineer

2PM Meeting with Wysz to review slides for Google Trifecta

5PM Little "drive-by" to catch Reid, Evan, Charlene, Jessica, and Wysz while they're monitoring the discussion group

7PM Dinner with Matthias

p.s. A huge thanks to Wysz for his film editing skillz.


from web contents: Now you can polish up Google’s translation of your website 2013

Webmaster level: All
(Cross-posted on the Google Translate Blog)

Since we first launched the Website Translator plugin back in September 2009, more than a million websites have added the plugin. While we’ve kept improving our machine translation system since then, we may not reach perfection until someone invents full-blown Artificial Intelligence. In other words, you’ll still sometimes run into translations we didn’t get quite right.

So today, we’re launching a new experimental feature (in beta) that lets you customize and improve the way the Website Translator translates your site. Once you add the customization meta tag to a webpage, visitors will see your customized translations whenever they translate the page, even when they use the translation feature in Chrome and Google Toolbar. They’ll also now be able to ‘suggest a better translation’ when they notice a translation that’s not quite right, and later you can accept and use that suggestion on your site.

To get started:
  1. Add the Website Translator plugin and customization meta tag to your website
  2. Then translate a page into one of 60+ languages using the Website Translator
To tweak a translation:
  1. Hover over a translated sentence to display the original text
  2. Click on ‘Contribute a better translation’
  3. And finally, click on a phrase to choose an automatic alternative translation -- or just double-click to edit the translation directly.
For example, if you’re translating your site into Spanish, and you want the word Cat to stay Cat rather than be translated to gato, you can tweak it as follows:


If you’re signed in, the corrections made on your site will go live right away -- the next time a visitor translates a page on your website, they’ll see your correction. If one of your visitors contributes a better translation, the suggestion will wait until you approve it. You can also invite other editors to make corrections and add translation glossary entries. You can learn more about these new features in the Help Center.

This new experimental feature is currently free of charge. We hope this feature, along with Translator Toolkit and the Translate API, can provide a low cost way to expand your reach globally and help to break down language barriers.


from web contents: Translate your website with Google: Expand your audience globally 2013

(This has been cross-posted from the Official Google Blog)

How long would it take to translate all the world's web content into 50 languages? Even if all of the translators in the world worked around the clock, with the current growth rate of content being created online and the sheer amount of data on the web, it would take hundreds of years to make even a small dent.

Today, we're happy to announce a new website translator gadget powered by Google Translate that enables you to make your site's content available in 51 languages. Now, when people visit your page, if their language (as determined by their browser settings) is different than the language of your page, they'll be prompted to automatically translate the page into their own language. If the visitor's language is the same as the language of your page, no translation banner will appear.


After clicking the Translate button, the automatic translations are shown directly on your page.


It's easy to install — all you have to do is cut and paste a short snippet into your webpage to increase the global reach of your blog or website.


Automatic translation is convenient and helps people get a quick gist of the page. However, it's not a perfect substitute for the art of professional translation. Today happens to be International Translation Day, and we'd like to take the opportunity to celebrate the contributions of translators all over the world. These translators play an essential role in enabling global communication, and with the rapid growth and ease of access to digital content, the need for them is greater than ever. We hope that professional translators, along with translation tools such as Google Translator Toolkit and this Translate gadget, will continue to help make the world's content more accessible to everyone.


from web contents: Mo’ better to also detect “mobile” user-agent 2013

Webmaster Level: Intermediate to Advanced

Here’s a trending User-Agent detection misstep we hope to help you prevent: While it seems completely reasonable to key off the string “android” in the User-Agent and then redirect users to your mobile version, there’s a small catch... Android tablets were just released! Similar to mobile, the User-Agent on Android tablets also contains “android,” yet tablet users usually prefer the full desktop version over the mobile equivalent. If your site matches “android” and then automatically redirects users, you may be forcing Android tablet users into a sub-optimal experience.

As a solution for mobile sites, our Android engineers recommend detecting “mobile” in the User-Agent string as well as “android.” Let’s run through a few examples.

With a User-Agent like this:
Mozilla/5.0 (Linux; U; Android 3.0; en-us; Xoom Build/HRI39) AppleWebKit/534.13 (KHTML, like Gecko) Version/4.0 Safari/534.13
since there is no “mobile” string, serve this user the desktop version (or a version customized for Android large-screen touch devices). The User-Agent tells us they’re coming from a large-screen device, the XOOM tablet.

On the other hand, this User-Agent:
Mozilla/5.0 (Linux; U; Android 2.2.1; en-us; Nexus One Build/FRG83) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1
contains “mobile” and “android,” so serve the web surfer on this Nexus One the mobile experience!

You’ll notice that these Android User-Agents have commonalities: both contain “Android,” but only the phone’s User-Agent also contains “Mobile.”


While you may still want to detect “android” in the User-Agent to implement Android-specific features, such as touch-screen optimizations, our main message is: if your mobile site depends on UA sniffing, please detect the strings “mobile” and “android,” rather than just “android,” in the User-Agent. This helps you properly serve both your mobile and tablet visitors.
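As a rough sketch of that check (written here as client-side JavaScript for brevity; in practice the same string test usually happens server-side before deciding which version to serve):

var ua = navigator.userAgent.toLowerCase();
var isAndroidPhone = ua.indexOf("android") > -1 && ua.indexOf("mobile") > -1;
if (isAndroidPhone) {
  // "android" and "mobile" both present: serve the mobile version
} else {
  // tablets and desktop browsers: serve the full desktop version
}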

For questions, please join our Android community in their developer forum.


from web contents: URL removal explained, Part I: URLs & directories 2013

Webmaster level: All

There's a lot of content on the Internet these days. At some point, something may turn up online that you would rather not have out there—anything from an inflammatory blog post you regret publishing, to confidential data that accidentally got exposed. In most cases, deleting or restricting access to this content will cause it to naturally drop out of search results after a while. However, if you urgently need to remove unwanted content that has gotten indexed by Google and you can't wait for it to naturally disappear, you can use our URL removal tool to expedite the removal of content from our search results as long as it meets certain criteria (which we'll discuss below).

We've got a series of blog posts lined up for you explaining how to successfully remove various types of content, and common mistakes to avoid. In this first post, I'm going to cover a few basic scenarios: removing a single URL, removing an entire directory or site, and reincluding removed content. I also strongly recommend our previous post on managing what information is available about you online.

Removing a single URL

In general, in order for your removal requests to be successful, the owner of the URL(s) in question—whether that's you, or someone else—must have indicated that it's okay to remove that content. For an individual URL, this can be indicated in any of three ways: by disallowing the URL in the site's robots.txt file, by adding a noindex meta tag to the page (see the example after the list below), or by having the URL return a 404 or 410 status code.
Before submitting a removal request, you can check whether the URL is correctly blocked:
  • robots.txt: You can check whether the URL is correctly disallowed using either the Fetch as Googlebot or Test robots.txt features in Webmaster Tools.
  • noindex meta tag: You can use Fetch as Googlebot to make sure the meta tag appears somewhere between the <head> and </head> tags. If you want to check a page you can't verify in Webmaster Tools, you can open the URL in a browser, go to View > Page source, and make sure you see the meta tag between the <head> and </head> tags.
  • 404 / 410 status code: You can use Fetch as Googlebot, or tools like Live HTTP Headers or web-sniffer.net to verify whether the URL is actually returning the correct code. Sometimes "deleted" pages may say "404" or "Not found" on the page, but actually return a 200 status code in the page header; so it's good to use a proper header-checking tool to double-check.
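For reference, the noindex directive mentioned above is a standard robots meta tag placed between the <head> and </head> tags; it looks like this:

<meta name="robots" content="noindex">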
If unwanted content has been removed from a page but the page hasn't been blocked in any of the above ways, you will not be able to completely remove that URL from our search results. This is most common when you don't own the site that's hosting that content. We cover what to do in this situation in Part II of our removals series.

If a URL meets one of the above criteria, you can remove it by going to http://www.google.com/webmasters/tools/removals, entering the URL that you want to remove, and selecting the "Webmaster has already blocked the page" option. Note that you should enter the URL where the content was hosted, not the URL of the Google search where it's appearing. For example, enter
   http://www.example.com/embarrassing-stuff.html
not
   http://www.google.com/search?q=embarrassing+stuff

This article has more details about making sure you're entering the proper URL. Remember that if you don't tell us the exact URL that's troubling you, we won't be able to remove the content you had in mind.

Removing an entire directory or site

In order for a directory or site-wide removal to be successful, the directory or site must be disallowed in the site's robots.txt file. For example, in order to remove the http://www.example.com/secret/ directory, your robots.txt file would need to include:
   User-agent: *
   Disallow: /secret/

It isn't enough for the root of the directory to return a 404 status code, because it's possible for a directory to return a 404 but still serve out files underneath it. Using robots.txt to block a directory (or an entire site) ensures that all the URLs under that directory (or site) are blocked as well. You can test whether a directory has been blocked correctly using either the Fetch as Googlebot or Test robots.txt features in Webmaster Tools.

Only verified owners of a site can request removal of an entire site or directory in Webmaster Tools. To request removal of a directory or site, click on the site in question, then go to Site configuration > Crawler access > Remove URL. If you enter the root of your site as the URL you want to remove, you'll be asked to confirm that you want to remove the entire site. If you enter a subdirectory, select the "Remove directory" option from the drop-down menu.

Reincluding content

You can cancel removal requests for any site you own at any time, including those submitted by other people. In order to do so, you must be a verified owner of this site in Webmaster Tools. Once you've verified ownership, you can go to Site configuration > Crawler access > Remove URL > Removed URLs (or > Made by others) and click "Cancel" next to any requests you wish to cancel.

Still have questions? Stay tuned for the rest of our series on removing content from Google's search results. If you can't wait, much has already been written about URL removals, and troubleshooting individual cases, in our Help Forum. If you still have questions after reading others' experiences, feel free to ask. Note that, in most cases, it's hard to give relevant advice about a particular removal without knowing the site or URL in question. We recommend sharing your URL by using a URL shortening service so that the URL you're concerned about doesn't get indexed as part of your post; some shortening services will even let you disable the shortcut later on, once your question has been resolved.

Edit: Read the rest of this series:
Part II: Removing & updating cached content
Part III: Removing content you don't own
Part IV: Tracking requests, what not to remove

Companion post: Managing what information is available about you online


from web contents: What are your SEO recommendations? 2013


You may have noticed that we recently rewrote our article on What is an SEO? Does Google recommend them? Previously, the article had focused on warning people about common SEO scams to look out for, but didn't mention many of the valuable services that a helpful SEO can provide.

The article now notes some of the benefits of search engine optimization, and provides some guidance to site owners who are considering hiring an SEO. We'd also like to get your perspective: how would you define SEO? What questions would you ask a prospective SEO? What advice would you give to an inexperienced webmaster who's considering whether to contract an SEO? We'd like to hear your thoughts and incorporate your feedback if there's important advice that we should add.


from web contents: BlogHer 2007: Building your audience 2013

Last week, I spoke at BlogHer Business about search engine optimization issues. I presented with Elise Bauer, who talked about the power of community in blogging. She made great points about the linking patterns of blogs. Link out to sites that would be relevant and useful for your readers. Comment on blogs that you like to continue the conversation and provide a link back to your blog. Write useful content that other bloggers will want to link to. Blogging connects readers and writers and creates real communities where valuable content can be exchanged. I talked more generally about search and a few things you might consider when developing your site and blog.

Why is search important for a business?
With search, your potential customers are telling you exactly what they are looking for. Search can be a powerful tool to help you deliver content that is relevant and useful and meets your customers' needs. For instance, do keyword research to find out the most common types of searches that are relevant to your brand. Does your audience most often search for "houses for sale" or "real estate"? Check your referrer logs to see what searches are bringing visitors to your site (you can find a list of the most common searches that return your site in the results from the Query stats page of webmaster tools). Does your site include valuable content for those searches? A blog is a great way to add this content. You can write unique, targeted articles that provide exactly what the searcher wanted.

How do search engines index sites?
The first step in the indexing process is discovery. A search engine has to know the pages exist. Search engines generally learn about pages from following links, and this process works great. If you have new pages, ensure relevant sites link to them, and provide links to them from within your site. For instance, if you have a blog for your business, you could provide a link from your main site to the latest blog post. You can also let search engines know about the pages of your site by submitting a Sitemap file. Google, Yahoo!, and Microsoft all support the Sitemaps protocol and if you have a blog, it couldn't be easier! Simply submit your blog's RSS feed. Each time you update your blog and your RSS feed is updated, the search engines can extract the URL of the latest post. This ensures search engines know about the updates right away.
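If you'd rather submit a hand-made Sitemap file instead of a feed, a minimal one following the Sitemaps protocol looks roughly like this (URL invented for illustration):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/latest-blog-post.html</loc>
  </url>
</urlset>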

Once a search engine knows about the pages, it has to be able to access those pages. You can use the crawl errors reports in webmaster tools to see if we're having any trouble crawling your site. These reports show you exactly what pages we couldn't crawl, when we tried to crawl them, and what the error was.

Once we access the pages, we extract the content. You want to make sure that what your page is about is represented by text. What does the page look like with Javascript, Flash, and images turned off in the browser? Use ALT text and descriptive filenames for images. For instance, if your company name is in a graphic, the ALT text should be the company name rather than "logo". Put text in HTML rather than in Flash or images. This not only helps search engines index your content, but also makes your site more accessible to visitors with mobile browsers, screen readers, or older browsers.
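To make that concrete with an invented file name, prefer markup like the first line over the second:

<img src="acme-web-design-logo.png" alt="Acme Web Design">
<img src="image1.gif" alt="logo">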

What is your site about?
Does each page have unique title and meta description tags that describe the content? Are the words that visitors search for represented in your content? Do a search of your pages for the queries you expect searchers to do most often and make sure that those words do indeed appear in your site. Which of the following tells visitors and search engines what your site is about?

Option 1
If you're plagued by the cliffs of insanity or the pits of despair, sign up for one of our online classes! Learn the meaning of the word inconceivable. Find out the secret to true love overcoming death. Become skilled in hiding your identity with only a mask. And once you graduate, you'll get a peanut. We mean it.

Option 2
See our class schedule here. We provide extensive instruction and valuable gifts upon graduation.

When you link to other pages in your site, ensure that the anchor text (the text used for the link) is descriptive of those pages. For instance, you might link to your products page with the text "Inigo Montoya's sword collection" or "Buttercup's dresses" rather than "products page" or the ever-popular "click here".
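In HTML terms, that's the difference between these two links (URLs invented for illustration):

<a href="/sword-collection/">Inigo Montoya's sword collection</a>
<a href="/products/">click here</a>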

Why are links important?
Links are important for a number of reasons. They are a key way to drive traffic to your site. Visitors of other sites can learn about your site through links to it. You can use links to other sites to provide valuable information to your visitors. And just as links let visitors know about your site, they also let search engines know about it. The anchor text describes what your site is about, and the number of relevant links to your pages is an indicator of how popular and useful those pages are. (You can find a list of the links to your site and the most common anchor text used in those links in webmaster tools.)

A blog is a great way to build links, because it enables you to create new content on a regular basis. The more useful content you have, the greater the chances someone else will find that content valuable to their readers and link to it. Several people at the BlogHer session asked about linking out to other sites. Won't this cause your readers to abandon your site? Won't this cause you to "leak out" your PageRank? No, and no. Readers will appreciate that you are letting them know about resources they might be interested in and will remember you as a valuable source of information (and keep coming back for more!). And PageRank isn't a set of scales, where incoming links are weighted against outgoing ones and cancel each other out. Links are content, just as your words are. You want your site to be as useful to your readers as possible, and providing relevant links is a way, just as writing content is, to do that.

The key is compelling content
Google's main goal is to provide the most useful and relevant search results possible. That's the key thing to keep in mind as you look at optimizing your site. How can you make your site the most useful and relevant result for the queries you care about? This won't just help you in the search results, which after all, are just the means to the end. What you are really interested in is keeping your visitors happy and coming back. And creating compelling and useful content is the best way to do that.

from web contents: Advanced Website Diagnostics with Google Webmaster Tools 2013

Running a website can be complicated—so we've provided Google Webmaster Tools to help webmasters to recognize potential issues before they become real problems. Some of the issues that you can spot there are relatively small (such as having duplicate titles and descriptions), other issues can be bigger (such as your website not being reachable). While Google Webmaster Tools can't tell you exactly what you need to change, it can help you to recognize that there could be a problem that needs to be addressed.

Let's take a look at a few examples that we ran across in the Google Webmaster Help Groups:

Is your server treating Googlebot like a normal visitor?

While Googlebot tries to act like a normal user, some servers may get confused and react in strange ways. For example, although your server may work flawlessly most of the time, some servers running IIS may react with a server error (or some other action that is tied to a server error occurring) when visited by a user with Googlebot's user-agent. In the Webmaster Help Group, we've seen IIS servers return result code 500 (Server error) and result code 404 (File not found) in the "Web crawl" diagnostics section, as well as result code 302 when submitting Sitemap files. If your server is redirecting to an error page, you should make sure that we can crawl the error page and that it returns the proper result code. Once you've done that, we'll be able to show you these errors in Webmaster Tools as well. For more information about this issue and possible resolutions, please see http://todotnet.com/archive/0001/01/01/7472.aspx and http://www.kowitz.net/archive/2006/12/11/asp.net-2.0-mozilla-browser-detection-hole.aspx.

If your website is hosted on a Microsoft IIS server, also keep in mind that URLs are case-sensitive by definition (and that's how we treat them). This includes URLs in the robots.txt file, which is something to be careful with if your server treats URLs in a case-insensitive way. For example, "disallow: /paris" will block /paris but not /Paris.
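
To see the case-sensitivity in action, here's a small sketch using Python's built-in robots.txt parser with the example rule from above (the host name is just a placeholder):

```python
# Small illustration of case-sensitive robots.txt matching, using the
# example rule from the paragraph above.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Disallow: /paris",
])
parser.modified()  # mark the rules as loaded so can_fetch() will consult them

# The lowercase URL matches the rule and is blocked ...
print(parser.can_fetch("*", "http://www.example.com/paris"))  # False
# ... while the capitalized URL does not match and stays crawlable.
print(parser.can_fetch("*", "http://www.example.com/Paris"))  # True
```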

Does your website have systematically broken links somewhere?

Modern content management systems (CMS) can make it easy to create issues that affect a large number of pages. Sometimes these issues are straightforward and visible when you view the pages; sometimes they're a bit harder to spot on your own. If an issue like this creates a large number of broken links, they will generally show up in the "Web crawl" diagnostics section of your Webmaster Tools account (provided those broken URLs return a proper 404 result code). In one recent case, a site had a small encoding issue in its RSS feed, resulting in over 60,000 bad URLs being found and listed in its Webmaster Tools account. As you can imagine, we would have preferred to spend that time crawling content instead of 404 errors :).
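
If you suspect your CMS is generating broken links, it's also worth spot-checking that those URLs really do return a 404 result code, and not a 200 with an error page (a "soft 404") that hides the problem. A minimal sketch, with placeholder URLs standing in for your own:

```python
# Spot-check that URLs which shouldn't exist really return a 404 result code,
# rather than a 200 with an error page ("soft 404"). The URLs are placeholders.
import urllib.error
import urllib.request

BAD_URLS = [
    "http://www.example.com/this-page-should-not-exist",
    "http://www.example.com/feed/broken%20link",
]

for url in BAD_URLS:
    try:
        response = urllib.request.urlopen(url, timeout=10)
        # Getting a normal response for a bogus URL means the server answered
        # with a 2xx result code -- a soft 404 that hides broken links.
        print(f"{url}: {response.getcode()} (soft 404? worth investigating)")
    except urllib.error.HTTPError as err:
        note = "OK" if err.code == 404 else "unexpected result code"
        print(f"{url}: {err.code} ({note})")
```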

Is your website redirecting some users elsewhere?

For some websites, it can make sense to concentrate on a group of users in a certain geographic location. One method of doing that can be to redirect users located elsewhere to a different page. However, keep in mind that Googlebot might not be crawling from within your target area, so it might be redirected as well. This could mean that Googlebot will not be able to access your home page. If that happens, it's likely that Webmaster Tools will run into problems when it tries to confirm the verification code on your site, resulting in your site becoming unverified. This is not the only reason for a site becoming unverified, but if you notice this on a regular basis, it would be a good idea to investigate. On this subject, always make sure that Googlebot is treated the same way as other users from that location, otherwise that might be seen as cloaking.
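
If you use this kind of geographic redirect, it helps to look at the raw response your home page sends before the redirect is followed. The sketch below uses a placeholder host, and Googlebot's public user-agent string purely for illustration; keep in mind that running it from your own machine only shows what users in your location get, while Googlebot may be crawling from somewhere else entirely:

```python
# Look at the raw response for the home page without following redirects.
# The host and the Googlebot user-agent string are used only for illustration,
# and a request from your own machine only reflects your own location.
import http.client

HOST = "www.example.com"
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

conn = http.client.HTTPConnection(HOST, timeout=10)
conn.request("GET", "/", headers={"User-Agent": GOOGLEBOT_UA})
response = conn.getresponse()  # http.client does not follow redirects on its own

print("Result code:", response.status)
if 300 <= response.status < 400:
    # If the home page redirects crawlers to a country-specific page, the
    # verification file and the page itself may effectively become unreachable.
    print("Redirects to:", response.getheader("Location"))
conn.close()
```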

Is your server unreachable when we try to crawl?

It can happen to the best of sites: servers go down and firewalls can be overly protective. If that happens when Googlebot tries to access your site, we won't be able to crawl the website and you might not even know that we tried. Luckily, we keep track of these issues, and you can spot "Network unreachable" and "robots.txt unreachable" errors in your Webmaster Tools account when we can't reach your site.

Has your website been hacked?

Hackers sometimes add strange, off-topic hidden content and links to questionable pages. If it's hidden, you might not even notice it right away; but nonetheless, it can be a big problem. While the Message Center may be able to give you a warning about some kinds of hidden text, it's best if you also keep an eye out yourself. Google Webmaster Tools can show you keywords from your pages in the "What Googlebot sees" section, so you can often spot a hack there. If you see totally irrelevant keywords, it would be a good idea to investigate what's going on. You might also try setting up Google Alerts or doing queries such as [site:example.com spammy words], where "spammy words" might be words like porn, viagra, tramadol, sex or other words that your site wouldn't normally show. If you find that your site actually was hacked, I'd recommend going through our blog post about things to do after being hacked.
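
In addition to Alerts and [site:] queries, you can do a rough check yourself by fetching a few of your pages and scanning the raw HTML, including anything hidden by CSS, for words that shouldn't be there. A minimal sketch; the URLs and the word list are only examples:

```python
# A rough self-check for injected spam: fetch a few of your own pages and scan
# the raw HTML (including anything hidden by CSS) for words that shouldn't be
# there. The URLs and the word list below are only examples.
import urllib.request

PAGES = ["http://www.example.com/", "http://www.example.com/blog/"]
SPAMMY_WORDS = ["viagra", "tramadol", "porn"]  # adjust for your own site

for url in PAGES:
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace").lower()
    found = [word for word in SPAMMY_WORDS if word in html]
    if found:
        print(f"{url}: suspicious words found: {', '.join(found)}")
    else:
        print(f"{url}: nothing suspicious found")
```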

There are a lot of issues that can be recognized with Webmaster Tools; these are just some of the more common ones that we've seen lately. Because it can be really difficult to recognize some of these problems, it's a great idea to check your Webmaster Tools account to make sure that you catch any issues before they become real problems. If you spot something that you absolutely can't pin down, why not post in the discussion group and ask the experts there for help?

Have you checked your site lately?


from web contents: Google Webmaster Help Forums in more languages 2013

salam every one, this is a topic from google web master centrale blog:
Traditionally, when we launch a new communication channel, we also give the new guides a chance to introduce themselves. We did so when we opened webmaster help communities for European webmasters almost two years ago, and again more than a year ago, when we were able to expand and add groups in three more languages. Last December we were very happy to announce the re-launch of two of our Help Forums with a new and cool look and feel.

Today, we're happy to announce that we're continuing to expand the global dialogue with webmasters by opening an Arabic and a Czech/Slovak Webmaster Help Forum. Furthermore, we would like to highlight the support we offer in Chinese, Japanese, and Korean. While we've offered support to Chinese webmasters for a little more than a year, the Japanese and Korean forums are only a few weeks old. Keeping with tradition, the guides monitoring our new forums would like to introduce themselves to the global webmaster family:

Arabic Webmaster Help Forum
مرحبا! My name is Adel and I'll be monitoring the Arabic Webmaster Help Forum. I'm originally from Beirut, Lebanon. After finishing my computer science studies, I joined Google some 18 months ago.

Besides working on search quality in Arabic and building a community on our forum, I enjoy traveling and listening to really loud heavy metal music; sometimes I get to do both at the same time! ;-)

I'm looking forward to lots of questions about Arabic Google Search and, of course, about ranking and indexing issues on your sites. I hope I'll see you there soon!
- Adel

Czech/Slovak Webmaster Help Forum

Zdravím! I am Marcel, the Google Guide on the Czech/Slovak Webmaster Help Forum. I am originally from Slovakia. After graduating in New Media and Industrial Design, it took me some time and traveling around the globe before moving to Dublin and eventually joining Google some 3 years ago.

Ever since, I've been working on different teams. I was lucky to be part of the AdSense team, where I participated in launching AdSense for Content for Czech and Slovak. Since my move to Search Quality, I've enjoyed working on improving the quality of our natural search results in Czech, Slovak, and Polish.

Besides my work I have a few more passions, such as listening to live music in Irish pubs, challenging my colleagues in occasional Soulcalibur skirmishes on Playstation and testing burger places all over the world :-) If you want to discuss any of these topics or maybe something about your sites, please join the community. I am looking forward to meeting you there :-)
- Marcel

Chinese Webmaster Help Forum

你好! Hi from the Chinese Webmaster Help Forum team! The Chinese Webmaster Help Forum has received great support from webmasters since its launch in March 2008. In March 2009, the Chinese Webmaster Help Forum moved to a new system with many more user-friendly features for better information sharing. It has become a good platform for webmasters to share their knowledge of Google search and Webmaster Tools and to communicate with Google.

The Chinese Forum now has 6 Google Guides: Xiang (降龙十巴掌), Eric (趙錢孫李), Marina (小馬過河), Chris (城镇), Hyson (草帽路飞), and Fa (法人戴表). We are from many different provinces of mainland China. When not spending time in the forum, we enjoy playing ping-pong and foosball in the office. A few of us are huge video game fans. You may learn more about us when you participate in the forum :)

A big thank you to everyone who has taken part in forum discussions! We hope to see both familiar faces and newcomers join in the Chinese Webmaster Help Forum!
- Xiang (降龙十巴掌), Eric (趙錢孫李), Marina (小馬過河), Chris (城镇), Hyson (草帽路飞), and Fa (法人戴表)

Japanese Webmaster Help Forum

こんにちは! Hello from the Japanese Webmaster Help Forum team! Our names are Nao ( なお ), Kaede ( 楓 ), Haru ( ハル ), and Kyotaro. We are the four guides working in Google Search Quality for Japanese. We just launched our forum on March 6th.

All of us were born in Japan and grew up here. Nao has also lived in Greece, the Netherlands, and New York. Haru is from the west side of Japan, which is known for its talkative culture and traditional Japanese comedy. Maybe you will read Haru's unique communication on our forum :)

As for our interests, we love eating and drinking! Between posts on the forum, we really enjoy Google's excellent lunches and sweets. After work, of course, we sometimes go out for a drink with our team members :) Kaede knows all the nice bars in Tokyo.

Nao and Kyotaro love Sumo wrestling. We've watched two tournaments this year with Googlers from other locations. Haru, of course, loves watching comedies!

We are really excited and happy to see many users joining our forum and sharing tips with each other. Looking forward to seeing you there!
- Nao ( なお ), Kaede ( 楓 ), Haru ( ハル ), and Kyotaro

Korean Webmaster Help Forum

안녕하세요! Hello everyone, my name is Joowon and I work in Google Search Quality for Korean. I was born in Germany and lived in Korea for a few years before moving to Hawaii, California and New York to attend high school and college. After all that traveling, I'm only fluent in Korean and English, with a bit of proficiency in Japanese. Some of the interests I've developed over the years are design, wine, cooking, yoga, and sustainability issues.

Currently I'm back in Seoul and enjoying the dynamic atmosphere here, with lots of interesting people and great food. The Korean Webmaster Help Forum was launched only a few weeks ago, and I'm very much looking forward to talking to all of you. See you in the forum!
- Joowon

Hello world! ;) I am Andrew and I am part of the Search Quality team in Seoul. I grew up in a port city in the southern part of Korea. Ironically, I don't eat seafood because it looks scary to me :( Many of my friends and colleagues love to make jokes about that, but I still don't eat any seafood. Playing the drums, traveling, and photography are my main interests. Currently I'm the drummer of "Spring Fingers", Google Korea's first band, and we'll have our first concert at the end of April!

I love playing around with web technologies/APIs and find it very exciting to exchange information and ideas on the web. The Korean Webmaster Help Forum was recently launched and I hope to see you there!
- Andrew

If you're curious about our Webmaster Help Forums in other languages, please feel free to peek in. Here's a list of our currently monitored Webmaster Help Forums: Arabic, Chinese, Czech/Slovak, Dutch, Danish, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Turkish.
