News and Tutorials from Votre Codeur | SEO | Website Creation | Software Development

from web contents: Becoming Social 2013


Update: The described feature is no longer available.


Wondering how to make your site more social? We'd like to make it easier for you, which is why earlier tonight at Campfire One at the Googleplex, we announced a preview release of Google Friend Connect.

Google Friend Connect is a service that helps you grow traffic by enabling you to easily provide social features for your visitors. Just add a snippet of code, and, voilà, you can add social functionality -- picking and choosing from built-in functionality like user registration, invitations, members gallery, message posting, and reviews, as well as third-party applications built by the OpenSocial developer community.

Social features can generate buzz and traffic to your pages. Using Google Friend Connect on your site, your visitors will be able to see, invite, and interact with their friends from existing sources of friends, including Facebook, Google Talk, hi5, LinkedIn, orkut, Plaxo, and others. And you'll be able to more actively engage your visitors by adding social features from a growing gallery of social applications.



We've heard from many site owners that even though their sites aren't social networks, they'd still like them to be social. Whether your site sells car parts or dishes out great guacamole recipes -- like the sample site in the YouTube video above -- you can visit http://www.google.com/friendconnect/ or read more on the Official Google Blog to learn about Google Friend Connect. Right now, the preview is available for only a few sites, but soon we'll give the green light to even more. Sign up now to be on the wait list.

seo Making Money on the Internet Is Even Easier with Google Analytics 2013






Previously, in my article on making money on the internet more easily with SEO, I wrote that visitors who come from Google are easier to convert into customers or buyers. Why? Because from the moment they open Google, they are already searching for the product we offer. It then comes down to how well you can convince them to buy from you.


My younger sibling has a blog

from web contents: Go Daddy and Google offer easy access to Webmaster Tools 2013


Welcome Go Daddy webmasters to the Google Webmaster Tools family! Today, we're announcing that Go Daddy, the world's largest hostname provider in the web hosting space, is working with us as a pilot partner so that their customers can more easily access Google Webmaster Tools. Go Daddy is a great partner, and we hope to educate more webmasters on how to make their site more search engine-friendly.

Go Daddy users will now see our link right in their hosting control center, and can launch Google Webmaster Tools directly from their hosting account. And Go Daddy makes the Google Webmaster Tools account creation process faster by adding the site, verifying the site, and submitting Sitemaps on behalf of hosting customers. Our tools show users how Google views their site, give useful stats like queries and links, diagnose problems, and share information with us in order to improve their site's visibility in search results.

As a continuation of these efforts, we look forward to working with other web hosting companies to add Google Webmaster Tools to their products soon.

And in case you're wondering, Webmaster Tools will stay 100% the same for current users. If you have questions or suggestions about our partnership with Go Daddy, let us know in our Webmaster community discussion groups.

seo Google Places Helps a Site's Authority 2013

Google Places for business is very helpful in lifting the authority of a website or blog. With Google's algorithm updates changing so often, there is no guarantee that yesterday's SEO tricks are still effective today, just as today's SEO tricks may not work tomorrow. So writing your own original work that benefits readers, even if the writing is plain, is what becomes the key to

from web contents: Come see us at SES London and hear tips on successful site architecture 2013

If you're planning to be at Search Engine Strategies London February 13-15, stop by and say hi to one of the many Googlers who will be there. I'll be speaking on Wednesday at the Successful Site Architecture panel and thought I'd offer up some tips for building crawlable sites for those who can't attend.

Make sure visitors and search engines can access the content
  • Check the Crawl errors section of webmaster tools for any pages Googlebot couldn't access due to server or other errors. If Googlebot can't access the pages, they won't be indexed and visitors likely can't access them either.
  • Make sure your robots.txt file doesn't accidentally block search engines from content you want indexed. You can see a list of the files Googlebot was blocked from crawling in webmaster tools. You can also use our robots.txt analysis tool to make sure you're blocking and allowing the files you intend (a quick standalone check is also sketched just after this list).
  • Check the Googlebot activity reports to see how long it takes to download a page of your site to make sure you don't have any network slowness issues.
  • If pages of your site require a login and you want the content from those pages indexed, ensure you include a substantial amount of indexable content on pages that aren't behind the login. For instance, you can put several content-rich paragraphs of an article outside the login area, with a login link that leads to the rest of the article.
  • How accessible is your site? How does it look in mobile browsers and screen readers? It's well worth testing your site under these conditions and ensuring that visitors can access the content of the site using any of these mechanisms.
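
If you want to spot-check robots.txt rules outside of webmaster tools, here is a minimal sketch using Python's standard library; the site and paths are placeholders for your own URLs, and the robots.txt analysis tool in webmaster tools remains the authoritative check.

import urllib.robotparser

SITE = "http://www.example.com"   # placeholder: your own site
PATHS = ["/", "/products/widgets.html", "/private/report.html"]   # placeholder URLs to test

parser = urllib.robotparser.RobotFileParser()
parser.set_url(SITE + "/robots.txt")
parser.read()   # fetches and parses the live robots.txt file

for path in PATHS:
    allowed = parser.can_fetch("Googlebot", SITE + path)
    print(path, "->", "allowed" if allowed else "blocked", "for Googlebot")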

Make sure your content is viewable

  • Check out your site in a text-only browser or view it in a browser with images and Javascript turned off. Can you still see all of the text and navigation?
  • Ensure the important text and navigation in your site is in HTML, not in images, and make sure all images have ALT text that describes them (a rough script for this check follows this list).
  • If you use Flash, use it only when needed. Particularly, don't put all of the text from your site in Flash. An ideal Flash-based site has pages with HTML text and Flash accents. If you use Flash for your home page, make sure that the navigation into the site is in HTML.
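
As a rough stand-in for a text-only browser, the sketch below (standard library only, with a placeholder URL) dumps the text a parser can see and counts images with missing or empty ALT text; it is only a first pass, and a manual review is still worthwhile.

from html.parser import HTMLParser
from urllib.request import urlopen

class TextAndAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.visible_text = []
        self.images_missing_alt = 0
        self._skip = 0  # depth inside <script>/<style> blocks

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        if tag == "img" and not dict(attrs).get("alt"):
            self.images_missing_alt += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.visible_text.append(data.strip())

html = urlopen("http://www.example.com/").read().decode("utf-8", "ignore")  # placeholder URL
checker = TextAndAltChecker()
checker.feed(html)
print("images with missing or empty ALT text:", checker.images_missing_alt)
print("visible text sample:", " ".join(checker.visible_text)[:300])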

Be descriptive

  • Make sure each page has a unique title tag and meta description tag that aptly describe the page.
  • Make sure the important elements of your pages (for instance, your company name and the main topic of the page) are in HTML text.
  • Make sure the words that searchers will use to look for you are on the page.

Keep the site crawlable


  • If possible, avoid frames. Frame-based sites don't allow for unique URLs for each page, which makes indexing each page separately problematic.
  • Ensure the server returns a 404 status code for pages that aren't found. Some servers are configured to return a 200 status code, particularly with custom error messages, and this can result in search engines spending time crawling and indexing non-existent pages rather than the valid pages of the site (a quick check is sketched after this list).
  • Avoid infinite crawls. For instance, if your site has an infinite calendar, add a nofollow attribute to links to dynamically-created future calendar pages. Each search engine may interpret the nofollow attribute differently, so check with the help documentation for each. Alternatively, you could use the nofollow meta tag to ensure that search engine spiders don't crawl any outgoing links on a page, or use robots.txt to prevent search engines from crawling URLs that can lead to infinite loops.
  • If your site uses session IDs or cookies, ensure those are not required for crawling.
  • If your site is dynamic, avoid using excessive parameters and use friendly URLs when you can. Some content management systems enable you to rewrite URLs to friendly versions.
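
As a quick way to test the 404 behavior mentioned above, the sketch below requests a URL that should not exist (a made-up path on a placeholder domain) and reports the status code your server actually returns.

import urllib.error
import urllib.request

test_url = "http://www.example.com/no-such-page-xyz123"   # placeholder: a URL that should not exist
try:
    response = urllib.request.urlopen(test_url)
    # Reaching this point means the server answered 2xx for a missing page: a likely "soft 404".
    print("Warning: HTTP", response.getcode(), "returned for a non-existent URL")
except urllib.error.HTTPError as e:
    print("Server returned HTTP", e.code, "- good" if e.code == 404 else "- check your error handling")
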
See our tips for creating a Google-friendly site and webmaster guidelines for more information on designing your site for maximum crawlability and usability.

If you will be at SES London, I'd love for you to come by and hear more. And check out the other Googlers' sessions too:

Tuesday, February 13th

Auditing Paid Listings & Clickfraud Issues 10:45 - 12:00
Shuman Ghosemajumder, Business Product Manager for Trust & Safety

Wednesday, February 14th

A Keynote Conversation 9:00 - 9:45
Matt Cutts, Software Engineer

Successful Site Architecture 10:30 - 11:45
Vanessa Fox, Product Manager, Webmaster Central

Google University 12:45 - 1:45

Converting Visitors into Buyers 2:45 - 4:00
Brian Clifton, Head of Web Analytics, Google Europe

Search Advertising Forum 4:30 - 5:45
David Thacker, Senior Product Manager

Thursday, February 15th

Meet the Crawlers 9:00 - 10:15
Dan Crow, Product Manager

Web Analytics and Measuring Success Overview 1:15 - 2:30
Brian Clifton, Head of Web Analytics, Google Europe

Search Advertising Clinic 1:15 - 2:30
Will Ashton, Retail Account Strategist

Site Clinic 3:00 - 4:15
Sandeepan Banerjee, Sr. Product Manager, Indexing


      from web contents: 7 must-read Webmaster Central blog posts 2013


      Our search quality and Webmaster Central teams love helping webmasters solve problems. But since we can't be in all places at all times answering all questions, we also try hard to show you how to help yourself. We put a lot of work into providing documentation and blog posts to answer your questions and guide you through the data and tools we provide, and we're constantly looking for ways to improve the visibility of that information.

      While I always encourage people to search our Help Center and blog for answers, there are a few articles in particular to which I'm constantly referring people. Some are recent and some are buried in years' worth of archives, but each is worth a read:

      1. Googlebot can't access my website
        Web hosts seem to be getting more aggressive about blocking spam bots and aggressive crawlers from their servers, which is generally a good thing; however, sometimes they also block Googlebot without knowing it. If you or your host are "allowing" Googlebot through by whitelisting Googlebot IP addresses, you may still be blocking some of our IPs without knowing it (since our full IP list isn't public, for reasons explained in the post). In order to be sure you're allowing Googlebot access to your site, use the method in this blog post to verify whether a crawler is Googlebot (a sketch of that check appears after this list).
      2. URL blocked by robots.txt
        Sometimes the web crawl section of Webmaster Tools reports a URL as "blocked by robots.txt", but your robots.txt file doesn't seem to block crawling of that URL. Check out this list of troubleshooting tips, especially the part about redirects. This thread from our Help Group also explains why you may see discrepancies between our web crawl error reports and our robots.txt analysis tool.
      3. Why was my URL removal request denied?
        (Okay, I'm cheating a little: this one is a Help Center article and not a blog post.) In order to remove a URL from Google search results you need to first put something in place that will prevent Googlebot from simply picking that URL up again the next time it crawls your site. This may be a 404 (or 410) status code, a noindex meta tag, or a robots.txt file, depending on what type of removal request you're submitting. Follow the directions in this article and you should be good to go.
      4. Flash best practices
        Flash continues to be a hot topic for webmasters interested in making visually complex content accessible to search engines. In this post Bergy, our resident Flash expert, outlines best practices for working with Flash.
      5. The supplemental index
        The "supplemental index" was a big topic of conversation in 2007, and it seems some webmasters are still worried about it. Instead of worrying, point your browser to this post on how we now search our entire index for every query.
      6. Duplicate content
        Duplicate content—another perennial concern of webmasters. This post talks in detail about duplicate content caused by URL parameters, and also references Adam's previous post on deftly dealing with duplicate content, which gives lots of good suggestions on how to avoid or mitigate problems caused by duplicate content.
      7. Sitemaps FAQs
        This post answers the most frequent questions we get about Sitemaps. And I'm not just saying it's great because I posted it. :-)
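
      For item 1 above, here is a sketch of the reverse-DNS-plus-forward-DNS check that the post describes, written with Python's standard library; the IP address is just an example, so substitute one from your own access logs.

import socket

def is_googlebot(ip):
    # Step 1: reverse DNS lookup on the requesting IP.
    try:
        host = socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return False
    # Step 2: the hostname must be in googlebot.com or google.com.
    if not (host.endswith(".googlebot.com") or host.endswith(".google.com")):
        return False
    # Step 3: forward DNS on that hostname must lead back to the original IP.
    try:
        return ip in socket.gethostbyname_ex(host)[2]
    except socket.gaierror:
        return False

print(is_googlebot("66.249.66.1"))   # example IP; use one from your own logs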

      Sometimes, knowing how to find existing information is the biggest barrier to getting a question answered. So try searching our blog, Help Center and Help Group next time you have a question, and please let us know if you can't find a piece of information that you think should be there!


      from web contents: Spring time design refresh! 2013

      We've been listening to you at conferences, user studies, forums and blogs and we decided to start from the ground up with a brand new Webmaster Tools design! It was much needed, and the end result is beautiful in our eyes:



      Highlights
      • One-stop Dashboard: We redesigned our dashboard to bring together data you view regularly: Links to your site, Top search queries, Sitemaps, and Crawl errors.
      • More top search queries: You now have up to 100 queries to track for impressions and clickthrough! In addition, we've substantially improved data quality in this area.
      • Sitemap tracking for multiple users: In the past, you were unable to monitor Sitemaps submitted by other users or via mechanisms like robots.txt. Now you can track the status of Sitemaps submitted by other users in addition to yourself.
      • Message subscription: To make sure you never miss an important notification, you can subscribe to Message Center notifications via e-mail. Stay up-to-date without having to log in as frequently.
      • Improved menu and navigation: We reorganized our features into a more logical grouping, making them easier to find and access. More details on changes.
      • Smarter help: Every page displays links to relevant Help Center articles and by the way, we've streamlined our Help Center and made it easier to use.
      • Sites must be verified to access detailed functionality: Since we're providing so much more data, going forward your site must be verified before you can access any features in Webmaster Tools, including features such as Sitemaps, Test Robots.txt and Generate Robots.txt which were previously available for unverified sites. If you submit Sitemaps for unverified sites, you can continue to do so using Sitemap pings or by including the Sitemap location in your robots.txt file (both methods are sketched just after this list).
      • Removal of the enhanced Image Search option: We're always iterating and improving on our services, both by adding new product attributes and removing old ones. With this release, the enhanced Image Search option is no longer a component of Webmaster Tools. The Google Image Labeler will continue to select images from sites regardless of this setting.
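
      For reference, here is a sketch of the two submission methods mentioned in the "Sites must be verified" note above, using a placeholder Sitemap URL; the ping endpoint shown is the one Google has documented for Sitemap pings, so double-check the current documentation before relying on it.

from urllib.parse import quote
from urllib.request import urlopen

sitemap_url = "http://www.example.com/sitemap.xml"   # placeholder

# Option 1: an HTTP "ping" telling Google where the Sitemap lives.
ping = "https://www.google.com/ping?sitemap=" + quote(sitemap_url, safe="")
print("Ping response:", urlopen(ping).getcode())   # 200 means the ping was received

# Option 2: reference the Sitemap from robots.txt by adding this line to it:
#   Sitemap: http://www.example.com/sitemap.xml
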
      Go ahead, get started

      The new user interface is available at http://www.google.com/webmasters/tools/new. The old user interface will continue to be available for a couple of weeks to give you guys time to adjust and provide feedback.

      We did our best to get the product localized; however, you may notice a few missing translations in some areas of the user interface. We apologize for the inconvenience and when we switch everyone over in a couple of weeks, we'll fully support 40 languages. The one exception will be our Help Center, which will be available in 21 languages going forward.

      We're really excited about this launch, and hope you are as well. Tell us what you think and stay tuned for more updates!


      seo Registration Open for "Powered By YouTube" 2013


      The YouTube APIs team had so much fun at Google I/O that we thought it was about time to have our own event at our office in San Bruno. (Check out the announcement on the YouTube API Blog for a video of the office.) This will be all YouTube APIs, all the time! The agenda is still being finalized, but we'll have "bigger picture" sessions as well as nitty gritty hacking time to get started and learn best practices. You'll have time to mingle with a diverse set of developers from different companies and the YouTube engineers and product managers.

      If you're interested, here are all the details:

      Thursday, July 10, 2008
      10:30am - 5:00pm (tentative)
      YouTube HQ @ 901 Cherry Ave. San Bruno, CA 94066
      Cost: Free

      Please reserve your spot and register early at Powered By YouTube.

      Already have questions, comments, or session suggestions? Let us know in the forum. Hope to see you here next month!

      from web contents: Fetch as Googlebot and Malware details -- now in Webmaster Tools Labs! 2013

      The Webmaster Tools team is lucky to have passionate users who provide us with a great set of feature ideas. Going forward, we'll be launching some features under the "Labs" label so we can quickly transition from concept to production, and hear your feedback ASAP. With Labs releases, you have the opportunity to play with features and have your feedback heard much earlier in the development lifecycle. On the flip side, since these features are available early in the release cycle they're not as robust, and may break at times.

      Today we're launching two cool features:
      • Malware details
      • Fetch as Googlebot
      Malware details (developed by Lucas Ballard)

      Before today, you may have been relying on manual testing, our safe browsing API, and malware notifications to determine which pages on your site may be distributing malware. Sometimes finding the malicious code is extremely difficult, even when you do know which pages it was found on. Today we are happy to announce that we'll be providing snippets of code that exist on some of those pages that we consider to be malicious. We hope this additional information enables you to eliminate the malware on your site very quickly, and reduces the number of iterations many webmasters go through during the review process.

      More information on this cool feature is available at our Online Security Blog.
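
      If you want to check a URL programmatically, the Safe Browsing API mentioned above can be queried directly. The sketch below targets the v4 Lookup API, which postdates this post, and assumes you have your own API key; treat the request shape as illustrative and verify it against the current Safe Browsing documentation.

import json
from urllib.request import Request, urlopen

API_KEY = "YOUR_API_KEY"   # placeholder
body = {
    "client": {"clientId": "example-site-monitor", "clientVersion": "1.0"},
    "threatInfo": {
        "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
        "platformTypes": ["ANY_PLATFORM"],
        "threatEntryTypes": ["URL"],
        "threatEntries": [{"url": "http://www.example.com/"}],   # placeholder URL to check
    },
}
req = Request(
    "https://safebrowsing.googleapis.com/v4/threatMatches:find?key=" + API_KEY,
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
result = json.load(urlopen(req))
print(result.get("matches", "no matches: URL not currently flagged"))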


      Fetch as Googlebot (developed by Javier Tordable)

      "What does Googlebot see when it accesses my page?" is a common question webmasters ask us on our forums and at conferences. Our keywords and HTML suggestions features help you understand the content we're extracting from your site, and any issues we may be running into at crawl and indexing time. However, we realized it was important to provide the ability for users to submit pages on their site and get real-time feedback on what Googlebot sees. This feature will help users a great deal when they re-implement their site with a new technology stack, find out that some of their pages have been hacked, or want to understand why they're not ranking for specific keywords.


      We're pretty excited about this launch, and hope you are too. Let us know what you think!


      from web contents: New Crawl Error alerts from Webmaster Tools 2013

      Webmaster level: All

      Today we’re rolling out Crawl Error alerts to help keep you informed of the state of your site.

      Since Googlebot regularly visits your site, we know when your site exhibits connectivity issues or a sudden spike in pages returning HTTP error response codes (e.g. 404 Not Found, 403 Forbidden, 503 Service Unavailable). If your site is timing out or exhibiting systemic errors when accessed by Googlebot, other visitors to your site might be having the same problem!

      When we see such errors, we may send alerts (in the form of messages in the Webmaster Tools Message Center) to let you know what we’ve detected. Hopefully, given this increased communication, you can fix potential issues that may otherwise impact your site’s visitors or your site’s presence in search.

      As we discussed in our blog post announcing the new Webmaster Tools Crawl Errors feature, we divide crawl errors into two types: Site Errors and URL Errors.

      Site Error alerts for major site-wide problems

      Site Errors represent an inability to connect to your site, and represent systemic issues rather than problems with specific pages. Here are some issues that might cause Site Errors:
      • Your DNS server is down or misconfigured.
      • Your web server itself is firewalled off.
      • Your web server is refusing connections from Googlebot.
      • Your web server is overloaded, or down.
      • Your site’s robots.txt is inaccessible.
      These errors are global to a site, and in theory should never occur for a well-operating site (and don’t occur for the large majority of the sites we crawl). If Googlebot detects any appreciable number of these Site Errors, regardless of the size of your site, we’ll try to notify you in the form of a message in the Message Center:

      Example of a Site Error alert
      The alert provides the number of errors Googlebot encountered crawling your site, the overall crawl error connection rate for your site, a link to the appropriate section of Webmaster Tools to examine the data more closely, and suggestions as to how to fix the problem.

      If your site shows a 100% error rate in one of these categories, it likely means that your site is either down or misconfigured in some way. If your site has an error rate less than 100% in any of these categories, it could just indicate a transient condition, but it could also mean that your site is overloaded or improperly configured. You may want to investigate these issues further, or ask about them on our forum.

      We may alert you even if the overall error rate is very low — in our experience a well configured site shouldn’t have any errors in these categories.

      URL Error anomaly alerts for potentially less critical issues

      Whereas any appreciable number of Site Errors could indicate that your site is misconfigured, overloaded, or simply out of service, URL Errors (pages that return a non-200 HTTP code, or incorrectly return an HTTP 200 code in the case of soft 404 errors) may occur on any well-configured site. Because different sites have different numbers of pages and different numbers of external links, a count of errors that indicates a serious problem for a small site might be entirely normal for a large site.

      That’s why for URL Errors we only send alerts when we detect a large spike in the number of errors for any of the five categories of errors (Server error, Soft 404, Access denied, Not found or Not followed). For example, if your site routinely has 100 pages with 404 errors, we won’t alert you if that number fluctuates minimally. However we might notify you when that count reaches a much higher number, say 500 or 1,000. Keep in mind that seeing 404 errors is not always bad, and can be a natural part of a healthy website (see our previous blog post: Do 404s hurt my site?).
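
      To make the idea concrete, here is a toy sketch of that kind of threshold logic applied to your own server logs; the counts are invented and the thresholds are arbitrary, not the ones Webmaster Tools actually uses.

recent_daily_404s = [96, 101, 99, 104, 98, 102, 97]   # made-up baseline: last week's daily 404 counts
today_404s = 540                                       # made-up count for today

baseline = sum(recent_daily_404s) / len(recent_daily_404s)
# Flag only a large jump over the baseline, not normal day-to-day fluctuation.
if today_404s > max(5 * baseline, baseline + 200):
    print("Alert: 404 spike (%d today vs. a baseline of about %d per day)" % (today_404s, baseline))
else:
    print("404 count is within the normal range")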

      A large spike in error count could be because something has changed on your site — perhaps a reconfiguration has changed the permissions for a section of your site, or a new version of a script is crashing regularly, or someone accidentally moved or deleted an entire directory, or a reorganization of your site causes external links to no longer work. It could also just be a transient spike, or could be because of external causes (someone has linked to non-existent pages), so there might not even be a problem; but when we see an unusually large number of errors for your site, we’ll let you know so you can investigate:

      Example of a URL Error anomaly alert
      The alert describes the category of web errors for which we’ve detected a spike, gives a link to the appropriate section of Webmaster Tools so that you can see what pages we think are problematic, and offers troubleshooting suggestions.

      Enable Message forwarding to send alerts to your inbox

      We know you’re busy, and that routinely checking Webmaster Tools just to check for new alerts might be something you forget to do. Consider turning on Message forwarding. We’ll send any Webmaster Tools messages to the email address of your choice.

      Let us know what you think, and if you have any comments or suggestions on our new alerts please visit our forum.


      from web contents: Crawl Errors: The Next Generation 2013

      Webmaster level: All

      Crawl errors is one of the most popular features in Webmaster Tools, and today we’re rolling out some very significant enhancements that will make it even more useful.

      We now detect and report many new types of errors. To help make sense of the new data, we’ve split the errors into two parts: site errors and URL errors.

      Site Errors

      Site errors are errors that aren’t specific to a particular URL—they affect your entire site. These include DNS resolution failures, connectivity issues with your web server, and problems fetching your robots.txt file. We used to report these errors by URL, but that didn’t make a lot of sense because they aren’t specific to individual URLs—in fact, they prevent Googlebot from even requesting a URL! Instead, we now keep track of the failure rates for each type of site-wide error. We’ll also try to send you alerts when these errors become frequent enough that they warrant attention.

      View site error rate and counts over time

      Furthermore, if you don’t have (and haven’t recently had) any problems in these areas, as is the case for many sites, we won’t bother you with this section. Instead, we’ll just show you some friendly check marks to let you know everything is hunky-dory.

      A site with no recent site-level errors

      URL errors

      URL errors are errors that are specific to a particular page. This means that when Googlebot tried to crawl the URL, it was able to resolve your DNS, connect to your server, fetch and read your robots.txt file, and then request this URL, but something went wrong after that. We break the URL errors down into various categories based on what caused the error. If your site serves up Google News or mobile (CHTML/XHTML) data, we’ll show separate categories for those errors.

      URL errors by type with full current and historical counts

      Less is more

      We used to show you at most 100,000 errors of each type. Trying to consume all this information was like drinking from a firehose, and you had no way of knowing which of those errors were important (your homepage is down) or less important (someone’s personal site made a typo in a link to your site). There was no realistic way to view all 100,000 errors—no way to sort, search, or mark your progress. In the new version of this feature, we’ve focused on trying to give you only the most important errors up front. For each category, we’ll give you what we think are the 1000 most important and actionable errors.  You can sort and filter these top 1000 errors, let us know when you think you’ve fixed them, and view details about them.

      Instantly filter and sort errors on any column

      Some sites have more than 1000 errors of a given type, so you’ll still be able to see the total number of errors you have of each type, as well as a graph showing historical data going back 90 days. For those who worry that 1000 error details plus a total aggregate count will not be enough, we’re considering adding programmatic access (an API) to allow you to download every last error you have, so please give us feedback if you need more.

      We've also removed the list of pages blocked by robots.txt, because while these can sometimes be useful for diagnosing a problem with your robots.txt file, they are frequently pages you intentionally blocked. We really wanted to focus on errors, so look for information about roboted URLs to show up soon in the "Crawler access" feature under "Site configuration".

      Dive into the details

      Clicking on an individual error URL from the main list brings up a detail pane with additional information, including when we last tried to crawl the URL, when we first noticed a problem, and a brief explanation of the error.

      Details for each URL error

      From the details pane you can click on the link for the URL that caused the error to see for yourself what happens when you try to visit it. You can also mark the error as “fixed” (more on that later!), view help content for the error type, list Sitemaps that contain the URL, see other pages that link to this URL, and even have Googlebot fetch the URL right now, either for more information or to double-check that your fix worked.

      View pages which link to this URL

      Take action!

      One thing we’re really excited about in this new version of the Crawl errors feature is that you can really focus on fixing what’s most important first. We’ve ranked the errors so that those at the top of the priority list will be ones where there’s something you can do, whether that’s fixing broken links on your own site, fixing bugs in your server software, updating your Sitemaps to prune dead URLs, or adding a 301 redirect to get users to the “real” page. We determine this based on a multitude of factors, including whether or not you included the URL in a Sitemap, how many places it’s linked from (and if any of those are also on your site), and whether the URL has gotten any traffic recently from search.

      Once you think you’ve fixed the issue (you can test your fix by fetching the URL as Googlebot), you can let us know by marking the error as “fixed” if you are a user with full access permissions. This will remove the error from your list.  In the future, the errors you’ve marked as fixed won’t be included in the top errors list, unless we’ve encountered the same error when trying to re-crawl a URL.

      Select errors and mark them as fixed

      We’ve put a lot of work into the new Crawl errors feature, so we hope that it will be very useful to you. Let us know what you think and if you have any suggestions, please visit our forum!


      from web contents: Farewell to soft 404s 2013

      We see two kinds of 404 ("File not found") responses on the web: "hard 404s" and "soft 404s." We discourage the use of so-called "soft 404s" because they can be a confusing experience for users and search engines. Instead of returning a 404 response code for a non-existent URL, websites that serve "soft 404s" return a 200 response code. The content of the 200 response is often the homepage of the site, or an error page.

      How does a soft 404 look to the user? Here's a mockup of a soft 404: This site returns a 200 response code and the site's homepage for URLs that don't exist.



      As exemplified above, soft 404s are confusing for users, and furthermore search engines may spend much of their time crawling and indexing non-existent, often duplicative URLs on your site. This can negatively impact your site's crawl coverage—because of the time Googlebot spends on non-existent pages, your unique URLs may not be discovered as quickly or visited as frequently.

      What should you do instead of returning a soft 404?
      It's much better to return a 404 response code and clearly explain to users that the file wasn't found. This makes search engines and many users happy.

      Return 404 response code



      Return clear message to users



      Can your webserver return 404, but send a helpful "Not found" message to the user?
      Of course! More info as "404 week" continues!
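
      As one concrete way to do both at once, here is a minimal sketch assuming a Python web app built with Flask (any server or framework can be configured similarly): the visitor gets a helpful page while the response still carries a real 404 status code.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Welcome!"

@app.errorhandler(404)
def not_found(error):
    page = (
        "<h1>Sorry, we couldn't find that page.</h1>"
        "<p>Try the <a href='/'>homepage</a> or use the search box.</p>"
    )
    return page, 404   # friendly message for users, correct status code for search engines

if __name__ == "__main__":
    app.run()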


      from web contents: Preventing Virtual Blight: my presentation from Web 2.0 Summit 2013

      One of the things I'm thinking about in 2009 is how Google can be even more transparent and communicate more. That led me to a personal goal for 2009: if I give a substantial conference presentation (not just a question and answer session), I'd like to digitize the talk so that people who couldn't attend the conference can still watch the presentation.

      In that spirit, here's a belated holiday present. In November 2008 I spoke on a panel about "Preventing Virtual Blight" at the Web 2.0 Summit in San Francisco. A few weeks later I ended up recreating the talk at the Googleplex and we recorded the video. In fact, this is a "director's cut" because I could take a little more time for the presentation. Here's the video of the presentation:



      And if you'd like to follow along at home, I'll include the actual presentation as well:



      You can also access the presentation directly. By the way thanks to Wysz for recording this not just on a shoestring budget but for free. I think we've got another video ready to go pretty soon, too.


      from web contents: Specify your canonical 2013

      Carpe diem on any duplicate content worries: we now support a format that allows you to publicly specify your preferred version of a URL. If your site has identical or vastly similar content that's accessible through multiple URLs, this format provides you with more control over the URL returned in search results. It also helps to make sure that properties such as link popularity are consolidated to your preferred version.

      Let's take our old example of a site selling Swedish fish. Imagine that your preferred version of the URL and its content looks like this:

      http://www.example.com/product.php?item=swedish-fish


      However, users (and Googlebot) can access Swedish fish through multiple (not as simple) URLs. Even if the key information on these URLs is the same as your preferred version, they may show slight content variations due to things like sort parameters or category navigation:

      http://www.example.com/product.php?item=swedish-fish&category=gummy-candy

      Or they have completely identical content, but with different URLs due to things such as tracking parameters or a session ID:

      http://www.example.com/product.php?item=swedish-fish&trackingid=1234&sessionid=5678

      Now, you can simply add this <link> tag to specify your preferred version:

      <link rel="canonical" href="http://www.example.com/product.php?item=swedish-fish" />

      inside the <head> section of the duplicate content URLs:

      http://www.example.com/product.php?item=swedish-fish&category=gummy-candy
      http://www.example.com/product.php?item=swedish-fish&trackingid=1234&sessionid=5678


      and Google will understand that the duplicates all refer to the canonical URL: http://www.example.com/product.php?item=swedish-fish. Additional URL properties, like PageRank and related signals, are transferred as well.
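
      For sites that generate pages from templates, one common approach is to compute the canonical URL by stripping the parameters that only vary the presentation. The sketch below is illustrative only, using the hypothetical parameter names from this example.

from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

IGNORED_PARAMS = {"category", "trackingid", "sessionid"}   # the hypothetical parameters from this example

def canonical_url(url):
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

url = "http://www.example.com/product.php?item=swedish-fish&trackingid=1234&sessionid=5678"
print('<link rel="canonical" href="%s" />' % canonical_url(url))
# -> <link rel="canonical" href="http://www.example.com/product.php?item=swedish-fish" />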

      This standard can be adopted by any search engine when crawling and indexing your site.

      Of course you may have more questions. Joachim Kupke, an engineer from our Indexing Team, is here to provide us with the answers:

      Is rel="canonical" a hint or a directive?
      It's a hint that we honor strongly. We'll take your preference into account, in conjunction with other signals, when calculating the most relevant page to display in search results.

      Can I use a relative path to specify the canonical, such as <link rel="canonical" href="product.php?item=swedish-fish" />?
      Yes, relative paths are recognized as expected with the <link> tag. Also, if you include a <base> link in your document, relative paths will resolve according to the base URL.

      Is it okay if the canonical is not an exact duplicate of the content?
      We allow slight differences, e.g., in the sort order of a table of products. We also recognize that we may crawl the canonical and the duplicate pages at different points in time, so we may occasionally see different versions of your content. All of that is okay with us.

      What if the rel="canonical" returns a 404?
      We'll continue to index your content and use a heuristic to find a canonical, but we recommend that you specify existent URLs as canonicals.

      What if the rel="canonical" hasn't yet been indexed?
      Like all public content on the web, we strive to discover and crawl a designated canonical URL quickly. As soon as we index it, we'll immediately reconsider the rel="canonical" hint.

      Can rel="canonical" be a redirect?
      Yes, you can specify a URL that redirects as a canonical URL. Google will then process the redirect as usual and try to index it.

      What if I have contradictory rel="canonical" designations?
      Our algorithm is lenient: We can follow canonical chains, but we strongly recommend that you update links to point to a single canonical page to ensure optimal canonicalization results.

      Can this link tag be used to suggest a canonical URL on a completely different domain?
      **Update on 12/17/2009: The answer is yes! We now support a cross-domain rel="canonical" link element.**

      Previous answer below:
      No. To migrate to a completely different domain, permanent (301) redirects are more appropriate. Google currently will take canonicalization suggestions into account across subdomains (or within a domain), but not across domains. So site owners can suggest www.example.com vs. example.com vs. help.example.com, but not example.com vs. example-widgets.com.

      Sounds great—can I see a live example?
      Yes, wikia.com helped us as a trusted tester. For example, you'll notice that the source code on the URL http://starwars.wikia.com/wiki/Nelvana_Limited specifies its rel="canonical" as: http://starwars.wikia.com/wiki/Nelvana.

      The two URLs are nearly identical to each other, except that Nelvana_Limited, the first URL, contains a brief message near its heading. It's a good example of using this feature. With rel="canonical", properties of the two URLs are consolidated in our index and search results display wikia.com's intended version.

      Feel free to ask additional questions in our comments below. And if you're unable to implement a canonical designation link, no worries; we'll still do our best to select a preferred version of your duplicate content URLs, and transfer linking properties, just as we did before.

      Update: this link-tag is currently also supported by Ask.com, Microsoft Live Search and Yahoo!.

      Update: for more information, please see our Help Center articles on canonicalization and rel=canonical.


      from web contents: Google's SEO Starter Guide 2013


      Note: The SEO Starter Guide has since been updated.

      Webmasters often ask us at conferences or in the Webmaster Help Group, "What are some simple ways that I can improve my website's performance in Google?" There are lots of possible answers to this question, and a wealth of search engine optimization information on the web, so much that it can be intimidating for newer webmasters or those unfamiliar with the topic. We thought it'd be useful to create a compact guide that lists some best practices that teams within Google and external webmasters alike can follow that could improve their sites' crawlability and indexing.

      Our Search Engine Optimization Starter Guide covers around a dozen common areas that webmasters might consider optimizing. We felt that these areas (like improving title and description meta tags, URL structure, site navigation, content creation, anchor text, and more) would apply to webmasters of all experience levels and sites of all sizes and types. Throughout the guide, we also worked in many illustrations, pitfalls to avoid, and links to other resources that help expand our explanation of the topics. We plan on updating the guide at regular intervals with new optimization suggestions and to keep the technical advice current.

      So, the next time we get the question, "I'm new to SEO, how do I improve my site?", we can say, "Well, here's a list of best practices that we use inside Google that you might want to check out."

      Update on July 22, 2009: The SEO Starter Guide is now available in 40 languages!

      from web contents: More ways for you to give us input 2013

      At Google, we are always working hard to provide searchers with the best possible results. We've found that our spam reporting form is a great way to get your input as we continue to improve our results. Some of you have asked for a way to report paid links as well.

      Links are an important signal in our PageRank calculations, as they tend to indicate when someone has found a page useful. Links that are purchased are great for advertising and traffic purposes, but aren't useful for PageRank calculations. Buying or selling links to manipulate results and deceive search engines violates our guidelines.

      Today, in response to your request, we're providing a paid links reporting form within Webmaster Tools. To use the form, simply log in and provide information on the sites buying and selling links for purposes of search engine manipulation. We'll review each report we get and use this feedback to improve our algorithms and our search results. In some cases we may also take individual action on sites.

      If you are selling links for advertising purposes, there are many ways you can designate this, including:
      • Adding a rel="nofollow" attribute to the <a> tag of the link
      • Redirecting the links to an intermediate page that is blocked from search engines with a robots.txt file
      We value your input and look forward to continuing to improve our great partnership with you.

      from web contents: Introducing Rich Snippets 2013

      Webmaster Level: All

      As a webmaster, you have a unique understanding of your web pages and the content they represent. Google helps users find your page by showing them a small sample of that content -- the "snippet." We use a variety of techniques to create these snippets and give users relevant information about what they'll find when they click through to visit your site. Today, we're announcing Rich Snippets, a new presentation of snippets that applies Google's algorithms to highlight structured data embedded in web pages.


      Rich Snippets give users convenient summary information about their search results at a glance. We are currently supporting data about reviews and people. When searching for a product or service, users can easily see reviews and ratings, and when searching for a person, they'll get help distinguishing between people with the same name. It's a simple change to the display of search results, yet our experiments have shown that users find the new data valuable -- if they see useful and relevant information from the page, they are more likely to click through. Now we're beginning the process of opening up this successful experiment so that more websites can participate. As a webmaster, you can help by annotating your pages with structured data in a standard format.

      To display Rich Snippets, Google looks for markup formats (microformats and RDFa) that you can easily add to your own web pages. In most cases, it's as quick as wrapping the existing data on your web pages with some additional tags. For example, here are a few relevant lines of the HTML from Yelp's review page for "Drooling Dog BarBQ" before adding markup data:


      and now with microformats markup:


      or alternatively, use RDFa markup. Either format works:


      By incorporating standard annotations in your pages, you not only make your structured data available for Google's search results, but also for any service or tool that supports the same standard. As structured data becomes more widespread on the web, we expect to find many new applications for it, and we're excited about the possibilities.

      To ensure that this additional data is as helpful as possible to users, we'll be rolling this feature out gradually, expanding coverage to more sites as we do more experiments and process feedback from webmasters. We will make our best efforts to monitor and analyze whether individual websites are abusing this system: if we see abuse, we will respond accordingly.

      To prepare your site for Rich Snippets and other benefits of structured data on the web, please see our documentation on structured data annotations.

      Now, time for some Q&A with the team:

      If I mark up my pages, does that guarantee I'll get Rich Snippets?

      No. We will be rolling this out gradually, and as always we will use our own algorithms and policies to determine relevant snippets for users' queries. We will use structured data when we are able to determine that it helps users find answers sooner. And because you're providing the data on your pages, you should anticipate that other websites and other tools (browsers, phones) might use this data as well. You can let us know that you're interested in participating by filling out this form.

      What about other existing microformats? Will you support other types of information besides reviews and people?

      Not every microformat corresponds to data that's useful to show in a search result, but we do plan to support more of the existing microformats and define RDFa equivalents.

      What's next?

      We'll be continuing experiments with new types (beyond reviews and people) and hope to announce support for more types in the future.

      I have too much data on my page to mark it all up.

      That wasn't a question, but we'll answer anyway. For the purpose of getting data into snippets, we don't need every bit of data: it simply wouldn't fit. For example, a page that says it has "497 reviews" of a product probably has data for 10 and links to the others. Even if you could mark up all 497 blocks of data, there is no way we could fit it into a single snippet. To make your part of this grand experiment easier, we have defined aggregate types where necessary: a review-aggregate can be used to summarize all the review information (review count, average/min/max rating, etc.).

      Why do you support multiple encodings?

      A lot of previous work on structured data has focused on debates around encoding. Even within Google, we have advocates for microformat encoding, advocates for various RDF encodings, and advocates for our own encodings. But after working on this Rich Snippets project for a while, we realized that structured data on the web can and should accommodate multiple encodings: we hope to emphasize this by accepting both microformat encoding and RDFa encoding. Each encoding has its pluses and minuses, and the debate is a fine intellectual exercise, but it detracts from the real issues.

      We do believe that it is important to have a common vocabulary: the language of object types, object properties, and property types that enable structured data to be understood by different applications. We debated how to address this vocabulary problem, and concluded that we needed to make an investment. Google will, working together with others, host a vocabulary that various Google services and other websites can use. We are starting with a small list, which we hope to extend over time.

      Wherever possible, we'll simply reuse vocabulary that is in wide use: we support the pre-existing vCard and hReview types, and there are a variety of other types defined by various communities. Sites that use Google Custom Search will be able to define their own types, which we will index and present to users in rich Custom Search results pages. Finally, we encourage and expect this space to evolve based on new ideas from the structured data community. We'll notice and reach out when our crawlers pick up new types that are getting broad use.

      Update on November 1, 2009: Check out our update on Rich Snippets!


      seo Check out the Custom Search API 2013


      Have you ever wished that you could harness the power of Google to create a customized search engine for your website or a collection of websites? Custom Search lets you do that in under five minutes—and that includes time for a tea break. Pretty sweet, eh? If you have more time, you could take the customization to the next level. You can select websites to include, ignore, or prioritize in your search engine. You can even tweak the ranking of your search results and change the look and feel of your results page, among other things.

      If you are curious about how tricked-out custom search engines work, you don't have to look further than the Custom Search API page on Google Code. Go ahead, try out some search queries and be sure to visit the Custom Search Directory, which showcases some popular custom search engines. And close to home, we use a combination of the Custom Search API and the AJAX Search API to power search on Google Code.

      To learn more about this API, see the developer guide and join us over in the discussion group.
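
      If you would rather query your custom search engine from your own code, there is also a REST interface. The sketch below uses the Custom Search JSON API, a newer interface than the AJAX Search API mentioned above, and assumes you have an API key and your engine's "cx" identifier; check the current Custom Search documentation for the details.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

params = {
    "key": "YOUR_API_KEY",          # placeholder
    "cx": "YOUR_SEARCH_ENGINE_ID",  # placeholder
    "q": "site architecture tips",
}
url = "https://www.googleapis.com/customsearch/v1?" + urlencode(params)
data = json.load(urlopen(url))
for item in data.get("items", []):
    print(item["title"], "-", item["link"])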

      seo Make your website faster with PageSpeed Insights 2013


      By Libo Song and Bryan McQuade,
      PageSpeed Insights Team


      A year ago, we released a preview of the PageSpeed Insights Chrome Developer Tools extension, which analyzes the performance of web pages and provides suggestions to make them faster. Today, we’re releasing version 2.0 of the PageSpeed Insights extension, available in the Chrome Web Store. PageSpeed Insights analyzes all aspects of a web page load and points out the specific things you can do to make your page faster. For instance, PageSpeed Insights can inform you about an expensive JavaScript call that blocks the renderer for too long, remind you about that new photo on the front page of your web site that you might have forgotten to resize or optimize, or recommend changing the way you load third-party content so it no longer blocks the page load.

      PageSpeed Insights for Chrome is a Developer Tools extension that analyzes all aspects of the page load, including resources, network, DOM, and the timeline. If you're already familiar with the Developer Tools, you'll find that PageSpeed Insights integrates with a toolset you're already using.


      Using technologies like Native Client, PageSpeed Insights is able to run the open-source PageSpeed Insights SDK securely and with the performance of native code. Leveraging the Insights SDK enables the Chrome extension to automatically optimize the images, CSS, JavaScript and HTML resources on your web page and provide versions of those resources that you can easily deploy on your website.

      We hope you’ll give PageSpeed Insights for Chrome a try and start optimizing your web pages today. We’d love to hear from you, as always. Please try PageSpeed Insights for Chrome, and give us feedback on our mailing list with questions, comments, and new features you’d like to see.
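
      There is also a PageSpeed Insights web API that returns the analysis as JSON, which can be handy for automated checks. The sketch below targets the current v5 endpoint rather than whatever was available when this post was written, so treat the endpoint and response fields as assumptions to verify against the API documentation; the URL being tested is a placeholder.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

params = {"url": "http://www.example.com/", "strategy": "desktop"}   # placeholder page to analyze
endpoint = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?" + urlencode(params)
report = json.load(urlopen(endpoint))
score = report["lighthouseResult"]["categories"]["performance"]["score"]   # assumed v5 response shape
print("Performance score:", score)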


      Libo Song and Bryan McQuade are Software Engineers on the Google PageSpeed Insights Team in Cambridge, MA. They focus on developing tools to help site owners understand how to speed up their sites.

      Posted by Ashleigh Rentz, Editor Emerita

      seo IT Rumors iPhone 6 Will Be The Low-Cost Phone 2013


      Just months after launching the iPhone 5 in around 100 countries, Apple has reportedly scaled back some component orders, apparently in response to lower-than-expected demand for its new flagship smartphone.

      Why did this happen? The reason Apple cut some component orders is not entirely clear. It may reflect weaker user response, concerns about iPhone pricing, or the shorter product life cycles of the iPhone and iPad, which shrink the marketing window for each iOS device that launches.

      In addition, supply constraints reportedly weighed on the iPhone 5 during the first quarter, mostly in the first two months, when many eager followers were expecting to upgrade. The steep $100 to $200 price cuts on Apple's previous devices, such as the iPhone 4 and iPhone 4S, are also thought to have influenced buying decisions. Apple still racked up an impressive 26.9 million iPhone sales during its latest quarter, a 58% increase over the prior year's sales. There are no figures for the iPhone 4S and iPhone 5 individually, because Apple does not break out sales by model, and its iPhone lineup also includes the very attractively priced $99 iPhone 4S and the "free" (on contract) iPhone 4.

      On the other hand, the iPhone 6 is a hot topic among customers. There are many rumors about the next iPhone, including a unique design, a sharper display, a more efficient battery, and even a lower price.

      Apple has always tried to satisfy its customers, and the rumored improvements include wireless charging, newer Wi-Fi standards, better 4G LTE, eye tracking, and upgrades to the camera, processor, screen, and home button.

      However, the latest rumor that the iPhone 6 will come at a cheaper price seems very unlikely, because as a business Apple has many things to consider, especially its overall expenses.

      For example, a vendor who sells items at a lower price without regard to total income loses out; that would be unfair to the vendor, wouldn't it, just for the sake of customers' wishes?

      In conclusion:

      If you are really planning to get the iPhone 6, start saving money now so you won't be disappointed and won't have a problem with whatever that iPhone's price turns out to be.




      Author Bio:
      Irina Webandyou is a tech/SEO writer and professional blogger. She likes to cover tech and SEO news online. She runs her own website: Guest Posting Service.

      from web contents: For webmasters: Google+ and the +1 button 101 2013

      Webmaster Level: Beginner to Intermediate

      Here’s a video that covers the basics of Google+, the +1 button, getting started on Google+, and how social information can make products, like Search, more relevant. This video is for a range of webmasters (from personal bloggers to SEOs of corporations). So, if you’re interested in learning about Google+, we hope that with 20 minutes and this video on YouTube (we have our own Webmaster Support Channel!), you can feel more up to speed with Google’s social opportunities.


      Video about the basics of Google+ and how to get started if you're an interested webmaster.


      Speaking of Google+, if you join, please say hello! We're often posting and hosting Hangouts.



      from web contents: Introducing Data Highlighter for event data 2013

      Webmaster Level: All

      Update 19 February 2013: Data Highlighter for events structured markup is available in all languages in Webmaster Tools.

      At Google we're making more and more use of structured data to provide enhanced search results, such as rich snippets and event calendars, that help users find your content. Until now, marking up your site's HTML code has been the only way to indicate structured data to Google. However, we recognize that markup may be hard for some websites to deploy.

      Today, we're offering webmasters a simpler alternative: Data Highlighter. At initial launch, it's available in English only and for structured data about events, such as concerts, sporting events, exhibitions, shows, and festivals. We'll make Data Highlighter available for more languages and data types in the months ahead.

      Data Highlighter is a point-and-click tool that can be used by anyone authorized for your site in Google Webmaster Tools. No changes to HTML code are required. Instead, you just use your mouse to highlight and "tag" each key piece of data on a typical event page of your website:
      Events markup with Data Highlighter

      If your page lists multiple events in a consistent format, Data Highlighter will "learn" that format as you apply tags, and help speed your work by automatically suggesting additional tags. Likewise, if you have many pages of events in a consistent format, Data Highlighter will walk you through a process of tagging a few example pages so it can learn about their format variations. Usually, 5 or 10 manually tagged pages are enough for our sophisticated machine-learning algorithms to understand the other, similar pages on your site.

      When you're done, you can review a sample of all the event data that Data Highlighter now understands. If it's correct, click "Publish."
      From then on, as Google crawls your site, it will recognize your latest event listings and make them eligible for enhanced search results. You can inspect the crawled data on the Structured Data Dashboard, and unpublish at any time if you're not happy with the results.
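
      Data Highlighter needs no markup at all, but if you later decide to embed event markup directly in your pages, here is a rough sketch that prints schema.org Event data as a JSON-LD script tag; the event details are invented, and you should confirm which formats are currently accepted in the structured data documentation.

import json

event = {
    "@context": "http://schema.org",
    "@type": "Event",
    "name": "Example Jazz Festival",                     # hypothetical event
    "startDate": "2013-07-04T19:30",
    "location": {
        "@type": "Place",
        "name": "Example Park",
        "address": "123 Main St, Springfield",
    },
    "url": "http://www.example.com/events/jazz-festival",   # placeholder URL
}
print('<script type="application/ld+json">%s</script>' % json.dumps(event, indent=2))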

      Here’s a short video explaining how the process works:

      To get started with Data Highlighter, visit Webmaster Tools, select your site, click the "Optimization" link in the left sidebar, and click "Data Highlighter".

      If you have any questions, please read our Help Center article or ask us in the Webmaster Help Forum. Happy Highlighting!
