News and Tutorials from Votre Codeur | SEO | Website Creation | Software Development

Protect all images in your blog with a jQuery trick (2013)

Here is an important trick that every blogger needs: how to protect all of the images on your blog by covering each one with a transparent image. I recently posted an article about how to protect a single image; this article shows how to protect all of them.







Related Articles
How to protect images in your blog?
Want to see a demo? Try to save a picture on this blog.
----------------------------------------------------------
What's inside this article
 Step 1: Add the jQuery library (if your blog already loads jQuery, skip this step)

  • Go to Template -> Edit HTML (a dialog box appears; click Proceed)
  • Copy the code below, paste it just after the <head> tag, and save the template
<script src='http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js' type='text/javascript'/>
 Step 2: Add the protection script

  • Go to your Blogger account
  • Open Template -> Edit HTML (click Proceed)
  • Copy one of the code blocks below and paste it just above the </head> tag, then save the template

Protect only the first image in every post

<script type='text/javascript'>
//<![CDATA[
$(function(){
$(".post-body img:nth-child(1)").after("<img src=\"http:\/\/i.imgur.com\/eYKPf7b.png\" alt=\"NetOopsblog protected image\" style=\"margin-left: -212px; opacity: 0; position: relative; top: 0;\" \/>");
});
//]]>
</script>

Protect all images in every post

<script type='text/javascript'>
//<![CDATA[
$(function(){
$(".post-body img").after("<img src=\"http:\/\/i.imgur.com\/eYKPf7b.png\" alt=\"NetOopsblog protected image\" style=\"margin-left: -212px; opacity: 0; position: relative; top: 0;\" \/>");
});
//]]>
</script>
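
If the fixed -212px offset does not line up with the image sizes in your template, here is a variation to consider. It is only a sketch (it reuses the same transparent PNG and the .post-body selector from above, and waits for images to load so their rendered size is known); adapt and test it on your own template before relying on it.

<script type='text/javascript'>
//<![CDATA[
// Sketch: wrap each post image and stretch the transparent overlay to the
// image's rendered size instead of shifting it with a fixed -212px margin.
$(window).on("load", function(){
  $(".post-body img").each(function(){
    var $img = $(this);
    $img.wrap('<span style="position:relative; display:inline-block;"></span>');
    $('<img src="http://i.imgur.com/eYKPf7b.png" alt="protected image"/>')
      .css({position: "absolute", top: 0, left: 0,
            width: $img.width(), height: $img.height(), opacity: 0})
      .insertAfter($img);
  });
});
//]]>
</script>
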
I hope you liked this article; please like and share.
 

seo TuneUp Utilities 2013 13.0.3020.7 Full Version With Patch And Serial Key 2013

Friends, TuneUp Utilities 2013 is a full function software to PC. It work like as a doctor, It help to fix your computer software problems like as registry problems, disc space problems etc. It also help to increase the speed of our system because it automatically or manually fix our computer problems or delete unwanted files. It is not a free software . You can download free trial versions from official site. Now I share  Original serial number and Patch to make your trial versions to full. Download patch and serial and enjoy…




Download TuneUp Utilities 2013 13.0.3020.7 from official Site: Click here

Download TuneUp Utilities 2013 13.0.3020.7 Patch And Serial Key: Click here (Alternative Link) OR External Link (Patch Only) OR External Link (Serial Only)


TuneUp Utilities 2013 Product keys:

V3FVHT-E5C07N-P8N937-6BMVR9-MQ1TM5-DCCAB5
70F1WN-63H489-XP84MP-H4JDH3-JW22XQ-EPEVCB
TD7QPK-PX1VPT-4CAT0N-V6HTNN-AACXWX-NBXQ97
1PBM0X-DV6JYB-XCPMH0-NA6QF9-H70R6A-B3CHTB

Features

  • TuneUp Disk Cleaner 2013:
  • TuneUp Browser Cleaner 2013:
  • TuneUp Live Optimization 2.0
  • TuneUp Shortcut Cleaner
  • TuneUp Registry Cleaner
  • TuneUp Live Optimization 2.0
  • TuneUp Program Deactivator
  • TuneUp Economy Mode
  • Turbo Mode
  • Disable startup programs
  • Accelerate system startup and shutdown
  • Defragment hard disk
  • TuneUp Shortcut Cleaner
  • 1-Click-Maintenance & Automatic Maintenance
  • Uninstall unneeded programs
  • Find and Delete Large Amounts of Data
  • Restore deleted files
  • Delete files safely
  • Show system information
  • Status & recommendations (category)
  • Optimization status
  • Increase performance - recommendations
  • Program rating
  • Display and close running processes
  • Detect and fix problems
  • Check hard disk for errors
  • Customize the appearance of Windows® (TuneUp Styler)
  • Customize options and behaviors (TuneUp System Control)
  • Start Center
  • Overview of all functions
  • TuneUp Utilities Settings Center
  • Check for Updates
  • Optimization Report




    Leave a comment ….. if links and serial keys are not working…………….



    Good times with inbound links (2013)

    From the Google Webmaster Central blog:
    Inbound links are links from pages on external sites linking back to your site. Inbound links can bring new users to your site, and when the links are merit-based and freely-volunteered as an editorial choice, they're also one of the positive signals to Google about your site's importance. Other signals include things like our analysis of your site's content, its relevance to a geographic location, etc. As many of you know, relevant, quality inbound links can affect your PageRank (one of many factors in our ranking algorithm). And quality links often come naturally to sites with compelling content or offering a unique service.

    How do these signals factor into ranking?

    Let's say I have a site, example.com, that offers users a variety of unique website templates and design tips. One of the strongest ranking factors is my site's content. Additionally, perhaps my site is also linked from three sources -- however, one inbound link is from a spammy site. As far as Google is concerned, we want only the two quality inbound links to contribute to the PageRank signal in our ranking.

    Given the user's query, over 200 signals (including the analysis of the site's content and inbound links as mentioned above) are applied to return the most relevant results to the user.


    So how can you engage more users and potentially increase merit-based inbound links?

    Many webmasters have written about their success in growing their audience. We've compiled several ideas and resources that can improve the web for all users.
    Create unique and compelling content on your site and the web in general
    • Start a blog: make videos, do original research, and post interesting stuff on a regular basis. If you're passionate about your site's topic, there are lots of great avenues to engage more users.

      If you're interested in blogging, see our Help Center for specific tips for bloggers.

    • Teach readers new things, uncover new news, be entertaining or insightful, show your expertise, interview different personalities in your industry and highlight their interesting side. Make your site worthwhile.

    • Participate thoughtfully in blogs and user reviews related to your topic of interest. Offer your knowledgeable perspective to the community.

    • Provide a useful product or service. If visitors to your site get value from what you provide, they're more likely to link to you.

    • For more actionable ideas, see one of my favorite interviews with Matt Cutts for no-cost tips to help increase your traffic. It's a great primer for webmasters. (Even before this post, I forwarded the URL to many of my friends. :)
    Pursue business development opportunities
    Use Webmaster Tools for "Links > Pages with external links" to learn about others interested in your site. Expand the web community by figuring out who links to you and how they're linking. You may have new audiences or demographics you didn't realize were interested in your niche. For instance, if the webmasters for example.com noticed external links coming from art schools, they may start to engage with the art community -- receiving new feedback and promoting their site and ideas.

    Of course, be responsible when pursuing possible opportunities in this space. Don't engage in mass link-begging; no one likes form letters, and few webmasters of quality sites are likely to respond positively to such solicitations. In general, many of the business development techniques that are successful in human relationships can also be reflected online for your site.
    Now that you've read more information about internal links, outbound links, and inbound links (today's post :), we'll see you in the blog comments! Thanks for joining us for links week.

    Update -- Here's one more business development opportunity:
    Investigate your "Diagnostics > Web/mobile crawl > Crawl error sources" to not only correct broken links, but also to cultivate relationships with external webmasters who share an interest in your site. (And while you're chatting, see if they'll correct the broken link. :) This is a fantastic way to turn broken links into free links to important parts of your site.

    In addition to contacting these webmasters, you may also wish to use 301 redirects to redirect incoming traffic from old pages to their new locations. This is good for users who may still have bookmarks with links to your old pages... and you'll be happy to know that Google appropriately flows PageRank and related signals through these redirects.


    Optimize your crawling & indexing (2013)

    From the Google Webmaster Central blog. Webmaster Level: Intermediate to Advanced

    Many questions about website architecture, crawling and indexing, and even ranking issues can be boiled down to one central issue: How easy is it for search engines to crawl your site? We've spoken on this topic at a number of recent events, and below you'll find our presentation and some key takeaways on this topic.



    The Internet is a big place; new content is being created all the time. Google has a finite number of resources, so when faced with the nearly-infinite quantity of content that's available online, Googlebot is only able to find and crawl a percentage of that content. Then, of the content we've crawled, we're only able to index a portion.

    URLs are like the bridges between your website and a search engine's crawler: crawlers need to be able to find and cross those bridges (i.e., find and crawl your URLs) in order to get to your site's content. If your URLs are complicated or redundant, crawlers are going to spend time tracing and retracing their steps; if your URLs are organized and lead directly to distinct content, crawlers can spend their time accessing your content rather than crawling through empty pages, or crawling the same content over and over via different URLs.

    In the slides above you can see some examples of what not to do—real-life examples (though names have been changed to protect the innocent) of homegrown URL hacks and encodings, parameters masquerading as part of the URL path, infinite crawl spaces, and more. You'll also find some recommendations for straightening out that labyrinth of URLs and helping crawlers find more of your content faster, including:
    • Remove user-specific details from URLs.
      URL parameters that don't change the content of the page—like session IDs or sort order—can be removed from the URL and put into a cookie. By putting this information in a cookie and 301 redirecting to a "clean" URL, you retain the information and reduce the number of URLs pointing to that same content.
    • Rein in infinite spaces.
      Do you have a calendar that links to an infinite number of past or future dates (each with their own unique URL)? Do you have paginated data that returns a status code of 200 when you add &page=3563 to the URL, even if there aren't that many pages of data? If so, you have an infinite crawl space on your website, and crawlers could be wasting their (and your!) bandwidth trying to crawl it all. Consider these tips for reining in infinite spaces.
    • Disallow actions Googlebot can't perform.
      Using your robots.txt file, you can disallow crawling of login pages, contact forms, shopping carts, and other pages whose sole functionality is something that a crawler can't perform. (Crawlers are notoriously cheap and shy, so they don't usually "Add to cart" or "Contact us.") This lets crawlers spend more of their time crawling content that they can actually do something with.
    • One man, one vote. One URL, one set of content.
      In an ideal world, there's a one-to-one pairing between URL and content: each URL leads to a unique piece of content, and each piece of content can only be accessed via one URL. The closer you can get to this ideal, the more streamlined your site will be for crawling and indexing. If your CMS or current site setup makes this difficult, you can use the rel=canonical element to indicate the preferred URL for a particular piece of content. A short robots.txt and rel=canonical sketch follows this list.
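
    As a rough illustration of the last two points (the paths and URL here are only examples, not recommendations for your site), the robots.txt rules and the canonical hint might look like this:

    User-agent: *
    Disallow: /login
    Disallow: /cart
    Disallow: /contact-form

    <link rel="canonical" href="http://www.example.com/furniture/green-chair" />

    The robots.txt lines keep crawlers out of pages they can't do anything with, and the link element (placed in the <head> of each duplicate page) points search engines at the preferred URL for that content.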

    If you have further questions about optimizing your site for crawling and indexing, check out some of our previous writing on the subject, or stop by our Help Forum.


    Tips: How to Create a Blog Easily (2013)

    Tips on How to Create a Blog Easily - Before we start: many of my friends have been asking me how to create a blog easily.
    It turns out there are many blogging platforms, but I use Blogspot.com because it is easier.

    Note: Blogger-ly was my old blog, and I later changed its domain to SEO-XP.
    So none of this is copy-pasted material.

    Some of the blogging platforms:
    1. Wordpress.com
    2. Wokaylah.com 
    3. Blogger/blogspot 
    4. Blogster.com 
    5. Blog Rediff (Rediffland.com) 
    6. Blogsome.com


    So what do you need to create a blog on Blogger? One requirement for creating a free blog on Blogger is a Gmail account, the email service owned by Google. Since Blogger is itself a free blogging service owned by Google, it is only natural that Blogger asks you to have a Gmail account first if you want a blog there.

    If you do not yet have a Gmail account, we will start this tutorial by creating one. Without further ado, here are the steps to create a Gmail account.



    A. How to create a Gmail account

    1. Visit www.gmail.com

    2. When you open the address above, you will see a page like the one shown in the image below:


    - Click the "CREATE AN ACCOUNT" button to begin creating your Gmail account.

    3. After step 2, you will see a page like the following:

    In this section, fill in the fields requested on the registration form, such as:

    - In the "Name" field, enter your first and last name.

    - In "Choose your username", enter the username you want, for example belajarblog@gmail.com

    - In "Create a password", enter the password you want.

    - In "Confirm your password", enter the same password as in the field above.

    - In "Birthday", enter your date of birth.

    - In "Gender", select your gender.

    - In "Mobile phone", enter your mobile phone number.

    - "Current email address" can be left empty if you do not already have an email address.

    - In "Prove you're not a robot", type the text shown in the image above it.

    - In "Location", choose your country from the list, for example "Indonesia".

    - Tick the box "I agree to the Google Terms of Service and Privacy Policy".

    - Then click the "Next step" button.

    Wait a moment and an SMS from Google will arrive at the phone number you entered, so that you can activate the Gmail account you have just created.

    If the steps above were done correctly, your Gmail account is ready. The next step is to create a new blog on Blogger.

    B. How to create a blog on Blogger

    1. Visit www.blogger.com

    2. After you open the page above, you will see a page like the one shown in the image below:
    In this section, fill in the fields requested on the form, such as:

    - Pada bagian "Judul Blog" isi dengan judul blog sesuai dengan keinginan kamu, misalnya "Belajar Ngeblog"

    - Pada bagian "Alamat blog (URL)" silahkan isikan dengan nama blog yang kamu inginkan, misalnya "belajar-blog-aja"

    - Pada bagian "Buktikan bahwa Anda bukan robot" isikan dengan tulisan yang tampak pada gambar.

    - Klik tombol "Lanjutkan"

    3. You will then see a page like the one shown in the image below:

    In this section, choose whichever of the available templates you like. After choosing a template,

    the next step is to click the "Lanjutkan" (Continue) button.

    4. You will see a page like the one in the image below:
    With that, this easy blog-creation tutorial is complete, and it is time to start writing on your blog by clicking the "Mulai Blogging" (Start Blogging) button.





    Top 10 Cheap Places in the UK to Buy a 3-Bedroom House (2013)

    The true North South divide

    Just lately we have started to read in the UK newspapers that property prices are set to tumble. Living in the so-called affluent South in a small coastal town I long for the day that house prices do exactly that.

    My home is a 3-bedroomed terraced house. It's no better or worse than most other average
    family homes up and down the length and breadth of the UK, but there is one crucial difference. That is its value. Here in the South-East property prices are dizzyingly expensive. My little house
    would fetch around £270,000 if I were ever to sell it. To make the move to a more expensive home worth anything up to £500,000, I would be required to pay 3% of the new property's purchase price in Stamp Duty Land Tax. For properties priced between £500,001 and £1,000,000, the tax increases to 4%, and if I were to leap up the ladder to a house valued at more than a million, my payment to the treasury would be 5% of its sale price.
    Here in the South-East we are obliged to pay through the nose for quite average homes, and then pay taxes on top for the privilege. Elsewhere in the UK property prices are nowhere near as prohibitive, and consequently there are huge numbers of homeowners living in Britain who will never pay a penny to the government in Stamp Duty. For properties priced between £125,001 and £250,000 the duty is levied at a mere 1%. Beneath that price the duty disappears altogether.

    For ordinary working families living in the South-East of England, property ownership comes at a very high price indeed, and government policies in this instance do little to alleviate the financial pain. The Chancellor of the Exchequer, George Osborne, missed an opportunity to remedy this in his 2013 spring budget. Instead he opted to channel resources into maintaining the status quo, and the government will implement a series of measures to help people raise the enormous deposits required by the banks to put down on over-priced homes.

    However, for those whose jobs are more mobile, there is a whole wealth of property readily available in cheaper areas of the UK. Here are some places you might wish to consider in your search for an affordable three-bedroomed house. All prices given were found on the Rightmove website, and are current for February 2013.
    1. Rhondda, Glamorgan


    In the Rhondda Valley area of beautiful Wales, a three-bedroomed terraced home can be purchased for as little as £35,000. My search also revealed one house in need of updating in

    Treorchy for just £25,000, but there were a number of others, all advertised as being in good order, in the £35,000 to £40,000 price range. Some of the locations listed in this price bracket are Cwmparc, Treorchy and Tynewydd. This area has shown a rise in prices over the two years since I first compiled this list, but it still represents exceptional value compared to other parts of the UK.

    The Rhondda Valley is most notable for its historical link to the coal mining industry, and the closure of many local pits in the 1990s left a legacy of high unemployment. The vast amount of low-priced homes for sale in this region is a reflection of the pain that these communities continue to feel.

    2. Liverpool

    Once, famously, home to the Beatles and Cilla Black, lively Liverpool, with all its musical and artistic heritage, has a plentiful supply of reasonably priced three-bedroomed terraced houses. The lowest priced example I came across in this area was a home in nearby Bootle which is being offered at £34,500, and there are a number of attractive, basic properties available in the Liverpool area in the £35,000 to £40,000 price bracket. Shared ownership schemes seem to be popular in this region, and many reasonably priced, brand new homes come to the market offering 25% to 75% shared ownership.

    In recent years, Liverpool has been transformed into one of the UK's leading business destinations by an ambitious and far-reaching regeneration programme. Although the plethora of cheap housing seems to tell its own story, it may just be that house prices are only temporarily lagging behind the bigger picture. Certainly, here, as in other areas I've investigated, there has been a significant rise in house prices at the lower end of the scale.

    3. Stoke-on-Trent

    Stoke-on-Trent is well known for the numerous potteries that grew up in and around the town from the 17th century onwards. Wedgwood, Minton and Royal Doulton are among the more
    famous china manufacturers from this area, and the potteries, together with abundant local supplies of coal and iron, ensured the prosperity of the region for several centuries. More recently, however, with pit closures, and the loss of numerous factories and steelworks, there has been a sharp rise in unemployment. Nowadays, local tourism opportunities are beginning to be exploited, and both the china works, and the canal system draw their fair share of visitors to the region each year.

    A three-bedroomed, terraced house in the Potteries area can be bought for as little as £40,000 to £45,000. A semi-detached home, in good order, sells for as little as £55,000.
    4. Wakefield, West Yorkshire

    The site of a battle during the Wars of the Roses, Wakefield developed over the centuries, into an important market town and centre for wool, exploiting its position on the navigable River
    Calder to become an inland port. In more recent years, Wakefield has seen a decline in its fortunes as the textile and glass-making factories which first made it wealthy finally closed in the 1970s and 1980s. This was further compounded by pit closures, which resulted in high levels of unemployment.

    In Castleford, Hunslet and Royston, three bedroomed homes are readily available in the £45,000 to £55,000 price bracket. The lower end prices are a little higher in nearby Wakefield, Pontefract, Leeds and Barnsley, but all show listings for comfortable, habitable properties around £65,000.

    5. Newcastle-upon-Tyne

    Three-bedroomed houses priced at between £55,000 and £65,000 are plentiful in the Newcastle-upon-Tyne area of Tyne & Wear. The port developed in the 16th century and, along with the shipyards lower down the river Tyne, it became one of the world's largest ship-building and ship-repairing centres. These industries have since gone into decline, and today, Newcastle-upon-Tyne is largely a business and cultural centre, with a lively nightlife. Newcastle-upon-Tyne has shown one of the largest increases in property prices since this list was first compiled in 2011, and the prices appear to be rising much faster than the national average.
    6. Belfast and Antrim

    Famous for being home to the shipyard that built the Titanic, the hilly streets of beautiful Belfast have seen more than their fair share of problems over the years. The continuing sectarian conflict that has divided communities in this city is in sharp contrast, however, to the warm welcome that visitors receive here. Belfast has a vibrant and thriving city centre with great leisure facilities, historic sites to visit, fabulous shopping streets and excellent transport links. A comfortable three-bedroomed home in nearby Antrim or Newtownabbey could be yours from as little as £70,000, but you'll have to be quick, as the available housing stocks, as of February 2013, are very, very low.

    7. Hull

    Historic Kingston-Upon-Hull, better known as just plain 'Hull', has poetic and theatrical links as well as a fascinating maritime past. Recent investment in urban regeneration has brought about much improvement in poorer areas in and around the city, but the property prices remain some of the UK's lowest. I found a number of three-bedroomed terraced houses advertised for sale priced at around £49,950, all within a ten-mile radius of Hull City Centre. Homes in the £55,000 to £65,000 price range are readily available. If you have a little more to spend, £249,500 will buy you a spacious, detached house with good-sized gardens in one of the better areas, and you could still avoid the Chancellor's 3% stamp duty bracket.

    8. Sheffield, South Yorkshire

    Industrious Sheffield, famous for its cutlery and surrounded by some of Britain's most ruggedly beautiful countryside, has seen tough times in more recent years. Like many of the areas listed here, Sheffield has seen employment prospects wax and wane, but it still remains a vibrant university city with many galleries and museums to browse, and great sporting and leisure facilities. Three-bedroomed terraced houses can be bought for as little as £50,000, and there are a number available in the £55,000 to £65,000 price bracket both in Sheffield, and in the surrounding towns and villages.

    9. Birmingham

    Birmingham, in the West Midlands county of England, is the UK's second most populous city after London. Once at the forefront of the industrial revolution, Birmingham remains a major international commercial centre. It is home to no fewer than three universities, and is also the site of Britain's National Exhibition Centre. Despite its sprawling urban environment, Birmingham enjoys over 8,000 acres of parkland within its boundaries and also has a fascinating and picturesque network of canals and waterways running through the city.

    Three-bedroomed houses in the Birmingham districts of Smethwick and Oldbury start at between £65,000 and £75,000.

    10. Swansea, South Wales

    Swansea and Port Talbot can trace their roots back to the Stone Age. The Romans and the Vikings both came and put their mark on these ancient settlements, and the people of these towns have been seafarers, ship-builders, merchants and coal-miners. Situated on the edge of the beautiful Gower Peninsula, this part of Wales has much to recommend it, not least its property prices. The lowest-priced 3-bedroom terraced homes can be bought for as little as £55,000.

    Help us improve the developer experience at Google Code (2013)

    We'd like your feedback about how to make Google Code a more useful destination for developers to find information about using Google's APIs and developer products.

    Please take our survey and give us your feedback; it should only take a few minutes.

    http://code.google.com/survey

    Everyone who submits the survey will have a chance to win a limited edition t-shirt.

    The survey runs until midnight PST Friday June 11 2010 (that's this week!).


    The number of pages Googlebot crawls (2013)

    From the Google Webmaster Central blog: The Googlebot activity reports in webmaster tools show you the number of pages of your site Googlebot has crawled over the last 90 days. We've seen some of you asking why this number might be higher than the total number of pages on your sites.


    Googlebot crawls pages of your site based on a number of things including:
    • pages it already knows about
    • links from other web pages (within your site and on other sites)
    • pages listed in your Sitemap file
    More specifically, Googlebot doesn't access pages, it accesses URLs. And the same page can often be accessed via several URLs. Consider the home page of a site that can be accessed from the following four URLs:
    • http://www.example.com/
    • http://www.example.com/index.html
    • http://example.com
    • http://example.com/index.html
    Although all URLs lead to the same page, all four URLs may be used in links to the page. When Googlebot follows these links, a count of four is added to the activity report.

    Many other scenarios can lead to multiple URLs for the same page. For instance, a page may have several named anchors, such as:
    • http://www.example.com/mypage.html#heading1
    • http://www.example.com/mypage.html#heading2
    • http://www.example.com/mypage.html#heading3
    And dynamically generated pages often can be reached by multiple URLs, such as:
    • http://www.example.com/furniture?type=chair&brand=123
    • http://www.example.com/hotbuys?type=chair&brand=123
    As you can see, when you consider that each page on your site might have multiple URLs that lead to it, the number of URLs that Googlebot crawls can be considerably higher than the number of total pages for your site.

    Of course, you (and we) only want one version of the URL to be returned in the search results. Not to worry -- this is exactly what happens. Our algorithms select a version to include, and you can provide input on this selection process.

    Redirect to the preferred version of the URL
    You can do this using 301 (permanent) redirect. In the first example that shows four URLs that point to a site's home page, you may want to redirect index.html to www.example.com/. And you may want to redirect example.com to www.example.com so that any URLs that begin with one version are redirected to the other version. Note that you can do this latter redirect with the Preferred Domain feature in webmaster tools. (If you also use a 301 redirect, make sure that this redirect matches what you set for the preferred domain.)
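
    As one possible illustration (this assumes an Apache server with mod_rewrite enabled; other servers have their own equivalents), the two redirects described above might be expressed like this:

    # Sketch of an .htaccess file for the example above
    RewriteEngine On
    # Send the bare domain to the www version
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
    # Send /index.html to the root URL
    RewriteRule ^index\.html$ http://www.example.com/ [R=301,L]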

    Block the non-preferred versions of a URL with a robots.txt file
    For dynamically generated pages, you may want to block the non-preferred version using pattern matching in your robots.txt file. (Note that not all search engines support pattern matching, so check the guidelines for each search engine bot you're interested in.) For instance, in the third example that shows two URLs that point to a page about the chairs available from brand 123, the "hotbuys" section rotates periodically and the content is always available from a primary and permanent location. In that case, you may want to index the first version, and block the "hotbuys" version. To do this, add the following to your robots.txt file:

    User-agent: Googlebot
    Disallow: /hotbuys?*

    To ensure that this directive will actually block and allow what you intend, use the robots.txt analysis tool in webmaster tools. Just add this directive to the robots.txt section on that page, list the URLs you want to check in the "Test URLs" section and click the Check button. For this example, the tool would report that the "hotbuys" URL is blocked for Googlebot while the primary "furniture" URL is allowed.

    Don't worry about links to anchors, because while Googlebot will crawl each link, our algorithms will index the URL without the anchor.

    And if you don't provide input such as that described above, our algorithms do a really good job of picking a version to show in the search results.

    Multiple Sitemaps in the same directory (2013)

    From the Google Webmaster Central blog: We've gotten a few questions about whether you can put multiple Sitemaps in the same directory. Yes, you can!

    You might want to have multiple Sitemap files in a single directory for a number of reasons. For instance, if you have an auction site, you might want to have a daily Sitemap with new auction offers and a weekly Sitemap with less time-sensitive URLs. Or you could generate a new Sitemap every day with new offers, so that the list of Sitemaps grows over time. Either of these solutions works just fine.

    Or, here's another sample scenario: Suppose you're a provider that supports multiple web shops, and they share a similar URL structure differentiated by a parameter. For example:

    http://example.com/stores/home?id=1
    http://example.com/stores/home?id=2
    http://example.com/stores/home?id=3

    Since they're all in the same directory, it's fine by our rules to put the URLs for all of the stores into a single Sitemap, under http://example.com/ or http://example.com/stores/. However, some webmasters may prefer to have separate Sitemaps for each store, such as:

    http://example.com/stores/store1_sitemap.xml
    http://example.com/stores/store2_sitemap.xml
    http://example.com/stores/store3_sitemap.xml

    As long as all URLs listed in the Sitemap are at the same location as the Sitemap or in a sub directory (in the above example http://example.com/stores/ or perhaps http://example.com/stores/catalog) it's fine for multiple Sitemaps to live in the same directory (as many as you want!). The important thing is that Sitemaps not contain URLs from parent directories or completely different directories -- if that happens, we can't be sure that the submitter controls the URL's directory, so we can't trust the metadata.

    The above Sitemaps could also be collected into a single Sitemap index file and easily be submitted via Google webmaster tools. For example, you could create http://example.com/stores/sitemap_index.xml as follows:

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.google.com/schemas/sitemap/0.84">
    <sitemap>
    <loc>http://example.com/stores/store1_sitemap.xml</loc>
    <lastmod>2006-10-01T18:23:17+00:00</lastmod>
    </sitemap>
    <sitemap>
    <loc>http://example.com/stores/store2_sitemap.xml</loc>
    <lastmod>2006-10-01</lastmod>
    </sitemap>
    <sitemap>
    <loc>http://example.com/stores/store3_sitemap.xml</loc>
    <lastmod>2006-10-05</lastmod>
    </sitemap>
    </sitemapindex>

    Then simply add the index file to your account, and you'll be able to see any errors for each of the child Sitemaps.

    If each store includes more than 50,000 URLs (the maximum number for a single Sitemap), you would need to have multiple Sitemaps for each store. In that case, you may want to create a Sitemap index file for each store that lists the Sitemaps for that store. For instance:

    http://example.com/stores/store1_sitemapindex.xml
    http://example.com/stores/store2_sitemapindex.xml
    http://example.com/stores/store3_sitemapindex.xml
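
    For example, http://example.com/stores/store1_sitemapindex.xml might look like the following (the part file names here are just placeholders for however you split that store's URLs):

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.google.com/schemas/sitemap/0.84">
    <sitemap>
    <loc>http://example.com/stores/store1_sitemap_part1.xml</loc>
    </sitemap>
    <sitemap>
    <loc>http://example.com/stores/store1_sitemap_part2.xml</loc>
    </sitemap>
    </sitemapindex>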

    Since Sitemap index files can't contain other index files, you would need to submit each Sitemap index file to your account separately.

    Whether you list all URLs in a single Sitemap or in multiple Sitemaps (in the same directory or different directories) is simply based on what's easiest for you to maintain. We treat the URLs equally for each of these methods of organization.

    Nicholas C. Zakas: Speed Up Your JavaScript (2013)

    Nicholas C. Zakas delivers the seventh Web Exponents tech talk at Google. Nicholas is a JavaScript guru and author working at Yahoo!. Most recently we worked together on my next book, Even Faster Web Sites. Nicholas contributed the chapter on Writing Efficient JavaScript, containing much of the sage advice found in this talk. Check out his slides and watch the video.



    Nicholas starts by asserting that users have a greater expectation that sites will be fast. Web developers need to do most of the heavy lifting to meet these expectations. Much of the slowness in today's web sites comes from JavaScript. In this talk, Nicholas gives advice in four main areas: scope management, data access, loops, and DOM.

    Scope Management: When a symbol is accessed, the JavaScript engine has to walk the scope chain to find that symbol. The scope chain starts with local variables, and ends with global variables. Using more local variables and fewer global variables results in better performance. One way to move in this direction is to store a global as a local variable when it's referenced multiple times within a function. Avoiding the with statement also helps, because it adds more layers to the scope chain. And make sure to use var when declaring local variables, otherwise they'll end up in the global space, which means longer access times.
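
    A small sketch of that advice (the function and variable names are made up for illustration): a global that is read several times is cached in a local, and every local is declared with var so it never leaks into the global scope.

    function collectPageStats() {
      var doc = document;                 // global cached once as a local
      var stats = {
        title: doc.title,                 // later lookups start at the local scope
        links: doc.links.length,
        images: doc.images.length
      };
      return stats;                       // no with, no implicit globals
    }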

    Data Access: In JavaScript, data is accessed four ways: as literals, variables, object properties, and array items. Literals and variables are the fastest to access, although the relative performance can vary across browsers. Similar to global variables, performance can be improved by creating local variables to hold object properties and array items that are referenced multiple times. Also, keep in mind that deeper object property and array item lookup (e.g., obj.name1.name2.name3) is slower.

    Loops: Nicholas points out that for-in and for each loops should generally be avoided. Although they provide convenience, they perform poorly. The choices when it comes to loops are for, do-while, and while. All three perform about the same. The key to loops is optimizing what is performed at each iteration in the loop, and the number of iterations, especially paying attention to the previous two performance recommendations. The classic example here is storing an array's length as a local variable, as opposed to querying the array's length property on each iteration through a loop.
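
    For example (an illustrative function, not code from the talk), the array's length is read once into a local instead of being looked up on every pass, and the per-iteration work is kept minimal:

    function sumValues(values) {
      var total = 0;
      for (var i = 0, len = values.length; i < len; i++) {   // length cached once
        total += values[i];
      }
      return total;
    }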

    DOM: One of the primary areas for optimizing your web application's interaction with the DOM is how you handle HTMLCollection objects: document.images, document.forms, etc., as well as the results of calling getElementsByTagName() and getElementsByClassName(). As noted in the HTML spec, HTMLCollections "are assumed to be live meaning that they are automatically updated when the underlying document is changed." Any idea how long this code takes to execute?

    var divs = document.getElementsByTagName("div");
    for (var i=0; i < divs.length; i++) {
    var div = document.createElement("div");
    document.body.appendChild(div);
    }

    This code results in an infinite loop! Each time a div is appended to the document, the divs array is updated, incrementing the length so that the termination condition is never reached. It's best to think of HTMLCollections as live queries instead of arrays. Minimizing the number of times you access HTMLCollection properties (hint: copy length to a local variable) is a win. It can also be faster to copy the HTMLCollection into a regular array when the contents are accessed frequently (see the slides for a code sample).
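
    Here is one way the earlier loop could be repaired (a sketch, not the exact code from the slides): snapshot the live collection's length, or copy it into a plain array, before appending anything.

    var liveDivs = document.getElementsByTagName("div");
    var divs = [];
    for (var i = 0, len = liveDivs.length; i < len; i++) {   // length frozen up front
      divs.push(liveDivs[i]);
    }
    for (var j = 0; j < divs.length; j++) {                  // plain array: no longer live
      var div = document.createElement("div");
      document.body.appendChild(div);                        // appending can't extend the loop
    }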

    Another area for improving DOM performance is reflow - when the browser computes the page's layout. This happens more frequently than you might think, especially for web applications with heavy use of DHTML. If you have code that makes significant layout changes, consider making the changes within a DocumentFragment or setting the className property to alter styles.
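
    A brief sketch of the DocumentFragment approach (the "list" id is hypothetical): the items are built off-document and attached in one operation, so the browser reflows once rather than once per item.

    var fragment = document.createDocumentFragment();
    for (var i = 0; i < 100; i++) {
      var item = document.createElement("li");
      item.appendChild(document.createTextNode("Item " + i));
      fragment.appendChild(item);                            // no reflow: fragment is off-document
    }
    document.getElementById("list").appendChild(fragment);   // a single reflow here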

    There is hope for a faster web as browsers come equipped with JIT compilers and native code generation. But the legacy of previous, slower browsers will be with us for quite a while longer. So hang in there. With evangelists like Nicholas in the lead, it's still possible to find your way to a fast, efficient web page.


    Check out other blog posts and videos in the Web Exponents speaker series.

    Google Trends for your website (2013)

    From the Google Webmaster Central blog:
    Webmaster Level: All

    In a recent post on the Official Google Blog, we mentioned our Google Trends gadget, and we thought it made sense to also post something here for all the webmasters that might be interested in having Trends on their website. Google Trends is a great way to see what's popular on the web -- people tend to search for what they care about -- and the Trends gadget makes it easy for you to put Trends on your website. Just cut and paste a small snippet of code, input your search terms, and you can show your readers how searches for Obama have changed during the last 30 days or who's the most popular American Idol contestant. So take a little piece of Google with you, and show your readers what's hot on the web.


    Design patterns for accessible, crawlable and indexable content (2013)

    From the Google Webmaster Central blog:

    As a follow-up to my previous posts on accessibility, here are some design recommendations for creating web content that remains usable by the widest possible audience while helping ensure that the content gets indexed and crawled.

    Avoid spurious XMLHttpRequests

    Pages that enable users to look up information often use XMLHttpRequests to populate the page with additional information after the page has loaded. When using this pattern, ensure that your initial page has useful information on it -- otherwise Googlebot as well as those users who have disabled scripting in their browser may believe that your site contains only the message "loading..."

    CSS sprites and navigation links

    Having meaningful text to go with navigational links is equally important for Googlebot as well as users who cannot perceive the meaning of an image. While designing the look and feel of navigational links on your site, you may have chosen to go with images that function as links, e.g., by placing <img> tags within <a> elements. That design enables you to place the descriptive text as an alt attribute on the <img> tag.

    But what if you've switched to using CSS sprites to optimize page loading? It's still possible to include that all-important descriptive text when applying CSS sprites; for a possible solution, see how the Google logo and the various nav-links at the bottom of the Google Results page are coded. In brief, we placed the descriptive text right under the CSS-sprited image.
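
    As a rough sketch of that pattern (the class names, sizes, and sprite path are hypothetical), the descriptive text stays in the markup while the icon is drawn purely with CSS:

    <style type="text/css">
    .nav-icon { display: inline-block; width: 16px; height: 16px;
                background: url(/images/nav-sprite.png) no-repeat; }
    .nav-icon-help { background-position: 0 -16px; }
    </style>

    <a href="/help"><span class="nav-icon nav-icon-help"></span> Help</a>

    With CSS disabled, the icon disappears but the "Help" text and its link remain usable, which is the same behavior illustrated by the Google search results example below.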

    Google search results with CSS enabled


    Google search result with CSS disabled ("Google" sprited image lost, descriptive "Google" link remains)


    Use unobtrusive JavaScript

    We've talked about the concept of progressive enhancement when creating a rich, interactive site. As you add features, also use unobtrusive JavaScript techniques for creating JavaScript-powered web pages that degrade gracefully. These techniques ensure that your content remains accessible by the widest possible user base without the need to sacrifice the more interactive features of Web 2.0 applications.
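
    As a small sketch of the unobtrusive approach (the element id is hypothetical), the content below is always present in the HTML for crawlers and no-script users, and scripting only layers a toggle on top when it is available:

    <div id="product-details">
      <p>Full product details live in the markup, visible to everyone.</p>
    </div>
    <script type="text/javascript">
    (function () {
      var details = document.getElementById("product-details");
      if (!details) return;                       // degrade gracefully if the markup changes
      var toggle = document.createElement("a");
      toggle.href = "#product-details";
      toggle.appendChild(document.createTextNode("Hide details"));
      toggle.onclick = function () {
        var hidden = details.style.display === "none";
        details.style.display = hidden ? "" : "none";
        toggle.firstChild.nodeValue = hidden ? "Hide details" : "Show details";
        return false;                             // keep the href as a no-script fallback
      };
      details.parentNode.insertBefore(toggle, details);
    })();
    </script>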

    Make printer-friendly versions easily available

    Web sites with highly interactive visual designs often provide all of the content for a given story as a printer-friendly version. Generated from the same content as the interactive version, these are an excellent source of high-quality content for both the Googlebot as well as visually impaired users unable to experience all of the interactive features of a web site. But all too often, these printer-friendly versions remain hidden behind scripted links of the form:

    <a href="#" onclick="javascript:print(...)">Print</a>

    Creating actual URLs for these printer-friendly versions and linking to them via plain HTML anchors will vastly improve the quality of content that gets crawled.

    <a href="http://example.com/page1-printer-friendly.html" target="_blank">Print</a>

    If you're especially worried about duplicate content from the interactive and printer-friendly version, then you may want to pick a preferred version of the content and submit a Sitemap containing the preferred URL as well as try to internally link to this version. This can help Google disambiguate if we see pieces of the article show up on different URLs.

    Create URLs for your useful content

    As a webmaster, you have the power to mint URLs for all of the useful content that you are publishing. Exercising this power is what makes the web spin. Creating URLs for every valuable nugget you publish, and linking to them via plain old HTML hyperlinks will ensure that:
    • Googlebot learns about that content,
    • users can find that content,
    • and users can bookmark it for returning later.
    Failure to do this often forces your users to have to remember complex click trails to reach that nugget of information they know they previously viewed on your site.

    Remove your content from Google (2013)

    From the Google Webmaster Central blog:


    Confused about the best uses of robots.txt, nofollow, URL removal tool? Wondering how to keep some of your pages off the web? Our webspam lead, Matt Cutts, talks about the best ways to stop Google from crawling your content, and how to remove content from the Google index once we've crawled it.



    We love your feedback. Tell us what you think about this video in our Webmaster Help Group.

    * Note from Matt: Yes, robots.txt has been around since at least 1996, not 2006. It's hard for me to talk for 12-13 minutes without any miscues. :)



    Update: for more information, please see our Help Center articles on removing content.

    Google I/O: Session videos on building apps using the AJAX and Data APIs (2013)

    One of the best things about attending Google I/O is the chance to meet developers who are using our APIs and interacting with Google technology in ways we could never imagine. Not only was it amazing to see exciting examples of apps built on the AJAX and Data APIs being demoed at the developer sandbox, but it was also interesting to meet other developers who are just starting to use many of our APIs for their specific needs and cool ideas. Hopefully, by making all of our sessions available for free to watch on your own time, many of you who are interested in Google's APIs will get a better understanding of the ways we are making our API offerings easier to use, more efficient and much more feature rich.

    Big Announcements & More

    One of the most exciting announcements at this year's I/O was the developer preview of Google Wave. After its introduction during the Day 2 keynote, there were three sessions devoted to the Google Wave APIs: Programming With and For Google Wave, Google Wave: Powered by GWT, and Google Wave: Under the hood. We hope you're as excited as we are, and can't wait to see how you use these tools.

    Another new product announcement this year was Google Web Elements, which allow you to easily add your favorite Google products onto your own website. There are elements for Google News, Maps, Spreadsheets, YouTube and others, with more to come. Be sure to check out the Day 1 keynote for a complete introduction to the simple copy and paste power of Google Web Elements.

    Keeping webmasters in mind, two sessions were all about optimizing your site for search. In one talk, Matt Cutts reviewed real sites that *you* submitted, talking through real-life issues that affect developers when it comes to optimizing their app for search. The other session focused on how to maximize your site, your content, and your application's exposure to search engines.

    Javascript & Google AJAX APIs

    The session on Custom Search Engines focused on helping your users search the sites and topics that are relevant to you. Nick Weininger discussed some of the ways to embed search and ads onto your site (including the new Custom Search element), then customize the look and feel of the results. Adobe was on hand to show how they're using Custom Search Engines to enhance their products and insert contextual search into the developer's programming workflow. We also announced the launch of the Custom Search gadget for Blogger which gives your blog's visitors the ability to search not just your posts, but web pages linked from your blog, your blog lists, and link lists.

    In the session Implementing your Own Visualization Datasource, attendees learned about building a server-side data source compatible with the Google Visualization API, including hearing about the experience from a Salesforce.com expert. Itai Raz also gave a great session on using the Visualization API with GWT and treated the audience to advanced Javascript tricks such as wrapping visualizations as gadgets.

    Ben Lisbakken's session detailed some advanced Javascript techniques and then delved into some of the great tips and tricks he learned while creating the Code Playground, a tool which can help developers learn about and experiment with many of Google's APIs. Some of the highlights include increasing the security and performance of applications and learning why App Engine is so easy to develop on.

    Jon Kragh of VastRank showed off some neat ways he's Using AJAX APIs to Navigate User-Generated Content, including using Google Maps to display nearby colleges and translating reviews into the viewer's language. Also, Michael Thompson explored the idea of Building a Business with Google's free APIs using example Google Gadgets, Google Gadget Ads, Mapplets, and the Maps API.

    Google Data APIs

    Jeff Fisher and Jochen Hartmann spoke on the future direction of the YouTube API as it becomes increasingly social. They used two sample applications to demonstrate the use of the activity feeds as well as the new "SUP" feed that allows high traffic websites to monitor YouTube for activity in a scalable manner.

    The session about writing monetizable YouTube apps focused on creating applications that allowed access to YouTube videos in creative ways. In the talk, Kuan Yong showed how to expertly navigate through the YouTube API terms of service in order to avoid business pitfalls so that developers can monetize their own apps.

    Eric Bidelman and Anil Sabharwal discussed the Document List Data API in detail, highlighting common enterprise use cases such as sync, migration, sharing, and legal discovery. Partners Syncplicity, OffiSync, and gDocsBar showed off compelling demos.

    In the talk on the evolution of the Google Data protocol, Sven Mawson outlined all of the new features in the Google Data APIs that will help in the creation of more efficient applications. Two of the new additions included a compact and customizable JSON output and the option to retrieve only the parts of a feed that you want using partial GET.

    Monsur Hossain and Eric Bidelman showed how to build a read/write gadget using OAuth and the Google Data JavaScript library. They went through a step by step set of instructions that explained how to set up the gadget code, how to get a token using the OAuth proxy, and how to read and write data to Blogger using the JavaScript library inside of an iGoogle gadget.

    Google Geo APIs

    Mano Marks and Pamela Fox started with a grab bag session covering the vast spectrum of Geo APIs, discussing touring and HTML 5 in KML, the Sketchup Ruby API (with an awesome physics demo), driving directions (did you know you can solve the Traveling Salesman Problem in Javascript?), desktop AIR applications, reverse geocoding, user location, and monetization using the Maps Ad Unit and GoogleBar. Pamela finished by sneak previewing an upcoming feature in the Flash API: 3d perspective view.

    In the session on performance tips for Maps API mashups, Marcelo Camelo announced Google Maps API v3, a latency-oriented rewrite of our popular JS Maps API. Also see Susannah Raub's more in-depth talk about Maps API v3. Then Pamela gave advice on how to load many markers (by using a lightweight marker class, clustering, or rendering a clickable tile layer) and on how to load many polys (by using a lightweight poly class, simplifying, encoding, or rendering tiles). Sascha Aickin, an engineer at Redfin, showed how they were able to display 500 housing results on their real estate search site by creating the "SuperMarker" class.

    Mano and Keith presented various ways of hosting geo data on Google infrastructure: Google Base API, Google App Engine, and the just-released Google Maps data API. Jeffrey Sambells showed how ConnectorLocal used the API (and their own custom PHP wrapper) for storing user data.

    On the same day as announcing better integration between the Google Earth and Google Maps JS APIs, Roman Nurik presented on advanced Earth API topics, and released a utility library for making that advanced stuff simple.
