27 Jan 2012

Interesting Advertising Quotes

The "Ad Contrarian (Blog)" is a great mix of astonishing incisive commentary, irreverence and hilarious. If you are connected to advertising and marketing, you might love it.

Here are some delicious quotes from the blog's right-hand navigation. Interesting!

"Creative people make the ads. Everyone else makes the arrangements."

"Brand studies last for months, cost hundreds of thousands of dollars, and generally have less impact on business than cleaning the drapes."

"Nobody really knows what "creativity" is. Every year thousands of people take a pilgrimage to find out. This involves flying to Cannes, snorting cocaine, and having sex with smokers."

"Marketers always overestimate the attraction of new things and underestimate the power of traditional consumer behavior."

"If you're looking for perfection, you came to the wrong planet."

"We don’t get them to try our product by convincing them to love our brand. We get them to love our brand by convincing them to try our product."

"As an advertising medium, the web is like communism. It's never very good right now, but it's always going to be great some day."

"In American business, there is nothing stupider than the previous generation of management."

"If the message is right, who cares what screen people see it on? If the message is wrong, what difference does it make?"

"The only form of product information on the planet less trustworthy than advertising is the shrill ravings of web maniacs."

"In the entire history of civilization, nothing good ever happened to a teenager after midnight."

"There's no bigger sucker than a gullible marketer convinced he's missing a trend."

"All ad campaigns are branding campaigns. Whether you intend it to be a branding campaign is irrelevant. It will create an impression of your brand regardless of your intent."

"Nobody ever got famous predicting that things would stay pretty much the same."

24 Jan 2012

Web Page Loading Time Affects the Bottom Line

Web page loading time is obviously an important part of any website's user experience. And many times we let it slide to accommodate better aesthetic design, nifty new functionality or more content on our pages. Unfortunately, website visitors tend to care more about speed than about all the bells and whistles we want to add to our websites. Additionally, page loading time is becoming a more important factor in search engine rankings.
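If you want to see what your own visitors actually experience, the browser's Navigation Timing API gives a quick read on full page load time. Below is a minimal sketch in plain JavaScript; the reportLoadTime hook is a hypothetical placeholder for whatever reporting you use:

// Minimal sketch: measure full page load time with the Navigation Timing API.
window.addEventListener('load', function () {
  // Defer one tick so that loadEventEnd has been populated.
  setTimeout(function () {
    var t = window.performance && window.performance.timing;
    if (!t) { return; } // older browsers without Navigation Timing support
    var pageLoadMs = t.loadEventEnd - t.navigationStart;
    console.log('Page load time: ' + pageLoadMs + ' ms');
    // reportLoadTime(pageLoadMs); // hypothetical hook into your own analytics
  }, 0);
});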

I read two related articles from KISSmetrics:
How Loading Time Affects Your Bottom Line
Speed Is A Killer – Why Decreasing Page Load Time Can Drastically Increase Conversions

I am reproducing the KISSmetrics infographic below.


Click here to download a pdf version of this infographic from KISSmetrics.

As per Avinash Kaushik, there are three more factoids (not in the infographic, but still connected to the impact of speed):

Google: If search results are slowed by even a fraction of a second, people search less. A 400ms delay leads to a 0.44 percent drop in search volume!

Edmunds: Reduced load times from 9 seconds to 1.4 seconds; ad revenue increased three percent, and page views per session went up 17 percent!

Shopzilla: Dropped latency from 7 seconds to 2, revenue went up 12 percent and page views jumped 25 percent. They also reduced their hardware costs by 50 percent!

22 Jan 2012

Chrome blocks HTTP auth for subresource loads

We had a website in the production environment at http://www.example.com. Its page-construction-related static subresources (CSS, JS, JPG, GIF and PNG) are served from http://eimg.com. Why two separate domains? To serve the static content from a cookieless domain.
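In the page markup this simply means that the static references point at the other host, roughly like the following (the file paths here are made up for illustration):

<!-- Page served from http://www.example.com; static assets come from the cookieless domain -->
<link rel="stylesheet" href="http://eimg.com/css/site.css">
<script src="http://eimg.com/js/site.js"></script>
<img src="http://eimg.com/img/header-logo.png" alt="Logo">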

For its development we had a development environment at http://dev.example.com, with subresources at http://dev.eimg.com. Both of these development domains are password protected by HTTP Basic Authentication.

Issue:

On http://dev.example.com the static subresources are being served from http://dev.eimg.com. Both are password protected by HTTP Basic Authentication.

Everything works fine with Firefox and MS Internet Explorer. On accessing the development website, the browser asks for a password twice: once for http://dev.example.com and once for http://dev.eimg.com.

But Chrome does not prompt for the second password (for http://dev.eimg.com); those requests get HTTP status 401, so all content from dev.eimg.com is blocked and the page renders without stylesheets and page-construction images.

The Google Chrome developers say:
This behavior is an intentional change as a phishing defense. Sites shouldn't be doing this kind of authorization cross-origin. If you need to allow this behavior, launch chrome with the --allow-cross-origin-auth-prompt flag.
Read: http://code.google.com/p/chromium/issues/detail?id=91814
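In other words, as a temporary workaround on a development machine, Chrome can be launched with that switch, e.g. google-chrome --allow-cross-origin-auth-prompt on a typical Linux install (the executable name differs per platform). This only affects the instance launched with the flag, so it is a developer-side workaround rather than a fix for other visitors.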

Further Read: http://blog.chromium.org/2011/06/new-chromium-security-features-june.html. It says:
Chromium 13: blocking HTTP auth for subresource loads
There’s an unfortunate conflict between a browser’s HTTP basic auth dialog, the location bar, and the loading of subresources (such as attacker-provided <img> tag references). It’s possible for a basic auth dialog to pop up for a different origin from the origin shown in the URL bar. Although the basic auth dialog identifies its origin, the user might reasonably look to the URL bar for trust guidance. To resolve this, we’ve blocked HTTP basic auth for subresource loads where the resource origin is different to the top-level URL bar origin. We also added the command line flag switch --allow-cross-origin-auth-prompt in case anyone has legacy applications which require the old behavior.
Do you wish to see a list of Chromium command-line switches?
http://peter.sh/experiments/chromium-command-line-switches/

Possible Resolution:

Let's remove the HTTP Basic Authentication from dev.eimg.com.
Pros: The issue would be resolved.
Cons: Google may crawl the images from dev.eimg.com and index them in Google Images. This would create duplicate image content for search engines (development: dev.eimg.com vs production: eimg.com).

But I do not see this as a significant duplication threat, because only page-construction assets (JS, CSS, images and sprites) are served from this sub-domain.

I hope this post helps anyone encountering a similar situation. By the way, I fully agree with Chrome's implementation of this phishing defense.

15 Jan 2012

What is Bounce Rate and Why should we worry about Bounces?

As per the "Content Characterization" section of Web Analytics Definitions by Web Analytics Association (www.webanalyticsassociation.org):

TERM: Bounce Rate
Type: Ratio
Universe: Aggregate, Segmented
Definition/Calculation: Single page view visits divided by entry pages.
Comments: If bounce rate is being calculated for a specific page, then it is the number of times that page was a single page view visit divided by the number of times that page was an entry. If bounce rate is calculated for a group of pages, then it is the number of times pages in that group was a single page view visit divided by the number of times pages in that group were entry pages. A site-wide bounce rate represents the percentage of total visits that were single page view visits.
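To make the calculation concrete, here is a quick worked example with made-up numbers: if a page was the entry page for 1,000 visits, and 420 of those visits viewed only that page before leaving, then that page's bounce rate is 420 / 1,000 = 42%.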

That's the standard definition. But it would have been better if we could measure "the percentage of website visitors who stay on the site for a small amount of time (usually five seconds or less)". A stay of less than five seconds is difficult to measure, because web-analytics software calculates time on site by recording the time spent on the previous page only when the visitor clicks through to a next page. If the visitor never clicks to a next page (a bounce), how do we know the time spent on that single page? Alas, if only we could measure it somehow!
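One common workaround is to fire a timed analytics event so that visits lasting beyond a threshold are no longer counted as bounces, which effectively turns bounce rate into a "left within N seconds" metric. Below is a sketch assuming the site runs the standard asynchronous Google Analytics (ga.js) snippet, so the global _gaq queue exists; the 10-second threshold is an arbitrary choice:

// Sketch: mark visits that stay at least 10 seconds as "engaged" so they no longer count as bounces.
window.setTimeout(function () {
  if (window._gaq) { // assumes the async ga.js snippet is already on the page
    _gaq.push(['_trackEvent', 'Engagement', 'TimeOnPage', '10+ seconds']);
  }
}, 10000);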

Anyway, bounce rate is an interesting way to measure the quality of traffic coming to a website. In short, bounce rate measures the percentage of people who came to your website and left "instantly". It tells you about the quality of the traffic you are acquiring, and if the traffic is right, it helps you understand where and how your website is failing your visitors.

What is a Bounce?
Now, let's try to understand how bounces happen:
  1. Any click on the page that directs a user to an external website or to one of your sub-domains (yes, sub-domains count towards your bounce rate).
  2. Pushing the back button and going back to the source.
  3. Closing the browser tab or the entire browser.
  4. Typing a new URL from that page and leaving.
  5. A session timeout, i.e. more than 30 minutes spent on a single page (in my opinion this is a slightly misleading factor: it can mean either that the user is inactive or that the page is so engaging that it held the user beyond 30 minutes, for example a page with a long, interesting video; this must be studied while analyzing the bounce rate for the site in question).
All these are taken into account when calculating your bounce rate compared to the total number of visits to a single page.

By the way, I would like to emphasize that we should not confuse bounce rate with exit rate. You may like to read Web Analytics Bounce-Rate and Exit-Rate.

What is the industry standard for bounce rate?

The short answer is that there is no industry standard. A lot of factors influence bounce rate, so you really can't compare the bounce rate of one site (or page) to another's. The best way to know whether you are doing better or worse is to set your own baseline and compare your performance over time.

Here are some of the numbers that were listed by Steve Jackson (Co-Chair Nordic Branch, Web Analytics Association), based on his experience with various sites.
Source: http://tech.groups.yahoo.com/group/webanalytics/message/6116


What are the factors that affect the bounce rate?

Below are some of the factors that determine bounce rate. You can use this as a checklist to diagnose a high bounce-rate issue. (Source: http://webanalysis.blogspot.com/2007/07/bounce-rate-demystified.html)

1. Source of your traffic – Each source results in a different bounce rate. When setting your baseline, create an overall baseline and a baseline for each traffic source, e.g. display advertising, organic traffic. Segment the bounce rate by traffic source and then analyze.

Do you need to revisit relationships with sites that are not sending you high-quality traffic? What is the call to action that is causing people to come to your site and bounce? Are your email, affiliate and other marketing campaigns yielding low bounce rates?

2. Search engine ranking of the page – A page that ranks high for an irrelevant keyword will get a higher bounce rate. Measure the bounce rate of your search keywords.

3. Type of Audience – If you are advertising and reaching the wrong audience, you will see a higher bounce rate. Bounce rate will tell you whether you need to target your ads better.

4. Landing Page Design – Landing page design affects the bounce rate. I suggest A/B testing to improve it after you have set your baseline. No matter how low you go, there is always an opportunity for improvement unless you have somehow achieved a 0% bounce rate.

5. Ad and Landing Page Messages – If the messages on your banner or search ads are not aligned with the messages on the landing page, then chances are you will have one of those 50%+ bounce rates. Make sure the messages are aligned and give visitors a clear call to action. Many times marketers send users to a generic page instead of an appropriate landing page; this can (and will) result in higher bounce rates. Again, A/B or multivariate testing should be used to reduce the bounce rate.

6. Emails and Newsletters – Subject lines, the To and From fields, links, banners, the layout of the email and the landing pages all work in tandem. They can either result in a great user experience, and hence a lower bounce rate, or in a disaster. Test to reduce the bounce rate.

7. Load time of your page(s) – A longer load time can result in visitors bailing out of the site, causing higher bounce rates. Conversely, users may hit the refresh button, thinking there was a problem with the page load; this will incorrectly reduce the bounce rate.

8. Links to external sites – A page that has links to external sites (or sub domains or pages that are not tracked in the same data warehouse) will show higher bounce rates.

9. Purpose of the page – Some pages exist to drive users deeper into the site, while other pages provide the information the user is looking for. A page that provides the end result can show a higher bounce rate. One example is the branch-offices page on my company's website: I have this page bookmarked, and whenever I need a particular branch's phone number, I go to my favorites, pull up this page, get the number and leave.

10. Other factors - Pop-up ads, pop-up survey requests, music, streaming video, all can have an adverse effect on bounce rates if users become annoyed.

Worry about Bounce-Back rather than Bounce-Rate

First I want to make a couple of things clear. It’s highly unlikely that search engines use bounce rate directly when scoring or ranking webpages. Nor is a high bounce rate a definite signal of low quality or a failure to meet visitor expectations or needs.

Over the past few years, there has been a lot of debate (and confusion) about whether bounce rate is a signal that search engines use to determine content quality. Matt Cutts has probably been asked this question ten thousand times, and has referenced bounce rate when answering some questions about ranking factors. He has said that bounce rate is spammable and a noisy signal, and I too think that bounce rate is a poor signal for Google to use.

Just think about online calculators, time-zone converters or a video article. They can have extremely high bounce rates, since they can satisfy the user with only one page view. If your pages are satisfying the user, I would not worry about bounce rate.

I have seen many instances of pages with high bounce rates (in Google Analytics) that still ranked well in Search. The content I’m referring to would clearly be identified as high quality, unique, and valuable, but had high bounce rates. Many of the pages I am referring to ranked in the top three to five listings on page one of Google, Bing, and Yahoo.

Duane Forrester, Sr. Product Manager at Bing, explains in a post (How To Build Quality Content) that the engines can monitor "dwell time", the time a person remains on your page before clicking back to the search results. If visitors click through from the search listings to your site and then click back to Bing quickly (in just a few seconds), that can be a negative signal to Bing.

Google internally refers to a visit that doesn't bounce back as a "long click": a click that leaves Google and doesn't come back for a long time, that is, until that person wants to use Google for an unrelated search. The user doesn't refine their keyword, nor do they use the back button and click another result in the SERPs instead.

If Google is trying to use bounce-back as a ranking signal, it will have to deal with some complications: a lot of people rapidly open a series of results in tabs by repeatedly jumping back to the SERP, and they won't even look at any of the pages until they finish opening the tabs. This makes it appear that rapid bounce-backs are occurring when they aren't.

In my opinion, Google is clearly able to handle this multi-tab result opening. If I do a search, click on a result and then use the back button, I get a little notice in the SERPs to "block all <site> results". If I open many sites in tabs, I don't get this notice. So Google is able to detect and penalize bounce-backs. I believe Google started using this as a site-wide signal as a big part of Panda: if bounce-backs are high across many of the search terms sending traffic to your site, the site (or a section of it) probably does not have high-quality content.

My recommendation for bounce-backs: if traffic is coming to high-bounce-rate pages through irrelevant keywords, try to adjust the content so that it no longer ranks for those keywords. If bouncing traffic is coming to a page through keywords with a different intent, link to relevant content even if it is outside your site. This will reduce bounce-backs, and the pages (and the site) will rank higher.

Currently, there is no methodology available for webmasters to track bounce-back. Bounce rate is easy to measure and can be a good proxy for bounce-back or long-click on a site with deep content.

Hopefully, analytics packages will evolve to let us see dwell time, bounce-back or long-click. In the meantime, any measure that improves engagement and increases the time visitors spend interacting with our content is worthwhile.

12 Jan 2012

How Does Our Brain Know What Is a Face and What's Not?


Objects that resemble faces are everywhere. Whether it’s New Hampshire’s erstwhile granite “Old Man of the Mountain,” or Jesus’ face on a tortilla, our brains are adept at locating images that look like faces. However, the normal human brain is almost never fooled into thinking such objects actually are human faces. (Credit: MIT)

Image found through an article published at:
http://www.sciencedaily.com/releases/2012/01/120109132705.htm

9 Jan 2012

What if Google had to SEO Optimize its own Home Page

This is really very interesting.


http://meangene.com/google/design_for_google.html


Also, see the SEO Report Card for Google's Own Product "Google Products" at:


http://www.google.com/webmasters/docs/google-seo-report-card.pdf
A little old (Tuesday, March 02, 2010) but still interesting.
Geeks vs Non-geeks – Script Automation



I stumbled upon this image while surfing the Internet and liked it so much that I thought of sharing it.
Quite interesting, isn't it?

2 Jan 2012

My Portrait Sketch

This Christmas I was at the Banaras Hindu University (BHU) campus in Varanasi. Prabhakar, a fine arts student, tried to draw my portrait sketch with color crayons.


DS getting his Portrait Sketched at BHU, Varanasi

In the end, the sketch did not resemble me :) But I did not let him know that I was disappointed, and paid him the Rs 200 he asked for.


AngularJS

In this BayJax talk from September 16, 2011, Miško Hevery of Google speaks about AngularJS, an open source MVC framework for JavaScript.



Angular is an open-source MVC JavaScript framework, which simplifies web development by offering automatic view/model synchronization.

In addition to two-way binding, Angular is lightweight, supports all major browsers, and is built for creating testable JavaScript code.
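To get a feel for the two-way binding, here is a tiny self-contained sketch. It uses the directive names and the Google-hosted build from the later 1.x releases, so the exact syntax differs slightly from the 2011-era builds shown in the talk; treat it as illustrative only:

<!DOCTYPE html>
<html ng-app>
  <head>
    <!-- Assumed CDN path/version; any AngularJS 1.x build will do -->
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.0.1/angular.min.js"></script>
  </head>
  <body>
    <!-- Whatever is typed into the input is instantly reflected in the greeting below -->
    <input type="text" ng-model="name" placeholder="Your name">
    <p>Hello {{name}}!</p>
  </body>
</html>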

Angular was created by Miško Hevery (http://misko.hevery.com/).

From the Angular website:

<angular/> is what HTML would have been if it had been designed for building web applications. It provides your application’s plumbing so you can focus on what your app does, rather than how to get your web browser to do what you need.

For more information visit: http://angularjs.org

You can find the source code here: 
https://github.com/mhevery/angular.js

Great talk! Looking forward to learning AngularJS.

Usefulness

I liked it - courtesy: http://www.bonkersworld.net/usefulness/

1 Jan 2012

Pages prohibited by robots.txt get indexed in Search Engines

My friends from the SEO world keep asking me the following question from time to time:

Why is my URL showing up in Google when I blocked it in robots.txt? It seems that Google crawls the disallowed URLs.

Let's take a case from a popular B2B portal, http://dir.indiamart.com.
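Their robots.txt blocks the /cgi directory with a rule along these lines (a reconstruction for illustration; the exact file may differ):

User-agent: *
Disallow: /cgi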



Now, let's see what happens when we search for a specific URL from the disallowed /cgi directory.




Google has 360 pages from that "disallowed" directory.

How could this happen? The first thing to note is that Google abides by your robots.txt instructions: it does not index the text of those pages. However, the URL is still displayed because Google found a link to it somewhere else, such as:

<a href="http://dir.indiamart.com/cgi/compcatsearch.mp?ss=Painting">Painting Manufacturers & Suppliers - Business Directory, B2B...</a>
Google hasn't crawled these URLs, so each result appears as a bare URL rather than a traditional listing.

Also, because it found the link with the anchor text "Painting Manufacturers & Suppliers - Business Directory", it associated that text with the listing.

In addition, Google can show a page description below the URL. Again, this is not a violation of robots.txt rules — it appears because Google found an entry for your robots.txt disallowed page / site in a recognized resource such as the Open Directory Project. The description comes from that site rather than your page content.

robots.txt tells the engines not to crawl the given URL, but allows them to keep the page in the index and display it in the results (see the snapshot above: notice that there is no snippet).

This becomes a problem when these pages accumulate links. They can accumulate link juice (ranking power) and other query-independent ranking metrics (like popularity and trust), but they can't pass these benefits on to other pages, since the links on them never get crawled.

This is further elaborated in the SEOmoz cartoon below (courtesy: Robots.txt and Meta Robots):



This means that, in order to exclude individual pages from search engine indices, the noindex meta tag <meta name="robots" content="noindex, follow"> is actually superior to robots.txt.

Blocking with Meta NoIndex tells engines they can visit but they are not allowed to display the URL in results.

In a Webmaster Help video titled "Uncrawled URLs in search results", Matt Cutts explains why a page that is disallowed in robots.txt may still appear in Google's search results.


The SitePoint article "Why Pages Disallowed in robots.txt Still Appear in Google" may also be worth reading in this regard.

I did all this research for my own purposes, but I thought of sharing it in case it helps others.