Internet Surveys in Trademark Litigation? Option A—Yes, Option B—No

Aug. 26, 2011, 4:00 AM UTC

Fifteen years ago, surveys on the internet were unheard of—but the last decade has witnessed a raging debate as to whether surveys conducted on the internet are reliable. This could be due either to developments in technology and survey methodology or to deficiencies in traditional survey methods. 1Charles Cowan, Defining the Population vs. Finding the Population—Theory vs. Practice in Surveys Used in Litigation (June 30, 2007), David L. Faigman et al., 1 Mod. Sci. Evidence § 8:26 (2010-11 ed.), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=997791 (defining an internet survey as one in which potential respondents are contacted and their responses are collected over the internet). The World Wide Web has permeated almost every age group, culture, and demographic, so that a broad range of the populace is within its reach. 2Jared Heyman, Infosurv: Online Research Surpasses Phone Surveys (Feb. 27, 2007), available at http://www.prweb.com/releases/online_survey/market_research/prweb506653.htm.

Initial efforts to replace traditional in-person or telephonic surveys with internet surveys were viewed with significant skepticism by courts. 3Joshua M. Dalton and Alison E. Hickey, Proceed with Caution: Internet-Based Survey Research, Bloomberg L. Rep. Intell. Prop. (2009), citing Adv. Haim Ravia, Judge Hashin: Internet Surveys Are Not Reliable (Feb. 10, 2001) (Harshly criticizing the inherent reliability of internet surveys, Israeli Supreme Court Judge Hashin declared that “an Internet survey does not give a genuine statistical picture, nor does it purport to do so.”). Internet survey results were considered to lack trustworthiness. 4Id. See also Trustees of Columbia University in the City of New York v. Columbia/HCA Healthcare Corp., 964 F. Supp. 733, 747, 43 USPQ2d 1083 (S.D.N.Y. 1997). Over the years, this premature pronouncement has been sidelined by the numerous court opinions that have admitted internet trademark surveys; and in the cases where they have been excluded, the reason for their exclusion was not unique to their online medium, 5See, e.g., Citizens Banking Corp. v. Citizens Financial Group Inc., 2009 U.S. App. LEXIS 8366 (6th Cir. 2009); TrafficSchool.com Inc. v. EDriver Inc., 633 F. Supp. 2d 1063 (C.D. Cal. 2008); see also Gabriel M. Gelb & Betsy D. Gelb, Internet Surveys for Trademark Litigation: Ready or Not, Here They Come, 97 Trademark Rep. 1073 (2007). indicating that the internet is not an inherently defective medium for surveys. It is the way in which the survey is conducted, not where it is conducted, that affects its admissibility in court.

I. WHAT IS THE JUDICIAL ATTITUDE TOWARDS INTERNET SURVEYS?

Internet surveys have been criticized by courts, but the criticisms have been directed towards flaws in the universe of respondents, selection of survey format and questions, and choice of stimuli and test controls, rather than the surveys’ use of an online medium. 6See Tokidoki LLC v. Fortune Dynamic Inc., U.S. Dist. LEXIS 65665 (C.D. Cal. 2009); University of Kansas v. Sinks, U.S. Dist. LEXIS 23763 (D. Kan. 2008); ComponentOne LLC v. ComponentArt Inc., U.S. Dist. LEXIS 87066 (W.D. Pa. 2008); Kargo Global Inc. v. Advance Magazine Publishers Inc., U.S. Dist. LEXIS 57320 (S.D.N.Y. 2007). In each of these cases, the court entirely discredited a survey that had been conducted on the internet but did not criticize the use of an online methodology. This shows that the challenges faced by internet surveys are not altogether different from those that other survey methods face.

In the nascent stages of internet surveys, the U.S. District Court for the Southern District of New York, in the Trustees of Columbia case in 1997, excluded an internet-based health survey, finding that “there was no showing of expert testimony that supported the trustworthiness of the internet survey methodology.” 7Trustees of Columbia University v. Columbia/HCA Healthcare Corp., 964 F. Supp. 733, 747 (S.D.N.Y. 1997). In a subsequent case, St. Clair, 8St. Clair v. Johnny’s Oyster & Shrimp Inc., 76 F. Supp. 2d 773, 774 (S.D. Tex. 1999). in 1999, the U.S. District Court for the Southern District of Texas stated that “any evidence procured off the internet is adequate for almost nothing, even under the most liberal interpretation of the hearsay exception rules in Fed. R. Evid. 807.” 9See also Engers v. AT&T, U.S. Dist. LEXIS 41682 (D.N.J. 2005). This case, however, may be distinguished on its facts, as the internet evidence submitted was not a survey. Rather, internet message board postings were submitted as evidence of actual confusion. Such weblog evidence may be treated unfavorably as opposed to methodically conducted and controlled surveys. See also University of Kansas v. Sinks, U.S. Dist. LEXIS 23763 (D. Kan. 2008). Despite the seemingly unfavorable reaction in these two cases, the treatment of internet surveys in the cases that followed did not justify the anxiety surrounding their use in trademark litigation. 10Hal Poret, A Comparative Empirical Analysis of Online Versus Mall and Phone Methodologies for Trademark Surveys, 100 Trademark Rep. 756 at 765 (2010).

In the past decade, there have been a few instances of exclusion of internet surveys, although none apparently motivated by the use of the internet as the survey medium. In a false advertising case, Procter & Gamble Co. v. Ultreo Inc., 11574 F. Supp. 2d 339. the plaintiff conducted a study to distinguish between the lost sales it believed it would experience from lawful competition and truthful advertising and the lost sales it believed it would experience from the alleged false advertising of the plaque-removal effect of defendant’s toothbrush. 12See Johnson & Johnson v. Carter-Wallace Inc., 631 F.2d at 190 (requiring that plaintiff show “a logical causal connection between the alleged false advertising and its own sales position”). Since there was no information as to the composition and selection methodology of the survey sample, and since the study was simply an estimate of the extent of defendant’s first-year sales and the effect of the market entry on plaintiff’s sales without factoring in any false or misleading advertising, the study was held not probative of irreparable injury. 13See Vista Food Exchange Inc., 2005 U.S. Dist. LEXIS 42541, at *18-19 (holding that survey with improperly defined sample was not probative). People’s United Bank v. PeoplesBank 142010 WL 4877856. is yet another example of the exclusion of an internet survey report that did not meet the scientific criteria of a reliable survey. The court rejected the survey as unreliable because the questions were suggestive and leading. 15Id. The question posed—which if any of the banks are the same as or affiliated with defendant—provokes a “demand effect.” (Ex. 287.)

In 1-800 Contacts Inc. v. WhenU.com, 16414 F.3d 400 (2d Cir. 2005). the plaintiffs alleged that defendants’ pop-up boxes generated confusion between a competitor that purchased advertising (Vision Direct) and plaintiffs’ “1-800 Contacts.” Survey respondents were selected from an internet panel of over three million consumers. From this panel, the survey expert selected 100,000 potential respondents, and of these, 46,000 agreed to take the survey. The online survey yielded 994 responses. While some of the issues in survey design were not solely related to the use of the internet (including the lack of a control, compound questions, leading questions, etc.), others were directly related to the survey medium. 17Id.

First, plaintiff’s expert Mr. Neal’s use of an internet panel raised questions about the representativeness of the sample. Second, Mr. Neal’s questions about technical concepts were likely to lead to biased results and confusion resulting from respondents’ interpretations of what a pop-up is or how pop-ups and internet searches actually work. Third, by not actually showing the respondents pop-up ads and instead merely trying to describe them, Mr. Neal assumed that consumers shared an equivalent understanding of a pop-up. Absent this understanding, respondents were likely to be influenced by the leading and ambiguous nature of the questions posed, and this, amongst other flaws, should have rendered the results meaningless.

Despite this, the court did rely on the survey evidence, finding that the survey suggested a likelihood of initial interest confusion. A positive treatment such as this underscores the possibility that an internet survey can be used to offer evidence of confusion.

It is not only the rejection of internet surveys for reasons tangential to their medium, but also the courts’ deliberate admission of and reliance on them, that points towards the positive treatment of internet surveys. For example, in Kinetic Concepts Inc. v. BlueSky Medical Corp., 182006 U.S. Dist. LEXIS 60187. the motion to strike the internet survey evidence was denied; in Best Vacuum Inc. v. Ian Design Inc., the court accepted the internet survey evidence submitted by the defendant and found a lack of secondary meaning; and in Wallach v. Longevity Network Ltd., 192006 U.S. Dist. LEXIS 98787. the internet survey indicated “a likelihood of confusion between plaintiff Wallach’s AMERICAN LONGEVITY mark and defendant’s use of the LONGEVITY mark because the survey revealed that over 25% of the respondents believed that plaintiff was associated with defendant.” 20See also Internet Specialties W. Inc. v. ISPWEST, 2006 U.S. Dist. LEXIS 96361; R&R Partners Inc. v. Tovar, 447 F. Supp. 2d 1141; 1-800 Contacts Inc. v. WhenU.com, 2003 U.S. Dist. LEXIS 22932; University of Kansas v. Sinks, 2008 U.S. Dist. LEXIS 23763; PBM Products LLC v. Mead Johnson Nutrition Co., 2010 U.S. Dist. LEXIS 177; Citizens Banking Corp. v. Citizens Financial Group Inc., 320 Fed. Appx. 341. Although the weight given to the survey evidence in these cases differs, for reasons other than the mere fact that it was an internet survey, such evidence has nevertheless been admitted and relied on by courts. Even in the most recent cases, namely GoSmile Inc. v. Levine 212011 U.S. Dist. LEXIS 23474. and POM Wonderful LLC v. Organic Juice USA Inc., 222011 U.S. Dist. LEXIS 1534 (The video survey conducted on the internet was held not so flawed that it had to be excluded). courts specifically held that the internet survey evidence and expert testimony presented by the parties were not only credible and reliable but also strongly supported the outcome of the respective cases. Even if the survey did not solely decide the outcome of the case, as in 3M Co. v. Mohan, 232010 U.S. Dist. LEXIS 124672 (The survey and the anecdotal evidence of actual confusion, taken together, was held to strongly favor a likelihood of confusion). the survey and anecdotal evidence, taken together, were held to strongly favor a finding of likelihood of confusion.

Not only has there been an increase in judicial acceptance of the internet as a survey medium, but there is also substantial statistical evidence of the steady rise in internet penetration over the past years. There has been over a 30 percent increase in the internet usage rate since 2000. 24The Internet World Stats reports that 77.3 percent of the population in North America uses the internet as of June 2010. Internet World Stats, available at http://www.internetworldstats.com/am/us.htm (last visited Apr. 27, 2011); see also Kathy Steinberg & Laura Light, What Methodology Should I Use for my Survey? 3(1) Newsmaker Insights 1, 4 (2008) available at http://www.harrisinteractive.com/vault/HI_NewsmakerInsights_2008_v03_i01.pdf (mentioning that four in five U.S. adults (79 percent)—an estimated 178 million Americans—are now online). With mobile phone penetration above 84 percent, reports suggest that 15.6 percent of mobile subscribers in the United States actively use the internet. 25Global Trends In Online Shopping—A Nielsen Global Consumer Report (Jun. 2010), Nielsen, http://hk.nielsen.com/documents/Q12010OnlineShoppingTrendsReport.pdf (As of May 2008, the U.S. mobile internet audience was about evenly split between those over the age of 35 (48 percent) and those under the age of 35 (52 percent). Nielsen data show that 56 percent of mobile internet users are male and 44 percent are female. While 24 percent of mobile internet users have household incomes of $100K or more, 26 percent have a household income of less than $50K.). However, the question arises as to whether survey experts would prefer an internet survey on portable devices, where the stimuli images may be displayed in sizes considerably smaller than the original. An added concern would be the increased levels of distraction, third-party interference, and the risk of unverifiable identity of the respondent when mobile devices are used for answering survey questions. Yet, as internet usage continues to grow, the demographic profile of internet users increasingly looks like that of the nation as a whole. 26See Steinberg & Light, supra note 24. Just as the medium of survey in market research has shifted to the internet, 27The Online Research Industry: An Update on Current Practices and Trends (May 2006), Dufferin Research, http://www.dufferinresearch.com/downloads/TheOnlineResearchIndustry2006.pdf (The 2006 poll showed that 87 percent of all market research companies conducted research online, up 9 percent over just one year earlier). the judiciary should also be encouraged to accept a similar shift in the venue for conducting surveys in trademark litigation.

II. DO WE REQUIRE A SURVEY? WHAT MAKES THE SURVEY RELIABLE?

Trademark surveys assist in measuring the subjective mental associations of prospective purchasers by attempting to recreate the potential purchasing environments in which an asserted senior mark and a disputed junior mark are found in the marketplace. 286 J. Thomas McCarthy, McCarthy on Trademarks and Unfair Competition § 32:195 (4th ed. 2010) (calling surveys “the most direct method of demonstrating secondary meaning and likelihood of confusion”) (citing Charles Jaquin Et Cie Inc. v. Destileria Serralles Inc., 921 F.2d 467, 476 (3d Cir. 1990)). Michael J. Allen, The Role of Actual Confusion Evidence in Federal Trademark Infringement Litigation, 16 Campbell L. Rev. 19 at 27-28 (1994); Robert H. Thornburg, Trademark Surveys: Development of Computer-Based Survey Methods, 4 J. Marshall L. Rev. Intell. Prop. 91 (2005); accord McCarthy, supra note 28 at § 32:189 (“[S]urveys are used to describe or enumerate objects or the beliefs, attitudes, or behavior of persons or other social units.”). The attitude towards survey evidence has evolved from rejection of survey results based on hearsay concerns to reliance on such surveys to decide trademark cases. 29Arthur Best, Evidence: Examples and Explanations (New York, Aspen Pub., 7th ed. 2009). In early trademark cases, judges tended to reject the survey method or to require live witnesses in addition to survey results. This approach changed when survey evidence was held to be offered simply to show the state of mind of the respondents when confronted with the survey stimulus, rather than to prove the truth of their responses. 30Irina D. Manta, In Search Of Validity: A New Model for the Content and Procedural Treatment of Trademark Infringement Surveys, 24 Cardozo Arts & Ent. L. J. 1027 (2007). Subsequently, recognizing that the credibility of a survey goes to the weight of the evidence rather than to its admissibility, courts left it to the jury to assess the credibility of survey evidence and expert testimony. However, Daubert v. Merrell Dow Pharmaceuticals shifted this “gatekeeping role” from the jury to the trial judge. 31509 U.S. 579 (1993). The Daubert court explained that “the trial judge must ensure that any and all scientific testimony or evidence admitted is not only relevant, but reliable.” Today, survey evidence is generally admissible. 32Christopher B. Mueller & Laird C. Kirkpatrick, Modern Evidence: Doctrine and Practice note 9.18, at 1527 (1995). Trademark surveys are intended to influence the outcome of trademark litigation, to the extent that the absence of a survey report is sometimes read as a sign that the litigant is less serious about its case. 33McCarthy, supra note 28 at § 32:195; see also Citigroup Inc. v. City Holding Co., 2003 U.S. Dist. LEXIS 1845, at *72 (S.D.N.Y. 2003) (quoting Merriam-Webster Inc. v. Random House, 35 F.3d 65, 72 (2d Cir. 1994) and E.S. Originals Inc. v. Stride Rite Corp., 656 F. Supp. 484, 490 (S.D.N.Y. 1987) (“[T]hat [plaintiff] did not undertake a consumer survey ... strongly suggests that a likelihood of confusion cannot be shown.”)).

The importance of the circumstantial evidence afforded by consumer surveys in trademark litigation cannot be overemphasized. 34Jack P. Lipton, Trademark Litigation: A New Look at the Use of Social Science Evidence, 29 Ariz. L. Rev. 639 at 641-42 (1987) (“In the trademark area, surveys have been offered as evidence of the existence of secondary meaning or consumer confusion ... to assess whether a brand name has become generic ... [and] in support of or in opposition to an application for federal registration ... surveys, as a means of assessing consumers’ state of mind, have played an essential role in aiding courts to make factual findings under the appropriate legal test.”) Although methodological errors in trademark surveys generally go only to the weight of the evidence, rather than to its admissibility, a survey so flawed that its probative value is outweighed by the risk of prejudice will be excluded. 35Schering Corp. v. Pfizer Inc., 189 F.3d 218 (2d Cir. 1999). See also Kenneth A. Plevan, Daubert’s Impact on Survey Experts in Lanham Act Litigation, 95 Trademark Rep. 596 (2005) (In a study of reported Lanham Act decisions during the period 1997-2004, there were 14 decisions that excluded survey evidence altogether, while in 30 cases the admissibility of a survey was challenged but the survey was not excluded.); Jacob Jacoby, Experimental Design and Selection of Controls in Trademark and Deceptive Advertising Surveys, 92 Trademark Rep. 890 (2002); Mark Gideon & Jacob Jacoby, Continuing Commercial Impression and its Measurement, 10(3) Marq. Intell. Prop. L. Rev. 431-454 (2006), NYU Law and Econ. Research Paper No. 05-29, available at http://ssrn.com/abstract=869279. Courts look to specific indicia regarding how a trademark survey was designed and conducted to determine whether it is reliable and trustworthy and has “evidentiary value.” These indicia, among others, include:

whether a sufficient number of individuals were surveyed; whether specific controls were created to measure if any portion of the survey was confusing in itself; whether the questions followed a proper format or were too leading; and whether a specific percentage of survey respondents were called again to verify the accuracy of their answers. 36Thornburg, supra note 28. See also Toys “R” Us v. Canarsie Kiddie Shop Inc., 559 F.Supp. 1189 (E.D.N.Y. 1983).

While importance has been given to these factors, there has been no scholarly or judicial suggestion that the medium used for data collection itself affects the reliability of a survey.

III. WHICH SURVEY METHOD MAXIMIZES RELIABILITY?

The traditional survey methods in trademark litigation are mall-intercept surveys, telephone surveys, and central location surveys. Selection of a survey format is often based on the consumer group that would purchase or be associated with the goods or services sold under a particular trademark. 37Thornburg, supra note 28. More often than not, the ultimate choice of survey medium is based on the cost of the survey, the risk of scrivener error, and the risk of data fabrication. 38Id.

In addition to the fact that traditional survey formats require substantial manpower and cost thousands of dollars, these environments are all subject to being discredited due to the risk that survey interviewers could falsify or mischaracterize data entries, or that information could be given without sufficient verification. 39Id. Could the advent of internet browsing and online shopping overcome these drawbacks of the traditional survey medium?

(i) Mall-Intercept Surveys

As the mall has generally been connected with the sales and promotion of multiple consumer goods tied to trademark association, trademark surveys for consumer goods often occur within these venues. The key to mall-intercept surveys is that they allow for direct interaction between consumers and the trademark elements alleged to have secondary meaning or to be the cause of likelihood of confusion. 40Id. Although this method of survey is popular, it is not without flaws. The inherent flaw with the mall-intercept survey is that selection of the sample from the universe is accomplished by merely observing an individual’s external appearance, without knowledge of the actual demographic that the respondent represents. Moreover, “‘self-selection’ may be a problem with mall-intercept surveys” in that only certain types of individuals may come forward to be interviewed, especially if the interest is based on a desire to obtain a free gift. 41Tyco Industries Inc. v. Lego Systems Inc., 5 USPQ2d (BNA) 1023, 1031 (D.N.J. 1987). Where internet and catalogue shopping for higher-end goods and product availability across international borders are the emerging trend, mall-intercept surveys no longer provide the proper universe of the general consumer population. In addition to this shift in the consumer population, there is another factor that limits the scope of the universe: malls are considered recreational places of interest—this environment could exclude a certain economic class or racial group, whose members do not generally shop recreationally at malls. 42Miles Laboratories Inc. v. Naturally Vitamin Supplements Inc., 1 USPQ2d 1445 at 1455 n.33 (T.T.A.B. 1986). See John P. Reiner, The Universe and Sample: How Good is Good Enough? 73 Trademark Rep. 366 at 373 (1983) (“[I]f the selected mall caters to people with an ascertainable socio-economic standard of living, the views of that group of shoppers (i.e., universe) may not reflect the views of different groups that shop at other locations.”).

An arguable flaw with the mall-intercept survey is its human element. 43Thornburg, supra note 28. On the other hand, it is this human element that allows respondents to be supervised and verified. The reliability and validity of estimates about the population derived from sampling depend on whether the data gathered were accurately reported and analyzed in accordance with accepted statistical principles. 44Manual for Complex Litigation (Third) § 21.493 at 102 (1995). The data collected from a mall-intercept survey can be improperly recorded or even misstated. The interviewers could be temporary hires lacking expertise in conducting surveys or recording responses. Such flaws, if exposed on cross-examination of the survey expert, could lead to exclusion of the survey. 45Ref. Manual on Sci. Evidence 221 at 258 (1994).

Internet surveys offer a practical solution to these drawbacks. Online shopping not only provides for the same direct interaction between the customer and the products as in a mall, but it also overcomes, to a large extent, the disparity in the profile of the customers. Online surveys enable the selection of interviewees based on information provided regarding territorial location, age, gender, race, etc. during an online purchase or sign-in to a password-protected website. Courts have often placed more importance on selecting the sample from prospective purchasers of the items at issue than on selection based on age, gender, past use, and the like. 46While confusion of non-purchasers is relevant in many cases, most often, a survey is designed to prove the state of mind of a prospective purchaser. McCarthy, supra note 28 at § 32:163; Jordache Enterprises Inc. v. Levi Strauss Co., 841 F. Supp. 506, 518 (S.D.N.Y. 1993) (The survey was “defective” and the results were “irrelevant” as the survey did not enquire as to whether the participants intended to purchase jeans in the future.); See Ideal Toy Corp. v. Kenner Products Division of General Mills Fun Group Inc., 443 F. Supp. 291 (S.D.N.Y. 1977) (Here, the survey to prove association of toys with characters in the Star Wars movie was faulty because it did not reach persons in the market for space toys or those persons who were making a purchase.); Original Appalachian Artwork Inc. v. Blue Box Factory (USA) Ltd., 577 F. Supp. 625 (S.D.N.Y. 1983) (A mall survey of persons planning on buying gifts for girls under 12 years was found to cover too broad a universe, as the survey did not focus on prospective purchasers of the plaintiff’s dolls.). This problem of an over-inclusive universe may be eliminated in the streamlined shopping environment of the internet, where, based on search history and prior purchases, it is possible to select the appropriate sample. 47In situations where the actual marketplace is virtual, an internet survey may be the closest possible recreation of that marketplace. Dalton & Hickey, supra note 3. There are many precedents in which survey responses have been invalidated because the mall-intercept survey did not produce a nationally projectable percentage. 48General Motors Corp. v. Cadillac Marine & Boat Co., 226 F. Supp. 716 (W.D. Mich. 1964) (noting that 150 people in a single area is inadequate to sample the entire U.S. market); R. J. Reynolds Tobacco Co. v. Loew’s Theatres Inc., 511 F. Supp. 867 (S.D.N.Y. 1980) (holding that a suburban shopping mall survey fails to produce a nationally projectable percentage). While, in general, a store may be the best place to measure the state of mind at the time of purchase, it would be virtually impossible to obtain a representative national sample if stores were used. 49Zippo Manufacturing Co. v. Rogers Imports Inc., 216 F. Supp. 670 (S.D.N.Y. 1963); Bradlee B. Boal, Techniques for Ascertaining Likelihood of Confusion and the Meaning of Advertising Communications, 73 Trademark Rep. 405 (1983). However, critics may argue that territoriality is an issue in borderless cyberspace, since trademark law is governed by territorial limitations. 50Sears, Roebuck & Co. v. Allstate Driving School Inc., 301 F. Supp. 4 (E.D.N.Y. 1969) (survey of an entire county was too large to give an indication of consumer confusion in an area within a 10-mile radius of defendant’s small business).

Mall-intercept surveys have the advantage of securing verifiable results in the presence of an interviewer and of capturing customers’ responses in the shopping environment, both of which could be lacking in an internet survey unless it is carefully designed.

(ii) Telephone Surveys

The Federal Judicial Center recognizes the usefulness of telephone surveys and recommends that the expert’s report specify three elements:

(1) the procedures that were used to identify potential respondents; (2) the number of telephone numbers where no contact was made; and (3) the number of contacted potential respondents who refused to participate. 51McCarthy, supra note 28 at § 32:163 (4th ed. 2010) citing 262 Ref. Manual on Sci. Evidence § 21.493 at 102 (2nd ed. 2000).

The overwhelming benefit of a telephone survey is that it is easier to supervise the interviewers and ensure that information is properly recorded. In addition, such surveys often provide the best method of testing the state of mind of consumers when they are confronted with the key issues affecting a trademark dispute without prior contemplation of the answers.

Although telephone surveys have been used in trademark litigation for over fifty years, it has been reported that between 2000 and 2006 there was a 50 percent decline in participation and response rates in telephone surveys. 52Gelb & Gelb, supra note 5 at 1076. It is becoming increasingly difficult to contact the target population due to a recent decline in the use of landline telephones and anti-solicitation measures taken by potential respondents, such as registration on “Do Not Call” registries. 53Dalton & Hickey, supra note 3. With the increased use of mobile phones and changes in social habits, a large number of people are removed from the potential sample universe. Firms that sell lists of phone numbers to research organizations for random digit dialing exclude cellphone numbers. Also, caller ID and call blocking make it difficult to elicit a response from landline phone numbers. Even if a phone is answered, an increasing number of people are not willing to spend time responding to survey questions. Telephone surveys as a probability sample 54See Jacoby, infra note 56, at 184 (stating that “probability sampling involves the random selection of elements (e.g., people) from the universe, where each element has a known probability of being selected.”). fall short of reflecting the scientific ideal. 55Michael Rappeport, Litigation Surveys: Social Science as Evidence, 92 Trademark Rep. 957 at 971 (2002) (“In a well-defined strict probability sense, probability samples are the golden standard in survey sampling.”). They involve random choice, where persons in each selected group are contacted and an appointment made for an interview. This can be a time-consuming and expensive process. Randomness is compromised unless each individual within a household has a known chance of being questioned. These factors skew the randomness of the survey. 56If the caller asks to speak to a person at this number who is 18 years or older, it narrows the group from which the spokesperson for the household is chosen to those over 18 who are home at the time of the call. This is characterized as “a decidedly non-random procedure.” Jacob Jacoby, Survey and Field Experimental Evidence, The Psychology of Evidence and Trial Procedure 181 at 184 (Saul M. Kassin & Lawrence S. Wrightsman eds., Sage Pub’l. 1985).

Another drawback of telephone surveys is that they present only naked questions, without visual stimuli. This limitation is especially significant when the visual appearance of the asserted mark is the key to the secondary meaning or likelihood of confusion issue. 57Schieffelin & Co. v. Jack Company of Boca, 850 F. Supp. 232, 240 (S.D.N.Y. 1994). While telephone surveys are “not per se unreliable,” they are confined to aural stimuli in which visual impressions are lost. 58See Inc. Publishing Corp. v. Manhattan Magazine Inc., 616 F. Supp. 370, 227 USPQ 257 (S.D.N.Y. 1985), aff’d without op., 788 F.2d 3 (2d Cir. 1986). It is also well established that the closer the survey context comes to marketplace conditions, the greater the evidentiary weight it has. 59McCarthy, supra note 28 at § 32:163. But telephone survey interviews are not conducted “in the context of the marketplace.” 60Sears, Roebuck & Co. v. Allstate Driving School Inc., 301 F. Supp. 4, 163 USPQ 335 (E.D.N.Y. 1969); Oneida, Ltd. v. National Silver Co., 25 N.Y.S.2d 271, 48 USPQ 33 (Sup. Ct. 1940).

In this age of the dwindling importance of telephone surveys, internet surveys can prove to be an appropriate substitute. They not only provide the option of visual as well as aural stimuli, but they also have the ability to mirror marketplace conditions.

(iii) Central Location Surveys

In this survey environment, a market research company calls specific people to come to the company’s facility to be interviewed. 61Thornburg, supra note 28. The most notable limitation of this method is the inherent cost of paying a research company for logistics, rented space, and personnel to perform the survey. This method has the advantage over internet surveys of allowing visual materials to be shown in an order and manner controlled by the interviewer. 621 Mod. Sci. Evidence § 8:23 (2010-11 ed.). However, in order for this method of survey to be successful, the interviewer must be unbiased and trained to implement complex skip sequences and to control the order of the questions and visual materials.

Central location surveys also suffer from the weakness of a defective probability sample. They involve random choice by which each group has a known chance of being selected. Then, persons in each selected group are contacted and an appointment made for an in-person visit. This can be a very time-consuming and expensive process. Internet surveys save the cost involved in conducting the interview. They also eliminate the need for the interviewee to take the time and effort to visit the company’s facility, instead allowing the interviewee the freedom to respond to the questions at his or her convenience. Although an e-mail survey does not present the option of changing the order of the visual materials, other forms of internet surveys can be based on automated changes in the order of questions depending on the answer to the previous question.

IV. WHAT MAKES AN INTERNET SURVEY ADMISSIBLE AS EVIDENCE?

Courts have, time and again, excluded survey evidence obtained from traditional survey studies for failure to satisfy the scientific criteria. 63University of Kansas v. Sinks, 2008 U.S. Dist. LEXIS 23763; Hodgdon Powder Co. v. Alliant Techsystems Inc., 512 F. Supp. 2d 1178, 1181 (D. Kan. 2007); see also 15 U.S.C. §§ 1114, 1125 (requiring proof of likelihood of confusion); Citizen Financial Group Inc. v. Citizens National Bank, No. Civ.A. 01-1524, 2003 U.S. Dist. LEXIS 25977, 2003 WL 24010950, at *1 (W.D. Pa. Apr. 23, 2004) (citing McCarthy, supra note 28 at § 32:159); Rudolf Callman, 3A Callman on Unfair Competition, Trademarks and Monopolies § 21:67 (4th ed. 2001). The critical question is whether, and how, an internet survey can overcome the limitations of the traditional methods in fulfilling these criteria.

(i) Proper choice and definition of the universe

Diamond, in her treatise on survey evidence, has described the “universe” in a survey to mean “the target population consisting of all elements (i.e., objects, individuals, or other social units) whose characteristics or perceptions the survey is intended to represent.” 641 Mod. Sci. Evidence § 8:23 (2010-11 ed.). The choice of the relevant target population is crucial to the weight given to the evidence in court. 65See generally Exxon Corp. v. Texas Motor Exchange Inc., 208 USPQ 384 (5th Cir. 1980) (The survey was conducted on four different days between 10:00 a.m. and 6:00 p.m. in two high-traffic shopping centers. All 515 respondents were licensed drivers and approximately two-thirds were male. The group was evenly divided among age groups with a varied range of occupations. The court stated that “the appropriateness of the universe and the appropriateness of the survey’s format gave the survey a great amount of weight in court.”).

A major factor in defining the appropriate universe is the environment in which the target population may be found. McCarthy’s treatise on trademark law states that “convincing evidence of significant actual confusion occurring under actual marketplace conditions is the best evidence of a likelihood of confusion.” 66See generally Allen, supra note 28 at 19 (1994) (cited in 3 J. McCarthy, at § 23:13 n.1 (4th ed. 1993)). It is in this actual marketplace environment that the relevant population, including all prospective and actual purchasers of the plaintiff’s or defendant’s goods or services, is to be found. 671 Mod. Sci. Evidence § 8:10 (2010-11 ed.). Therefore, the more closely the survey method mirrors the situation in which ordinary persons would encounter the mark, the greater the evidentiary weight of the survey results.

Some courts have rejected a survey because it did not accurately reproduce the state of mind of customers “in a buying mood.” 68American Luggage Works Inc. v. United States Trunk Co., 158 F. Supp. 50, 116 USPQ 188 (D. Mass. 1957), supplemental op., 161 F. Supp. 893, 117 USPQ 83 (D. Mass. 1957), aff’d, 259 F.2d 69, 118 USPQ 424 (1st Cir. 1958) (Judge Wyzanski commented that: “Many men do not take the same trouble to avoid confusion when they are responding to sociological investigators as when they spend their cash.”). In Amstar Corp. v. Domino’s Pizza, the consumer survey results submitted by the plaintiff were discounted by the court for failure to capture the proper universe. The weakness in the universe was that eight of the 10 cities in which the survey was conducted did not have “Domino’s Pizza” outlets, the outlets in two cities had been open for less than three months, and the interviewees were all women of the household primarily responsible for grocery buying who were found at home during six daylight hours. As plaintiff’s sugar was sold primarily in grocery stores, participants would have been repeatedly exposed to plaintiff’s mark, but would have had little, if any, exposure to defendants’ mark. Furthermore, the survey completely neglected defendants’ primary customers, including young, single, male college students. 69Amstar Corp. v. Domino’s Pizza Inc., 615 F.2d 252 at 264 (5th Cir.), cert. denied, 449 U.S. 899 (1980).

Surveys conducted on the internet can overcome this shortcoming to a large extent. Studies show that 75 percent of the U.S. population shops online. 70See Nielsen, supra note 25. This figure should not be taken to mean that this segment of the population shops only online, and it cannot be denied that a major part of the population still shops physically in malls. Even so, with a demographic of online shoppers this large, the internet is a congenial environment in which the conditions of an actual marketplace, or typical consumer shopping behavior, can readily be replicated.

Internet surveys also have the advantage of screening large populations quickly, thereby enabling contact with small groups of people who comprise a universe for the survey. 71Gelb & Gelb, supra note 5 at 1073. In the above-mentioned cases in which surveys were rejected, had an internet survey been conducted in place of the traditional survey, the proper universe could have been reached. For instance, in the Domino’s Pizza case, the relevant demographic, including both primary shoppers of groceries as well as young college students, could have been selected by screening them from online sites frequented by each group respectively. 72For example, persons responsible for grocery shopping may frequent websites specific to food preparation and recipes or certain television programs, whereas the second group of persons who are young college students could easily be chosen from gaming sites, social networking sites, educational forums, etc. In Exxon Corp., the defect in the survey could have been overcome by ensuring that the mark was not displayed on the survey screen or webpage, by disabling the option to use the “back” button, and by disallowing the option to change browser windows during the survey, so as to prevent the interviewee from searching for the relevant information using a web browser.

It cannot be denied that one of the disadvantages of internet surveys is population bias if the universe consists of low-income, rural, or elderly persons, given the lower rates of internet usage among these groups. However, such limitations only mean that the universe size is smaller, not that it is nonexistent. As established by statistical data, the population of internet users is steadily increasing—presumably yielding a larger universe. However, although such surveys are quantitative in nature, a larger sample does not necessarily mean more representative respondents or better quality data. 73See also Kent D. Van Liere & Sarah Butler, Emerging Issues in the Use of Surveys in Trademark Infringement on the Web, Working Paper Prepared for the LSA Advanced Trademark & Advertising Law Conference (Sep. 2007), http://www.nera.com/nera-files/PUB_Survey_Trademark_Internet.pdf.

(ii) Representativeness of the sample of the universe

As asserted in 1-800 Contacts Inc. v. WhenU.com, the true evidentiary value of a consumer trademark survey depends not only on whether a proper universe of interviewees was ascertained, but also on whether a representative sample was drawn and actually interviewed. 741-800 Contacts Inc. v. WhenU.com, 309 F. Supp. 2d 467, 499 (S.D.N.Y. 2003). This criterion may be the most influential factor in deciding the reliability of an internet survey.

The lack of standardization in the manner in which internet panel providers recruit and verify the identity of participants can raise concerns about the reliability of the survey results. In order to overcome this weakness, employing a survey research company that is a Council of American Survey Research Organizations member, 75The Council of American Survey Research Organizations (CASRO) is an initiative that represents over 300 companies and market research operations in the United States and abroad, available at http://www.casro.org/. that has a verifiable research industry history (e.g., references from major corporations), and that routinely validates its panel increases the defensibility of a survey. 76Gelb & Gelb, supra note 5 at 1073.

There is considerable debate in professional survey research about whether internet survey panels are representative of the sampled population, given the possibility of selection bias in the panels. 77Thornburg, supra note 28. Many respondents may be encouraged to volunteer as members of an internet survey panel in consideration for cash, points, prizes, and the like. This raises the issue of the representativeness of the universe. 78Gelb & Gelb, supra note 5 at 1073. The incentive of rewards increases the number of “professional respondents,” who typically belong to several panels and tend to participate in many studies. 79Internet panels rely on respondents who have agreed to participate in a multitude of surveys and these respondents may not be the best to represent the population of interest. See Van Liere & Butler, supra note 73. This is a form of self-selection that is a source of potential selection bias. In order to minimize this source of bias, the screening questions for individual projects should be “blinded” such that the screening questions do not in themselves reveal the “right answer.” 80Gelb & Gelb, supra note 5 at 1082. For example, instead of asking each member of a general panel only, “Have you bought milk in the last week?” and then inviting those answering “yes” to participate, the appropriate screening question would be, “In the last week, which of the following, if any, have you purchased?” followed by a list of several items. Such an approach does not signal a focus on milk in the survey, although only those who check that purchase will be included as members of the relevant population on that topic.
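To make the blinding concrete, the following is a minimal sketch of how such a screener might be set up; the question text, the item list, and the qualifying category are all hypothetical, and a real panel platform would use its own configuration format.

```python
import random

# Hypothetical blinded screener: the category of interest ("Milk") is buried in a
# list of distractor items so the question itself never signals the "right answer."
SCREENER = {
    "text": "In the last week, which of the following, if any, have you purchased?",
    "options": ["Bread", "Milk", "Coffee", "Laundry detergent", "None of these"],
    "qualifying_option": "Milk",  # known only to the researcher, never highlighted
}

def build_screener():
    """Return the screener with the substantive options shuffled for each respondent."""
    substantive = [o for o in SCREENER["options"] if o != "None of these"]
    random.shuffle(substantive)
    return {"text": SCREENER["text"], "options": substantive + ["None of these"]}

def qualifies(checked_options):
    """True if the respondent selected the category of interest."""
    return SCREENER["qualifying_option"] in checked_options

print(build_screener())
print(qualifies({"Bread", "Milk"}))  # True
print(qualifies({"Coffee"}))         # False
```

Because the focus on milk never appears in the question, respondents who belong to the relevant population select it for their own reasons rather than to qualify for a reward.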

The anonymity attached to internet surveys also makes it difficult to track the identity of the people who have agreed to take part in the panel, to establish who actually completed the survey, and to determine whether the same person has completed the survey more than once. 81Id.; Adv. Ravia, supra note 3. An example is In re Steelbuilding.com, where the court excluded the results of an online poll, finding that it lacked sufficient signs of reliability, as no efforts were made to “prevent visitors from voting more than once … [and] prevent interested parties, such as friends, associates or even employees of the applicant, from voting multiple times to skew the results.” 82415 F.3d 1293, 1300 (Fed. Cir. 2005). One proposed method of addressing this issue of identity falsification is for the panel provider or the researcher to verify the respondent by repeating screening questions previously asked as qualifying criteria. These verification questions may be asked during the survey as well as at the end of the survey.
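As a rough illustration of that kind of check, the sketch below assumes each panelist carries a persistent identifier and that one qualifying question is re-asked late in the survey; records with inconsistent answers or duplicate completions are flagged for exclusion or follow-up.

```python
from collections import Counter

def flag_unreliable(responses):
    """Flag respondent IDs whose repeated screening answers conflict or who
    appear more than once in the completed-survey file."""
    inconsistent = {
        r["respondent_id"]
        for r in responses
        if r["screener_answer"] != r["verification_answer"]  # re-asked qualifying question
    }
    counts = Counter(r["respondent_id"] for r in responses)
    duplicates = {rid for rid, n in counts.items() if n > 1}
    return inconsistent | duplicates

# Hypothetical completed-survey records:
records = [
    {"respondent_id": "p-101", "screener_answer": "Milk", "verification_answer": "Milk"},
    {"respondent_id": "p-102", "screener_answer": "Milk", "verification_answer": "Coffee"},
    {"respondent_id": "p-101", "screener_answer": "Milk", "verification_answer": "Milk"},
]
print(flag_unreliable(records))  # {'p-102', 'p-101'} -- both flagged
```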

(iii) Framing of the survey questions and choice of stimuli

The questions posed in a survey should be framed in a clear, precise, and non-leading manner. 83Gelb & Gelb, supra note 5 at 1073. The interviewer should be trained in the method of asking questions, controlling the order of the questions, and eliciting unbiased responses from the respondents. Due to the human element in the traditional types of surveys, this criterion may be difficult to meet. The automated and impersonal nature of an internet survey makes it a well-suited alternative.

The order in which questions are asked and the order in which response alternatives are provided in a closed-ended question can influence the answers. Mail surveys generally exhibit a “primacy effect,” where respondents are more likely to select the first choice offered. In telephone surveys, respondents are more likely to choose the last choice offered, causing a “recency effect.” 84McCarthy, supra note 66 at § 23:13 n.1. In the absence of a formula to adjust for order effects, one method of overcoming this unintended error is to rotate the order of the questions. Internet surveys can provide automated, random rotation of the questions, simultaneous display of the stimuli, or the ability to repeat the stimuli as many times as the respondent desires.
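A simple way to picture this rotation, under the assumption of hypothetical question text and a platform that builds each respondent's questionnaire on the fly, is sketched below; question order and substantive answer choices are shuffled independently for every respondent, while "don't know"-type options stay anchored at the end.

```python
import random

QUESTIONS = [
    {"id": "q1", "text": "Which company, if any, do you believe puts out this product?",
     "choices": ["Company A", "Company B", "Company C", "Don't know"]},
    {"id": "q2", "text": "Have you seen this logo before today?",
     "choices": ["Yes", "No", "Not sure"]},
]

ANCHORED = {"Don't know", "Not sure"}  # kept last so shuffling does not bury them

def build_questionnaire(questions):
    """Return a per-respondent questionnaire with randomized question and choice order."""
    ordered = random.sample(questions, k=len(questions))
    built = []
    for q in ordered:
        substantive = [c for c in q["choices"] if c not in ANCHORED]
        anchored = [c for c in q["choices"] if c in ANCHORED]
        built.append({"id": q["id"], "text": q["text"],
                      "choices": random.sample(substantive, k=len(substantive)) + anchored})
    return built

# Each call simulates one respondent's uniquely ordered questionnaire.
print(build_questionnaire(QUESTIONS))
```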

A shortcoming of an internet survey is the inability to pose the “probe” questions possible in a live interview. 85For example, an interviewer is often instructed to “probe” yes-no responses. If the respondent says, “It makes me think of that brand,” the interviewer can probe with: “What about it makes you think of that brand?” Although an internet survey can be programmed to ask “Why do you say that?” it cannot usually follow up using the respondent’s own language. 86Gelb & Gelb, supra note 5 at 1073. While this may be seen as a drawback, the absence of a human interviewer ensures that the survey and probe questions are administered consistently. It eliminates biased responses created by the conscious or inadvertent directive or judgmental questioning by interviewers. 87Id. An internet survey format allows for the direct entering of the survey results by the participant himself or herself rather than by an interviewer, 88Thornburg, supra note 28. which prevents survey noise that can arise from the particular interaction between an interviewer and a respondent, and also eliminates questioning by untrained or unqualified interviewers. Arguably, such a method, in the absence of supervision, may be threatened by the possibility of an unknown person answering the survey in place of the desired respondent. It is also possible that the respondent may look for the right answer, verify his or her response from external sources, or seek the opinion of a third person before answering the survey question. Thus, while the “human element” may superficially seem to disfavor traditional methods, its absence in an internet-based survey could call the results into question.

An attempt to overcome the above shortcomings involves diligent programming of the internet survey questionnaire. The survey can be programmed such that the questions that follow depend on the answer provided to the preceding question. This procedure is known as “branching.” 89Branching and piping mean that no two respondents see exactly the same questionnaire, offering a researcher designing an internet survey the same degree of customization permitted by computer-aided telephone interview (CATI) surveys. Often, the questions displayed include a word or phrase provided by the respondent in a previous answer (known as “piping”). The survey can be programmed to require that each question be answered, with a minimum number of characters, before moving on to the next question. The respondent may also be prevented from going back to an earlier question to change answers based on second thoughts or additional knowledge acquired from a subsequent question.
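A minimal sketch of this branching and piping logic, using hypothetical question text loosely inspired by the 1-800 Contacts dispute and a stand-in for whatever front end actually collects the answers, might look like the following.

```python
def run_survey(get_answer):
    """get_answer(prompt) -> str is supplied by the survey front end (here, a test stub).
    Branching: the next question depends on the previous answer.
    Piping: the respondent's own wording is inserted into a later question."""
    answers = {}
    answers["aware"] = get_answer("Have you heard of any online contact lens retailers? (yes/no)")
    if answers["aware"].strip().lower() == "yes":  # branch
        answers["name"] = get_answer("Which retailer comes to mind first?")
        # pipe the respondent's own words into the follow-up question
        answers["why"] = get_answer(f"Why did you mention {answers['name']}?")
        while len(answers["why"].strip()) < 10:  # enforce a minimum answer length
            answers["why"] = get_answer(
                f"Please say a bit more about why you mentioned {answers['name']}.")
    else:
        answers["habits"] = get_answer("How do you usually shop for contact lenses?")
    return answers  # the flow never lets a respondent navigate back to earlier questions

# Scripted answers standing in for a live respondent:
scripted = iter(["yes", "1-800 Contacts", "I have seen their television ads"])
print(run_survey(lambda prompt: next(scripted)))
```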

(iv) Accurate reporting and analysis of the data gathered

A concern expressed in the American Luggage Works case was that the interviewees who gave survey answers to the interviewer ex parte could not be called for cross-examination. 90As Judge Wyzanski remarked: “So long as the interviewees are not cross-examined, there is no testing of their sincerity, narrative ability, perception and memory. There is no showing whether they were influenced by leading questions, the environment in which questions were asked, or the personality of the investigator.” See American Luggage Works Inc. v. United States Trunk Co., 158 F. Supp. 50, 116 USPQ 188 (D. Mass. 1957), supplemental op., 161 F. Supp. 893, 117 USPQ 83 (D. Mass. 1957), aff’d, 259 F.2d 69 (1st Cir. 1958). However, if the survey is fairly and scientifically conducted, the absence of cross-examination of the interviewees should not detract from its probative value.

Internet surveys afford the advantage of automated analysis of the data gathered. They eliminate the need for manual interpretation of the responses, and thereby the possibility of falsification of the data, misstatement of the results, or oversight of information. The automatic, computerized analysis of data not only reduces error but also expedites the process. 91In practice, manual reporting and analysis of the data would take approximately 2-3 weeks, whereas reporting and analysis of the data gathered from an internet survey is a matter of hours, or a couple of days at the most.
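As a rough illustration of how little hand-processing remains once responses are captured electronically, the sketch below (with entirely hypothetical data, and assuming a test cell and a control cell of the kind discussed earlier) tabulates a net confusion percentage directly from the collected answer strings.

```python
def net_confusion(test_answers, control_answers, confused_answer="Company A"):
    """Percent naming the plaintiff in the test cell minus the same percent in the
    control cell, computed straight from the answers captured by the online survey."""
    def pct(answers):
        return 100.0 * sum(1 for a in answers if a == confused_answer) / len(answers)
    return pct(test_answers) - pct(control_answers)

# Hypothetical tallies: 30 of 100 test-cell respondents and 8 of 100 control-cell
# respondents named the plaintiff.
test = ["Company A"] * 30 + ["Other / don't know"] * 70
control = ["Company A"] * 8 + ["Other / don't know"] * 92
print(f"Net confusion: {net_confusion(test, control):.1f}%")  # Net confusion: 22.0%
```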

Thus, internet surveys are not only an advantageous alternative to the traditional methods, but are also a viable solution to the shortcomings of such survey methods.

V. ANY LAST COMMENTS?

The underlying intention of Congress in enacting the legislation against trademark infringement was to avoid situations of consumer confusion, potential consumer dissatisfaction, and losses to trademark owners. 921 McCarthy at § 2:33 (4th ed. 1996) (“Almost all trademark disputes are between firms that use conflicting marks; the consumer is not a party to the litigation. But it is the consumer’s state of mind that largely controls the result.”); see also Manta, supra note 30. Advertising clutter has produced a nation of ad-weary and ad-savvy buyers. This so-called phenomenon of “greater brand literacy” among consumers 93Richard Cross & Janet Smith, Retailers Move Towards New Customer Relationships, Direct Marketing (1994) at 20. has resulted in increasing customer “competence” 94C.K. Prahalad & Venkatram Ramaswamy, Co-Opting Customer Competence, Harv. Bus. Rev. (Jan.-Feb. 2000) at 79. and has placed consumers in a position where their opinions and perceptions can define the future of a brand. To further this public policy goal, consumers must be given their due importance in opining about brands and expressing their perceptions—and surveys are the voice of consumers and the most accurate means of assessing those impressions. 95See, e.g., Robert H. Thornburg, Trademark Survey Evidence: Review of Current Trends in the Ninth Circuit, 21 Santa Clara Computer & High Tech L.J. 715 (2005) (“Surveys represent the most scientific means of measuring relevant consumers’ subjective mental associations.”); Ruth M. Corbin & Arthur Renaud, When Confusion Surveys Collide: Poor Designs or Good Science?, 94 Trademark Rep. 781 at 783 (2004) (“[S]urvey research incorporates all essential structural techniques of other scientific expert evidence, including rigorous hypothesis testing, experimental design, control conditions and statistical inference.”).

Judicial observations indicate that a court’s own reaction to the marks at issue is at best not determinative and at worst irrelevant. 96American Brands Inc. v. R. J. Reynolds Tobacco Co., 413 F. Supp. 1352, 1357 (S.D.N.Y. 1976). When called on to determine whether a manufacturer of girdles labeled “Miss Seventeen” infringed the trademark of the magazine Seventeen, Judge Frank suggested that in the absence of a test of the reactions of “numerous girls and women,” the trial court judge’s finding as to what was likely to confuse was “nothing but a surmise, a conjecture, a guess,” and noted that “neither the trial judge nor any member of this court is (or resembles) a teen-age girl or the mother or sister of such a girl.” 97Triangle Publications v. Rohrlich, 167 F.2d 969, 976 (C.C.A. 2d Cir. 1948) (overruled on other grounds by Monsanto Chemical Co. v. Perfect Fit Products Manufacturing Co., 349 F.2d 389 (2d Cir. 1965)).

Trademark surveys have been resorted to in proving whether a mark has caused confusion or deceived consumers, 98Ty Inc. v. Softbelly’s Inc., 353 F.3d 528, 531 (7th Cir. 2003). whether a trademark has achieved secondary meaning, 99Bristol-Myers Squibb Co. v. McNeil-P.P.C. Inc., 973 F.2d 1033, 1043 (2d Cir. 1992). whether false advertising is occurring in the marketplace, 100Rice v. Fox Broadcasting Co., 330 F.3d 1170, 1182 n.8 (9th Cir. 2003). and whether a type of product configuration is a protectable trade dress element or is “functional.” 101See OddzOn Prods. v. Just Toys, 122 F.3d 1396, 1405 (Fed. Cir. 1997). The pivotal legal question in such cases demands survey research because it centers on consumer perception (i.e., is the consumer likely to be confused about the source of a product, or does the advertisement imply an inaccurate message?). While each type of trademark issue demands a survey methodology tailored to the elements required to be proved and specific to the facts of the case, these methodological requirements remain the same whether the survey medium is the internet, a mall, the telephone, or otherwise.

It is generally understood that “no survey is perfect” and that some problems “in the questions and methodology should only affect the weight accorded to the survey results” rather than their admissibility. 102Daubert v. Merrell Dow Pharmaceuticals Inc., 509 U.S. 579 (1993). It is in the best interest of the public and litigants to use the most efficient and cost-effective method available to obtain the data. 103Alex Simonson, Online Interviewing For Use In Lanham Act Litigation, 14(2) Intell. Prop. Strategist 3 (Nov. 2007) (concluding that “while mall-intercepts took the place of door-to-door interviewing, internet interviewing will undoubtedly become the norm over the next decade”). While there may be some reluctance to learn new procedures, interpret automated results, and the like, in this era of technological and marketing development, internet surveys are a promising way forward.

Internet surveys have survived long enough to show their viability as a survey medium. Yet, given its limitations, the use of the internet as a medium for conducting surveys can still be questioned for reasons that go beyond mere skepticism. While each survey method has its pros and pitfalls, the trick lies in making the right choice of survey method, or combination of methods, on a case-by-case basis, based on an unbiased review of the products and marks in dispute, the customer demographic, and the feasibility of the survey. It is, therefore, not a matter of choosing either Option A or Option B—but finding the balance in a third—Option C.
