US8918312B1 - Assigning sentiment to themes - Google Patents

Assigning sentiment to themes

Info

Publication number
US8918312B1
US8918312B1
Authority
US
United States
Prior art keywords
review
theme
reviews
sentiment
keyword
Prior art date
Legal status
Active
Application number
US13/842,159
Inventor
John Andrew Rehling
Thomas Gerardo Dignan
Current Assignee
Reputation com Inc
Original Assignee
Reputation com Inc
Priority date
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (Darts-ip global patent litigation dataset)
US case filed in Delaware District Court (Case 1:21-cv-00129; source: Unified Patents Litigation Data)
Application filed by Reputation.com, Inc.
Priority to US13/842,159
Assigned to REPUTATION.COM, INC. Assignment of assignors interest. Assignors: DIGNAN, Thomas Gerardo; REHLING, JOHN ANDREW
Application granted
Publication of US8918312B1
Assigned to SILICON VALLEY BANK, AS ADMINISTRATIVE AND COLLATERAL AGENT. Intellectual property security agreement. Assignor: REPUTATION.COM, INC.
Assigned to SILICON VALLEY BANK. Security interest. Assignor: REPUTATION.COM, INC.
Status: Active


Classifications

    • G06F 17/28
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0282 Rating or review of business operators or products
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3325 Reformulation based on results of preceding query
    • G06F 16/3326 Reformulation based on results of preceding query using relevance feedback from the user, e.g. relevance feedback on documents, documents sets, document terms or passages
    • G06F 16/3331 Query processing
    • G06F 16/3332 Query translation
    • G06F 16/3334 Selection or weighting of terms from queries, including natural language queries
    • G06F 16/334 Query execution
    • G06F 16/3344 Query execution using natural language analysis
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/295 Named entity recognition
    • G06F 40/30 Semantic analysis

Definitions

  • Businesses are increasingly concerned with their online reputations, and the reputations of their competitors. For example, both positive and negative reviews posted to a review website can impact revenue. As more review websites are created, and as more users post more content to those sites, it is becoming increasingly difficult for businesses to monitor online information.
  • FIG. 1 illustrates an embodiment of an environment in which business reputation information is collected, analyzed, and presented.
  • FIG. 2 illustrates an example of components included in embodiments of a reputation platform.
  • FIG. 3 illustrates an embodiment of a process for enrolling a business with a reputation platform.
  • FIG. 4 illustrates an example of components included in embodiments of a reputation platform.
  • FIG. 5 illustrates an embodiment of a process for refreshing reputation data.
  • FIG. 6 illustrates an example of an interface as rendered in a browser.
  • FIG. 7 illustrates an example of components included in an embodiment of a reputation platform.
  • FIG. 8 illustrates an embodiment of a process for generating a reputation score.
  • FIG. 9 illustrates an example of an interface as rendered in a browser.
  • FIG. 10 illustrates an example of an interface as rendered in a browser.
  • FIG. 11 illustrates an example of an interface as rendered in a browser.
  • FIG. 12 illustrates a portion of an interface as rendered in a browser.
  • FIG. 13 illustrates a portion of an interface as rendered in a browser.
  • FIG. 14 illustrates an example of an interface as rendered in a browser.
  • FIG. 15 illustrates a portion of an interface as rendered in a browser.
  • FIG. 16 illustrates a portion of an interface as rendered in a browser.
  • FIG. 17 illustrates an example of an interface as rendered in a browser.
  • FIG. 18 illustrates a portion of an interface as rendered in a browser.
  • FIG. 19 illustrates a portion of an interface as rendered in a browser.
  • FIG. 20 illustrates an embodiment of a reputation platform that includes a review request engine.
  • FIG. 21 illustrates an embodiment of a process for targeting review placement.
  • FIG. 22 illustrates an example of a target distribution.
  • FIG. 23 illustrates an example of a target distribution.
  • FIG. 24 illustrates an embodiment of a process for performing an industry review benchmark.
  • FIG. 25 illustrates an embodiment of a process for recommending potential reviewers.
  • FIG. 26 illustrates an embodiment of a process for determining a follow-up action.
  • FIG. 27 illustrates a portion of an interface as rendered in a browser.
  • FIG. 28 illustrates an embodiment of a process for stimulating reviews.
  • FIG. 29 illustrates an example of an interface as rendered in a browser.
  • FIG. 30 illustrates an example of an interface as rendered in a browser.
  • FIG. 31 illustrates an example of an interface as rendered in a browser.
  • FIG. 32 illustrates an example of a popup display of reviews including a term.
  • FIG. 33 illustrates an alternate example of a popup display of reviews including a term.
  • FIG. 34 illustrates an example of an interface as rendered in a browser.
  • FIG. 35 illustrates an example of an interface as rendered in a browser.
  • FIG. 36 illustrates an embodiment of a process for assigning sentiment to themes.
  • FIG. 37A illustrates an embodiment of an ontology associated with medical practices.
  • FIG. 37B illustrates an embodiment of an ontology associated with a restaurant.
  • FIG. 38 illustrates an example of sentiment being assigned to themes based on three reviews.
  • FIG. 39 illustrates an example of a process for assigning a sentiment to a theme.
  • FIG. 40 is a table of example positivity calculations.
  • FIG. 41A is a portion of a table of themes and scores for an example restaurant.
  • FIG. 41B is a portion of a table of themes and scores for an example restaurant.
  • FIG. 41C is a portion of a table of themes and scores for an example restaurant.
  • FIG. 42 illustrates an example of a sentence included in a review.
  • FIG. 43 illustrates an example of a sentence included in a review.
  • FIG. 44 illustrates an example of a sentence included in a review.
  • FIG. 45 illustrates an example of sentence extractions used in deduplication.
  • the invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor.
  • these implementations, or any other form that the invention may take, may be referred to as techniques.
  • the order of the steps of disclosed processes may be altered within the scope of the invention.
  • a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task.
  • the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
  • FIG. 1 illustrates an embodiment of an environment in which business reputation information is collected, analyzed, and presented.
  • the user of client device 106 (hereinafter referred to as “Bob”) owns a single location juice bar (“Bob's Juice Company”).
  • the user of client device 108 (hereinafter referred to as “Alice”) is employed by a national chain of convenience stores (“ACME Convenience Stores”).
  • As will be described in more detail below, Bob and Alice can each access the services of reputation platform 102 (via network 104 ) to track the reputations of their respective businesses online.
  • the techniques described herein can work with a variety of client devices 106 - 108 including, but not limited to personal computers, tablet computers, and smartphones.
  • Reputation platform 102 is configured to collect reputation and other data from a variety of sources, including review websites 110 - 114 , social networking websites 120 - 122 , and other websites 132 - 134 .
  • users of platform 102, such as Alice and Bob, can also provide offline survey data to platform 102.
  • review site 110 is a general purpose review site that allows users to post reviews regarding all types of businesses. Examples of such review sites include Google Places, Yahoo! Local, and Citysearch.
  • Review site 112 is a travel-oriented review site that allows users to post reviews of hotels, restaurants, and attractions.
  • One example of a travel-oriented review site is TripAdvisor.
  • Review site 114 is specific to a particular type of business (e.g., car dealers).
  • Examples of social networking sites 120 and 122 include Twitter and Foursquare. Social networking sites 120 - 122 allow users to take actions such as “checking in” to locations.
  • personal blog 134 and online forum 132 are examples of other types of websites “on the open Web” that can contain business reputation information.
  • Platform 102 is illustrated as a single logical device in FIG. 1 .
  • platform 102 is a scalable, elastic architecture and may comprise several distributed components, including components provided by one or more third parties. Further, when platform 102 is referred to as performing a task, such as storing data or processing data, it is to be understood that a sub-component or multiple sub-components of platform 102 (whether individually or in cooperation with third party components) may cooperate to perform that task.
  • FIG. 2 illustrates an example of components included in embodiments of a reputation platform.
  • FIG. 2 illustrates components of platform 102 that are used in conjunction with a business setup process.
  • In order to access the services provided by reputation platform 102 , Bob first registers for an account with the platform. At the outset of the process, he accesses interface 202 (e.g., a web-based interface) and provides information such as a desired username and password. He also provides payment information (if applicable). If Bob has created accounts for his business on social networking sites such as sites 120 and 122 , Bob can identify those accounts to platform 102 as well.
  • Bob is prompted by platform 102 to provide the name of his business (e.g., “Bob's Juice Company”), a physical address of the juice bar (e.g., “123 N. Main St.; Cupertino, Calif. 95014”), and the type of business that he owns (e.g., “restaurant” or “juice bar”).
  • the business information entered by Bob is provided to auto find engine 204 , which is configured to locate, across sites 110 - 114 , the respective profiles on those sites pertaining to Bob's business (e.g., “www.examplereviewsite.com/CA/Cupertino/BobsJuiceCo.html”), if present. Since Bob has indicated that his business is a juice bar, reputation platform 102 will not attempt to locate it on site 114 (a car dealer review site), but will attempt to locate it within sites 110 and 112 .
  • sites 110 and 114 make available respective application programming interfaces (APIs) 206 and 208 that are usable by auto find engine 204 to locate business profiles on their sites.
  • Site 112 does not have a profile finder API.
  • auto find engine 204 is configured to perform a site-specific search using a script that accesses a search engine (e.g., through search interface 210 ).
  • a query of: “site:www.examplereviewsite.com ‘Bob's Juice Company’ ‘Cupertino’” could be submitted to the Google search engine using interface 210 .
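  • As an illustration only (not part of the patent text), such a site-restricted query string could be assembled along the following lines in Python; the function and parameter names are hypothetical:

        def profile_search_query(site_domain, business_name, city):
            # Build a site-restricted search-engine query of the form used in the example above.
            return f'site:{site_domain} "{business_name}" "{city}"'

        print(profile_search_query("www.examplereviewsite.com", "Bob's Juice Company", "Cupertino"))
        # site:www.examplereviewsite.com "Bob's Juice Company" "Cupertino"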
  • Results obtained by auto find engine 204 are provided to verification engine 212 , which confirms that information, such as the physical address and company name provided by Bob, is present in the located profiles.
  • Verification engine 212 can be configured to verify all results (including any obtained from sites 110 and 114 ), and can also be configured to verify (or otherwise process) just those results obtained via interface 210 .
  • the first ten results obtained from search interface 210 can be examined. The result that has the best match score and also includes the expected business name and physical address is designated as the business's profile at the queried site.
  • verification engine 212 presents results to Bob for verification that the located profiles correspond to his business.
  • Bob may be shown (via interface 202 ) a set of URLs corresponding to profiles on each of the sites 110 - 114 where his business has been located and asked to verify that the profiles are indeed for his business.
  • the URLs of the profiles (also referred to herein as “subscriptions”) and any other appropriate data are stored in database 214 . Examples of such other data include overview information appearing on the business's profile page (such as a description of the business) and any social data (e.g., obtained from sites 120 - 122 ).
  • users are given the option by platform 102 to enter the specific URLs corresponding to their business profiles on review sites. For example, if Bob knows the URL of the Google Places page corresponding to his business, he can provide it to platform 102 and use of auto find engine 204 is omitted (or reduced) as applicable.
  • FIG. 3 illustrates an embodiment of a process for enrolling a business with a reputation platform.
  • process 300 is performed by platform 102 .
  • the process begins at 302 when a physical address of a business is received.
  • As one example, when Bob provides the address of his business to platform 102 via interface 202 , that address is received at 302 .
  • the received address is used as a query.
  • As one example of the processing performed at 304 , the received address is provided to site 110 using API 206 .
  • As another example, a site-specific query (e.g., of site 112 ) is submitted to a search engine via search interface 210 .
  • results of the query (or queries) performed at 304 are verified.
  • verification engine 212 performs checks such as confirming that the physical address received at 302 is present in a given result.
  • a user can be asked to confirm that results are correct, and if so, that confirmation is received as a verification at 306 .
  • verified results are stored.
  • the URLs of the verified profiles are stored in database 214 .
  • platform 102 makes use of multiple storage modules, such as multiple databases.
  • Such storage modules may be of different types. For example, user account and payment information may be stored in a MySQL database, while extracted reputation information (described in more detail below) may be stored using MongoDB.
  • the business owner (or a representative of the business, such as Alice) can be prompted to loop through process 300 for each of the business locations.
  • Physical addresses and/or the URLs of the corresponding profiles on sites such as sites 110 - 114 can also be provided to platform 102 in a batch, rather than by manually entering in information via interface 202 .
  • Alice may instead elect to upload to platform 102 a spreadsheet or other file (or set of files) that includes the applicable information.
  • Tags associated with each location can also be provided to platform 102 (e.g., as name-value pairs). For example, Alice can tag each of the 2,000 locations with a respective store name (Store #1234), manager name (Tom Smith), region designation (West Coast), brand (ACME-Quick vs. Super-ACME), etc. As needed, tags can be edited and deleted, and new tags can be added. For example, Alice can manually edit a given location's tags (e.g., via interface 202 ) and can also upload a spreadsheet of current tags for all locations that supersede whatever tags are already present for her locations in platform 102 . As will be described in more detail below, the tags can be used to segment the business to create custom reports and for other purposes.
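  • For illustration, tags provided as name-value pairs could be represented and used to segment locations as in the following sketch; the location records and the second entry's values are hypothetical, and only the example tag values above come from the text:

        locations = [
            {"store": "Store #1234", "manager": "Tom Smith", "region": "West Coast", "brand": "ACME-Quick"},
            {"store": "Store #5678", "manager": "Pat Jones", "region": "East Coast", "brand": "Super-ACME"},
        ]

        # Segment the business by a tag value, e.g., all West Coast locations.
        west_coast = [loc for loc in locations if loc["region"] == "West Coast"]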
  • FIG. 4 illustrates an example of components included in embodiments of a reputation platform.
  • FIG. 4 illustrates components of platform 102 that are used in conjunction with the ongoing collection and processing of data.
  • Reputation platform 102 includes a scheduler 402 that periodically instructs collection engine 404 to obtain data from sources such as sites 110 - 114 .
  • data from sites 120 - 122 , and/or 132 - 134 is also collected by collection engine 404 .
  • Scheduler 402 can be configured to initiate data collection based on a variety of rules. For example, it can cause data collection to occur once a day for all businesses across all applicable sites. It can also cause collection to occur with greater frequency for certain businesses (e.g., which pay for premium services) than others (e.g., which have free accounts). Further, collection can be performed across all sites (e.g., sites 110 - 114 ) with the same frequency or can be performed at different intervals (e.g., with collection performed on site 110 once per day and collection performed on site 112 once per week).
  • data collection can also be initiated based on the occurrence of an arbitrary triggering event. For example, collection can be triggered based on a login event by a user such as Bob (e.g., based on a permanent cookie or password being supplied). Collection can also be triggered based on an on-demand refresh request by the user (e.g., where Bob clicks on a “refresh my data” button in interface 202 ). Other elements depicted in FIG. 4 will be described in conjunction with process 500 shown in FIG. 5 .
  • FIG. 5 illustrates an embodiment of a process for refreshing reputation data.
  • process 500 is performed by platform 102 .
  • the process begins at 502 when a determination is made that a data refresh should be performed. As one example, such a determination is made at 502 by scheduler 402 based on an applicable schedule. As another example, such a determination is made at 502 when a triggering event (such as a login event by Bob) is received by platform 102 .
  • collection engine 404 reviews the set of subscriptions stored in database 214 for Bob's Juice Company.
  • the set of subscriptions associated with Bob's company are the ones that will be used by collection engine 404 during the refresh operation.
  • a refresh can be performed on behalf of multiple (or all) businesses, instead of an individual one such as Bob's Juice Company. In such a scenario, portion 504 of the process can be omitted as applicable.
  • helper 420 is configured with instructions to fetch data from a particular type of source.
  • While site 110 provides an API for locating business profiles, it does not make review data available via an API. Such review data is instead scraped by platform 102 .
  • an instance 430 of helper 420 is executed on platform 102 .
  • Instance 430 is able to extract, for a given entry on site 110 , various components such as: the reviewer's name, profile picture, review title, review text, and rating.
  • Helper 424 is configured with instructions for scraping reviews from site 114 . It is similarly able to extract the various components of an entry as posted to site 114 .
  • Site 112 has made available an API for obtaining review information and helper 422 is configured to use that API.
  • helper 426 is configured to extract check-in data from social site 120 using an API provided by site 120 .
  • When helper 428 is executed on platform 102 , a search is performed across the World Wide Web for blog, forum, or other pages that discuss Bob's Juice Company. In some embodiments, additional processing is performed on any results of such a search, such as sentiment analysis.
  • information obtained on behalf of a given business is retrieved from different types of sites in accordance with different schedules.
  • review site data might be collected hourly, or on demand
  • social data may be collected once a day.
  • Data may be collected from sites on the open Web (e.g., editorials, blogs, forums, and/or other sites not classified as review sites or social sites) once a week.
  • any new results are stored in database 214 .
  • the results are processed (e.g., by converting reviews into a single, canonical format) prior to being included in database 214 .
  • database 214 supports heterogeneous records and such processing is omitted or modified as applicable. For example, suppose reviews posted to site 110 must include a score on a scale from one to ten, while reviews posted to site 112 must include a score on a scale from one to five.
  • Database 214 can be configured to store both types of reviews.
  • the raw score of a review is stored in database 214 , as is a converted score (e.g., in which all scores are converted to a scale of one to ten).
  • database 214 is implemented using MongoDB, which supports such heterogeneous record formats.
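  • A minimal sketch of the score conversion described above, keeping both the raw score and a value converted to a common ten-point scale; the function name and record layout are illustrative assumptions:

        def convert_rating(raw, site_max, target_max=10):
            # e.g., a 4-out-of-5 review becomes 8 on the common 1-10 scale.
            return {"raw": raw, "converted": raw * target_max / site_max}

        print(convert_rating(4, 5))   # {'raw': 4, 'converted': 8.0}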
  • platform 102 includes a theme engine 434 , which is configured to identify themes common across reviews.
  • alerter 432 is configured to alert Bob (e.g., via an email message) whenever process 500 (or a particular portion thereof) is performed with respect to his business. In some cases, alerts are only sent when new information is observed, and/or when reputation scores associated with Bob's business (described in more detail below) change, or change by more than a threshold amount.
  • Platform 102 is configured to determine a variety of reputation scores on behalf of businesses such as Bob's Juice Company.
  • individual reputation scores are determined for each of the locations, and the scores of individual businesses can be aggregated in a variety of ways.
  • the scores provide users with perspective on how their businesses are perceived online.
  • users are able to explore the factors that contribute to their businesses' reputation scores by manipulating various interface controls, and they can also learn how to improve their scores.
  • users can segment the locations in a variety of ways to gain additional insight.
  • FIG. 6 illustrates an example of an interface as rendered in a browser.
  • Bob is presented with interface 600 after logging in to his account on platform 102 using a browser application on client device 106 and clicking on tab option 602 .
  • a composite reputation score (728 points) is depicted on a scale 606 .
  • Example ways of computing a composite score are described in conjunction with FIG. 7 .
  • the composite reputation score provides Bob with a quick perspective on how Bob's Juice Company is perceived online.
  • a variety of factors can be considered in determining a composite score.
  • Six example factors are shown in region 608 , each of which is discussed below.
  • Bob can see tips on how to improve his score with respect to that factor by clicking on the appropriate box (e.g., box 622 for tips on improving score 610 ).
  • a recommendation box is present for each score presented in region 608 .
  • such boxes are only displayed for scores that can/should be improved.
  • box 626 is omitted from the interface as displayed to Bob, or an alternate message is displayed, such as a general encouragement to “keep up the good work.”
  • a review score e.g., star rating
  • Timeliness ( 612 ): This score indicates how current a business's reviews are (irrespective of whether they are positive or negative). In the example shown, reviews older than two months have less of an impact than more recent reviews. Thus, if one entity has 200 reviews with an average rating of four stars, at least some of which were recently authored, and a second entity has the same volume and star rating but none of the reviews were written in the last two months, the first entity will have a higher timeliness score and thus a higher composite reputation score. If Bob clicks on box 624 , he will be presented with a suggestion, such as the following: “Managing your online reviews is not a one-time exercise, but a continual investment into your business.
  • Other measures of Timeliness can also be used, such as a score that indicates the relative amount of new vs. old positive reviews and new vs. old negative reviews. (I.e., to see whether positive or negative reviews dominate in time.)
  • Length ( 614 ): This score indicates the average length of a business's reviews. Longer reviews add weight to the review's rating. If two reviews have the same star rating (e.g., one out of five stars), but the first review is ten words and the second review is 300 words, the second review will be weighted more when computing the composite score. If Bob clicks on box 626 , he will be presented with a suggestion, such as the following: “Encourage your positive reviewers to write in-depth reviews. They should detail their experiences and highlight what they like about your business. This provides credibility and the guidance makes review writing easier for them.” Other measures of Length can also be used, such as a score that indicates the relative amount of long vs. short positive reviews and long vs. short negative reviews. (I.e., to see whether positive or negative reviews dominate in length.)
  • Social Factors ( 616 ): Reviews that have been marked with social indicators (e.g., they have been marked by other members of the review community as being “helpful” or “funny”) will have more bearing on the outcome of the composite score. By clicking on box 632 , Bob will be presented with an appropriate suggestion for improvement.
  • Reviewer Authority ( 618 ): A review written by an established member of a community (e.g., who has authored numerous reviews) will have a greater impact on the outcome of the composite score than one written by a reviewer with little or no history on a particular review site. In some embodiments, the audience of the reviewer is also taken into consideration. For example, if the reviewer has a large Twitter following, his or her review will have a greater bearing on the outcome of the score. If Bob clicks on box 628 , he will be presented with a suggestion, such as the following: “Established reviewers can be a major boon to your review page. Their reviews are rarely questioned and their opinions carry significant weight. If you know that one of your customers is an active reviewer on a review site, make a special effort to get him or her to review your business.”
  • a control can be provided that allows a user to see individual outlier reviews—reviews that contributed the most to/deviated the most from the overall score (and/or individual factors).
  • a one-star review that is weighted heavily in the calculation of a score or scores can be surfaced to the user. The user could then attempt to resolve the negative feelings of the individual that wrote the one-star review by contacting the individual.
  • As another example, a particularly important five-star review (e.g., due to being written by a person with a very high reviewer authority score) can be surfaced to the user so that the user can ask the author to provide an update or otherwise refresh the review.
  • weights can be assigned to the above factors when generating the composite score shown in region 604 . Further, the factors described above need not all be employed nor need they be employed in the manners described herein. Additional factors can also be used when generating a composite score. An example computation of a composite score is discussed in conjunction with FIG. 7 .
  • FIG. 7 illustrates an example of components included in an embodiment of a reputation platform.
  • FIG. 7 illustrates components of platform 102 that are used in conjunction with generating reputation scores.
  • the composite score shown at 604 in FIG. 6 is refreshed.
  • scoring engine 702 retrieves, from database 214 , review and other data pertaining to Bob's business and generates the various scores shown in FIG. 6 .
  • Example ways of computing a composite reputation score are as follows.
  • scoring engine 702 computes a base score “B” that is a weighted average of all of the star ratings of all of the individual reviews on all of the sites deemed relevant to Bob's business:
  • N r is the total number of reviews
  • s i is the number of “stars” for review “i” normalized to 10
  • w i is the weight for review “i”
  • Θ is the Heaviside step function
  • N min is the minimum number of reviews needed to score (e.g., 4).
  • the factor 100 is used to expand the score to a value from 0 to 1000.
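  • Under the definitions above, the base score can be read as B = 100 · Θ(N_r - N_min) · (Σ w_i·s_i)/(Σ w_i). The exact published formula is not reproduced in this excerpt, so the following Python sketch is a best-effort illustration rather than the patented computation:

        def base_score(reviews, n_min=4):
            # reviews: list of (stars_normalized_to_10, weight) tuples.
            n_r = len(reviews)
            if n_r < n_min:                       # Heaviside step: no score below the minimum review count
                return 0.0
            weighted_sum = sum(w * s for s, w in reviews)
            total_weight = sum(w for _, w in reviews)
            return 100.0 * weighted_sum / total_weight   # factor 100 expands the 0-10 average to 0-1000

        print(base_score([(9, 1.2), (7, 0.8), (10, 1.0), (6, 0.5)]))   # ≈ 840.0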
  • w_i = D_A · T_i · P_i · R_A · S_F · L_F
  • D A is the domain authority, which reflects how important the domain is with respect to the business.
  • a doctor-focused review site may be a better authority for reviews of doctors than a general purpose review site.
  • One way to determine domain authority values is to use the domain's search engine results page placement using the business name as the keyword.
  • R A is the reviewer authority.
  • One way to determine reviewer authority is to take the logarithm of 1+the number of reviews written by the reviewer. As explained above, a review written by an individual who has authored many reviews is weighted more than one written by a less prolific user.
  • S F is the social feedback factor.
  • One way to determine the factor is to use the logarithm of 1+the number of pieces of social feedback a review has received.
  • L F is the length factor. One way to specify this value is to use 1 for short reviews, 2 for medium reviews, and 4 for long reviews.
  • T i is the age factor.
  • T_i = max(e^(-λ·(a_i - 2)), 0.5), where a_i is the age of review “i”
  • λ is the time-based decay rate
  • P_i is the position factor for review “i.”
  • the position factor indicates where a given review is positioned among other reviews of the business (e.g., it is at the top on the first page of results, or it is on the tenth page).
  • One way to compute the position factor is as follows:
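  • The weight w_i can then be sketched from the factor definitions above. This is a non-authoritative illustration: the decay rate, the word-count thresholds for the length factor, the +1 offset on the social-feedback term, and the position-factor expression (whose formula is not reproduced in this excerpt) are all assumptions:

        import math

        def review_weight(domain_authority, age_months, position,
                          reviewer_review_count, social_feedback_count,
                          length_words, decay_rate=0.1):
            d_a = domain_authority                                     # D_A: domain authority
            t_i = max(math.exp(-decay_rate * (age_months - 2)), 0.5)   # T_i: age factor
            p_i = 1.0 / position                                       # P_i: hypothetical placeholder for the position factor
            r_a = math.log(1 + reviewer_review_count)                  # R_A: reviewer authority
            s_f = 1 + math.log(1 + social_feedback_count)              # S_F: +1 offset (assumed) so zero feedback does not zero the weight
            l_f = 1 if length_words < 50 else (2 if length_words < 200 else 4)  # L_F: 1/2/4 for short/medium/long (thresholds assumed)
            return d_a * t_i * p_i * r_a * s_f * l_f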
  • a given site may have an overall rating given for the business on the main profile page for that business on the site.
  • the base score is normalized (to generate “B norm ”). In some embodiments this is performed by linearly stretching out the range of scores from 8 to 10 to 5 to 10 and linearly squeezing the range of scores from 0 to 8 to 0 to 5.
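  • A sketch of that piecewise-linear normalization, written here on the 0-10 scale (the same mapping applies to the 0-1000 scale after scaling by 100; the choice of scale is an assumption):

        def normalize_base_score(b):
            # Stretch 8-10 onto 5-10; squeeze 0-8 onto 0-5.
            if b >= 8:
                return 5 + (b - 8) * 2.5
            return b * 0.625

        print(normalize_base_score(9.0))   # 7.5
        print(normalize_base_score(6.0))   # 3.75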
  • a correction factor “C” is used for the number of reviews in a given vertical and locale:
  • N_r is the number of reviews for the business, and the median number of reviews is taken over businesses in the same vertical and locale.
  • An example value for “a” is 0.3 and an example value for “b” is 0.7.
  • An alternative correction factor “C” is as follows:
  • N min ” and “N max ” are the limits put on the comparator “N r ” in the denominator of the argument of the arctan in the correction factor.
  • An example value for “N min ” is 4 and an example value for “N max ” is 20.
  • a randomization correction “R” can also be used:
  • R = min(1000, C·B_norm + (mod(uid, 40) - 20)/N_r)
  • C is a correction factor (e.g., one of the two discussed above)
  • B norm is the normalized base score discussed above
  • uid is a unique identifier assigned to the business by platform 102 and stored in database 214 .
  • the randomization correction can be used where only a small number of reviews are present for a given business.
  • R = max(0, C·B_norm - 37.5·e^(-0.6·N_r))
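  • The two corrections above, as reconstructed, could be sketched as follows; the constants and the reading of the garbled expressions are approximate, and B_norm is assumed here to be on the 0-1000 scale:

        import math

        def randomized_score(c, b_norm, uid, n_r):
            # R = min(1000, C*B_norm + (mod(uid, 40) - 20) / N_r)
            return min(1000, c * b_norm + (uid % 40 - 20) / n_r)

        def small_volume_correction(c, b_norm, n_r):
            # R = max(0, C*B_norm - 37.5 * e^(-0.6 * N_r)) -- best-effort reading of the last formula above.
            return max(0, c * b_norm - 37.5 * math.exp(-0.6 * n_r))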
  • A variety of other factors can also be used by scoring engine 702 in determining reputation scores.
  • scores for all types of businesses are computed using the same sets of rules.
  • reputation score computation varies based on industry (e.g., reputation scores for car dealers using one approach and/or one set of factors, and reputation scores for doctors using a different approach and/or different set of factors).
  • Scoring engine 702 can be configured to use a best in class entity when determining appropriate thresholds/values for entities within a given industry. The following are yet more examples of factors that can be used in generating reputation scores.
  • the volume of reviews across all review sites can be used as a factor. For example, if the average star rating and the number of reviews are high, a conclusion can be reached that the average star rating is more accurate than where an entity has the same average star rating and a lower number of reviews.
  • the star rating will carry more weight in the score if the volume is above a certain threshold.
  • thresholds vary by industry.
  • review volume can use more than just a threshold. For example, an asymptotic function of number of reviews, industry, and geolocation of the business can be used as an additional scoring factor.
  • Reviews that have multimedia associated with them can be weighted differently.
  • the length score of the review is increased (e.g., to the maximum value) when multimedia is present.
  • the population of reviews on different sites can be examined, and where a review distribution strays from the mean distribution, the score can be impacted. As one example, if the review distribution is sufficiently outside the expected distribution for a given industry, this may indicate that the business is engaged in gaming behavior. The score can be discounted (e.g., by 25%) accordingly.
  • An example of advice for improving a score based on this factor would be to point out to the user that their distribution of reviews (e.g., 200 on site 110 and only 2 on site 112 ) deviates from what is expected in the user's industry, and suggest that the user encourage those who posted reviews to site 110 to do so on site 112 as well.
  • Text analysis can be used to extract features used in the score. For example, reviews containing certain key terms (e.g., “visited” or “purchased”) can be weighted differently than those that do not.
  • FIG. 8 illustrates an embodiment of a process for generating a reputation score.
  • process 800 is performed by platform 102 .
  • the process begins at 802 when data obtained from each of a plurality of sites is received.
  • process 800 begins at 802 when Bob logs into platform 102 and, in response, scoring engine 702 retrieves data associated with Bob's business from database 214 .
  • scores can also be generated as part of a batch process. As one example, scores across an entire industry can be generated (e.g., for benchmark purposes) once a week. In such situations, the process begins at 802 when the designated time to perform the batch process occurs and data is received from database 214 .
  • at least some of the data received at 802 is obtained on-demand directly from the source sites (instead of or in addition to being received from a storage, such as database 214 ).
  • a reputation score for an entity is generated.
  • Various techniques for generating reputation scores are discussed above. Other approaches can also be used, such as by determining an average score for each of the plurality of sites and combining those average scores (e.g., by multiplying or adding them and normalizing the result).
  • the entity for which the score is generated is a single business (e.g., Bob's Juice Company).
  • the score generated at 804 can also be determined as an aggregate across multiple locations (e.g., in the case of ACME Convenience Stores) and can also be generated across multiple businesses (e.g., reputation score for the airline industry), and/or across all reviews hosted by a site (e.g., reputation score for all businesses with profiles on site 110 ).
  • One way to generate a score for multiple locations (and/or multiple businesses) is to apply scoring techniques described in conjunction with FIG. 7 using as input the pool of reviews that correspond to the multiple locations/businesses.
  • Another way to generate a multi-location and/or multi-business reputation score is to determine reputation scores for each of the individual locations (and/or businesses) and then combine the individual scores (e.g., through addition, multiplication, or other appropriate combination function).
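  • For example, one simple combination function for a multi-location score is the mean of the individual location scores (the alternative described above is to pool all of the locations' reviews and score them as a single set); this is an illustrative sketch, not the patented method:

        def aggregate_reputation_score(location_scores):
            # Combine per-location reputation scores into a single enterprise score.
            return sum(location_scores) / len(location_scores)

        print(aggregate_reputation_score([728, 650, 810]))   # ≈ 729.33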
  • the reputation score is provided as output.
  • a reputation score is provided as output in region 604 of interface 600 .
  • scoring engine 702 can be configured to send reputation scores to users via email (e.g., via alerter 432 ).
  • platform 102 can also provide reputation information for multi-location businesses (also referred to herein as “enterprises”). Examples of enterprises include franchises, chain stores, and any other type of multi-location business. The following section describes various ways that enterprise reputation information is made available by platform 102 to users, such as Alice, who represent such enterprises.
  • FIG. 9 illustrates an example of an interface as rendered in a browser.
  • Alice is presented with interface 900 after logging in to her account on platform 102 using a browser application on client 108 .
  • Alice can also reach interface 900 by clicking on tab option 902 .
  • Alice is presented in region 912 with a map of the United States that highlights the average performance of all ACME locations within all states.
  • other maps are used. For example, if an enterprise only has stores in a particular state or particular county, a map of that state or county can be used as the default map.
  • a multi-country map can be shown as the default for global enterprises.
  • Legend 914 indicates the relationship between state color and the aggregate performance of locations in that state.
  • Controls 928 allow Alice to take actions such as specifying a distribution list, printing the map, and exporting a CSV file that includes the ratings/reviews that power the display.
  • region 916 is the average reputation score across all 2,000 ACME stores.
  • Region 918 indicates that ACME stores in Alaska have the highest average reputation score, while region 920 indicates that ACME stores in Nevada have the lowest average reputation score.
  • a list of the six states in which ACME has the lowest average reputation scores is presented in region 922 , along with the respective reputation scores of ACME in those states.
  • the reputation scores depicted in interface 900 can be determined in a variety of ways, including by using the techniques described above.
  • the data that powers the map can be filtered using the dropdown boxes shown in region 904 .
  • the view depicted in region 906 will change based on the filters applied.
  • the scores and other information presented in regions 916 - 922 will refresh to correspond to the filtered locations/time ranges.
  • Alice is electing to view a summary of all review data (authored in the last year), across all ACME locations.
  • Alice can refine the data presented by selecting one or more additional filters (e.g., limiting the data shown to just those locations in California, or to just those reviews obtained from site 110 that pertain to Nevada locations).
  • the filter options presented are driven by the data, meaning that only valid values will be shown. For example, if ACME does not have any stores in Wyoming, Wyoming will not be shown in dropdown 910 .
  • If Alice selects “California” from dropdown 910 , only Californian cities will be available in dropdown 930 .
  • Alice can click on “Reset Filters” ( 926 ).
  • Some of the filters available to Alice make use of the tags that she previously uploaded (e.g., during account setup).
  • Other filters (e.g., 910 ) are automatically provided by platform 102 .
  • which filters are shown in region 904 are customizable. For example, suppose ACME organizes its stores in accordance with “Regions” and “Zones” and that Alice labeled each ACME location with its appropriate Region/Zone information during account setup. Through an administrative interface, Alice can specify that dropdowns for selecting “Region” and “Zone” should be included in region 904 . As another example, Alice can opt to have store manager or other manager designations available as a dropdown filter. Optionally, Alice could also choose to hide certain dropdowns using the administrative interface.
  • interface 900 updates into interface 1000 as illustrated in FIG. 10 , which includes a more detailed view for the state.
  • pop-up 1002 is presented and indicates that across all of ACME's California stores, the average reputation score is 3.
  • the stores in Toluca Lake, Studio City, and Alhambra have the highest average reputation scores, while the stores in South Pasadena, Redwood City, and North Hollywood have the lowest average reputation scores.
  • Alice can segment the data shown in interface 1000 by selecting California from dropdown 1006 and one or more individual cities from dropdown 1004 (e.g., to show just the data associated with stores in Redwood City).
  • Interface 1100 makes available, in region 1102 , the individual reviews collected by platform 102 with respect to the filter selections made in region 1104 . Alice can further refine which reviews are shown in region 1102 by interacting with checkboxes 1112 . Summary score information is provided in region 1106 , and the number of reviews implicated by the filter selections is presented in region 1108 . Alice can select one of three different graphs to be shown in region 1110 . As shown in FIG. 11 , the first graph shows how the average rating across the filtered set of reviews has changed over the selected time period. If Alice clicks on region 1114 , she will be presented with the second graph.
  • the second graph shows the review volume over the time period.
  • the third graph shows a breakdown of reviews by type (e.g., portion of positive, negative, and neutral reviews).
  • FIG. 14 allows her to view a variety of standard reports by selecting them from regions 1402 and 1406 .
  • Alice can also create and save custom reports.
  • One example report is shown in region 1404 .
  • the report indicates, for a given date range, the average rating on a normalized (to 5) scale.
  • a second example report is shown in FIG. 15 .
  • Report 1500 depicts the locations in the selected data range that are declining in reputation most rapidly. In particular, what is depicted is the set of locations that have the largest negative delta in their respective normalized rating between two dates.
  • As shown in FIG. 16 , Report 1600 provides a summary of ACME locations in a list format.
  • Column 1602 shows each location's average review score, normalized to a 5 point scale.
  • Column 1604 shows the location's composite reputation score (e.g., computed using the techniques described in conjunction with FIG. 7 ).
  • Alice can instruct platform 102 to email reports such as those listed in region 1402 .
  • If Alice clicks on tab 940 , she will be presented with an interface that allows her to select which reports to send, to which email addresses, and on what schedule.
  • Alice can set up a distribution list that includes the email addresses of all ACME board members and can further specify that the board members should receive a copy of the “Location vs. Competitors” report once per week.
  • Interface 1700 shows data obtained by platform 102 from social sites such as sites 120 - 122 .
  • Alice can apply filters to the social data by interacting with the controls in region 1702 and can view various reports by interacting with region 1704 .
  • platform 102 includes a review request engine that is configured to assist businesses in strategically obtaining additional reviews.
  • the engine can guide businesses through various aspects of review solicitation, and can also automatically make decisions on the behalf of those businesses. Recommendations regarding review requests can be presented to users in a variety of ways. For example, interface 600 of FIG. 6 can present a suggestion that additional reviews be requested, if applicable. As another example, periodic assessments can be made on behalf of a business, and an administrator of the business alerted via email when additional reviews should be solicited.
  • FIG. 20 illustrates an embodiment of a reputation platform that includes a review request engine.
  • Platform 2000 is an embodiment of platform 102 .
  • Other components (e.g., as depicted in FIGS. 2 and/or 4 as being included in platform 102 ) can also be included in platform 2000 as applicable.
  • review request engine 2002 is configured to perform a variety of tasks. For example, review request engine 2002 can determine which sites (e.g., site 110 or site 112 ) a given business would benefit from having additional reviews on. In various embodiments, platform 102 performs these determinations at least in part by determining how a business's reputation score would change (whether positive or negative) based on simulating the addition of new reviews to various review sites. Further, review request engine 2002 can determine which specific individuals should be targeted as potential reviewers, and can facilitate contacting those individuals, including by suggesting templates/language to use in the requests, as well as the timing of those requests.
  • one factor that can be considered in determining a reputation score for a business is the “review distribution” of the business's reviews.
  • As one example, suppose a restaurant has a review distribution as follows: Of the total number of reviews of the restaurant that are known to platform 102 , 10% of those reviews appear on travel-oriented review site 112 , 50% of those reviews appear on general purpose review site 110 , and 40% of those reviews appear (collectively) elsewhere.
  • review request engine 2002 is configured to compare the review distribution of the business to one or more target distributions and use the comparison to recommend the targeting of additional reviews.
  • reputation platform 102 is configured to determine industry-specific review benchmarks.
  • the benchmarks can reflect industry averages or medians, and can also reflect outliers (e.g., focusing on data pertaining to the top 20% of businesses in a given industry). Further, for a single industry, benchmarks can be calculated for different regions (e.g., one for Restaurants-West Coast and one for Restaurants-Mid West). The benchmark information determined by platform 102 can be used to determine target distributions for a business.
  • Benchmark information can also be provided to platform 102 (e.g., by a third party), rather than or in addition to platform 102 determining the benchmark information itself.
  • A universal target distribution (e.g., an equal distribution across all review sites, or another specific predetermined distribution) can also be used.
  • review request engine 2002 uses a business's review distribution and one or more target distributions to determine on which site(s) additional reviews should be sought.
  • FIG. 21 illustrates an embodiment of a process for targeting review placement.
  • process 2100 is performed by review request engine 2002 .
  • the process begins at 2102 when an existing distribution of reviews for an entity is evaluated across a plurality of review sites. A determination is made, at 2104 , that the existing distribution should be adjusted. Finally, at 2106 , an indicator of at least one review site on which placement of at least one additional review should be targeted is provided as output.
  • process 2100 is as follows: Once a week, the review distribution for a single location dry cleaner (“Mary's Dry Cleaning”) is determined by platform 102 . In particular, it is determined that approximately 30% of Mary's reviews appear on site 110 , approximately 30% appear on site 112 , and 40% of Mary's reviews appear elsewhere ( 2102 ). Suppose a target distribution for a dry cleaning business is: 70% site 110 , 10% site 112 , and 20% remainder. Mary's review distribution is significantly different from the target, and so, at 2104 a determination is made that adjustments to the distribution should be sought. At 2106 , review request engine 2002 provides as output an indication that Mary's could use significantly more reviews on site 110 . The output can take a variety of forms.
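  • A minimal sketch of the comparison performed in process 2100 , using the Mary's Dry Cleaning numbers from the example above; the 5% tolerance threshold and the site labels are assumptions:

        def recommend_review_sites(existing, target, tolerance=0.05):
            # Return the sites whose share of reviews falls short of the target, largest shortfall first.
            gaps = {site: target.get(site, 0.0) - existing.get(site, 0.0) for site in target}
            return sorted((s for s, g in gaps.items() if g > tolerance),
                          key=lambda s: gaps[s], reverse=True)

        existing = {"site 110": 0.30, "site 112": 0.30, "other": 0.40}
        target   = {"site 110": 0.70, "site 112": 0.10, "other": 0.20}
        print(recommend_review_sites(existing, target))   # ['site 110']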
  • platform 102 can send an email alert to the owner of Mary's Dry Cleaning informing her that she should visit platform 102 to help correct the distribution imbalance.
  • the output can be used internally by review request engine 2002 , such as by feeding it as input into a process such as process 2500 .
  • the target distribution is multivariate, and includes, in addition to a proportion of reviews across various sites, information such as target timeliness for the reviews, a review volume, and/or a target average score (whether on a per-site basis, or across all applicable sites).
  • Multivariate target distributions can also be used in process 2100 . For example, suppose that after a few weeks of requesting reviews (e.g., using process 2100 ), the review distribution for Mary's Dry Cleaning is 68% site 110 , 12% site 112 , and 20% remainder ( 2102 ). The site proportions in her current review distribution are quite close to the target.
  • her review distribution may nonetheless deviate significantly from aspects of a multivariate target and need adjusting to bring up her reputation score.
  • the industry target may be a total of 100 reviews (i.e., total review volume) and Mary's Dry Cleaning may only have 80 total reviews.
  • the industry target average age of review may be six months, while the average age for Mary's Dry Cleaning is nine months.
  • Decisions made at 2104 to adjust the existing review distribution can take into account such non-site-specific aspects as well.
  • these additional aspects of a target distribution are included in the distribution itself (e.g., within a multivariate distribution).
  • the additional information is stored separately (e.g. in a flat file) but is nonetheless used in conjunction with process 2100 when determining which sites to target for additional reviews. Additional information regarding multivariate distribution targets is provided below (e.g., in the section titled “Industry Review Benchmarking”).
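  • A multivariate check of the kind described above might therefore look at more than the per-site proportions; the field names and tolerances in this sketch are illustrative only:

        def distribution_needs_adjustment(current, target, site_tolerance=0.05):
            # Site proportions being close to target is not enough: volume and review age are checked as well.
            for site, share in target["proportions"].items():
                if abs(current["proportions"].get(site, 0.0) - share) > site_tolerance:
                    return True
            if current["total_reviews"] < target["total_reviews"]:
                return True
            if current["avg_age_months"] > target["avg_age_months"]:
                return True
            return False

        # Mary's Dry Cleaning after a few weeks: proportions are close, but volume (80 vs. 100)
        # and average review age (nine months vs. six) still call for adjustment.
        current = {"proportions": {"site 110": 0.68, "site 112": 0.12, "other": 0.20},
                   "total_reviews": 80, "avg_age_months": 9}
        target  = {"proportions": {"site 110": 0.70, "site 112": 0.10, "other": 0.20},
                   "total_reviews": 100, "avg_age_months": 6}
        print(distribution_needs_adjustment(current, target))   # True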
  • process 2100 is as follows: Once a week, the review distribution of each location of a ten-location franchise is determined ( 2102 ). Comparisons against targets can be done individually on behalf of each location, e.g., with ten comparisons being performed against a single, industry-specific target. Comparisons can also be performed between the locations. For example, of the ten locations, the location having the review distribution that is closest to the industry-specific target can itself be used to create a review target for the other stores. The review distributions of the other stores can be compared against the review distributions of the top store, instead of or in addition to being compared against the industry target.
  • additional processing is performed in conjunction with process 2100 .
  • a determination can be made as to whether or not the entity has a presence on (e.g., has a registered account with) each of the sites implicated in the target distribution. If an entity is expected to have a non-zero number of reviews on a given site (in accordance with the target distribution), having a presence on that site is needed.
  • a car dealer business should have an account on review site 114 (a car dealer review site). A restaurant need not have an account on the site, and indeed may not qualify for an account on the site.
  • platform 102 If the car dealer business does not have an account with site 114 , a variety of actions can be taken by platform 102 . As one example, an alert that the car dealer is not registered with a site can be emailed to an administrator of the car dealer's account on platform 102 . As another example, the output provided at 2106 can include, e.g., in a prominent location, a recommendation that the reader of the output register for an account with site 114 . In some embodiments, platform 102 is configured to register for an account on (or otherwise obtain a presence on) the site, on behalf of the car dealer.
  • review request engine 2002 can use a variety of target distributions, obtained in a variety of ways, in performing process 2100 .
  • Two examples of target distributions are depicted in FIGS. 22 and 23 , respectively.
  • the target distributions shown in FIG. 22 are stored as groups of lines ( 2202 , 2204 ) in a single flat file, where an empty line is used as a delimiter between industry records.
  • the first line (e.g., 2206 ) identifies the industry (e.g., car dealers), and the second line (e.g., 2208 ) specifies the target review volume for that industry.
  • the third line indicates the industry average review rating, normalized to a 5 point scale (e.g., 3.5).
  • the fourth line (e.g., 2212 ) indicates for how long of a period of time a review will be considered “fresh” (e.g., 1 year) and thus count in the calculation of a business in that industry's reputation score.
  • a decay factor is included that is used to reduce the impact of a particular review in the calculation of a business's reputation score over time.
  • the remaining lines of the group ( 2214 - 2218 ) indicate what percentage of reviews should appear on which review sites. For example, 40% of reviews should appear on general purpose review site 110 ; 10% of reviews should appear on travel review site 112 ; and 50% of reviews should appear on a review site focused on auto dealers.
  • a target review volume for restaurants is 100 ( 2220 )
  • the industry average review rating is 4 ( 2222 )
  • the freshness value is two years ( 2224 ).
  • the target review distribution is also different.
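  • To make the flat-file layout concrete, the following Python sketch parses records of the form described above (industry, target volume, average rating, freshness, then site-percentage lines), with blank lines delimiting records. The exact field ordering and the sample values are assumptions made for illustration.
      # Sketch: parse blank-line-delimited target-distribution records.
      # Field ordering and sample values are illustrative assumptions.

      def parse_target_distributions(text):
          records = {}
          for block in filter(None, (b.strip() for b in text.split("\n\n"))):
              lines = block.splitlines()
              industry, volume, rating, freshness = lines[0], lines[1], lines[2], lines[3]
              sites = {}
              for line in lines[4:]:
                  site, percent = line.rsplit(None, 1)
                  sites[site] = float(percent) / 100.0
              records[industry] = {"target_volume": int(volume),
                                   "average_rating": float(rating),
                                   "freshness": freshness,
                                   "site_proportions": sites}
          return records

      sample = ("car dealers\n50\n3.5\n1 year\n"
                "general_review_site 40\ntravel_review_site 10\nauto_dealer_review_site 50\n"
                "\n"
                "restaurants\n100\n4\n2 years\n"
                "general_review_site 60\ntravel_review_site 40\n")
      print(parse_target_distributions(sample)["restaurants"]["target_volume"])  # 100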
  • the target distributions depicted in FIG. 22 can be used to model the impact that additional reviews would have for a business. For example, for a given car dealer business, simulations of additional reviews (e.g., five additional positive reviews obtained on site 110 vs. three additional positive reviews obtained on site 112 ) can be run, and a modeled reputation score (e.g., using techniques described in “Example Score Generation” above) determined. Whichever simulation results in the highest reputation score can be used to generate output at 2106 in process 2100 .
  • FIG. 23 illustrates another example of a target distribution.
  • the first two columns of table 2300 list an industry ( 2302 ) and sub-industry ( 2304 ).
  • the next column lists the target review volume ( 2306 ).
  • the remaining columns provide target review proportions with respect to each of sites 2308 - 2324 .
  • many of the cells in the table are empty, indicating that, for a given type of business, only a few review sites significantly impact the reputations of those businesses. For example, while car dealers and car rental businesses are both impacted by reviews on sites 110 - 114 ( 2308 - 2312 ), reviews on site 2322 (a dealer review site) are important to car dealers, but not important to car rental businesses (or entirely different industries, such as restaurants).
  • reviews of hospitals appearing on a health review site 2314 are almost as important as reviews appearing on site 110 .
  • reviews appearing on site 2314 are considerably less important to elder care businesses, while reviews on a niche nursing review site 2318 matter for nursing homes but not hospitals.
  • A small subset of the data that can be included in a distribution (also referred to herein as an industry table) is depicted in FIG. 23.
  • In various embodiments, such a table includes hundreds of rows (i.e., industries/sub-industries) and hundreds of columns (i.e., review sites).
  • additional types of information can be included in table 2300 , such as freshness values, review volume over a period of time (e.g., three reviews per week), decay factors, average scores, etc.
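  • One way to hold such a sparse table in memory is a nested mapping in which absent cells simply mean “not significant for this sub-industry.” The Python sketch below is illustrative; the industry names, site identifiers, and proportions are not taken from table 2300.
      # Sketch: sparse industry-table representation; missing site entries mean the
      # site does not significantly impact that sub-industry. Values are made up.

      INDUSTRY_TABLE = {
          ("automotive", "car dealers"): {"target_volume": 50,
                                          "sites": {"site_110": 0.35, "site_112": 0.05,
                                                    "site_2322": 0.60}},
          ("automotive", "car rental"):  {"target_volume": 40,
                                          "sites": {"site_110": 0.70, "site_112": 0.30}},
      }

      def site_target(industry, sub_industry, site):
          row = INDUSTRY_TABLE.get((industry, sub_industry), {})
          return row.get("sites", {}).get(site, 0.0)   # empty cell -> 0.0

      print(site_target("automotive", "car rental", "site_2322"))   # 0.0 (not relevant)
      print(site_target("automotive", "car dealers", "site_2322"))  # 0.6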
  • target distributions can be provided to platform 102 in a variety of ways.
  • an administrator of platform 102 can manually configure the values in the file depicted in FIG. 22 .
  • as another example, the values can be determined based on an analysis of the top business in each category (i.e., the business having the highest reputation score).
  • process 2400 can be used to generate target distribution 2300 .
  • FIG. 24 illustrates an embodiment of a process for performing an industry review benchmark.
  • process 2400 is performed by industry benchmarking module 2006 to create/maintain industry table 2300 .
  • benchmarking module 2006 can be configured to execute process 2400 once a month.
  • Benchmarking module 2006 can also execute process 2400 more frequently, and/or can execute process 2400 at different times with respect to different industries (e.g., with respect to automotive industries one day each week and with respect to restaurants another day each week), selectively updating portions of table 2300 instead of the entire table at once.
  • process 2400 is performed multiple times, resulting in multiple tables.
  • platform 102 can be configured to generate region-specific tables.
  • the process begins at 2402 when review data is received.
  • industry benchmarker 2006 queries database 214 for information pertaining to all automotive sales reviews. For each automotive sales business (e.g., a total of 16,000 dealers), summary information such as each dealer's current reputation score, current review distribution, and current review volume is received at 2402 .
  • benchmarker 2006 can be configured to average the information received at 2402 into a set of industry average information (i.e., the average reputation score for a business in the industry; the averaged review distribution; and the average review volume). Benchmarker 2006 can also be configured to consider only a portion of the information received at 2402 when determining a benchmark, and/or can request information for a subset of businesses at 2402 . As one example, instead of determining an industry average at 2404 , benchmarker 2006 can consider the information pertaining to only those businesses having reputation scores in the top 20% of the industry being benchmarked. In some embodiments, multiple benchmarks are considered (e.g., in process 2100 ) when making determinations. For example, both an industry average benchmark, and a “top 20%” benchmark can be considered (e.g., by being averaged themselves) when determining a target distribution for a business.
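  • A simplified sketch of the averaging performed at 2404, including the “top 20%” variant, is shown below. The field names (score, volume, distribution) and the sample dealer records are assumptions for illustration only.
      # Sketch: derive an industry benchmark from per-business summaries (2402).
      from statistics import mean

      def benchmark(businesses, top_fraction=None):
          """Average score, volume, and per-site distribution; optionally use only
          the highest-scoring fraction of businesses."""
          pool = sorted(businesses, key=lambda b: b["score"], reverse=True)
          if top_fraction:
              pool = pool[:max(1, int(len(pool) * top_fraction))]
          sites = {s for b in pool for s in b["distribution"]}
          return {"score": mean(b["score"] for b in pool),
                  "volume": mean(b["volume"] for b in pool),
                  "distribution": {s: mean(b["distribution"].get(s, 0.0) for b in pool)
                                   for s in sites}}

      dealers = [{"score": 810, "volume": 120, "distribution": {"site_110": 0.5, "site_114": 0.5}},
                 {"score": 650, "volume": 60,  "distribution": {"site_110": 0.8, "site_114": 0.2}}]
      industry_average = benchmark(dealers)
      top_20_percent = benchmark(dealers, top_fraction=0.2)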
  • additional processing is performed at 2404 and/or occurs after 2404 .
  • as one example, a global importance of a review site (e.g., its Page Rank or Alexa Rank) can be taken into account, for example to weight the contribution of reviews appearing on that site.
  • the industry benchmarked during process 2400 is segmented and multiple benchmarks are determined (e.g., one benchmark for each segment, along with an industry-wide benchmark).
  • benchmarks are determined for various geographic sub-regions.
  • One reason for performing regional benchmarking is that different populations of people may rely on different review websites for review information. For example, individuals on the West Coast may rely heavily on site 112 for reviews of restaurants, while individuals in the Midwest may rely heavily on a different site. In order to improve its reputation score, a restaurant located in Ohio will likely benefit from a review distribution that more closely resembles that of other Midwestern restaurants than a nationwide average distribution.
  • FIG. 25 illustrates an embodiment of a process for recommending potential reviewers.
  • process 2500 is performed by review request engine 2002 .
  • the process begins at 2502 when a list of potential reviewers is received.
  • the list can be received in a variety of ways.
  • a list of potential reviewers can be received at 2502 in response to, or in conjunction with, the processing performed at 2106 .
  • a business, such as a car dealership, can periodically provide platform 102 with a list of new customers (i.e., those people who have recently purchased cars), including those customers' email addresses (at 2502).
  • a business can provide to platform 102 a comprehensive list of all known customers (e.g., those subscribed to the business's email newsletters and/or gleaned from past transactions).
  • customer email addresses are stored in database 214 ( 2008 ), and a list of reviewers is received at 2502 in response to a query of database 214 being performed.
  • a variety of techniques can be used to make this determination.
  • all potential reviewers received at 2502 could be targeted (e.g., because the list received at 2502 includes an instruction that all members be targeted).
  • any members of the list received at 2502 that have Google email addresses (i.e., @gmail.com addresses) can be selected at 2504, e.g., where additional reviews on Google Places are desired.
  • One reason for such a selection is that the individuals with @gmail.com addresses will be more likely to write reviews on Google Places (because they already have accounts with Google).
  • a similar determination can be made at 2504 with respect to other domains, such as by selecting individuals with @yahoo.com addresses when additional reviews on Yahoo! Local are recommended.
  • Whether or not an individual has already registered with a review site can also be determined (and therefore used at 2504 ) in other ways as well.
  • some review sites may provide an API that allows platform 102 to confirm whether an individual with a particular email address has an account with that review site.
  • the API might return a “yes” or “no” response, and may also return a user identifier if applicable (e.g., responding with “CoolGuy22” when presented with a particular individual's email address).
  • a third party service may supply mappings between email addresses and review site accounts to platform 102 .
  • the automobile dealer could ask the purchaser for a list of review sites on which the purchaser has accounts, and/or can present the customer with a list of review sites and ask the customer to indicate which, if any, the customer is registered with.
  • any review site accounts/identifiers determined to be associated with the customer are stored in database 214 in a profile for the individual.
  • Other information pertinent to the individual can also be included in the profile, such as the number of reviews the user has written across various review sites, the average rating per review, and verticals (e.g., health or restaurants) associated with those reviews.
  • Additional/alternate processing is performed at 2504 in various embodiments.
  • database 214 can be queried for information pertaining to each of the potential reviewers received at 2502 and an analysis can be performed on the results.
  • Individuals with no histories and/or with any negative aspects to their review histories can be removed from consideration, as applicable.
  • an examination of the potential reviewer (e.g., an analysis of his or her existing reviews) can be performed as part of the determination made at 2504.
  • reviewer evaluations are performed asynchronously, and previously-performed assessments (e.g., stored in database 214 ) are used in evaluating potential reviewers at 2504 .
  • review request engine 2002 is configured to predict a likelihood that a potential reviewer will author a review and to determine a number of reviews to request to arrive at a target number of reviews. For example, suppose a company would benefit from an additional five reviews on site 110 and that there is a 25% chance that any reviewer requested will follow through with a review. In some embodiments, engine 2002 determines that twenty requests should be sent (i.e., to twenty individuals selected from the list received at 2502 ). Further, various thresholding rules can be employed by platform 102 when performing the determination at 2504 . For example, a determination may have been made (e.g., as an outcome of process 2100 ) that a business would benefit from fifty additional reviews being posted to site 110 .
  • site 110 employs anti-gaming features to identify and neutralize excessive/suspicious reviews.
  • in such a scenario, platform 102 determines limits on the number of requests to be made and/or throttles the rate at which they should be made at 2504.
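  • The arithmetic in the example above (five needed reviews at a 25% follow-through rate yielding twenty requests), together with a hypothetical per-week cap used for throttling, can be sketched as follows. The weekly cap of ten is an invented threshold, not a value used by platform 102 or by site 110.
      # Sketch: convert a review target into a request count, and spread the
      # requests over weeks to stay under a (hypothetical) anti-gaming cap.
      import math

      def requests_to_send(reviews_needed, follow_through_rate, weekly_cap=10):
          total = math.ceil(reviews_needed / follow_through_rate)
          weeks = math.ceil(total / weekly_cap)
          return total, weeks

      print(requests_to_send(5, 0.25))    # (20, 2): twenty requests over two weeks
      print(requests_to_send(50, 0.25))   # (200, 20)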
  • At 2506, transmission of a review request to a potential reviewer is facilitated.
  • the processing of 2506 can be performed in a variety of ways.
  • all potential reviewers determined at 2504 can be emailed identical review request messages by platform 102 , in accordance with a template 2010 stored on platform 102 .
  • Information such as the name of the business to be reviewed, and the identity of each potential reviewer is obtained from database 214 and used to fill in appropriate fields of the template.
  • different potential reviewers of a given business receive different messages from platform 102 .
  • the message can include a specific reference to one or more particular review site(s), e.g., where the particular reviewer has an account.
  • the request can include a region such as region 1804 as depicted in FIG. 18 .
  • the ordering of the sites can be based on factors such as the concentration of new reviews needed to maximize a business's score increase, and/or factors such as where the potential reviewer already has an account and/or is otherwise most likely to complete a review.
  • where statistical information is known about the potential reviewer (e.g., stored in database 214 is information that the reviewer typically writes reviews in the evening or in the morning), that information can be used in conjunction with facilitating the transmission of the review request (e.g., such that the request is sent at the time of day most likely to result in the recipient writing a review).
  • where statistical information is not known about the specific potential reviewer, statistical information known about other individuals can be used for decision-making.
  • Different potential reviewers can also be provided messages in different formats. For example, some reviewers can be provided with review request messages via email, while other reviewers can be provided with review requests via social networking websites, via postal mail, or other appropriate contact methods.
  • A/B testing is employed by platform 102 in message transmission. For example, a small number of requests can be sent—some at one time of day and the others at a different time of day (or sent on different days of week, or with different messaging).
  • Follow-up engine 2004 can be configured to determine, after a period of time (e.g., 24 hours) how many of the targeted reviewers authored reviews, and to use that information as feedback in generating messages for additional potential reviewers.
  • Other information pertaining to the message transmission (and its reception) can also be tracked. For example, message opens and message click throughs (and their timing) can be tracked and stored in database 214 ( 2012 ).
  • FIG. 26 illustrates an embodiment of a process for determining a follow-up action.
  • process 2600 is performed by platform 102 .
  • the process begins at 2602 when a transmission of a review request is facilitated.
  • portion 2506 of process 2500 , and portion 2602 of process 2600 are the same.
  • portion 2604 of process 2600 is performed by follow-up engine 2004 .
  • when an initial review request is sent (e.g., at 2506), information ( 2012 ) associated with that request is stored in database 214 for use by follow-up engine 2004.
  • follow-up engine 2004 periodically monitors appropriate review sites to determine whether the potential reviewer has created a review.
  • If engine 2004 determines that a review was authored, in some embodiments, no additional processing is performed by follow-up engine 2004 (e.g., beyond noting that a review has been created and collecting statistical information about the review, such as the location of the review, and whether the review is positive or negative). In other embodiments, platform 102 takes additional actions, such as by sending the reviewer a thank you email. In the event it is determined that no review has been created ( 2604 ), follow-up engine 2004 determines a follow-up action to take regarding the review request.
  • follow-up engine 2004 can determine, from information 2012 (or any other appropriate source), whether the potential reviewer opened the review request email. The follow-up engine can also determine whether the potential reviewer clicked on any links included in the email.
  • follow-up engine 2004 can select different follow-up actions based on these determinations. For example, if the potential reviewer did not open the email, one appropriate follow-up action is to send a second request, with a different subject line (i.e., in the hopes the potential reviewer will now open the message). If the potential reviewer opened the email, but didn't click on any links, an alternate message can be included in a follow-up request.
  • follow-up engine 2004 can select another appropriate action as applicable, such as by featuring a different review site, or altering the message included in the request.
  • Another example of a follow-up action includes contacting the potential reviewer using a different contact method than the originally employed one. For example, where a request was originally sent to a given potential reviewer via email, follow-up engine 2004 can determine that a follow-up request be sent to the potential reviewer via a social network, or via a physical postcard.
  • Another example of a follow-up action includes contacting the potential reviewer at a different time of day than was employed in the original request (e.g., if the request was originally sent in the morning, send a follow-up request in the evening).
  • follow-up engine 2004 is configured to determine a follow-up schedule. For example, based on historical information (whether about the potential reviewer, or based on information pertaining to other reviewers), follow-up engine 2004 may determine that a reminder request (asking that the potential reviewer write a review) should be sent on a particular date and/or at a particular time to increase the likelihood of a review being authored by the potential reviewer.
  • follow-up engine can also determine other scheduling optimizations, such as how many total times requests should be made before being abandoned, and/or what the conditions are for ceasing to ask the potential reviewer for a review.
  • A/B testing is employed (e.g., with respect to a few potential reviewers that did not write reviews) by follow-up engine 2004 to optimize follow-up actions.
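  • The branching described above (review written; message opened; link clicked) can be summarized by a small decision function. The tracking field names and the action labels below are illustrative; the actual request information (2012) resides in database 214.
      # Sketch: choose a follow-up action from tracked request information.

      def follow_up_action(request_info):
          if request_info.get("review_written"):
              return "record review statistics; optionally send a thank-you"
          if not request_info.get("opened"):
              return "resend with a different subject line"
          if not request_info.get("clicked"):
              return "resend with an alternate message body"
          # Opened and clicked, but no review was written.
          return "feature a different review site, contact method, or time of day"

      print(follow_up_action({"opened": True, "clicked": False}))
      # resend with an alternate message body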
  • FIG. 27 illustrates a portion of an interface as rendered in a browser.
  • interface 2700 provides feedback (e.g., to a business owner) regarding two six-week periods of a review request campaign that includes follow-up.
  • the current campaign has led to approximately twice as many “click throughs” ( 2702 ) while not resulting in any additional “opt-outs” ( 2704 ). Further, the current campaign has resulted in nearly triple the number of reviews ( 2706 ) being written.
  • FIG. 28 illustrates an embodiment of a process for stimulating reviews.
  • process 2800 is performed on a device (e.g., one having interface 2900 ).
  • the process begins at 2802 when a user is prompted to provide a review at a point of sale.
  • businesses make available devices that visitors can use to provide feedback while they are at the business. For example, a visitor can be handed a tablet and asked for feedback prior to leaving.
  • a kiosk can be placed on premise and visitors can be asked to visit and interact with the kiosk.
  • Illustrated in FIG. 29 is an example of an interface 2900 for such devices.
  • the visitor is asked to provide a rating.
  • the visitor is asked to provide additional feedback.
  • the visitor is asked to provide an email address and other information, such as the purpose of the visitor's visit.
  • the visitor is offered an incentive for completing the review (but is not required to provide a specific type of review (e.g., positive review)).
  • the user is asked to click button 2910 to submit the review.
  • the device receives the review data (at 2804 of process 2800 ).
  • the device transmits the visitor's review data to platform 102 .
  • platform 102 is configured to evaluate the review data. If the review data indicates that the visitor is unhappy (e.g., a score of one or two), a remedial action can be taken, potentially while the visitor is still in the store. For example, a manager can be alerted that the visitor is unhappy and can attempt to make amends in person. As another example, the manager can write to the visitor as soon as possible, potentially helping resolve/defuse the visitor's negativity prior to the visitor reaching a computer (e.g., at home or at work) and submitting a negative review to site 112. In various embodiments, platform 102 is configured to accept business-specific rules regarding process 2800.
  • a representative of a business can specify that, for that business, “negative” is a score of one through three (i.e., including neutral reviews) or that a “positive” is a score of 4.5 or better.
  • the business can also specify which actions should be taken—e.g., by having a manager alerted to positive reviews (not just negative reviews).
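  • A business-specific rule set of the kind described above might be expressed as simple thresholds. In the sketch below, the default thresholds follow the examples given (a “negative” is a score of one through three; a “positive” is 4.5 or better); the action strings are illustrative.
      # Sketch: classify an on-premise review score using business-specific
      # thresholds and suggest an action. Thresholds/actions are illustrative.

      def classify_and_act(score, negative_max=3.0, positive_min=4.5):
          if score <= negative_max:
              return "negative", "alert manager now (visitor may still be on site)"
          if score >= positive_min:
              return "positive", "alert manager; invite visitor to post the review"
          return "neutral", "no immediate action"

      print(classify_and_act(2))     # ('negative', 'alert manager now ...')
      print(classify_and_act(4.5))   # ('positive', 'alert manager; invite ...')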
  • platform 102 can automatically contact the visitor (via the visitor's self-supplied email address), provide a copy of the visitor's review information (supplied via interface 2900 ), and ask that the visitor post the review to a site such as site 110 or site 112 .
  • platform 102 can instruct the device to ask the visitor for permission to post the review on the visitor's behalf.
  • the device, and/or platform 102 can facilitate the posting (e.g., by obtaining the user's credentials for a period of time).
  • techniques described herein are used to identify products, services, or other aspects of a business that reviewers perceive positively or negatively. These perceptions are also referred to herein as “themes.” One example of a theme is “rude.” Another example of a theme is “salty fries.”
  • FIG. 30 illustrates an example of an interface as rendered in a browser.
  • interface 3000 is an embodiment of a dashboard display (e.g., displayed to Alice when she clicks on link 3002 ).
  • a variety of techniques can be used to determine themes that are common across reviews, as well as their sentiment (e.g., positive, negative, or neutral).
  • system 102 is configured to use a rating accompanying a review when assigning sentiment, rather than (or in addition to) an underlying connotation of a term.
  • the phrase, “sales tactics” might carry a negative (or neutral) connotation in typical conversational use. If an author of a five (out of five) star review uses the expression, however, the author is likely indicating that “sales tactics” were a positive aspect of the author's experience with the business being reviewed.
  • the term, “rude,” has a negative connotation in typical conversational use. Its presence in a five star review can indicate that rudeness at a given establishment is not a problem.
  • the term, “cheap,” can have a positive or neutral connotation (e.g., indicating something is inexpensive) but can also have a negative connotation (e.g., “cheap meat” or “cheap quality”).
  • a rating accompanying a review can be used to determine whether “cheap” is being used as a pejorative term.
  • the phrase, “New Mexico is not known for its sushi,” would typically be considered to express a negative sentiment (e.g., when analyzed using traditional sentiment analysis techniques). Where the phrase appears in a 5 star review, however, the author is likely expressing delight at having found a good sushi restaurant in New Mexico. Using the techniques described herein, the review author's sentiment (positive) will accurately be reflected in determining sentiment for a theme, such as “food” for the sushi restaurant being reviewed.
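  • The distinction between connotation-based and rating-based sentiment can be illustrated as follows. The connotation lexicon in the sketch is invented for contrast; the star-to-sentiment mapping (4-5 positive, 3 neutral, 1-2 negative) mirrors the bucketing used elsewhere in this description.
      # Sketch: assign keyword sentiment from the review's star rating rather
      # than from the keyword's usual connotation.

      CONNOTATION = {"rude": "negative", "cheap": "negative", "sales tactics": "negative"}

      def sentiment_from_rating(stars):
          if stars >= 4:
              return "positive"
          if stars == 3:
              return "neutral"
          return "negative"

      review = {"text": "No rude staff, great sales tactics", "stars": 5}
      for term in ("rude", "sales tactics"):
          print(term, CONNOTATION[term], "->", sentiment_from_rating(review["stars"]))
      # Both terms are treated as positive because they appear in a 5-star review.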
  • each of the headings included in region 3036 is an example of a theme (e.g., “Environment” and “Speed”).
  • themes are the most common terms with respect to a given category (e.g., with “Knowledgeable” and “Rude” being examples of themes in the category of customer service).
  • both the keywords, and any parents of the keywords in a hierarchy are considered to be themes—with some themes being more specific (e.g., “dirty floor”) than others (e.g. “cleanliness”).
  • As indicated in region 3004, across all of the 2,000 ACME stores in the United States, the staff at ACME is perceived positively as being nice ( 3006 ), knowledgeable ( 3008 ), and providing a good returns process ( 3010 ). The areas in which ACME is perceived most negatively (with respect to customer service) are that the staff is rude ( 3012 ), the checkout process has issues ( 3014 ), and that the employees are too busy ( 3016 ).
  • the positive and negative terms listed in region 3004 are examples of themes having their indicated respective sentiments.
  • the types of themes that are presented in interface 3000 are pre-selected—whether based on a template, based on the selections of an administrator, or otherwise selected, such as based on the industry of the reviewed entity.
  • a car dealership for example, can be evaluated with respect to “parts department” oriented themes, while a restaurant can be evaluated with respect to “food” oriented themes (without evaluating the restaurant with respect to parts or the dealership with respect to food).
  • Both types of business can be evaluated with respect to common business elements (e.g., “cleanliness” and/or “value”).
  • Alice can customize which types of themes are presented in interface 3000 .
  • which themes are presented in interface 3000 depends, at least in part, on the review information associated with the entity. For example, as will be described in more detail below, themes can be organized into hierarchies. Those themes in the hierarchy that are more prevalent in reviews can be surfaced automatically in addition to/instead of being included (e.g., in region 3036 ) by default.
  • Interface 3000 depicts, in region 3022 , the top rated states (with respect to customer service) and the most common positive ( 3024 ) and negative ( 3026 ) terms that appear in their respective reviews. If Alice clicks on icon 3038 , the bottom ranked states (and their terms) will be displayed first.
  • Map 3028 depicts, based on color, whether the stores in a given state are viewed, with respect to customer service, positively (e.g., 3030 ), negatively (e.g., 3032 ), or neutrally (e.g., 3034 ).
  • In the interface illustrated in FIG. 31, region 3102 depicts summary information with respect to overall perception ( 3104 ), and perception within six specific areas ( 3106 ).
  • region 3104 shows that ACME's California stores are ranked 39th in the country, and that overall, the most positive aspects of the California stores are that shopping at them is fast and convenient, and that the stores have a good selection. Overall, the most negative aspects of the California stores are that employees are rude, shoppers are kept waiting, and the stores are dirty.
  • In region 3108, the highest ranked stores in California are listed, along with their respective most prevalent positive and negative terms. If Alice clicks on icon 3110, the worst ranked stores will be listed first. Alice can see the individual reviews mentioning a given term, for a given store, by clicking on the term shown in region 3108. As one example, suppose Alice would like to see the reviews that mentioned ACME's “friendly” clerks at the store located on Highway 1. She clicks on region 3112 and is presented with the popup displayed in interface 3200 in FIG. 32.
  • a total of 21 reviews of the ACME store located at 140 Highway 1 in California contain the word “friendly.”
  • the reviews are sorted in reverse date order, and the term, “friendly,” is highlighted in each review (e.g., at 3202 and 3204 ).
  • a given store may have an employee (e.g., “Jeff”) who is mentioned multiple times in reviews.
  • keywords such as “Jeff” will surface as themes. Where the theme has a positive sentiment, this can indicate that Jeff is a great employee. Where the theme has a negative sentiment, this can indicate that Jeff is a problematic employee.
  • smoothing techniques can be applied so that where a company has received only a handful of reviews about Jeff, he will not surface as a “theme.”
  • a review of a hotel or an apartment that includes the word “cockroach” is highly likely to be expressing negative sentiment. Typical people only think about/mention cockroaches when they have had a negative experience.
  • the mere presence of the term “cockroach” does not necessarily mean that the reviewer is authoring a negative review.
  • the author might be commenting favorably on how the hotel manager or landlord has managed the presence of such creatures.
  • FIG. 33 illustrates an alternate example of a popup display of reviews including a term.
  • interface 3300 shows, to an administrator of a car dealership franchise's account on platform 102 , reviews at various locations that include the term, “tactic.” As indicated by the star ratings accompanying the reviews, the term, “tactic” is present in both positive (e.g., 3302 ) and negative (e.g., 3304 ) reviews.
  • Interface 3400 (FIG. 34) displays, for each ACME store, numerical indications of each store's average rating with respect to each theme (or category of themes, as applicable). If Alice clicks on tab 3042 of interface 3400, she will see ACME's data compared against the data of competitor convenience stores. In various embodiments, Alice can specify what types of competitor data should be shown. For example, Alice can compare ACME's ratings with respect to given themes against industry averages and/or against specific competitors. This can be particularly insightful in certain industries, such as telephone carriers, or airlines, where people frequently write reviews only when they are upset.
  • Themes of “broken charger” or “lost baggage” are likely to be surfaced, with negative sentiment, for any business in the industry. Being able to determine whether the number of complaints/severity of negative sentiment pertaining to baggage handling is higher or lower than as compared to complaints made about competitors may be more useful to a representative of a company than merely knowing that people are unhappy about a given aspect.
  • Alice can specify location constraints on the competitor information—such as by specifying that she would like to compare all ACME stores against competitor stores in Denver. She can also specify that she would like to compare ACME California stores against the industry average in California (or the industry average in Texas).
  • additional tabs are included in interface 3400 , for example, ones allowing Alice to compare ACME stores against one another (e.g., based on geography) and also to compare the same stores over time (e.g., determining what the most positively and negatively perceived themes were in one year vs. another for a store, a group of stores, and/or competitor/industry information).
  • Interface 3500 (FIG. 35) displays, for a specific ACME store that Alice clicks on, the top positive terms and negative terms for the store (across each of the themes), associated reviews, and scores. Additional information is also presented, such as the store's rank across all other ACME stores ( 3502 ).
  • FIG. 36 illustrates an embodiment of a process for assigning sentiment to themes.
  • process 3600 is performed by theme engine 434 .
  • For clarity, the processing of a single review will be described. However, portions of process 3600 can be repeated with respect to several, or all, reviews of an entity, whether in parallel or in sequence.
  • the process begins at 3602 when reputation data is received.
  • a review having text and an accompanying score is received at 3602 .
  • One example of review text is, “The toiletries are the best thing at Smurfson Hotels,” with a score provided by the author of the review of 5.
  • reputation data is received by system 102 in conjunction with the processing performed at 506 in process 500 .
  • process 3600 is performed when/as data is ingested into system 102 .
  • process 3600 is performed asynchronously to process 500 .
  • process 3600 can be performed nightly, weekly, or in response to an arbitrary triggering event (examples of which are described above in conjunction with discussion of FIGS. 4 and 5 ).
  • a determination of one or more keywords is made, using the review's text.
  • a variety of techniques can be used to make the determination at 3604 .
  • every word in the review (i.e., “The,” “toiletries,” “are,” . . . ) can be treated as a keyword.
  • varying amounts of natural language processing can be employed. For example, articles or other parts of speech can be skipped, only those words that are nouns and adjectives can be extracted as keywords, stemming/normalization can be applied, etc. Additional detail regarding the use of NLP in various embodiments is described in more detail below.
  • ontologies 436 are used in determining keywords at 3604 .
  • Ontologies can be created by an administrator, obtained from a third party (e.g., a parts listing), and/or can be at least partially automatically generated from existing review data (e.g., by performing term frequency analysis, NLP, etc.).
  • users of system 102 can customize/supplement the ontologies used. For example, if a particular business offers trademarked products for sale, those trademarked goods can be included in an ontology associated with that business.
  • a master set of terms can be used (e.g., for all/major business types), and refinement sets combined with the master set as applicable (e.g., refinements for hotels; refinements for restaurants). In some cases, such refinements may be added to the master set(s) and used for processing reviews. In other cases, some refinements may override portions of the master set(s).
  • blacklists (whether global, industry specific, or specific to a given company) can be used to exclude certain terms from consideration as keywords at 3604 . Examples of excerpts of ontologies are depicted in FIGS. 37A and 37B .
  • FIG. 37A is an excerpt of an ontology for use in processing reviews of medical practices.
  • the ontology includes substitutions (e.g., synonyms and typo corrections), and is hierarchical. For example, if a reviewer uses the term “physician,” “doc,” “MD,” or “docktor,” in a review ( 3702 ), theme engine 434 will substitute the term, “doctor” in its processing (i.e., as if the author had used the term, doctor). Substitutions are indicated in FIG. 37A as pairs where the right item appears in lowercase. In the case of an ontology for a car dealer, terms such as “car,” “cars,” “automobile,” “automobiles,” and “autos,” could similarly be collapsed.
  • any reviews pertaining to “PARKING,” “BATHROOM,” or “LOBBY,” pertain (more generally) to the “ENVIRONMENT” of a medical practice.
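  • A minimal sketch of applying such an ontology is shown below: substitutions normalize synonyms and typos, and a hierarchy walk collects parent themes. Only a few entries from FIG. 37A are reproduced; the “staff” parent for “doctor” is invented for illustration.
      # Sketch: normalize a review term via substitutions and collect its parent
      # themes from a hierarchy. Entries are a tiny, partly invented excerpt.

      SUBSTITUTIONS = {"physician": "doctor", "doc": "doctor", "md": "doctor",
                       "docktor": "doctor"}
      PARENTS = {"parking": "environment", "bathroom": "environment",
                 "lobby": "environment", "doctor": "staff"}

      def themes_for(term):
          term = SUBSTITUTIONS.get(term.lower(), term.lower())
          themes = [term]
          while term in PARENTS:            # walk up the hierarchy
              term = PARENTS[term]
              themes.append(term)
          return themes

      print(themes_for("docktor"))   # ['doctor', 'staff']
      print(themes_for("PARKING"))   # ['parking', 'environment']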
  • FIG. 37B is an excerpt of an ontology for use in processing reviews of a specific restaurant.
  • Some of the terms associated with the “FOOD” category are common ingredients, such as “mayo” ( 3708 ) and “pickle” ( 3710 ).
  • Other entries are generic names for menu items such as “apple pie” ( 3712 ) and yet other entries are trademarked names for items unique to the specific restaurant, such as “BlueCool” and “SpiffBurger” ( 3714 ).
  • Yet other “FOOD” words are not nouns, but are instead adjectives that reflect how people perceive food, such as that it is “bland,” “burnt,” “salty,” and “watery” ( 3716 ). The remaining examples of “FOOD” words shown in FIG.
  • The ontologies depicted in FIGS. 37A and 37B are example excerpts.
  • ontologies can include significantly more terms.
  • an ontology for use with car repair businesses could include, by name, every part of a car (e.g., to help analyze reviews referring to specific parts, such as “my gasket broke,” or “I needed a replacement carburetor”).
  • the same term can be differently associated with different themes, such as based on industry usage.
  • “patient” in the ontology of FIG. 37A ( 3730 ) is placed in a “PATIENT” hierarchy—referring to the customer of a doctor.
  • “Patient” in the ontology of FIG. 37B ( 3728 ) is placed in the “SERVICE” hierarchy—referring to the patience of staff (or the patience of patrons).
  • At 3606, a sentiment is assigned to one or more themes associated with the keywords, based at least in part on the review score.
  • a variety of techniques can be used to assign sentiment. One example is discussed in conjunction with FIG. 38 .
  • FIG. 38 illustrates an example of sentiment being assigned to themes based on three reviews.
  • the ontology shown in FIG. 37B is used to identify keywords in the reviews (i.e., the processing of 3604 ).
  • Those terms appearing in the ontology have been underlined in FIG. 38 .
  • Attached to each underlined term is a pair of terms and values.
  • the term, “SpiffBurger” was located in review A.
  • Review A is a 3 star review.
  • the term, “SpiffBurger,” is assigned 3 stars, as is the “FOOD” category to which it belongs.
  • The term, “pickles,” is also assigned 3 stars, as is the “FOOD” category to which it belongs.
  • each term included in the review that is also in the ontology shown in FIG. 37B is assigned a value that corresponds to the overall review rating provided by the author of the review (i.e., “3 stars,” or “neutral”).
  • any parents/grandparents in the hierarchy (i.e., “FOOD”) of those terms are also assigned the overall review rating (i.e., for Review A, “FOOD” receives a value of “3 stars” or “neutral”).
  • Review B is a 2 star review.
  • terms associated with ENVIRONMENT are present.
  • Each of the underlined terms is assigned a value that corresponds to the overall review rating provided by the author of the review (i.e., “2 stars” or “negative”). Further, “FOOD” and “ENVIRONMENT” are also assigned a score of 2.
  • Review C is a 5 star review.
  • terms associated with VALUE and SERVICE are present.
  • Each of the underlined terms, and those categories to which the terms belong, are assigned a value of 5.
  • the reviewed “SpiffBurger” was not to the reviewer's liking. However, it (and FOOD) received a score of “5 stars” (or “positive”) because the overall review was a 5.
  • a variety of techniques can be used to assign sentiment to themes ( 3606 ).
  • the point value assigned to each term (e.g., “SpiffBurger”) and to any parents of the term (e.g., “FOOD”) could be summed and then subjected to additional processing, such as normalization and/or the application of thresholds.
  • The term, “apple pie,” would have a (negative) sentiment score of 2 (2 points awarded from the second review, a single review).
  • the term, “pickles,” would have a (neutral) score of 3: (3 points awarded from the first review (a single review)). Since the terms “apple pie” and “pickles” only appear in single review, respectively, in some embodiments those terms are excluded from being considered “themes,” because an insufficient number of reviewers have seen fit to comment on them.
  • the score for the concept, FOOD, can also be determined in a variety of ways. As one example, because two distinct food items are mentioned in the first review, the value for FOOD could be counted twice (i.e., (3+3 (for review A) + 2+2 (for review B) + 5 (for review C))/5 mentions). As another example, multiple mentions within a single review of a term (or its parent categories, by extension) could be collapsed into a single instance. In this scenario, FOOD would receive a total raw score of (3+2+5)/3.
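  • Both aggregation choices can be sketched in a few lines of Python. The ontology excerpt, substring matching, and sample reviews below are simplifications for illustration; the collapse_per_review flag selects between counting every mention and counting a theme once per review.
      # Sketch: assign each matched term (and its parent theme) the review's
      # overall rating, then average across reviews.
      from collections import defaultdict

      ONTOLOGY = {"spiffburger": "FOOD", "pickles": "FOOD", "apple pie": "FOOD",
                  "salty": "FOOD", "dirty": "ENVIRONMENT", "friendly": "SERVICE"}

      def theme_scores(reviews, collapse_per_review=True):
          scores = defaultdict(list)
          for text, stars in reviews:
              text = text.lower()
              hit_themes = set()
              for term, theme in ONTOLOGY.items():
                  if term in text:
                      scores[term].append(stars)
                      if collapse_per_review:
                          hit_themes.add(theme)        # one vote per theme per review
                      else:
                          scores[theme].append(stars)  # one vote per mention
              for theme in hit_themes:
                  scores[theme].append(stars)
          return {k: sum(v) / len(v) for k, v in scores.items()}

      reviews = [("The SpiffBurger and pickles were fine", 3),
                 ("Apple pie was salty and the floor was dirty", 2),
                 ("Friendly staff, though my SpiffBurger was salty", 5)]
      print(round(theme_scores(reviews)["FOOD"], 2))   # 3.33, i.e., (3 + 2 + 5) / 3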
  • FIG. 39 illustrates an example of a process for assigning a sentiment to a theme. In particular, process 3900 can be used to assign a sentiment to the theme, FOOD, based on the presence of keywords such as “SpiffBurger” and “salty” across multiple reviews.
  • a variety of alternate and/or more sophisticated scoring approaches can also be used to assign sentiment to themes at 3606 .
  • every keyword extracted from a set of reviews (e.g., per 3604) can be given a “Positivity” score based on the number of “Pos”itive (4 or 5 stars), “Neut”ral (3 stars), and “Neg”ative (1 or 2 stars) reviews in which it appears, as follows: Positivity = (5 + Pos + 0.5*Neut)/(10 + Pos + Neut + Neg).
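  • The formula translates directly into code; applying it to the “FOOD” counts from FIG. 41A reproduces the 0.45 score reported below.
      # The positivity formula given above.

      def positivity(pos, neut, neg):
          """Positivity = (5 + Pos + 0.5*Neut) / (10 + Pos + Neut + Neg)."""
          return (5 + pos + 0.5 * neut) / (10 + pos + neut + neg)

      print(round(positivity(212, 134, 282), 2))   # 0.45 ("FOOD" in FIG. 41A)
      print(round(positivity(1, 3, 38), 2))        # 0.14 ("management" in FIG. 41B)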
  • FIGS. 41A-41C are portions of tables of themes and scores for an example restaurant.
  • the first column in each table lists keyword/parent categorizations (e.g., obtained at 3604 for all reviews of the restaurant).
  • the second column of each table lists the number of positive reviews in which the term (or its child) appears.
  • the third column of each table lists the number of neutral reviews in which the term (or its child) appears.
  • the fourth column of each table lists the number of negative reviews in which the term (or its child) appears.
  • the fifth column of each table lists the total number of reviews in which the term (or its child) appears.
  • the final column is a positivity calculation for the term (e.g., in accordance with the formula given above or other appropriate techniques).
  • FIG. 41A lists the most common themes across all reviews of the restaurant, irrespective of sentiment. The table is sorted on column five. Terms related to “FOOD” ( 4102 ) were the most prevalent (present in a total of 628 reviews: 212 positive, 134 neutral, and 282 negative). “FOOD” has a positivity score of 0.45.
  • FIG. 41B lists the most prevalent negative themes in reviews, as sorted by positivity score.
  • the most notorious aspect of the restaurant is its “management,” ( 4104 ) which appears in a single positive review, three neutral reviews, and thirty-eight negative reviews.
  • the next most notorious aspect of the restaurant is the rudeness of its employees ( 4106 ).
  • FIG. 41C lists the most prevalent positive themes in reviews, as sorted by positivity score. Reviewers like the restaurant's “Tuesday” offerings the most ( 4108 ), followed by the beers the restaurant has on tap ( 4110 ).
  • additional processing is performed prior to using information such as is shown in FIGS. 41A-41C as input to interfaces/reports such as are shown in FIG. 30 .
  • an administrator reviewing the table shown in FIG. 41C may decide that some of the terms, such as “yum” ( 4112 ) and “yummy” ( 4114 ), should be collapsed into a single term (e.g., “yum”) or merged with an existing term (e.g., “tasty”).
  • the administrator might also decide that certain terms aren't probative (i.e., are vacuous terms) and should be removed entirely (e.g., that “yum” and “yummy” should be ignored).
  • vacuous terms include terms such as “experience,” “day,” and “time.” Such modifications can be accomplished in a variety of ways.
  • the administrator can edit the ontology to map “yum” and “yummy” to “tasty.”
  • the administrator can also create or edit an existing blacklist to include those terms, so that they are not used as themes in the future.
  • system 102 makes available an interface that allows an end user, such as Alice, to manipulate which terms are included (e.g., in an ontology) or excluded (e.g., in a blacklist) without needing administrator privileges.
  • theme engine 434 is configured to use NLP, such as to identify themes and to perform review deduplication.
  • theme engine 434 can be configured to use the GATE modules ANNIE and OpenNLP, in conjunction with performing additional NLP processing.
  • FIG. 42 illustrates an example of a sentence included in a review.
  • the sentence, “The toiletries are the best thing at Smurfson Hotels” is processed by three NLP engines.
  • the processing performed by ANNIE is shown in region 4202 .
  • Each line represents a “token”: a unit of meaning, i.e., a word or phrase that has a single meaning.
  • “Surface” is the word exactly as it appears in the review.
  • “Lemma” is the dictionary form of the word (e.g., the singular form of a noun or the infinitive of a verb).
  • POS is the Part of Speech, from a set of tags in the Penn Treebank Tag Set.
  • Entity is the Named Entity type, which is given only to proper nouns.
  • theme engine 434 is configured to use NLP techniques to identify keywords.
  • the output of ANNIE can be used to generate a list of keywords, e.g., based on parts of speech, and used by theme engine 434 in conjunction with process 3600 or 3900 .
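  • For instance, a keyword list could be derived from POS-tagged tokens by keeping nouns and adjectives (Penn Treebank tags beginning NN or JJ). In the sketch below, the token tuples are written by hand to stand in for ANNIE output; no GATE processing is performed.
      # Sketch: keep lemmas of nouns and adjectives as keywords.

      def keywords(tokens):
          return [lemma for surface, lemma, pos in tokens
                  if pos.startswith(("NN", "JJ"))]

      tokens = [("The", "the", "DT"), ("toiletries", "toiletry", "NNS"),
                ("are", "be", "VBP"), ("the", "the", "DT"), ("best", "good", "JJS"),
                ("thing", "thing", "NN"), ("at", "at", "IN"),
                ("Smurfson", "Smurfson", "NNP"), ("Hotels", "Hotels", "NNP")]
      print(keywords(tokens))   # ['toiletry', 'good', 'thing', 'Smurfson', 'Hotels']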
  • the processing performed by OpenNLP is shown in region 4204 .
  • the “S” line represents a clause, which is a larger unit of structure that has at least a subject and a predicate, a thing doing something.
  • the remaining lines are phrases, which serve distinct roles in the clause. These are shown preceded by tags which are also from the Penn Treebank Tag Set.
  • the indentation shows the hierarchical structure by which a phrase is a component of another phrase.
  • Additional processing performed by theme engine 434 is shown in region 4206.
  • the analysis performed in region 4206 turns the OpenNLP analysis into “Subject Verb Object” structure.
  • the “Agent” is similar to the subject of a clause
  • the “Predicate” is similar to the verb
  • the “Patient” is similar to the direct object. Additional examples of processing performed on two additional sentences is shown in FIGS. 43 and 44 .
  • theme engine 434 is configured to perform deduplication on reviews (e.g., prior to determining sentiments for themes). Deduplication can be performed to minimize the ability of reviewers to spam system 102 with duplicate reviews/reviews that reuse phrases.
  • a business might seek to bolster its reputation by creating several artificial positive reviews for itself.
  • a business might also seek to discredit a competitor by creating several artificial negative reviews for the competitor.
  • Duplicate reviews may be wholesale copies of one another, or may have slight alterations, e.g. a different introduction or conclusion, but with common sentences/clauses.
  • deduplication is performed as follows. An identifier is assigned to each specific sentence and clause. One way to do this is to use a low-level Java operator that hashes each string such that any two arbitrary strings are highly unlikely to have the same resulting hashes. Each item extracted from a review is assigned a hash for the sentence from which it was derived, and, if a clause structure is successfully identified, another hash is generated for the clause.
  • Extractions from the sample sentences depicted in FIGS. 42-44 are shown in FIG. 45 .
  • review deduplication is performed when processes such as process 3600 and 3900 are performed, and/or when the data feeding reports such as are shown in interface 3000 is collected.
  • items are counted on the basis of the number of occurrences that are unique in all fields. Therefore, six extractions for NOM-Smurfson Hotels-neut with different hash codes count as six such items. If either hash code is the same for the six extractions, they will only be counted as a single item, preventing duplicate text from being counted multiple times.
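  • The counting rule can be sketched as follows. Python's built-in hash() stands in here for the Java string hash mentioned above; as with any hash, collisions are possible in principle, and the extraction labels are illustrative.
      # Sketch: count extractions only when they are unique in all fields,
      # keying on the item label plus hashes of its source sentence and clause.

      def count_unique(extractions):
          """extractions: iterable of (item, sentence_text, clause_text_or_None)."""
          seen, counts = set(), {}
          for item, sentence, clause in extractions:
              key = (item, hash(sentence), hash(clause) if clause else None)
              if key in seen:
                  continue            # duplicated text is counted only once
              seen.add(key)
              counts[item] = counts.get(item, 0) + 1
          return counts

      ex = [("NOM-Smurfson Hotels-neut", "The toiletries are the best thing at Smurfson Hotels", None),
            ("NOM-Smurfson Hotels-neut", "The toiletries are the best thing at Smurfson Hotels", None),
            ("NOM-Smurfson Hotels-neut", "Great toiletries at Smurfson Hotels", None)]
      print(count_unique(ex))   # {'NOM-Smurfson Hotels-neut': 2}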

Abstract

Assigning sentiment to themes is disclosed. Reputation data extracted from at least one data source is received. The reputation data includes user-authored reviews. The user-authored reviews include text and at least one rating. For a first review included in the reputation data, at least one keyword is determined using the first review's text. A sentiment is assigned for a theme associated with the keyword based at least in part on the first review's rating.

Description

CROSS REFERENCE TO OTHER APPLICATIONS
This application claims priority to U.S. Provisional Patent Application No. 61/666,586 entitled BUSINESS REPUTATION SYSTEM filed Jun. 29, 2012 and to U.S. Provisional Patent Application No. 61/747,340 entitled REVIEW REQUEST AUTOMATION filed Dec. 30, 2012, both of which are incorporated herein by reference for all purposes.
BACKGROUND OF THE INVENTION
Businesses are increasingly concerned with their online reputations, and the reputations of their competitors. For example, both positive and negative reviews posted to a review website can impact revenue. As more review websites are created, and as more users post more content to those sites, it is becoming increasingly difficult for businesses to monitor online information.
BRIEF DESCRIPTION OF THE DRAWINGS
Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.
FIG. 1 illustrates an embodiment of an environment in which business reputation information is collected, analyzed, and presented.
FIG. 2 illustrates an example of components included in embodiments of a reputation platform.
FIG. 3 illustrates an embodiment of a process for enrolling a business with a reputation platform.
FIG. 4 illustrates an example of components included in embodiments of a reputation platform.
FIG. 5 illustrates an embodiment of a process for refreshing reputation data.
FIG. 6 illustrates an example of an interface as rendered in a browser.
FIG. 7 illustrates an example of components included in an embodiment of a reputation platform.
FIG. 8 illustrates an embodiment of a process for generating a reputation score.
FIG. 9 illustrates an example of an interface as rendered in a browser.
FIG. 10 illustrates an example of an interface as rendered in a browser.
FIG. 11 illustrates an example of an interface as rendered in a browser.
FIG. 12 illustrates a portion of an interface as rendered in a browser.
FIG. 13 illustrates a portion of an interface as rendered in a browser.
FIG. 14 illustrates an example of an interface as rendered in a browser.
FIG. 15 illustrates a portion of an interface as rendered in a browser.
FIG. 16 illustrates a portion of an interface as rendered in a browser.
FIG. 17 illustrates an example of an interface as rendered in a browser.
FIG. 18 illustrates a portion of an interface as rendered in a browser.
FIG. 19 illustrates a portion of an interface as rendered in a browser.
FIG. 20 illustrates an embodiment of a reputation platform that includes a review request engine.
FIG. 21 illustrates an embodiment of a process for targeting review placement.
FIG. 22 illustrates an example of a target distribution.
FIG. 23 illustrates an example of a target distribution.
FIG. 24 illustrates an embodiment of a process for performing an industry review benchmark.
FIG. 25 illustrates an embodiment of a process for recommending potential reviewers.
FIG. 26 illustrates an embodiment of a process for determining a follow-up action.
FIG. 27 illustrates a portion of an interface as rendered in a browser.
FIG. 28 illustrates an embodiment of a process for stimulating reviews.
FIG. 29 illustrates an example of an interface as rendered in a browser.
FIG. 30 illustrates an example of an interface as rendered in a browser.
FIG. 31 illustrates an example of an interface as rendered in a browser.
FIG. 32 illustrates an example of a popup display of reviews including a term.
FIG. 33 illustrates an alternate example of a popup display of reviews including a term.
FIG. 34 illustrates an example of an interface as rendered in a browser.
FIG. 35 illustrates an example of an interface as rendered in a browser.
FIG. 36 illustrates an embodiment of a process for assigning sentiment to themes.
FIG. 37A illustrates an embodiment of an ontology associated with medical practices.
FIG. 37B illustrates an embodiment of an ontology associated with a restaurant.
FIG. 38 illustrates an example of sentiment being assigned to themes based on three reviews.
FIG. 39 illustrates an example of a process for assigning a sentiment to a theme.
FIG. 40 is a table of example positivity calculations.
FIG. 41A is a portion of a table of themes and scores for an example restaurant.
FIG. 41B is a portion of a table of themes and scores for an example restaurant.
FIG. 41C is a portion of a table of themes and scores for an example restaurant.
FIG. 42 illustrates an example of a sentence included in a review.
FIG. 43 illustrates an example of a sentence included in a review.
FIG. 44 illustrates an example of a sentence included in a review.
FIG. 45 illustrates an example of sentence extractions used in deduplication.
DETAILED DESCRIPTION
The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
FIG. 1 illustrates an embodiment of an environment in which business reputation information is collected, analyzed, and presented. In the example shown, the user of client device 106 (hereinafter referred to as “Bob”) owns a single location juice bar (“Bob's Juice Company”). The user of client device 108 (hereinafter referred to as “Alice”) is employed by a national chain of convenience stores (“ACME Convenience Stores”). As will be described in more detail below, Bob and Alice can each access the services of reputation platform 102 (via network 104) to track the reputations of their respective businesses online. The techniques described herein can work with a variety of client devices 106-108 including, but not limited to personal computers, tablet computers, and smartphones.
Reputation platform 102 is configured to collect reputation and other data from a variety of sources, including review websites 110-114, social networking websites 120-122, and other websites 132-134. In some embodiments, users of platform 102, such as Alice and Bob, can also provide offline survey data to platform 102. In the examples described herein, review site 110 is a general purpose review site that allows users to post reviews regarding all types of businesses. Examples of such review sites include Google Places, Yahoo! Local, and Citysearch. Review site 112 is a travel-oriented review site that allows users to post reviews of hotels, restaurants, and attractions. One example of a travel-oriented review site is TripAdvisor. Review site 114 is specific to a particular type of business (e.g., car dealers). Examples of social networking sites 120 and 122 include Twitter and Foursquare. Social networking sites 120-122 allow users to take actions such as “checking in” to locations. Finally, personal blog 134 and online forum 132 are examples of other types of websites “on the open Web” that can contain business reputation information.
Platform 102 is illustrated as a single logical device in FIG. 1. In various embodiments, platform 102 is a scalable, elastic architecture and may comprise several distributed components, including components provided by one or more third parties. Further, when platform 102 is referred to as performing a task, such as storing data or processing data, it is to be understood that a sub-component or multiple sub-components of platform 102 (whether individually or in cooperation with third party components) may cooperate to perform that task.
Account/Business Setup
FIG. 2 illustrates an example of components included in embodiments of a reputation platform. In particular, FIG. 2 illustrates components of platform 102 that are used in conjunction with a business setup process.
In order to access the services provided by reputation platform 102, Bob first registers for an account with the platform. At the outset of the process, he accesses interface 202 (e.g., a web-based interface) and provides information such as a desired username and password. He also provides payment information (if applicable). If Bob has created accounts for his business on social networking sites such as sites 120 and 122, Bob can identify those accounts to platform 102 as well.
Next, Bob is prompted by platform 102 to provide the name of his business (e.g., “Bob's Juice Company”), a physical address of the juice bar (e.g., “123 N. Main St.; Cupertino, Calif. 95014”), and the type of business that he owns (e.g., “restaurant” or “juice bar”). The business information entered by Bob is provided to auto find engine 204, which is configured to locate, across sites 110-114, the respective profiles on those sites pertaining to Bob's business (e.g., “www.examplereviewsite.com/CA/Cupertino/BobsJuiceCo.html”), if present. Since Bob has indicated that his business is a juice bar, reputation platform 102 will not attempt to locate it on site 114 (a car dealer review site), but will attempt to locate it within sites 110 and 112.
In the example shown in FIG. 2, sites 110 and 114 make available respective application programming interfaces (APIs) 206 and 208 that are usable by auto find engine 204 to locate business profiles on their sites. Site 112 does not have a profile finder API. In order to locate a business profile there, auto find engine 204 is configured to perform a site-specific search using a script that accesses a search engine (e.g., through search interface 210). As one example, a query of: “site:www.examplereviewsite.com ‘Bob's Juice Company’ ‘Cupertino’” could be submitted to the Google search engine using interface 210.
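A minimal sketch of how such a site-specific query might be assembled is shown below (in Python; the function name and the generic search-interface call it would feed are illustrative assumptions, not elements of the figures):

```python
def build_profile_query(review_site_domain, business_name, city):
    """Assemble a site-restricted query of the form described above,
    e.g. site:www.examplereviewsite.com "Bob's Juice Company" "Cupertino"."""
    return f'site:{review_site_domain} "{business_name}" "{city}"'

# The resulting string could then be submitted through a search interface
# such as interface 210.
print(build_profile_query("www.examplereviewsite.com", "Bob's Juice Company", "Cupertino"))
```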
Results obtained by auto find engine 204 are provided to verification engine 212, which confirms that information such as the physical address and company name provided by Bob is present in the located profiles. Verification engine 212 can be configured to verify all results (including any obtained from sites 110 and 114), and can also be configured to verify (or otherwise process) just those results obtained via interface 210. As one example, for a given query, the first ten results obtained from search interface 210 can be examined. The result that has the best match score and also includes the expected business name and physical address is designated as the business's profile at the queried site.
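One way the best-match selection described above could look in code is sketched here (Python; the result fields "match_score", "title", "snippet", and "url" are assumed names for whatever the search interface returns):

```python
def pick_best_profile(results, business_name, address):
    """Examine the top ten results (ordered by match score) and return the
    URL of the first one that also mentions the expected business name and
    physical address; return None if no result qualifies."""
    ranked = sorted(results, key=lambda r: r["match_score"], reverse=True)[:10]
    for result in ranked:
        text = (result["title"] + " " + result["snippet"]).lower()
        if business_name.lower() in text and address.lower() in text:
            return result["url"]
    return None
```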
In some embodiments, verification engine 212 presents results to Bob for verification that the located profiles correspond to his business. As one example, Bob may be shown (via interface 202) a set of URLs corresponding to profiles on each of the sites 110-114 where his business has been located and asked to verify that the profiles are indeed for his business. Once confirmed by Bob, the URLs of the profiles (also referred to herein as “subscriptions”) and any other appropriate data are stored in database 214. Examples of such other data include overview information appearing on the business's profile page (such as a description of the business) and any social data (e.g., obtained from sites 120-122).
In various embodiments, users are given the option by platform 102 to enter the specific URLs corresponding to their business profiles on review sites. For example, if Bob knows the URL of the Google Places page corresponding to his business, he can provide it to platform 102 and use of auto find engine 204 is omitted (or reduced) as applicable.
FIG. 3 illustrates an embodiment of a process for enrolling a business with a reputation platform. In some embodiments process 300 is performed by platform 102. The process begins at 302 when a physical address of a business is received. As one example, when Bob provides the address of his business to platform 102 via interface 202, that address is received at 302. At 304, the received address is used as a query. As one example of the processing performed at 304, the received address is provided to site 110 using API 206. As another example, a site-specific query (e.g., of site 112) is submitted to a search engine via search interface 210.
At 306, results of the query (or queries) performed at 304 are verified. As one example of the processing performed at 306, verification engine 212 performs checks such as confirming that the physical address received at 302 is present in a given result. As another example, a user can be asked to confirm that results are correct, and if so, that confirmation is received as a verification at 306. Finally, at 308, verified results are stored. As one example, URLs for each of the verified profiles are stored in database 214. Although pictured as a single database in FIG. 2, in various embodiments, platform 102 makes use of multiple storage modules, such as multiple databases. Such storage modules may be of different types. For example, user account and payment information may be stored in a MySQL database, while extracted reputation information (described in more detail below) may be stored using MongoDB.
Where a business has multiple locations, the business owner (or a representative of the business, such as Alice) can be prompted to loop through process 300 for each of the business locations. Physical addresses and/or the URLs of the corresponding profiles on sites such as sites 110-114 can also be provided to platform 102 in a batch, rather than by manually entering in information via interface 202. As one example, suppose ACME Convenience Stores has 2,000 locations throughout the United States. Instead of manually entering in the physical location of each of the stores, Alice may instead elect to upload to platform 102 a spreadsheet or other file (or set of files) that includes the applicable information.
Tags associated with each location can also be provided to platform 102 (e.g., as name-value pairs). For example, Alice can tag each of the 2,000 locations with a respective store name (Store #1234), manager name (Tom Smith), region designation (West Coast), brand (ACME-Quick vs. Super-ACME), etc. As needed, tags can be edited and deleted, and new tags can be added. For example, Alice can manually edit a given location's tags (e.g., via interface 202) and can also upload a spreadsheet of current tags for all locations that supersede whatever tags are already present for her locations in platform 102. As will be described in more detail below, the tags can be used to segment the business to create custom reports and for other purposes.
Ongoing Data Collection and Processing
Once a business (e.g., Bob's Juice Company) has an account on reputation platform 102, and once the various subscriptions (i.e., the URLs of the business's profiles on the various review sites) have been identified and stored in database 214, collecting and processing of review and other data is performed. FIG. 4 illustrates an example of components included in embodiments of a reputation platform. In particular, FIG. 4 illustrates components of platform 102 that are used in conjunction with the ongoing collection and processing of data.
Reputation platform 102 includes a scheduler 402 that periodically instructs collection engine 404 to obtain data from sources such as sites 110-114. In some embodiments, data from sites 120-122, and/or 132-134 is also collected by collection engine 404. Scheduler 402 can be configured to initiate data collection based on a variety of rules. For example, it can cause data collection to occur once a day for all businesses across all applicable sites. It can also cause collection to occur with greater frequency for certain businesses (e.g., which pay for premium services) than others (e.g., which have free accounts). Further, collection can be performed across all sites (e.g., sites 110-114) with the same frequency or can be performed at different intervals (e.g., with collection performed on site 110 once per day and collection performed on site 112 once per week).
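The scheduling rules described above might be captured in a simple per-site configuration, as in the following sketch (the interval values are illustrative, not prescribed by the document):

```python
# Collection intervals in hours; premium accounts or high-priority sites
# could be given smaller values.
COLLECTION_INTERVALS = {
    "review_site_110": 24,    # daily
    "review_site_112": 168,   # weekly
}
DEFAULT_INTERVAL = 24

def due_for_refresh(site, hours_since_last_collection):
    """Return True if scheduler 402 should instruct collection engine 404
    to collect from this site again."""
    return hours_since_last_collection >= COLLECTION_INTERVALS.get(site, DEFAULT_INTERVAL)
```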
In addition to or instead of the scheduled collection of data, data collection can also be initiated based on the occurrence of an arbitrary triggering event. For example, collection can be triggered based on a login event by a user such as Bob (e.g., based on a permanent cookie or password being supplied). Collection can also be triggered based on an on-demand refresh request by the user (e.g., where Bob clicks on a “refresh my data” button in interface 202). Other elements depicted in FIG. 4 will be described in conjunction with process 500 shown in FIG. 5.
FIG. 5 illustrates an embodiment of a process for refreshing reputation data. In some embodiments process 500 is performed by platform 102. The process begins at 502 when a determination is made that a data refresh should be performed. As one example, such a determination is made at 502 by scheduler 402 based on an applicable schedule. As another example, such a determination is made at 502 when a triggering event (such as a login event by Bob) is received by platform 102.
At 504, a determination is made as to which sites should be accessed. As one example, in some embodiments collection engine 404 reviews the set of subscriptions stored in database 214 for Bob's Juice Company. The set of subscriptions associated with Bob's company are the ones that will be used by collection engine 404 during the refresh operation. As previously mentioned, a refresh can be performed on behalf of multiple (or all) businesses, instead of an individual one such as Bob's Juice Company. In such a scenario, portion 504 of the process can be omitted as applicable.
At 506, information is obtained from the sites determined at 504. As shown in FIG. 4, collection engine 404 makes use of several different types of helpers 420-428. Each helper (e.g., helper 420) is configured with instructions to fetch data from a particular type of source. As one example, although site 110 provides an API for locating business profiles, it does not make review data available via an API. Such data is instead scraped by platform 102 accordingly. In particular, when a determination is made that reviews associated with Bob's Juice Company on site 110 should be refreshed by platform 102, an instance 430 of helper 420 is executed on platform 102. Instance 430 is able to extract, for a given entry on site 110, various components such as: the reviewer's name, profile picture, review title, review text, and rating. Helper 424 is configured with instructions for scraping reviews from site 114. It is similarly able to extract the various components of an entry as posted to site 114. Site 112 has made available an API for obtaining review information and helper 422 is configured to use that API.
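A helper such as instance 430 might be structured along the following lines (a sketch only; the CSS selectors and BeautifulSoup usage are illustrative assumptions about the page markup, which the document does not specify):

```python
from bs4 import BeautifulSoup

def parse_reviews(html):
    """Extract the review components named above (reviewer name, title,
    text, and rating) from one page of review listings."""
    soup = BeautifulSoup(html, "html.parser")
    reviews = []
    for entry in soup.select(".review-entry"):          # hypothetical selector
        reviews.append({
            "reviewer": entry.select_one(".reviewer-name").get_text(strip=True),
            "title": entry.select_one(".review-title").get_text(strip=True),
            "text": entry.select_one(".review-text").get_text(strip=True),
            "rating": float(entry.select_one(".review-rating").get_text(strip=True)),
        })
    return reviews
```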
Other types of helpers can extract other types of data. As one example, helper 426 is configured to extract check-in data from social site 120 using an API provided by site 120. As yet another example, when an instance of helper 428 is executed on platform 102, a search is performed across the World Wide Web for blog, forum, or other pages that discuss Bob's Juice Company. In some embodiments, additional processing is performed on any results of such a search, such as sentiment analysis.
In various embodiments, information, obtained on behalf of a given business, is retrieved from different types of sites in accordance with different schedules. For example, while review site data might be collected hourly, or on demand, social data (collected from sites 120-122) may be collected once a day. Data may be collected from sites on the open Web (e.g., editorials, blogs, forums, and/or other sites not classified as review sites or social sites) once a week.
At 508, any new results (i.e., those not already present in database 214) are stored in database 214. As needed, the results are processed (e.g., by converting reviews into a single, canonical format) prior to being included in database 214. In various embodiments, database 214 supports heterogeneous records and such processing is omitted or modified as applicable. For example, suppose reviews posted to site 110 must include a score on a scale from one to ten, while reviews posted to site 112 must include a score on a scale from one to five. Database 214 can be configured to store both types of reviews. In some embodiments, the raw score of a review is stored in database 214, as is a converted score (e.g., in which all scores are converted to a scale of one to ten). As previously mentioned, in some embodiments, database 214 is implemented using MongoDB, which supports such heterogeneous record formats. As will be described in more detail below, in some embodiments, platform 102 includes a theme engine 434, which is configured to identify themes common across reviews.
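The conversion into a canonical record that keeps both the raw and converted score could resemble the following sketch (field names are illustrative):

```python
def to_canonical(review, site_scale):
    """Convert a review's raw score to the common ten-point scale while
    retaining the raw value, as described above (e.g., 4 out of 5 -> 8.0)."""
    raw = review["score"]
    return {
        "site": review["site"],
        "raw_score": raw,
        "converted_score": raw * 10.0 / site_scale,
        "text": review["text"],
    }

# Example: a five-point-scale review from site 112.
print(to_canonical({"site": "site_112", "score": 4, "text": "Great juice"}, site_scale=5))
```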
Prior to the first time process 500 is executed with respect to Bob's Juice Company, no review data is present in database 214. Portion 506 of the process is performed for each of the data sources applicable to Bob's business (via instances of the applicable helpers), and the collected data is stored at 508. On subsequent refreshes of data pertinent to Bob's company, only new/changed information is added to database 214. In various embodiments, alerter 432 is configured to alert Bob (e.g., via an email message) whenever process 500 (or a particular portion thereof) is performed with respect to his business. In some cases, alerts are only sent when new information is observed, and/or when reputation scores associated with Bob's business (described in more detail below) change, or change by more than a threshold amount.
Reputation Scoring
Platform 102 is configured to determine a variety of reputation scores on behalf of businesses such as Bob's Juice Company. In the case of multiple-location businesses, such as ACME, individual reputation scores are determined for each of the locations, and the scores of individual businesses can be aggregated in a variety of ways. As will be described in more detail below, the scores provide users with perspective on how their businesses are perceived online. Also as will be described in more detail below, users are able to explore the factors that contribute to their businesses' reputation scores by manipulating various interface controls, and they can also learn how to improve their scores. In the case of multi-location businesses, such as ACME, users can segment the locations in a variety of ways to gain additional insight.
FIG. 6 illustrates an example of an interface as rendered in a browser. In particular, Bob is presented with interface 600 after logging in to his account on platform 102 using a browser application on client device 106 and clicking on tab option 602.
In region 604 of interface 600, a composite reputation score (728 points) is depicted on a scale 606. Example ways of computing a composite score are described in conjunction with FIG. 7. The composite reputation score provides Bob with a quick perspective on how Bob's Juice Company is perceived online. A variety of factors can be considered in determining a composite score. Six example factors are shown in region 608, each of which is discussed below. For each factor, Bob can see tips on how to improve his score with respect to that factor by clicking on the appropriate box (e.g., box 622 for tips on improving score 610). In the example shown in FIG. 6, a recommendation box is present for each score presented in region 608. In some embodiments, such boxes are only displayed for scores that can/should be improved. For example, given that score 614 is already very high, in some embodiments, box 626 is omitted from the interface as displayed to Bob, or an alternate message is displayed, such as a general encouragement to “keep up the good work.”
Overall Score (610): This value reflects the average review score (e.g., star rating) across all reviews on all review sites. As shown, Bob's business has an average rating of 0.50 across all sites. If Bob clicks on box 622, he will be presented with a suggestion, such as the following: “Overall score is the most influential metric. It can appear in both the review site search results and in your general search engine results. Generating a larger volume of positive reviews is the best way to improve the overall score. Typically, volume is the best approach as your average, happy customer will not write a review without being asked.” Additional, personalized advice may also be provided, such as telling Bob he should click on tab 634 and request five reviews.
Timeliness (612): This score indicates how current a business's reviews are (irrespective of whether they are positive or negative). In the example shown, reviews older than two months have less of an impact than more recent reviews. Thus, if one entity has 200 reviews with an average rating of four stars, at least some of which were recently authored, and a second entity has the same volume and star rating but none of the reviews were written in the last two months, the first entity will have a higher timeliness score and thus a higher composite reputation score. If Bob clicks on box 624, he will be presented with a suggestion, such as the following: “Managing your online reviews is not a one-time exercise, but a continual investment into your business. Encourage a steady trickle of new reviews on a regular basis to ensure that your reviews don't become stale.” Other measures of Timeliness can also be used, such as a score that indicates the relative amount of new vs. old positive reviews and new vs. old negative reviews. (I.e., to see whether positive or negative reviews dominate in time.)
Length (614): This score indicates the average length of a business's reviews. Longer reviews add weight to the review's rating. If two reviews have the same star rating (e.g., one out of five stars), but the first review is ten words and the second review is 300 words, the second review will be weighted more when computing the composite score. If Bob clicks on box 626, he will be presented with a suggestion, such as the following: “Encourage your positive reviewers to write in-depth reviews. They should detail their experiences and highlight what they like about your business. This provides credibility and the guidance makes review writing easier for them.” Other measures of Length can also be used, such as a score that indicates the relative amount of long vs. short positive reviews and long vs. short negative reviews. (I.e., to see whether positive or negative reviews dominate in length.)
Social Factors (616): Reviews that have been marked with social indicators (e.g., they have been marked by other members of the review community as being “helpful” or “funny”) will have more bearing on the outcome of the composite score. By clicking on box 632, Bob will be presented with an appropriate suggestion for improvement.
Reviewer Authority (618): A review written by an established member of a community (e.g., who has authored numerous reviews) will have a greater impact on the outcome of the composite score than one written by a reviewer with little or no history on a particular review site. In some embodiments, the audience of the reviewer is also taken into consideration. For example, if the reviewer has a large Twitter following, his or her review will have a greater bearing on the outcome of the score. If Bob clicks on box 628, he will be presented with a suggestion, such as the following: “Established reviewers can be a major boon to your review page. Their reviews are rarely questioned and their opinions carry significant weight. If you know that one of your customers is an active reviewer on a review site, make a special effort to get him or her to review your business.”
Industry (620): Review sites that are directly related to the vertical in which the entity being reviewed resides are given more weight. For example, if the entity being reviewed is a car dealership and the review site caters specifically to reviews about car dealerships, the reviews in that specific site will have a greater impact on the outcome of the composite score than those on vertically ambiguous websites. If Bob clicks on box 630, he will be presented with a suggestion, such as the following: “The most important review sites for your business should have your best reviews. Monitor your website analytics to find the sites having the biggest impact on your business, and reinforce your presence on those sites.”
In various embodiments of interface 600, additional controls for interactions are made available. For example, a control can be provided that allows a user to see individual outlier reviews—reviews that contributed the most to/deviated the most from the overall score (and/or individual factors). As one example, a one-star review that is weighted heavily in the calculation of a score or scores can be surfaced to the user. The user could then attempt to resolve the negative feelings of the individual that wrote the one-star review by contacting the individual. As another example, a particularly important five-star review (e.g., due to being written by a person with a very high reviewer authority score) can be surfaced to the user, allowing the user to contact the reviewer and thank him or her. As yet another example, if an otherwise influential review is stale (and positive), the review can be surfaced to the user so that the user can ask the author to provide an update or otherwise refresh the review.
A variety of weights can be assigned to the above factors when generating the composite score shown in region 604. Further, the factors described above need not all be employed nor need they be employed in the manners described herein. Additional factors can also be used when generating a composite score. An example computation of a composite score is discussed in conjunction with FIG. 7.
Example Score Generation
FIG. 7 illustrates an example of components included in an embodiment of a reputation platform. In particular, FIG. 7 illustrates components of platform 102 that are used in conjunction with generating reputation scores.
In some embodiments, whenever Bob accesses platform 102 (and/or based on the elapsing of a certain amount of time), the composite score shown at 604 in FIG. 6 is refreshed. In particular, scoring engine 702 retrieves, from database 214, review and other data pertaining to Bob's business and generates the various scores shown in FIG. 6. Example ways of computing a composite reputation score are as follows.
(1) Base Score
First, scoring engine 702 computes a base score “B” that is a weighted average of all of the star ratings of all of the individual reviews on all of the sites deemed relevant to Bob's business:
$$B = 100 \cdot \frac{\sum_{i=1}^{N_r} s_i w_i}{\sum_{i=1}^{N_r} w_i} \cdot \Theta(N_r - N_{\min})$$
where “Nr” is the total number of reviews, “si” is the number of “stars” for review “i” normalized to 10, “wi” is the weight for review “i,” Θ is the Heaviside step function, and “Nmin” is the minimum number of reviews needed to score (e.g., 4). The factor 100 is used to expand the score to a value from 0 to 1000.
One example of the function “wi” is as follows:
$$w_i = D_A \cdot T_i \cdot P_i \cdot R_A \cdot S_F \cdot L_F$$
In the above, “DA” is the domain authority, which reflects how important the domain is with respect to the business. As one example, a doctor-focused review site may be a better authority for reviews of doctors than a general purpose review site. One way to determine domain authority values is to use the domain's search engine results page placement using the business name as the keyword.
“RA” is the reviewer authority. One way to determine reviewer authority is to take the logarithm of 1+the number of reviews written by the reviewer. As explained above, a review written by an individual who has authored many reviews is weighted more than one written by a less prolific user.
“SF” is the social feedback factor. One way to determine the factor is to use the logarithm of 1+the number of pieces of social feedback a review has received.
“LF” is the length factor. One way to specify this value is to use 1 for short reviews, 2 for medium reviews, and 4 for long reviews.
“Ti” is the age factor. One way to specify this factor is as follows: if the age is less than two months, Ti=1; if the age “ai” (in months) is greater than 2 months, then the following value is used:
$$T_i = \max\!\left(e^{-\omega \cdot (a_i - 2)},\; 0.5\right)$$
where ω is the time-based decay rate.
“Pi” is the position factor for review “i.” The position factor indicates where a given review is positioned among other reviews of the business (e.g., it is at the top on the first page of results, or it is on the tenth page). One way to compute the position factor is as follows:
$$P_i = e^{-p_i / \lambda}$$
where λ is the positional decay length.
In some cases, a given site (e.g., site 110) may have an overall rating given for the business on the main profile page for that business on the site. In some embodiments, the provided overall rating is treated as an additional review with age a=a0 and position p=p0 and given an additional weight factor of 2.
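Putting the pieces above together, a sketch of the base-score computation might look as follows (Python; the decay parameters omega and lam are placeholders, since the document does not give concrete values):

```python
import math

def review_weight(domain_authority, reviewer_authority, social_feedback,
                  length_factor, age_months, position, omega=0.1, lam=10.0):
    """w_i = D_A * T_i * P_i * R_A * S_F * L_F, using the age and position
    factors defined above."""
    t = 1.0 if age_months <= 2 else max(math.exp(-omega * (age_months - 2)), 0.5)
    p = math.exp(-position / lam)
    return domain_authority * t * p * reviewer_authority * social_feedback * length_factor

def base_score(stars_on_ten_scale, weights, n_min=4):
    """B = 100 * (sum s_i w_i / sum w_i) * Theta(N_r - N_min); returns 0
    when fewer than N_min reviews are available."""
    if len(stars_on_ten_scale) < n_min:
        return 0.0
    weighted = sum(s * w for s, w in zip(stars_on_ten_scale, weights))
    return 100.0 * weighted / sum(weights)
```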
(2) Normalization
Once the base score has been computed, it is normalized (to generate “Bnorm”). In some embodiments this is performed by linearly stretching the range of scores from 8 to 10 so that it spans 5 to 10, and linearly squeezing the range of scores from 0 to 8 into 0 to 5.
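Assuming those ranges refer to the ten-point scale before the factor of 100 is applied, the normalization could be sketched as:

```python
def normalize_base(b):
    """Piecewise-linear normalization: 8-10 is stretched to 5-10 and
    0-8 is squeezed into 0-5 (on the pre-expansion ten-point scale)."""
    if b >= 8.0:
        return 5.0 + (b - 8.0) * (5.0 / 2.0)
    return b * (5.0 / 8.0)
```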
Optional Correction Factors
In some embodiments, a correction factor “C” is used for the number of reviews in a given vertical and locale:
$$C = a + b \cdot \frac{2}{\pi} \tan^{-1}\!\left(\frac{2 \cdot N_r}{\bar{N}_r}\right)$$
where “Nr” is the number of reviews for the business and “N̄r” is the median number of reviews across the business's vertical and locale. An example value for “a” is 0.3 and an example value for “b” is 0.7.
One alternate version of correction factor “C” is as follows:
$$C = a + b \cdot \frac{2}{\pi} \tan^{-1}\!\left(\frac{2 \cdot N_r}{\min\!\left(\max\!\left(\bar{N}_r,\, N_{\min}\right),\, N_{\max}\right)}\right)$$
where “Nmin” and “Nmax” are the limits placed on the comparison value “N̄r” in the denominator of the argument of the arctangent in the correction factor. An example value for “Nmin” is 4 and an example value for “Nmax” is 20.
A randomization correction “R” can also be used:
$$R = \min\!\left(1000,\; C \cdot B_{\mathrm{norm}} + \frac{\mathrm{mod}(uid,\, 40) - 20}{N_r}\right)$$
where “C” is a correction factor (e.g., one of the two discussed above), “Bnorm” is the normalized base score discussed above, and “uid” is a unique identifier assigned to the business by platform 102 and stored in database 214. The randomization correction can be used where only a small number of reviews are present for a given business.
Another example of “R” is as follows:
$$R = \max\!\left(0,\; C \cdot B_{\mathrm{norm}} - 37.5 \cdot e^{-0.6 \cdot \alpha}\right)$$
where “α” is the age of the most recent review.
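The correction and randomization factors above can be sketched as follows (the division of the mod term by Nr follows the reconstruction given above and should be treated as an assumption):

```python
import math

def correction_factor(n_r, n_r_median, a=0.3, b=0.7):
    """First correction-factor variant: C = a + b*(2/pi)*arctan(2*N_r / N_r_median)."""
    return a + b * (2.0 / math.pi) * math.atan(2.0 * n_r / n_r_median)

def randomized_score(c, b_norm, uid, n_r):
    """Randomization correction R, useful when only a few reviews exist."""
    return min(1000.0, c * b_norm + (uid % 40 - 20) / n_r)

def recency_adjusted_score(c, b_norm, alpha):
    """Alternate form of R that decays with the age of the newest review."""
    return max(0.0, c * b_norm - 37.5 * math.exp(-0.6 * alpha))
```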
Additional Examples of Scoring Embodiments
As explained above, a variety of techniques can be used by scoring engine 702 in determining reputation scores. In some embodiments, scores for all types of businesses are computed using the same sets of rules. In other embodiments, reputation score computation varies based on industry (e.g., reputation scores for car dealers using one approach and/or one set of factors, and reputation scores for doctors using a different approach and/or different set of factors). Scoring engine 702 can be configured to use a best in class entity when determining appropriate thresholds/values for entities within a given industry. The following are yet more examples of factors that can be used in generating reputation scores.
Review Volume:
The volume of reviews across all review sites can be used as a factor. For example, if the average star rating and the number of reviews are both high, a conclusion can be reached that the average star rating is more accurate than where an entity has the same average star rating and a lower number of reviews. The star rating will carry more weight in the score if the volume is above a certain threshold. In some embodiments, thresholds vary by industry. Further, review volume can be used as more than just a threshold. For example, an asymptotic function of the number of reviews, the industry, and the geolocation of the business can be used as an additional scoring factor.
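One possible asymptotic volume factor is sketched below; the document only states that such a function can be used, so the specific form (a hyperbolic tangent against an industry median) is an illustrative assumption:

```python
import math

def volume_factor(review_count, industry_median_count):
    """Approach 1.0 asymptotically as the business accumulates reviews
    relative to its industry's typical volume."""
    return math.tanh(review_count / industry_median_count)
```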
Multimedia:
Reviews that have multimedia associated with them (e.g., a video review, or a photograph) can be weighted differently. In some embodiments, instead of using a separate multimedia factor, the length score of the review is increased (e.g., to the maximum value) when multimedia is present.
Review Distribution:
The population of reviews on different sites can be examined, and where a review distribution strays from the mean distribution, the score can be impacted. As one example, if the review distribution is sufficiently outside the expected distribution for a given industry, this may indicate that the business is engaged in gaming behavior. The score can be discounted (e.g., by 25%) accordingly. An example of advice for improving a score based on this factor would be to point out to the user that their distribution of reviews (e.g., 200 on site 110 and only 2 on site 112) deviates from what is expected in the user's industry, and to suggest that the user encourage those who posted reviews to site 110 to do so on site 112 as well.
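A sketch of such a distribution check follows; the L1 distance, the deviation threshold, and the use of a fixed 0.75 multiplier (the 25% discount mentioned above) are illustrative choices:

```python
def distribution_multiplier(observed, expected, max_deviation=0.25, discount=0.75):
    """Return 0.75 (a 25% discount) when the observed per-site review shares
    stray too far from the expected industry distribution, else 1.0."""
    deviation = sum(abs(observed.get(site, 0.0) - share)
                    for site, share in expected.items())
    return discount if deviation > max_deviation else 1.0
```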
Text Analysis:
Text analysis can be used to extract features used in the score. For example, reviews containing certain key terms (e.g., “visited” or “purchased”) can be weighted differently than those that do not.
FIG. 8 illustrates an embodiment of a process for generating a reputation score. In some embodiments, process 800 is performed by platform 102. The process begins at 802 when data obtained from each of a plurality of sites is received. As one example, process 800 begins at 802 when Bob logs into platform 102 and, in response, scoring engine 702 retrieves data associated with Bob's business from database 214. In addition to generating reputation scores on demand, scores can also be generated as part of a batch process. As one example, scores across an entire industry can be generated (e.g., for benchmark purposes) once a week. In such situations, the process begins at 802 when the designated time to perform the batch process occurs and data is received from database 214. In various embodiments, at least some of the data received at 802 is obtained on-demand directly from the source sites (instead of or in addition to being received from a storage, such as database 214).
At 804, a reputation score for an entity is generated. Various techniques for generating reputation scores are discussed above. Other approaches can also be used, such as by determining an average score for each of the plurality of sites and combining those average scores (e.g., by multiplying or adding them and normalizing the result). As mentioned above, in some embodiments the entity for which the score is generated is a single business (e.g., Bob's Juice Company). The score generated at 804 can also be determined as an aggregate across multiple locations (e.g., in the case of ACME Convenience Stores) and can also be generated across multiple businesses (e.g., reputation score for the airline industry), and/or across all reviews hosted by a site (e.g., reputation score for all businesses with profiles on site 110). One way to generate a score for multiple locations (and/or multiple businesses) is to apply scoring techniques described in conjunction with FIG. 7 using as input the pool of reviews that correspond to the multiple locations/businesses. Another way to generate a multi-location and/or multi-business reputation score is to determine reputation scores for each of the individual locations (and/or businesses) and then combine the individual scores (e.g., through addition, multiplication, or other appropriate combination function).
Finally, at 806 the reputation score is provided as output. As one example, a reputation score is provided as output in region 604 of interface 600. As another example, scoring engine 702 can be configured to send reputation scores to users via email (e.g., via alerter 432).
Enterprise Reputation Information
As explained above, in addition to providing reputation information for single location businesses, such as Bob's Juice Company, platform 102 can also provide reputation information for multi-location businesses (also referred to herein as “enterprises”). Examples of enterprises include franchises, chain stores, and any other type of multi-location business. The following section describes various ways that enterprise reputation information is made available by platform 102 to users, such as Alice, who represent such enterprises.
FIG. 9 illustrates an example of an interface as rendered in a browser. In particular, Alice is presented with interface 900 after logging in to her account on platform 102 using a browser application on client 108. Alice can also reach interface 900 by clicking on tab option 902. By default, Alice is presented in region 912 with a map of the United States that highlights the average performance of all ACME locations within all states. In various embodiments, other maps are used. For example, if an enterprise only has stores in a particular state or particular county, a map of that state or county can be used as the default map. As another example, a multi-country map can be shown as the default for global enterprises. Legend 914 indicates the relationship between state color and the aggregate performance of locations in that state. Controls 928 allow Alice to take actions such as specifying a distribution list, printing the map, and exporting a CSV file that includes the ratings/reviews that power the display.
Presented in region 916 is the average reputation score across all 2,000 ACME stores. Region 918 indicates that ACME stores in Alaska have the highest average reputation score, while region 920 indicates that ACME stores in Nevada have the lowest average reputation score. A list of the six states in which ACME has the lowest average reputation scores is presented in region 922, along with the respective reputation scores of ACME in those states. The reputation scores depicted in interface 900 can be determined in a variety of ways, including by using the techniques described above.
The data that powers the map can be filtered using the dropdown boxes shown in region 904. The view depicted in region 906 will change based on the filters applied. And, the scores and other information presented in regions 916-922 will refresh to correspond to the filtered locations/time ranges. As shown, Alice is electing to view a summary of all review data (authored in the last year), across all ACME locations. Alice can refine the data presented by selecting one or more additional filters (e.g., limiting the data shown to just those locations in California, or to just those reviews obtained from site 110 that pertain to Nevada locations). The filter options presented are driven by the data, meaning that only valid values will be shown. For example, if ACME does not have any stores in Wyoming, Wyoming will not be shown in dropdown 910. As another example, once Alice selects “California” from dropdown 910, only Californian cities will be available in dropdown 930. To revert back to the default view, Alice can click on “Reset Filters” (926).
Some of the filters available to Alice (e.g., 908) make use of the tags that she previously uploaded (e.g., during account setup). Other filters (e.g., 910) are automatically provided by platform 102. In various embodiments, which filters are shown in region 904 are customizable. For example, suppose ACME organizes its stores in accordance with “Regions” and “Zones” and that Alice labeled each ACME location with its appropriate Region/Zone information during account setup. Through an administrative interface, Alice can specify that dropdowns for selecting “Region” and “Zone” should be included in region 904. As another example, Alice can opt to have store manager or other manager designations available as a dropdown filter. Optionally, Alice could also choose to hide certain dropdowns using the administrative interface.
Suppose Alice would like to learn more about the reputation of ACME's California stores. She hovers (or clicks) her mouse on region 924 of the map and interface 900 updates into interface 1000 as illustrated in FIG. 10, which includes a more detailed view for the state. In particular, pop-up 1002 is presented and indicates that across all of ACME's California stores, the average reputation score is 3. Further, out of the 24 California cities in which ACME has stores, the stores in Toluca Lake, Studio City, and Alhambra have the highest average reputation scores, while the stores in South Pasadena, Redwood City, and North Hollywood have the lowest average reputation scores. Alice can segment the data shown in interface 1000 by selecting California from dropdown 1006 and one or more individual cities from dropdown 1004 (e.g., to show just the data associated with stores in Redwood City).
Alice can view more detailed information pertaining to reviews and ratings by clicking tab 932. Interface 1100 makes available, in region 1102, the individual reviews collected by platform 102 with respect to the filter selections made in region 1104. Alice can further refine which reviews are shown in region 1102 by interacting with checkboxes 1112. Summary score information is provided in region 1106, and the number of reviews implicated by the filter selections is presented in region 1108. Alice can select one of three different graphs to be shown in region 1110. As shown in FIG. 11, the first graph shows how the average rating across the filtered set of reviews has changed over the selected time period. If Alice clicks on region 1114, she will be presented with the second graph. As shown in FIG. 12, the second graph shows the review volume over the time period. Finally, if Alice clicks on region 1116, she will be presented with the third graph. As shown in FIG. 13, the third graph shows a breakdown of reviews by type (e.g., portion of positive, negative, and neutral reviews).
If Alice clicks on tab 934, she will be presented with interface 1400 of FIG. 14, which allows her to view a variety of standard reports by selecting them from regions 1402 and 1406. Alice can also create and save custom reports. One example report is shown in region 1404. In particular, the report indicates, for a given date range, the average rating on a normalized (to 5) scale. A second example report is shown in FIG. 15. Report 1500 depicts the locations in the selected data range that are declining in reputation most rapidly. In particular, what is depicted is the set of locations that have the largest negative delta in their respective normalized rating between two dates. A third example report is shown in FIG. 16. Report 1600 provides a summary of ACME locations in a list format. Column 1602 shows each location's average review score, normalized to a 5 point scale. Column 1604 shows the location's composite reputation score (e.g., computed using the techniques described in conjunction with FIG. 7). If desired, Alice can instruct platform 102 to email reports such as those listed in region 1402. In particular, if Alice clicks on tab 940, she will be presented with an interface that allows her to select which reports to send, to which email addresses, and on what schedule. As one example, Alice can set up a distribution list that includes the email addresses of all ACME board members and can further specify that the board members should receive a copy of the “Location vs. Competitors” report once per week.
If Alice clicks on tab 936, she will be presented with interface 1700, depicted in FIG. 17. Interface 1700 shows data obtained from platform 102 by social sites such as sites 120-122. As with the review data, Alice can apply filters to the social data by interacting with the controls in region 1702 and can view various reports by interacting with region 1704.
Requesting Reviews
If Alice clicks on tab 938, she will be presented with the interface shown in FIG. 18, which allows her to send an email request for a review. Once an email has been sent, the location is tracked and available in interface 1900, shown in FIG. 19. In the example shown in FIG. 18, Alice is responsible for making decisions such as who to request reviews from, and how frequently, based on tips provided in region 1802 (and/or her own intuition). In various embodiments, platform 102 includes a review request engine that is configured to assist businesses in strategically obtaining additional reviews. In particular, the engine can guide businesses through various aspects of review solicitation, and can also automatically make decisions on the behalf of those businesses. Recommendations regarding review requests can be presented to users in a variety of ways. For example, interface 600 of FIG. 6 can present a suggestion that additional reviews be requested, if applicable. As another example, periodic assessments can be made on behalf of a business, and an administrator of the business alerted via email when additional reviews should be solicited.
FIG. 20 illustrates an embodiment of a reputation platform that includes a review request engine. Platform 2000 is an embodiment of platform 102. Other components (e.g. as depicted in FIGS. 2 and/or 4 as being included in platform 102) can also be included in platform 2000 as applicable. As will be described in more detail below, review request engine 2002 is configured to perform a variety of tasks. For example, review request engine 2002 can determine which sites (e.g., site 110 or site 112) a given business would benefit from having additional reviews on. In various embodiments, platform 102 performs these determinations at least in part by determining how a business's reputation score would change (whether positive or negative) based on simulating the addition of new reviews to various review sites. Further, review request engine 2002 can determine which specific individuals should be targeted as potential reviewers, and can facilitate contacting those individuals, including by suggesting templates/language to use in the requests, as well as the timing of those requests.
Targeting Review Placement
As explained above (e.g., in the section titled “Additional Examples of Scoring Embodiments”), one factor that can be considered in determining a reputation score for a business is the “review distribution” of the business's reviews. As one example, suppose a restaurant has a review distribution as follows: Of the total number of reviews of the restaurant that are known to platform 102, 10% of those reviews appear on travel-oriented review site 112, 50% of those reviews appear on general purpose review site 110, and 40% of those reviews appear (collectively) elsewhere. In various embodiments, review request engine 2002 is configured to compare the review distribution of the business to one or more target distributions and use the comparison to recommend the targeting of additional reviews.
A variety of techniques can be used to determine the target distributions used by review request engine 2002. For example, as will be described in more detail below, in some embodiments, reputation platform 102 is configured to determine industry-specific review benchmarks. The benchmarks can reflect industry averages or medians, and can also reflect outliers (e.g., focusing on data pertaining to the top 20% of businesses in a given industry). Further, for a single industry, benchmarks can be calculated for different regions (e.g., one for Restaurants-West Coast and one for Restaurants-Mid West). The benchmark information determined by platform 102 can be used to determine target distributions for a business. Benchmark information can also be provided to platform 102 (e.g., by a third party), rather than or in addition to platform 102 determining the benchmark information itself. In some embodiments, a universal target distribution (e.g., equal distribution across all review sites, or specific predetermined distributions) is used globally across all industries.
If a business has a review distribution that is significantly different from a target distribution (e.g., the industry-specific benchmark), the “review distribution” component of the business's reputation score will be negatively impacted. In various embodiments, review request engine 2002 uses a business's review distribution and one or more target distributions to determine on which site(s) additional reviews should be sought.
FIG. 21 illustrates an embodiment of a process for targeting review placement. In some embodiments process 2100 is performed by review request engine 2002. The process begins at 2102 when an existing distribution of reviews for an entity is evaluated across a plurality of review sites. A determination is made, at 2104, that the existing distribution should be adjusted. Finally, at 2106, an indicator of at least one review site on which placement of at least one additional review should be targeted is provided as output.
One example of process 2100 is as follows: Once a week, the review distribution for a single location dry cleaner (“Mary's Dry Cleaning”) is determined by platform 102. In particular, it is determined that approximately 30% of Mary's reviews appear on site 110, approximately 30% appear on site 112, and 40% of Mary's reviews appear elsewhere (2102). Suppose a target distribution for a dry cleaning business is: 70% site 110, 10% site 112, and 20% remainder. Mary's review distribution is significantly different from the target, and so, at 2104 a determination is made that adjustments to the distribution should be sought. At 2106, review request engine 2002 provides as output an indication that Mary's Dry Cleaning could use significantly more reviews on site 110. The output can take a variety of forms. For example, platform 102 can send an email alert to the owner of Mary's Dry Cleaning informing her that she should visit platform 102 to help correct the distribution imbalance. As another example, the output can be used internally by review request engine 2002, such as by feeding it as input into a process such as process 2500.
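The comparison step of process 2100 might be sketched as follows, using Mary's numbers; the 5% tolerance is an assumption:

```python
def sites_needing_reviews(current, target, tolerance=0.05):
    """Return the review sites where the business's share of reviews falls
    short of the target distribution by more than the tolerance."""
    return [site for site, share in target.items()
            if current.get(site, 0.0) + tolerance < share]

current = {"site_110": 0.30, "site_112": 0.30, "other": 0.40}
target = {"site_110": 0.70, "site_112": 0.10, "other": 0.20}
print(sites_needing_reviews(current, target))  # ['site_110']
```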
As will be described in more detail below, in some embodiments, the target distribution is multivariate, and includes, in addition to a proportion of reviews across various sites, information such as target timeliness for the reviews, a review volume, and/or a target average score (whether on a per-site basis, or across all applicable sites). Multivariate target distributions can also be used in process 2100. For example, suppose that after a few weeks of requesting reviews (e.g., using process 2100), the review distribution for Mary's Dry Cleaning is 68% site 110, 12% site 112, and 20% remainder (2102). The site proportions in her current review distribution are quite close to the target. However, other aspects of her review distribution may nonetheless deviate significantly from aspects of a multivariate target and need adjusting to bring up her reputation score. For example, the industry target may be a total of 100 reviews (i.e., total review volume) and Mary's Dry Cleaning may only have 80 total reviews. Or, the industry target average age of review may be six months, while the average age for Mary's Dry Cleaning is nine months. Decisions made at 2104 to adjust the existing review distribution can take into account such non-site-specific aspects as well. In some embodiments these additional aspects of a target distribution are included in the distribution itself (e.g., within a multivariate distribution). In other embodiments, the additional information is stored separately (e.g., in a flat file) but is nonetheless used in conjunction with process 2100 when determining which sites to target for additional reviews. Additional information regarding multivariate distribution targets is provided below (e.g., in the section titled “Industry Review Benchmarking”).
Another example of process 2100 is as follows: Once a week, the review distribution of each location of a ten-location franchise is determined (2102). Comparisons against targets can be done individually on behalf of each location, e.g., with ten comparisons being performed against a single, industry-specific target. Comparisons can also be performed between the locations. For example, of the ten locations, the location having the review distribution that is closest to the industry-specific target can itself be used to create a review target for the other stores. The review distributions of the other stores can be compared against the review distributions of the top store, instead of or in addition to being compared against the industry target.
In some embodiments, additional processing is performed in conjunction with process 2100. For example, as part of (or prior to) portion 2102 of the process, a determination can be made as to whether or not the entity has a presence on (e.g., has a registered account with) each of the sites implicated in the target distribution. If an entity is expected to have a non-zero number of reviews on a given site (in accordance with the target distribution), having a presence on that site is needed. As one example, a car dealer business should have an account on review site 114 (a car dealer review site). A restaurant need not have an account on the site, and indeed may not qualify for an account on the site. If the car dealer business does not have an account with site 114, a variety of actions can be taken by platform 102. As one example, an alert that the car dealer is not registered with a site can be emailed to an administrator of the car dealer's account on platform 102. As another example, the output provided at 2106 can include, e.g., in a prominent location, a recommendation that the reader of the output register for an account with site 114. In some embodiments, platform 102 is configured to register for an account on (or otherwise obtain a presence on) the site, on behalf of the car dealer.
Industry Review Benchmarking
As discussed above, review request engine 2002 can use a variety of target distributions, obtained in a variety of ways, in performing process 2100. Two examples of target distributions are depicted in FIGS. 22 and 23, respectively.
The target distributions shown in FIG. 22 are stored as groups of lines (2202, 2204) in a single flat file, where an empty line is used as a delimiter between industry records. The first line (e.g., 2206) indicates the industry classification (e.g., Auto Dealership). The second line (e.g., 2208) indicates a target review volume across all websites (e.g., 80). The third line (e.g., 2210) indicates the industry average review rating, normalized to a 5 point scale (e.g., 3.5). The fourth line (e.g., 2212) indicates for how long of a period of time a review will be considered “fresh” (e.g., 1 year) and thus count in the calculation of a business in that industry's reputation score. In some embodiments, in addition to or instead of a specific freshness value, a decay factor is included, that is used to reduce the impact of a particular review in the calculation of a business's reputation score over time. The remaining lines of the group (2214-2218) indicate what percentage of reviews should appear on which review sites. For example, 40% of reviews should appear on general purpose review site 110; 10% of reviews should appear on travel review site 112; and 50% of reviews should appear on a review site focused on auto dealers.
As shown in FIG. 22, different industries can have different values in their respective records. For example, a target review volume for restaurants is 100 (2220), the industry average review rating is 4 (2222), and the freshness value is two years (2224). The target review distribution is also different.
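A reader of that flat file might be sketched as below; the exact field order (industry, target volume, average rating, freshness, then per-site percentages) follows the description above and is otherwise an assumption:

```python
def parse_targets(text):
    """Parse blank-line-delimited industry records into target distributions."""
    targets = {}
    for block in text.strip().split("\n\n"):
        lines = [line.strip() for line in block.splitlines() if line.strip()]
        industry, volume, rating, freshness = lines[0], int(lines[1]), float(lines[2]), lines[3]
        distribution = {}
        for line in lines[4:]:
            site, pct = line.rsplit(" ", 1)       # e.g. "general review site 110 40%"
            distribution[site] = float(pct.rstrip("%")) / 100.0
        targets[industry] = {"volume": volume, "average_rating": rating,
                             "freshness": freshness, "distribution": distribution}
    return targets
```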
The target distributions depicted in FIG. 22 can be used to model the impact that additional reviews would have for a business. For example, for a given car dealer business, simulations of additional reviews (e.g., five additional positive reviews obtained on site 110 vs. three additional positive reviews obtained on site 112) can be run, and a modeled reputation score (e.g., using techniques described in “Example Score Generation” above) determined. Whichever simulation results in the highest reputation score can be used to generate output at 2106 in process 2100.
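That simulation loop could be sketched as follows; score_fn stands in for the scoring described in conjunction with FIG. 7, and the five-review, five-star scenario is only an example:

```python
def best_site_to_target(existing_reviews, candidate_sites, score_fn,
                        n_new=5, simulated_rating=5.0):
    """Simulate adding a few positive reviews on each candidate site and
    return the site whose simulation yields the highest modeled score."""
    best_site, best_score = None, float("-inf")
    for site in candidate_sites:
        simulated = existing_reviews + [{"site": site, "rating": simulated_rating}] * n_new
        score = score_fn(simulated)
        if score > best_score:
            best_site, best_score = site, score
    return best_site
```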
FIG. 23 illustrates another example of a target distribution. For a given business, the first two columns of table 2300 list an industry (2302) and sub-industry (2304). The next column lists the target review volume (2306). The remaining columns provide target review proportions with respect to each of sites 2308-2324. As shown in FIG. 23, many of the cells in the table are empty, indicating that, for a given type of business, only a few review sites significantly impact the reputations of those businesses. For example, while car dealers and car rental businesses are both impacted by reviews on sites 110-114 (2308-2312), reviews on site 2322 (a dealer review site) are important to car dealers, but not important to car rental businesses (or entirely different industries, such as restaurants). As another example, reviews of hospitals appearing on a health review site 2314 are almost as important as reviews appearing on site 110. However, reviews appearing on site 2314 are considerably less important to elder care businesses, while reviews on a niche nursing review site 2318 matter for nursing homes but not hospitals.
A small subset of data that can be included in a distribution (also referred to herein as an industry table) is depicted in FIG. 23. In various embodiments, hundreds of rows (i.e., industries/sub-industries) and hundreds of columns (i.e., review sites) are included in the table. Further, additional types of information can be included in table 2300, such as freshness values, review volume over a period of time (e.g., three reviews per week), decay factors, average scores, etc.
As previously explained, target distributions can be provided to platform 102 in a variety of ways. As one example, an administrator of platform 102 can manually configure the values in the file depicted in FIG. 22. As another example, the top business in each category (i.e., the business having the highest reputation score) can be used as a model, and its values copied into the appropriate area of file depicted in FIG. 22, whether manually or programmatically. As yet another example, process 2400 can be used to generate target distribution 2300.
FIG. 24 illustrates an embodiment of a process for performing an industry review benchmark. In some embodiments, process 2400 is performed by industry benchmarking module 2006 to create/maintain industry table 2300. For example, benchmarking module 2006 can be configured to execute process 2400 once a month. Benchmarking module 2006 can also execute process 2400 more frequently, and/or can execute process 2400 at different times with respect to different industries (e.g., with respect to automotive industries one day each week and with respect to restaurants another day each week), selectively updating portions of table 2300 instead of the entire table at once. In some embodiments, process 2400 is performed multiple times, resulting in multiple tables. For example, platform 102 can be configured to generate region-specific tables.
The process begins at 2402 when review data is received. As one example, at 2402, industry benchmarker 2006 queries database 214 for information pertaining to all automotive sales reviews. For each automotive sales business (e.g., a total of 16,000 dealers), summary information such as each dealer's current reputation score, current review distribution, and current review volume is received at 2402.
At 2404, the received data is analyzed to determine one or more benchmarks. As one example, benchmarker 2006 can be configured to average the information received at 2402 into a set of industry average information (i.e., the average reputation score for a business in the industry; the averaged review distribution; and the average review volume). Benchmarker 2006 can also be configured to consider only a portion of the information received at 2402 when determining a benchmark, and/or can request information for a subset of businesses at 2402. As one example, instead of determining an industry average at 2404, benchmarker 2006 can consider the information pertaining to only those businesses having reputation scores in the top 20% of the industry being benchmarked. In some embodiments, multiple benchmarks are considered (e.g., in process 2100) when making determinations. For example, both an industry average benchmark, and a “top 20%” benchmark can be considered (e.g., by being averaged themselves) when determining a target distribution for a business.
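A sketch of the "top 20%" variant might look like this (the benchmark here averages review volume only, for brevity; an actual benchmark would aggregate the distribution and score fields as well):

```python
def top_quintile_volume_benchmark(businesses):
    """Average the review volume of the businesses whose reputation scores
    fall in the top 20% of the industry being benchmarked."""
    ranked = sorted(businesses, key=lambda b: b["reputation_score"], reverse=True)
    top = ranked[:max(1, len(ranked) // 5)]
    return sum(b["review_volume"] for b in top) / len(top)
```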
In some embodiments, additional processing is performed at 2404 and/or occurs after 2404. For example, a global importance of a review site (e.g., its Page Rank or Alexa Rank) is included as a factor in the target distribution, or is used to weight a review site's values in table 2300.
In various embodiments, the industry benchmarked during process 2400 is segmented and multiple benchmarks are determined (e.g., one benchmark for each segment, along with an industry-wide benchmark). As one example, suppose the industry being benchmarked is Fast Food Restaurants. In some embodiments, in addition to an industry-wide benchmark, benchmarks are determined for various geographic sub-regions. One reason for performing regional benchmarking is that different populations of people may rely on different review websites for review information. For example, individuals on the West Coast may rely heavily on site 112 for reviews of restaurants, while individuals in the Mid West may rely heavily on a different site. In order to improve its reputation score, a restaurant located in Ohio will likely benefit from a review distribution that more closely resembles that of other Mid Western restaurants than a nationwide average distribution.
Reviewer Recommendation
FIG. 25 illustrates an embodiment of a process for recommending potential reviewers. In some embodiments, process 2500 is performed by review request engine 2002. The process begins at 2502 when a list of potential reviewers is received. The list can be received in a variety of ways. As one example, a list of potential reviewers can be received at 2502 in response to, or in conjunction with, the processing performed at 2106. As another example, a business, such as a car dealership, can periodically provide platform 102 a list of new customers (i.e., those people who have recently purchased cars) including those customers' email addresses (at 2502). As yet another example, a business can provide to platform 102 a comprehensive list of all known customers (e.g., those subscribed to the business's email newsletters and/or gleaned from past transactions). In some embodiments, customer email addresses are stored in database 214 (2008), and a list of reviewers is received at 2502 in response to a query of database 214 being performed.
At 2504, a determination is made that at least one individual on the received list should be targeted with a review request. A variety of techniques can be used to make this determination. As one example, all potential reviewers received at 2502 could be targeted (e.g., because the list received at 2502 includes an instruction that all members be targeted). As another example, suppose as a result of process 2100, a determination was made that a business would benefit from more reviews on Google Places. In that case, any members of the list received at 2502 that have Google email addresses (i.e., @gmail.com addresses) are selected at 2504. One reason for such a selection is that the individuals with @gmail.com addresses will be more likely to write reviews on Google Places (because they already have accounts with Google). A similar determination can be made at 2504 with respect to other domains, such as by selecting individuals with @yahoo.com addresses when additional reviews on Yahoo! Local are recommended.
Whether or not an individual has already registered with a review site can also be determined (and therefore used at 2504) in other ways as well. For example, some review sites may provide an API that allows platform 102 to confirm whether an individual with a particular email address has an account with that review site. The API might return a “yes” or “no” response, and may also return a user identifier if applicable (e.g., responding with “CoolGuy22” when presented with a particular individual's email address). As another example, where the site does not provide such an API, a third party service may supply mappings between email addresses and review site accounts to platform 102. As yet another example, the automobile dealer could ask the purchaser for a list of review sites the user has accounts on and/or can present the customer with a list of review sites and ask the customer to indicate which, if any, the customer is registered with.
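A minimal sketch of the domain-based selection described above follows; the domain-to-site mapping and the reviewer record layout are illustrative assumptions rather than part of the described embodiment.

```python
# Hypothetical mapping from email domain to the review site where an
# individual most likely already holds an account.
DOMAIN_TO_SITE = {
    "gmail.com": "Google Places",
    "yahoo.com": "Yahoo! Local",
}

def select_targets(potential_reviewers, needed_site):
    """Return the reviewers most likely to already have an account on needed_site."""
    selected = []
    for reviewer in potential_reviewers:  # each reviewer is assumed to be a dict with an 'email' key
        domain = reviewer["email"].rsplit("@", 1)[-1].lower()
        if DOMAIN_TO_SITE.get(domain) == needed_site:
            selected.append(reviewer)
    return selected
```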
In various embodiments, any review site accounts/identifiers determined to be associated with the customer are stored in database 214 in a profile for the individual. Other information pertinent to the individual can also be included in the profile, such as the number of reviews the user has written across various review sites, the average rating per review, and verticals (e.g., health or restaurants) associated with those reviews.
Additional/alternate processing is performed at 2504 in various embodiments. As one example, database 214 can be queried for information pertaining to each of the potential reviewers received at 2502 and an analysis can be performed on the results. Individuals who have a history of writing positive reviews in general, of writing positive reviews in the same vertical, of writing positive reviews in a different vertical, of frequently writing reviews, of writing high quality reviews (e.g., having a certain minimum length or including multimedia) irrespective of whether the review itself is positive, can be selected. Individuals with no histories and/or with any negative aspects to their review histories can be removed from consideration, as applicable. In some embodiments, an examination of the potential reviewer (e.g., an analysis of his or her existing reviews) is performed on demand, in conjunction with the processing of 2504. In other embodiments, reviewer evaluations are performed asynchronously, and previously-performed assessments (e.g., stored in database 214) are used in evaluating potential reviewers at 2504.
In various embodiments, review request engine 2002 is configured to predict a likelihood that a potential reviewer will author a review and to determine a number of reviews to request to arrive at a target number of reviews. For example, suppose a company would benefit from an additional five reviews on site 110 and that there is a 25% chance that any reviewer requested will follow through with a review. In some embodiments, engine 2002 determines that twenty requests should be sent (i.e., to twenty individuals selected from the list received at 2502). Further, various thresholding rules can be employed by platform 102 when performing the determination at 2504. For example, a determination may have been made (e.g., as an outcome of process 2100) that a business would benefit from fifty additional reviews being posted to site 110. However, it may also be the case that site 110 employs anti-gaming features to identify and neutralize excessive/suspicious reviews. In some embodiments, platform 102 determines limits on the number of requests to be made and/or throttles the rate at which they should be made at 2504.
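A minimal sketch of that determination, assuming a single predicted response rate and an optional per-site cap standing in for anti-gaming throttling:

```python
import math

def requests_to_send(target_new_reviews, response_rate, site_cap=None):
    """Estimate how many review requests to send to reach a review target.

    target_new_reviews: additional reviews desired on a given site.
    response_rate: predicted probability that a request yields a review.
    site_cap: optional limit modeling a site's anti-gaming threshold.
    """
    if not 0 < response_rate <= 1:
        raise ValueError("response_rate must be in (0, 1]")
    needed = math.ceil(target_new_reviews / response_rate)
    return min(needed, site_cap) if site_cap is not None else needed
```

For the example above, requests_to_send(5, 0.25) evaluates to 20.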
At 2506, transmission of a review request to a potential reviewer is facilitated. The processing of 2506 can be performed in a variety of ways. As one example, all potential reviewers determined at 2504 can be emailed identical review request messages by platform 102, in accordance with a template 2010 stored on platform 102. Information such as the name of the business to be reviewed, and the identity of each potential reviewer is obtained from database 214 and used to fill in appropriate fields of the template. In various embodiments, different potential reviewers of a given business receive different messages from platform 102. For example, the message can include a specific reference to one or more particular review site(s), e.g., where the particular reviewer has an account. Thus one potential reviewer might receive a message including the phrase, “please review us on Site 110,” while another might receive a message including the phrase, “please review us on Site 112.” In various embodiments, multiple review sites are mentioned in the request, and the position of the respective site varies across the different requests sent to different potential reviewers. For example, the request can include a region such as region 1804 as depicted in FIG. 18. The ordering of the sites can be based on factors such as the concentration of new reviews needed to maximize a business's score increase, and/or factors such as where the potential reviewer already has an account and/or is otherwise most likely to complete a review.
Where statistical information is known about the potential reviewer (e.g., stored in database 214 is information that the reviewer typically writes reviews in the evening or in the morning), that information can be used in conjunction with facilitating the transmission of the review request (e.g., such that the request is sent at the time of day most likely to result in the recipient writing a review). Where statistical information is not known about the specific potential reviewer, statistical information known about other individuals can be used for decision-making. Different potential reviewers can also be provided messages in different formats. For example, some reviewers can be provided with review request messages via email, while other reviewers can be provided with review requests via social networking websites, via postal mail, or other appropriate contact methods.
In various embodiments, A/B testing is employed by platform 102 in message transmission. For example, a small number of requests can be sent—some at one time of day and the others at a different time of day (or sent on different days of week, or with different messaging). Follow-up engine 2004 can be configured to determine, after a period of time (e.g., 24 hours) how many of the targeted reviewers authored reviews, and to use that information as feedback in generating messages for additional potential reviewers. Other information pertaining to the message transmission (and its reception) can also be tracked. For example, message opens and message click throughs (and their timing) can be tracked and stored in database 214 (2012).
Follow-Up Determination
FIG. 26 illustrates an embodiment of a process for determining a follow-up action. In some embodiments, process 2600 is performed by platform 102. The process begins at 2602 when a transmission of a review request is facilitated. In some embodiments, portion 2506 of process 2500, and portion 2602 of process 2600 are the same.
At 2604, a determination is made that the potential reviewer, to whom the review request was transmitted at 2602, has not responded to the request by creating a review. In some embodiments, portion 2604 of process 2600 is performed by follow-up engine 2004. As one example, when an initial review request is sent (e.g., at 2506), information (2012) associated with that request is stored in database 214. Follow-up engine 2004 periodically monitors appropriate review sites to determine whether the potential reviewer has created a review. If engine 2004 determines that a review was authored, in some embodiments, no additional processing is performed by follow-up engine 2004 (e.g., beyond noting that a review has been created and collecting statistical information about the review, such as the location of the review, and whether the review is positive or negative). In other embodiments, platform 102 takes additional actions, such as by sending the reviewer a thank you email. In the event it is determined that no review has been created (2604), follow-up engine 2004 determines a follow-up action to take regarding the review request.
A variety of follow-up actions can be taken, and can be based on a variety of factors. As one example, follow-up engine 2004 can determine, from information 2012 (or any other appropriate source), whether the potential reviewer opened the review request email. The follow-up engine can also determine whether the potential reviewer clicked on any links included in the email. Follow-up engine 2004 can select different follow-up actions based on these determinations. For example, if the potential reviewer did not open the email, one appropriate follow-up action is to send a second request, with a different subject line (i.e., in the hopes the potential reviewer will now open the message). If the potential reviewer opened the email, but didn't click on any links, an alternate message can be included in a follow-up request. If the potential reviewer opened the email and clicked on a link (but did not author a review), another appropriate action can be selected by follow-up engine 2004 as applicable, such as by featuring a different review site, or altering the message included in the request. Another example of a follow-up action includes contacting the potential reviewer using a different contact method than the originally employed one. For example, where a request was originally sent to a given potential reviewer via email, follow-up engine 2004 can determine that a follow-up request be sent to the potential reviewer via a social network, or via a physical postcard. Another example of a follow-up action includes contacting the potential reviewer at a different time of day than was employed in the original request (e.g., if the request was originally sent in the morning, send a follow-up request in the evening).
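The selection logic just described might be sketched as follows, assuming follow-up engine 2004 records per-request 'opened', 'clicked', and 'reviewed' flags (e.g., as part of information 2012); the action names are purely illustrative.

```python
def choose_follow_up(request_record):
    """Pick a follow-up action from tracking data stored for a review request."""
    if request_record["reviewed"]:
        return "send_thank_you"                      # review authored: no reminder needed
    if not request_record["opened"]:
        return "resend_with_new_subject_line"        # message never opened
    if not request_record["clicked"]:
        return "resend_with_alternate_message"       # opened but no link clicked
    # Opened and clicked, but no review: feature a different site or contact channel.
    return "resend_featuring_different_site_or_channel"
```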
In various embodiments, follow-up engine 2004 is configured to determine a follow-up schedule. For example, based on historical information (whether about the potential reviewer, or based on information pertaining to other reviewers), follow-up engine 2004 may determine that a reminder request (asking that the potential reviewer write a review) should be sent on a particular date and/or at a particular time to increase the likelihood of a review being authored by the potential reviewer. Follow-up engine 2004 can also determine other scheduling optimizations, such as how many total times requests should be made before being abandoned, and/or what the conditions are for ceasing to ask the potential reviewer for a review. In various embodiments, A/B testing is employed (e.g., with respect to a few potential reviewers that did not write reviews) by follow-up engine 2004 to optimize follow-up actions.
FIG. 27 illustrates a portion of an interface as rendered in a browser. In particular, interface 2700 provides feedback (e.g., to a business owner) regarding two six-week periods of a review request campaign that includes follow-up. As shown, the current campaign has led to approximately twice as many “click throughs” (2702) while not resulting in any additional “opt-outs” (2704). Further, the current campaign has resulted in nearly triple the number of reviews (2706) being written.
Stimulating Reviews at a Point of Sale
One problem for some businesses, such as fast food restaurants, is that visiting such restaurants and receiving the expected quality of service/food is sufficiently routine/mundane that most people will not bother to write a positive review of their experience on a site, such as site 112. Only when people experience a significant problem will they be sufficiently motivated to author a review, leading to an overall body of reviews that is likely unfairly negative.
FIG. 28 illustrates an embodiment of a process for stimulating reviews. In some embodiments, process 2800 is performed on a device (e.g., one having interface 2900). The process begins at 2802 when a user is prompted to provide a review at a point of sale. In various embodiments, businesses make available devices that visitors can use to provide feedback while they are at the business. For example, a visitor can be handed a tablet and asked for feedback prior to leaving. As another example, a kiosk can be placed on premise and visitors can be asked to visit and interact with the kiosk.
Illustrated in FIG. 29 is an example of an interface 2900 to such a device. In region 2902, the visitor is asked to provide a rating. In region 2904, the visitor is asked to provide additional feedback. And, in region 2906, the visitor is asked to provide an email address and other information, such as the purpose of the visitor's visit. In region 2908, the visitor is offered an incentive for completing the review (but is not required to provide a specific type of review (e.g., a positive review)). When the visitor has finished filling out the information requested in interface 2900, the visitor is asked to click button 2910 to submit the review. When the visitor clicks button 2910, the device receives the review data (at 2804 of process 2800). Finally, at 2806, the device transmits the visitor's review data to platform 102.
In various embodiments, platform 102 is configured to evaluate the review data. If the review data indicates that the visitor is unhappy (e.g., a score of one or two), a remedial action can be taken, potentially while the visitor is still in the store. For example, a manager can be alerted that the visitor is unhappy and can attempt to make amends in person. As another example, the manager can write to the visitor as soon as possible, potentially helping resolve/defuse the visitor's negativity prior to the visitor reaching a computer (e.g., at home or at work) and submitting a negative review to site 112. In various embodiments, platform 102 is configured to accept business-specific rules regarding process 2800. For example, a representative of a business can specify that, for that business, “negative” is a score of one through three (i.e., including neutral reviews) or that a “positive” is a score of 4.5 or better. The business can also specify which actions should be taken, e.g., by having a manager alerted to positive reviews (not just negative reviews).
If the review data indicates that the visitor is happy (e.g., a score of four or five), a different action can be taken. As one example, platform 102 can automatically contact the visitor (via the visitor's self-supplied email address), provide a copy of the visitor's review information (supplied via interface 2900), and ask that the visitor post the review to a site such as site 110 or site 112. As another example, if the visitor is still interacting with the device at the time, platform 102 can instruct the device to ask the visitor for permission to post the review on the visitor's behalf. As needed, the device, and/or platform 102 can facilitate the posting (e.g., by obtaining the user's credentials for a period of time).
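A minimal sketch of how such business-specific rules might be represented and applied; the rule names, thresholds, and business identifier below are illustrative assumptions.

```python
# Hypothetical per-business overrides for interpreting point-of-sale ratings.
BUSINESS_RULES = {
    "acme": {"negative_max": 3, "positive_min": 4.5, "alert_manager_on_positive": True},
}

def classify_rating(business_id, rating, default_negative_max=2, default_positive_min=4):
    """Classify a point-of-sale rating as negative, neutral, or positive."""
    rules = BUSINESS_RULES.get(business_id, {})
    if rating <= rules.get("negative_max", default_negative_max):
        return "negative"   # e.g., alert a manager while the visitor is still on site
    if rating >= rules.get("positive_min", default_positive_min):
        return "positive"   # e.g., invite the visitor to post the review to site 110 or 112
    return "neutral"
```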
Themes
In various embodiments, techniques described herein are used to identify products, services, or other aspects of a business that reviewers perceive positively or negatively. These perceptions are also referred to herein as “themes.” One example of a theme is “rude.” Another example of a theme is “salty fries.”
FIG. 30 illustrates an example of an interface as rendered in a browser. In particular, interface 3000 is an embodiment of a dashboard display (e.g., displayed to Alice when she clicks on link 3002). As will be described in more detail below, a variety of techniques can be used to determine themes that are common across reviews, as well as their sentiment (e.g., positive, negative, or neutral). In various embodiments, system 102 is configured to use a rating accompanying a review when assigning sentiment, rather than (or in addition to) an underlying connotation of a term.
As one example, the phrase, “sales tactics” might carry a negative (or neutral) connotation in typical conversational use. If an author of a five (out of five) star review uses the expression, however, the author is likely indicating that “sales tactics” were a positive thing encountered about the business being reviewed. As another example, the term, “rude,” has a negative connotation in typical conversational use. Its presence in a five star review can indicate that rudeness at a given establishment is not a problem. As yet another example, the term, “cheap,” can have a positive or neutral connotation (e.g., indicating something is inexpensive) but can also have a negative connotation (e.g., “cheap meat” or “cheap quality”). A rating accompanying a review can be used to determine whether “cheap” is being used as a pejorative term. As yet another example, the phrase, “New Mexico is not known for its sushi,” would typically be considered to express a negative sentiment (e.g., when analyzed using traditional sentiment analysis techniques). Where the phrase appears in a 5 star review, however, the author is likely expressing delight at having found a good sushi restaurant in New Mexico. Using the techniques described herein, the review author's sentiment (positive) will accurately be reflected in determining sentiment for a theme, such as “food” for the sushi restaurant being reviewed.
In the example shown in FIG. 30, Alice is viewing an overview map of all ACME stores that indicates how the stores are perceived with respect to customer service. In some embodiments, each of the headings included in region 3036 is an example of a theme (e.g., “Environment” and “Speed”). In other embodiments, themes are the most common terms with respect to a given category (e.g., with “Knowledgeable” and “Rude” being examples of themes in the category of customer service). In some embodiments, both the keywords, and any parents of the keywords in a hierarchy, are considered to be themes, with some themes being more specific (e.g., “dirty floor”) than others (e.g., “cleanliness”).
As indicated in region 3004, across all of the 2,000 ACME stores in the United States, the staff at ACME is perceived positively as being nice (3006), knowledgeable (3008), and providing a good returns process (3010). The areas in which ACME is perceived most negatively (with respect to customer service) are that the staff is rude (3012), the checkout process has issues (3014), and that the employees are too busy (3016). The positive and negative terms listed in region 3004 are examples of themes having their indicated respective sentiments.
If Alice clicks on region 3020, she will see the most prevalent positive and negative terms associated with the value provided by ACME. If she clicks on region 3018, she will see the most prevalent positive and negative overall terms associated with ACME, across all reviews. In some embodiments, the types of themes that are presented in interface 3000 are pre-selected—whether based on a template, based on the selections of an administrator, or otherwise selected, such as based on the industry of the reviewed entity. A car dealership, for example, can be evaluated with respect to “parts department” oriented themes, while a restaurant can be evaluated with respect to “food” oriented themes (without evaluating the restaurant with respect to parts or the dealership with respect to food). Both types of business can be evaluated with respect to common business elements (e.g., “cleanliness” and/or “value”). As another example, Alice can customize which types of themes are presented in interface 3000. In other embodiments, which themes are presented in interface 3000 depends, at least in part, on the review information associated with the entity. For example, as will be described in more detail below, themes can be organized into hierarchies. Those themes in the hierarchy that are more prevalent in reviews can be surfaced automatically in addition to/instead of being included (e.g., in region 3036) by default.
Interface 3000 depicts, in region 3022, the top rated states (with respect to customer service) and the most common positive (3024) and negative (3026) terms that appear in their respective reviews. If Alice clicks on icon 3038, the bottom ranked states (and their terms) will be displayed first.
Map 3028 depicts, based on color, whether the stores in a given state are viewed, with respect to customer service, positively (e.g., 3030), negatively (e.g., 3032), or neutrally (e.g., 3034). Suppose Alice clicks on California (3032). She will then be presented with interface 3100 as illustrated in FIG. 31, which includes a more detailed view for the ACME stores in that state. As with region 3036 of FIG. 30, region 3102 depicts summary information with respect to overall perception (3104), and perception within six specific areas (3106). In particular, region 3104 shows that ACME's California stores are ranked 39th in the country, and that overall, the most positive aspects of the California stores are that shopping at them is fast and convenient, and that the stores have a good selection. Overall, the most negative aspects of the California stores are that employees are rude, shoppers are kept waiting, and the stores are dirty.
In region 3108, the highest ranked stores in California are listed, along with their respective most prevalent positive and negative terms. If Alice clicks on icon 3110, the worst ranked stores will be listed first. Alice can see the individual reviews mentioning a given term, for a given store, by clicking on the term shown in region 3108. As one example, suppose Alice would like to see the reviews that mentioned ACME's “friendly” clerks at the store located on Highway 1. She clicks on region 3112 and is presented with the popup displayed in interface 3200 in FIG. 32.
According to region 3206 of interface 3200, a total of 21 reviews of the ACME store located at 140 Highway 1 in California contain the word “friendly.” The reviews are sorted in reverse date order, and the term, “friendly,” is highlighted in each review (e.g., at 3202 and 3204).
In some cases, particularly where information for a specific location is reviewed, surprising results may occur. As one example, a given store may have an employee (e.g., “Jeff”) who is mentioned multiple times in reviews. Using the techniques described herein (e.g., the NLP processing techniques described below), keywords such as “Jeff” will surface as themes. Where the theme has a positive sentiment, this can indicate that Jeff is a great employee. Where the theme has a negative sentiment, this can indicate that Jeff is a problematic employee. As will be described in more detail below, smoothing techniques can be applied so that where a company has received only a handful of reviews about Jeff, he will not surface as a “theme.” As another example, in most parts of the United States, a review of a hotel or an apartment that includes the word, cockroach, is highly likely to be expressing negative sentiment. Typical people only think about/mention cockroaches when they have had a negative experience. In the Southeast, however, the mere presence of the term, cockroach, does not mean that the reviewer is authoring a negative review. In a region full of palmetto bugs, the author might be commenting favorably on how the hotel manager or landlord has managed the presence of such creatures.
FIG. 33 illustrates an alternate example of a popup display of reviews including a term. In particular, interface 3300 shows, to an administrator of a car dealership franchise's account on platform 102, reviews at various locations that include the term, “tactic.” As indicated by the star ratings accompanying the reviews, the term, “tactic” is present in both positive (e.g., 3302) and negative (e.g., 3304) reviews.
Returning to FIG. 30, if Alice clicks on tab 3040, she will be presented with interface 3400 as shown in FIG. 34. Interface 3400 displays, for each ACME store, numerical indications of each store's average rating with respect to each theme (or category of themes, as applicable). If Alice clicks on tab 3042 of interface 3400, she will see ACME's data compared against the data of competitor convenience stores. In various embodiments, Alice can specify what types of competitor data should be shown. For example, Alice can compare ACME's ratings with respect to given themes against industry averages and/or against specific competitors. This can be particularly insightful in certain industries, such as telephone carriers or airlines, where people frequently write reviews only when they are upset. Themes of “broken charger” or “lost baggage” are likely to be surfaced, with negative sentiment, for any business in the industry. Being able to determine whether the number of complaints/severity of negative sentiment pertaining to baggage handling is higher or lower than for competitors may be more useful to a representative of a company than merely knowing that people are unhappy about a given aspect.
Further, Alice can specify location constraints on the competitor information—such as by specifying that she would like to compare all ACME stores against competitor stores in Denver. She can also specify that she would like to compare ACME California stores against the industry average in California (or the industry average in Texas). In some embodiments, additional tabs are included in interface 3400, for example, ones allowing Alice to compare ACME stores against one another (e.g., based on geography) and also to compare the same stores over time (e.g., determining what the most positively and negatively perceived themes were in one year vs. another for a store, a group of stores, and/or competitor/industry information).
Returning to FIG. 31, if Alice clicks on one of the addresses listed in column 3114, she will be presented with interface 3500 as shown in FIG. 35. Interface 3500 displays, for the specific ACME store she clicks on, the top positive terms and negative terms for the store (across each of the themes), associated reviews, and scores. Additional information is also presented, such as the store's rank across all other ACME stores (3502).
Assigning Sentiment to Themes
FIG. 36 illustrates an embodiment of a process for assigning sentiment to themes. In some embodiments, process 3600 is performed by theme engine 434. In portions of the following discussion, a single review will be described. However, portions of process 3600 can be repeated with respect to several, or all, reviews of an entity, whether in parallel, or in sequence. The process begins at 3602 when reputation data is received. In particular, a review having text and an accompanying score is received at 3602. One example of review text is, “The toiletries are the best thing at Smurfson Hotels,” with a score provided by the author of the review of 5.
In some embodiments, reputation data is received by system 102 in conjunction with the processing performed at 506 in process 500. In this scenario, process 3600 is performed when/as data is ingested into system 102. In some embodiments, process 3600 is performed asynchronously to process 500. For example, process 3600 can be performed nightly, weekly, or in response to an arbitrary triggering event (examples of which are described above in conjunction with discussion of FIGS. 4 and 5).
At 3604, a determination of one or more keywords is made, using the review's text. A variety of techniques can be used to make the determination at 3604. As one example, every word in the review (i.e., “The,” “toiletries,” “are,” . . . ) can be treated as a distinct keyword. As another example, varying amounts of natural language processing (NLP) can be employed. For example, articles or other parts of speech can be skipped, only those words that are nouns and adjectives can be extracted as keywords, stemming/normalization can be applied, etc. Additional detail regarding the use of NLP in various embodiments is described in more detail below.
In some embodiments, ontologies 436 are used in determining keywords at 3604. Ontologies can be created by an administrator, obtained from a third party (e.g., a parts listing), and/or can be at least partially automatically generated from existing review data (e.g., by performing term frequency analysis, NLP, etc.). In some embodiments, users of system 102 can customize/supplement the ontologies used. For example, if a particular business offers trademarked products for sale, those trademarked goods can be included in an ontology associated with that business. As another example, a master set of terms can be used (e.g., for all/major business types), and refinement sets combined with the master set as applicable (e.g., refinements for hotels; refinements for restaurants). In some cases, such refinements may be added to the master set(s) and used for processing reviews. In other cases, some refinements may override portions of the master set(s). As yet another example, blacklists (whether global, industry specific, or specific to a given company) can be used to exclude certain terms from consideration as keywords at 3604. Examples of excerpts of ontologies are depicted in FIGS. 37A and 37B.
FIG. 37A is an excerpt of an ontology for use in processing reviews of medical practices. The ontology includes substitutions (e.g., synonyms and typo corrections), and is hierarchical. For example, if a reviewer uses the term “physician,” “doc,” “MD,” or “docktor,” in a review (3702), theme engine 434 will substitute the term, “doctor” in its processing (i.e., as if the author had used the term, doctor). Substitutions are indicated in FIG. 37A as pairs where the right item appears in lowercase. In the case of an ontology for a car dealer, terms such as “car,” “cars,” “automobile,” “automobiles,” and “autos,” could similarly be collapsed.
Other terms are not necessarily synonyms (though they can be), but refer to or are associated with the same concept within a hierarchy (also referred to herein as a “category” and a “type of theme”). As one example, review comments that refer to the “lobby,” “reception,” “waiting area,” and “magazines” (3704) each refer to an aspect of the front portion of a medical practice. As another example (not shown), the terms “price,” “bargain,” “ripoff,” “cost,” “charged,” and “bill” can all be treated as references to the value provided by a business.
The hierarchical relationship between terms in the ontology is indicated in FIG. 37A as pairs where the right item is denoted in uppercase. As shown in region 3706, any reviews pertaining to “PARKING,” “BATHROOM,” or “LOBBY,” pertain (more generally) to the “ENVIRONMENT” of a medical practice.
FIG. 37B is an excerpt of an ontology for use in processing reviews of a specific restaurant. Some of the terms associated with the “FOOD” category are common ingredients, such as “mayo” (3708) and “pickle” (3710). Other entries are generic names for menu items such as “apple pie” (3712) and yet other entries are trademarked names for items unique to the specific restaurant, such as “BlueCool” and “SpiffBurger” (3714). Yet other “FOOD” words are not nouns, but are instead adjectives that reflect how people perceive food, such as that it is “bland,” “burnt,” “salty,” and “watery” (3716). The remaining examples of “FOOD” words shown in FIG. 37B are even more conceptual, such as “addictive” and “artery” (clogging) (3718). Terms associated with other categories are also shown, such as terms pertaining to the environment at the restaurant and the service provided by the restaurant. Note that in some cases, antonyms are included in the ontology. For example, both “clean” and “dirty” (3720 and 3722) are categorized as pertaining to “ENVIRONMENT.” And, both “polite” and “rude” (3724 and 3726) are categorized as pertaining to “SERVICE.”
The lists of words included in ontologies 37A and 37B are example excerpts. In practice, ontologies can include significantly more terms. As one example, an ontology for use with car repair businesses could include, by name, every part of a car (e.g., to help analyze reviews referring to specific parts, such as “my gasket broke,” or “I needed a replacement carburetor”). Further, the same term can be differently associated with different themes, such as based on industry usage. As one example, “patient” in the ontology of FIG. 37A (3730) is placed in a “PATIENT” hierarchy—referring to the customer of a doctor. “Patient” in the ontology of FIG. 37B (3728) is placed in the “SERVICE” hierarchy—referring to the patience of staff (or the patience of patrons).
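A minimal sketch of how the substitutions and hierarchy excerpted in FIGS. 37A and 37B might be represented and applied when determining keywords at 3604; the dictionaries below hold only a few excerpted terms (multi-word entries such as “apple pie” would need phrase matching and are omitted), and the blacklist handling is an illustrative assumption.

```python
# Substitutions collapse synonyms/typos into a canonical keyword (lowercase targets);
# PARENTS maps a keyword to its category in the hierarchy (uppercase targets).
SUBSTITUTIONS = {"physician": "doctor", "doc": "doctor", "md": "doctor", "docktor": "doctor"}
PARENTS = {
    "parking": "ENVIRONMENT", "bathroom": "ENVIRONMENT", "lobby": "ENVIRONMENT",
    "clean": "ENVIRONMENT", "dirty": "ENVIRONMENT",
    "spiffburger": "FOOD", "salty": "FOOD", "pickle": "FOOD",
    "polite": "SERVICE", "rude": "SERVICE",
}
BLACKLIST = {"experience", "day", "time"}  # vacuous terms excluded from consideration
KEYWORDS = set(PARENTS) | set(SUBSTITUTIONS.values())  # canonical ontology keywords

def themes_for_review(text):
    """Normalize review text into ontology keywords plus their parent themes."""
    themes = set()
    for raw in text.lower().split():
        word = raw.strip(".,!?\"'")
        if word in BLACKLIST:
            continue
        word = SUBSTITUTIONS.get(word, word)   # collapse synonyms/typos
        if word in KEYWORDS:
            themes.add(word)                   # the keyword itself is a (specific) theme
            if word in PARENTS:
                themes.add(PARENTS[word])      # as is its parent category, when known
    return themes
```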
Returning to the process of FIG. 36, once keywords are determined (3604), sentiment is assigned for one or more themes associated with the keywords based at least in part on the review score. A variety of techniques can be used to assign sentiment. One example is discussed in conjunction with FIG. 38.
FIG. 38 illustrates an example of sentiment being assigned to themes based on three reviews. In particular, the ontology shown in FIG. 37B is used to identify keywords in the reviews (i.e., the processing of 3604). Those terms appearing in the ontology have been underlined in FIG. 38. Attached to each underlined term (with dotted lines) is a pair of terms and values. Using term 3802 as an example, the term, “SpiffBurger” was located in review A. Review A is a 3 star review. For Review A, the term, “SpiffBurger,” is assigned 3 stars, as is the “FOOD” category to which it belongs. The term, “pickles,” is also assigned 3 stars, as is the “FOOD” category to which “pickles” belongs. Thus, each term included in the review that is also in the ontology shown in FIG. 37B is assigned a value that corresponds to the overall review rating provided by the author of the review (i.e., “3 stars,” or “neutral”). Further, any parents/grandparents in the hierarchy (i.e., “FOOD”) of those terms are also assigned the overall review rating (i.e., for Review A, “FOOD” receives a value of “3 stars” or “neutral”).
Review B is a 2 star review. In Review B, in addition to terms associated with FOOD, terms associated with ENVIRONMENT are present. Each of the underlined terms is assigned a value that corresponds to the overall review rating provided by the author of the review (i.e., “2 stars” or “negative”). Further, “FOOD” and “ENVIRONMENT” are also assigned a score of 2.
Review C is a 5 star review. In review C, in addition to terms associated with FOOD, terms associated with VALUE and SERVICE are present. Each of the underlined terms, and those categories to which the terms belong, are assigned a value of 5. Note that the reviewed “SpiffBurger” was not to the reviewer's liking. However, it (and FOOD) received a score of “5 stars” (or “positive”) because the overall review was a 5.
As mentioned above, a variety of techniques can be used to assign sentiment to themes (3606). As one example, the point value assigned to each term (e.g., “SpiffBurger”) and to any parents of a term (e.g., “FOOD”) could be summed and then subjected to additional processing such as normalization and/or the application of thresholds. Using the example of FIG. 38, suppose each mention is assigned the rating score of the review in which it appears, and then an average across all mentions is taken. The theme, “SpiffBurger,” would have a (positive) sentiment score of 4: (3 points awarded from the first review, 5 points awarded from the third review, and an average of 8/2=4). The term, “apple pie” would have a (negative) sentiment score of 2: (2 points awarded from the second review (a single review)). The term, “pickles,” would have a (neutral) score of 3: (3 points awarded from the first review (a single review)). Since the terms “apple pie” and “pickles” each appear in only a single review, in some embodiments those terms are excluded from being considered “themes,” because an insufficient number of reviewers have seen fit to comment on them.
The score for the concept, FOOD, can also be determined in a variety of ways. As one example, because two distinct food items are mentioned in the first review, the value for FOOD could be counted twice (i.e., (3+3 (for review A)+2+2 (for review B)+5 (for review C))/5 mentions=3). As another example, multiple mentions within a single review of a term (or its parent categories, by extension) could be collapsed into a single instance. In this scenario, FOOD would receive a total raw score of (3+2+5)/3. FIG. 39 illustrates an example of a process for assigning a sentiment to a theme. In particular, process 3900 can be used to assign a sentiment to the theme, FOOD, based on the presence of keywords such as “SpiffBurger” and “salty” across multiple reviews.
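A minimal sketch of the averaging just described, assuming each review has already been reduced to its star rating and a list of ontology themes (keywords plus their parent categories, e.g., via the sketch following FIG. 37B above); the collapse_per_review flag switches between the two counting choices discussed above, and min_reviews implements the exclusion of themes mentioned in too few reviews.

```python
from collections import defaultdict
from statistics import mean

def theme_scores(reviews, min_reviews=2, collapse_per_review=True):
    """Average the rating of every review (or mention) in which a theme appears.

    `reviews` is assumed to be a list of (rating, list_of_themes) pairs.
    With collapse_per_review=True, repeated mentions inside one review count once.
    """
    ratings_by_theme = defaultdict(list)
    for rating, themes in reviews:
        occurrences = set(themes) if collapse_per_review else themes
        for theme in occurrences:
            ratings_by_theme[theme].append(rating)
    return {
        theme: mean(ratings)
        for theme, ratings in ratings_by_theme.items()
        if len(ratings) >= min_reviews  # drop themes seen in too few reviews
    }
```

For the three reviews of FIG. 38, with per-review collapsing, “SpiffBurger” averages (3+5)/2=4 and FOOD averages (3+2+5)/3.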
Returning to process 3600, after the scores have been computed, those themes with the highest scores are the most “positive” themes, and those with the lowest scores are the most “negative” themes. Additional approaches to assigning sentiment are described below.
Smoothing of Positivity
A variety of alternate and/or more sophisticated scoring approaches can also be used to assign sentiment to themes at 3606. As one example, every keyword extracted from a set of reviews (e.g., per 3604) can be given a “Positivity score” based on the number of positive (“Pos”: 4 or 5 stars), neutral (“Neut”: 3 stars), and negative (“Neg”: 1 or 2 stars) reviews as follows:
Positivity=(5+Pos+0.5*Neut)/(10+Pos+Neut+Neg).
This counts each Pos review as 1 positive vote and each Neut review as ½ of a positive vote. A presumption exists that each item begins with 5 positive votes and 5 negative votes. That way, items with a high percentage of positive or negative reviews will not return extreme values of positivity if the number of reviews is small. A table of example positivity calculations is shown in FIG. 40.
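The smoothed calculation can be expressed directly; the example values below correspond to the “FOOD” row discussed in connection with FIG. 41A.

```python
def positivity(pos, neut, neg):
    """Smoothed positivity per the formula above: each positive review counts as one
    positive vote, each neutral review as half a vote, and every keyword starts with
    5 positive and 5 negative pseudo-votes so sparse keywords avoid extreme scores."""
    return (5 + pos + 0.5 * neut) / (10 + pos + neut + neg)

# A keyword appearing in 212 positive, 134 neutral, and 282 negative reviews
# scores (5 + 212 + 67) / 638, i.e., approximately 0.45.
food_positivity = positivity(212, 134, 282)
```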
FIGS. 41A-41C are portions of tables of themes and scores for an example restaurant. The first column in each table lists keyword/parent categorizations (e.g., obtained at 3604 for all reviews of the restaurant). The second column of each table lists the number of positive reviews in which the term (or its child) appears. The third column of each table lists the number of neutral reviews in which the term (or its child) appears. The fourth column of each table lists the number of negative reviews in which the term (or its child) appears. The fifth column of each table lists the total number of reviews in which the term (or its child) appears. The final column is a positivity calculation for the term, (e.g., in accordance with the formula given above or other appropriate techniques).
FIG. 41A lists the most common themes across all reviews of the restaurant, irrespective of sentiment. The table is sorted on column five. Terms related to “FOOD” (4102) were the most prevalent (present in a total of 628 reviews: 212 positive, 134 neutral, and 282 negative). “FOOD” has a positivity score of 0.45.
FIG. 41B lists the most prevalent negative themes in reviews, as sorted by positivity score. The most notorious aspect of the restaurant is its “management,” (4104) which appears in a single positive review, three neutral reviews, and thirty-eight negative reviews. The next most notorious aspect of the restaurant is the rudeness of its employees (4106).
FIG. 41C lists the most prevalent positive themes in reviews, as sorted by positivity score. Reviewers like the restaurant's “Tuesday” offerings the most (4108), followed by the beers the restaurant has on tap (4110).
In some embodiments, additional processing is performed prior to using information such as is shown in FIGS. 41A-41C as input to interfaces/reports such as are shown in FIG. 30. As one example, an administrator reviewing the table shown in FIG. 41C may decide that some of the terms, such as “yum” (4112) and “yummy” (4114), should be collapsed into a single term (e.g., “yum”) or merged with an existing term (e.g., “tasty”). The administrator might also decide that certain terms aren't probative (i.e., are vacuous terms) and should be removed (e.g., “yum” and “yummy” should be ignored). Additional examples of vacuous terms include terms such as “experience,” “day,” and “time.” Such modifications can be accomplished in a variety of ways. For example, the administrator can edit the ontology to map “yum” and “yummy” to “tasty.” The administrator can also create or edit an existing blacklist to include those terms, so that they are not used as themes in the future. In some embodiments, system 102 makes available an interface that allows an end user, such as Alice, to manipulate which terms are included (e.g., in an ontology) or excluded (e.g., in a blacklist) without needing administrator privileges.
Natural Language Processing
In some embodiments, theme engine 434 is configured to use NLP, such as to identify themes and to perform review deduplication. As one example, theme engine 434 can be configured to use the GATE modules ANNIE and OpenNLP, in conjunction with performing additional NLP processing.
FIG. 42 illustrates an example of a sentence included in a review. The sentence, “The toiletries are the best thing at Smurfson Hotels,” is processed by three NLP engines. The processing performed by ANNIE is shown in region 4202. Each line represents a “token,” a unit of meaning: a word or phrase having a single meaning. “Surface” is the word exactly as it appears in the review. “Lemma” is the dictionary form of the word (e.g., the singular form of a noun or the infinitive of a verb). “POS” is the Part of Speech, from a set of tags in the Penn Treebank Tag Set. “Entity” is the Named Entity type, which is given only to proper nouns. These types are: Person, Location, Organization, Date, JobTitle, or Unknown. Instead of or in addition to using existing keyword ontologies, in some embodiments theme engine 434 is configured to use NLP techniques to identify keywords. For example, the output of ANNIE can be used to generate a list of keywords, e.g., based on parts of speech, and used by theme engine 434 in conjunction with process 3600 or 3900.
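A minimal sketch of part-of-speech-based keyword extraction, using NLTK as a stand-in for GATE/ANNIE (the substitution of NLTK is an assumption made purely for illustration); nouns and adjectives, identified by their Penn Treebank tags, are kept as candidate keywords.

```python
import nltk
# One-time setup (version-dependent), e.g.:
# nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

def candidate_keywords(sentence):
    """Return lowercase nouns and adjectives from a review sentence."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))  # Penn Treebank POS tags
    return [word.lower() for word, tag in tagged
            if tag.startswith("NN") or tag.startswith("JJ")]

print(candidate_keywords("The toiletries are the best thing at Smurfson Hotels"))
# Likely output: ['toiletries', 'best', 'thing', 'smurfson', 'hotels']
```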
The processing performed by OpenNLP is shown in region 4204. The “S” line represents a clause, which is a larger unit of structure that has at least a subject and a predicate, a thing doing something. The remaining lines are phrases, which serve distinct roles in the clause. These are shown preceded by tags which are also from the Penn Treebank Tag Set. The indentation shows the hierarchical structure by which a phrase is a component of another phrase.
Finally, additional processing performed by theme engine 434 is shown in region 4206. The analysis performed in region 4206 turns the OpenNLP analysis into “Subject Verb Object” structure. In the example shown, the “Agent” is similar to the subject of a clause, the “Predicate” is similar to the verb, and the “Patient” is similar to the direct object. Additional examples of the processing performed on two other sentences are shown in FIGS. 43 and 44.
Deduplication
In some embodiments, theme engine 434 is configured to perform deduplication on reviews (e.g., prior to determining sentiments for themes). Deduplication can be performed to minimize the ability of reviewers to spam system 102 with duplicate reviews/reviews that reuse phrases. A business might seek to bolster its reputation by creating several artificial positive reviews for itself. A business might also seek to discredit a competitor by creating several artificial negative reviews for the competitor. Duplicate reviews may be wholesale copies of one another, or may have slight alterations, e.g., a different introduction or conclusion, but with common sentences/clauses.
In some embodiments, deduplication is performed as follows. An identifier is assigned to each specific sentence and clause. One way to do this is to use a low-level Java operator that hashes each string such that two different strings are very unlikely to produce the same hash. Each item extracted from a review is assigned a hash for the sentence from which it was derived, and, if a clause structure is successfully identified, another hash is generated for the clause.
Extractions from the sample sentences depicted in FIGS. 42-44 are shown in FIG. 45. In various embodiments, when processes such as process 3600 and 3900 are performed, and/or when the data feeding reports such as the one shown in interface 3000 is collected, review deduplication is performed. In particular, items are counted on the basis of the number of occurrences that are unique in all fields. Therefore, six extractions for NOM-Smurfson Hotels-neut with different hash codes count as six such items. If either hash code is shared among the extractions, they are counted as a single item, preventing duplicate text from being counted multiple times.
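A minimal sketch of the hash-based counting just described, using a Python digest in place of the Java string hash; the extraction record fields ('item', 'sentence', 'clause') are illustrative assumptions.

```python
import hashlib

def fingerprint(text):
    """Stable identifier for a sentence or clause; distinct strings are very
    unlikely to share a fingerprint."""
    return hashlib.sha1(text.strip().lower().encode("utf-8")).hexdigest()

def dedupe_extractions(extractions):
    """Count an extraction only once when its item was already seen with the
    same sentence hash or the same clause hash (i.e., duplicated text)."""
    seen_sentences, seen_clauses, unique = set(), set(), []
    for e in extractions:
        s_key = (e["item"], fingerprint(e["sentence"]))
        c_key = (e["item"], fingerprint(e["clause"])) if e.get("clause") else None
        if s_key in seen_sentences or (c_key and c_key in seen_clauses):
            continue  # duplicate sentence or clause text: do not count again
        seen_sentences.add(s_key)
        if c_key:
            seen_clauses.add(c_key)
        unique.append(e)
    return unique
```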
Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.

Claims (17)

What is claimed is:
1. A system, comprising:
a processor configured to:
receive reputation data extracted from at least one data source, wherein the reputation data includes user-authored reviews pertaining to an entity and wherein a review comprises text and an accompanying rating;
for a first review included in the reputation data, identify a theme in the first review that is associated with an aspect of the entity at least in part by determining at least one keyword using the first review's text;
assign a first sentiment for the identified theme at least in part by using the first review's accompanying rating;
assign a sentiment for a parent theme of the identified theme at least in part by using the first review's accompanying rating;
for a second review included in the reputation data that includes the determined at least one keyword and in which the theme is identified, assign a second sentiment for the identified theme based at least in part on the second review's accompanying rating; and
provide as output a report that includes the identified theme, a combination of the respective first and second sentiments for that identified theme, the parent theme, and the sentiment for the parent theme; and
a memory coupled to the processor and configured to provide the processor with instructions.
2. The system of claim 1 wherein assigning the first sentiment for the identified theme includes assigning a sentiment for the determined at least one keyword.
3. The system of claim 1 wherein the combination comprises an aggregation.
4. The system of claim 1 wherein the determined at least one keyword included in the first review is a first keyword and wherein, for a third review that includes a second keyword that is associated with the identified theme and is included in the reputation data, the processor is configured to:
assign a third sentiment for the identified theme based at least in part on the third review's accompanying rating, wherein the report includes a combination of the respective first, second, and third sentiments for the identified theme.
5. The system of claim 1 wherein the processor is further configured to detect whether the review data includes cloned reviews.
6. The system of claim 5 wherein the detection is based at least in part on a determination that the first review and a second review include an identical clause.
7. The system of claim 1 wherein determining the at least one keyword includes applying a blacklist filter.
8. The system of claim 1 wherein the processor is further configured to generate a report that indicates, for a given entity, a set of keywords present in a plurality of user-authored reviews.
9. The system of claim 1 wherein the processor is further configured to generate a report that indicates, for the entity, a comparison of a sentiment score for a given keyword compared against an industry average sentiment score for the keyword.
10. The system of claim 1 wherein the processor is further configured to generate a report that indicates, for the entity, a comparison of the entity's score for a given keyword against a score of a competitor for the keyword.
11. The system of claim 1 wherein the processor is further configured to generate a report that indicates, for two locations associated with the entity, a comparison of the first location's score for a given keyword against the second location's score for the keyword.
12. The system of claim 1 wherein the processor is further configured to locate the determined at least one keyword within a hierarchy of keywords and to assign a sentiment to at least one parent of the keyword in the hierarchy based at least in part on the first review's accompanying rating.
13. The system of claim 1 wherein the processor is further configured to perform positivity smoothing prior to generating a report.
14. The system of claim 1 wherein assigning a first sentiment includes assigning a sentiment score.
15. The system recited in claim 1, wherein assigning the sentiment includes assigning a value to the identified theme, the value corresponding at least in part to the first review's accompanying rating.
16. A method, comprising:
receiving reputation data extracted from at least one data source, wherein the reputation data includes user-authored reviews pertaining to an entity and wherein a review comprises text and an accompanying rating;
for a first review included in the reputation data, identifying a theme in the first review that is associated with an aspect of the entity at least in part by determining, using a processor, at least one keyword using the first review's text;
assigning a first sentiment for the identified theme at least in part by using the first review's accompanying rating;
assigning a sentiment for a parent theme of the identified theme at least in part by using the first review's accompanying rating;
for a second review included in the reputation data that includes the determined at least one keyword and in which the theme is identified, assigning a second sentiment for the identified theme based at least in part on the second review's accompanying rating; and
providing as output a report that includes the identified theme, a combination of the respective first and second sentiments for that identified theme, the parent theme, and the sentiment for the parent theme.
17. A computer program product embodied in a non-transitory tangible computer readable storage medium and comprising computer instructions for:
receiving reputation data extracted from at least one data source, wherein the reputation data includes user-authored reviews pertaining to an entity and wherein a review comprises text and an accompanying rating;
for a first review included in the reputation data, identifying a theme in the first review that is associated with an aspect of the entity at least in part by determining at least one keyword using the first review's text;
assigning a first sentiment for the identified theme at least in part by using the first review's accompanying rating;
assigning a sentiment for a parent theme of the identified theme at least in part by using the first review's accompanying rating;
for a second review included in the reputation data that includes the determined at least one keyword and in which the theme is identified, assigning a second sentiment for the identified theme based at least in part on the second review's accompanying rating; and
providing as output a report that includes the identified theme, a combination of the respective first and second sentiments for that identified theme, the parent theme, and the sentiment for the parent theme.
US13/842,159 2012-06-29 2013-03-15 Assigning sentiment to themes Active US8918312B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/842,159 US8918312B1 (en) 2012-06-29 2013-03-15 Assigning sentiment to themes

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201261666586P 2012-06-29 2012-06-29
US201261747340P 2012-12-30 2012-12-30
US13/842,159 US8918312B1 (en) 2012-06-29 2013-03-15 Assigning sentiment to themes

Publications (1)

Publication Number Publication Date
US8918312B1 true US8918312B1 (en) 2014-12-23

Family

ID=52101907

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/842,376 Active US11093984B1 (en) 2012-06-29 2013-03-15 Determining themes
US13/842,159 Active US8918312B1 (en) 2012-06-29 2013-03-15 Assigning sentiment to themes
US17/364,643 Pending US20220027395A1 (en) 2012-06-29 2021-06-30 Determining themes

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/842,376 Active US11093984B1 (en) 2012-06-29 2013-03-15 Determining themes

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/364,643 Pending US20220027395A1 (en) 2012-06-29 2021-06-30 Determining themes

Country Status (1)

Country Link
US (3) US11093984B1 (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120246093A1 (en) * 2011-03-24 2012-09-27 Aaron Stibel Credibility Score and Reporting
US20140046958A1 (en) * 2012-07-10 2014-02-13 Todd Tucker Content management system
US20140324885A1 (en) * 2013-04-25 2014-10-30 Trent R. McKenzie Color-based rating system
US20140379682A1 (en) * 2013-06-19 2014-12-25 Alibaba Group Holding Limited Comment ranking by search engine
US20150356179A1 (en) * 2013-07-15 2015-12-10 Yandex Europe Ag System, method and device for scoring browsing sessions
US20160267071A1 (en) * 2015-03-12 2016-09-15 International Business Machines Corporation Entity Metadata Attached to Multi-Media Surface Forms
US9477704B1 (en) * 2012-12-31 2016-10-25 Teradata Us, Inc. Sentiment expression analysis based on keyword hierarchy
WO2017177222A1 (en) * 2016-04-08 2017-10-12 BPU International, Inc. A system and method for searching and matching content over social networks relevant to an individual
US9922352B2 (en) * 2016-01-25 2018-03-20 Quest Software Inc. Multidimensional synopsis generation
US10235336B1 (en) * 2016-09-14 2019-03-19 Compellon Incorporated Prescriptive analytics platform and polarity analysis engine
US20200151278A1 (en) * 2018-11-13 2020-05-14 Bizhive, Llc Online reputation monitoring and intelligence gathering
US10831790B2 (en) * 2018-01-25 2020-11-10 International Business Machines Corporation Location based data mining comparative analysis index
US11003708B2 (en) 2013-04-25 2021-05-11 Trent R. McKenzie Interactive music feedback system
US11068758B1 (en) 2019-08-14 2021-07-20 Compellon Incorporated Polarity semantics engine analytics platform
US20210342864A1 (en) * 2020-04-30 2021-11-04 Robert Bosch Gmbh System and method for evaluating black-box recommendation systems in infotainment systems
US11423077B2 (en) 2013-04-25 2022-08-23 Trent R. McKenzie Interactive music feedback system
US11544307B2 (en) * 2018-04-26 2023-01-03 Panasonic Intellectual Property Corporation Of America Personnel selecting device, personnel selecting system, personnel selecting method, and recording medium
US20230134796A1 (en) * 2021-10-29 2023-05-04 Glipped, Inc. Named entity recognition system for sentiment labeling
US11675790B1 (en) * 2022-04-01 2023-06-13 Meltwater News International Holdings Gmbh Computing company competitor pairs by rule based inference combined with empirical validation
US11743544B2 (en) 2013-04-25 2023-08-29 Trent R McKenzie Interactive content feedback system
US20230289377A1 (en) * 2022-03-11 2023-09-14 Tredence Inc. Multi-channel feedback analytics for presentation generation

Citations (204)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5819258A (en) 1997-03-07 1998-10-06 Digital Equipment Corporation Method and apparatus for automatically generating hierarchical categories from large document collections
US5857179A (en) 1996-09-09 1999-01-05 Digital Equipment Corporation Computer method and apparatus for clustering documents and automatic generation of cluster keywords
US5873081A (en) 1997-06-27 1999-02-16 Microsoft Corporation Document filtering via directed acyclic graphs
US5956693A (en) 1996-07-19 1999-09-21 Geerlings; Huib Computer system for merchant communication to customers
US5987457A (en) 1997-11-25 1999-11-16 Acceleration Software International Corporation Query refinement method for searching documents
US6006218A (en) 1997-02-28 1999-12-21 Microsoft Methods and apparatus for retrieving and/or processing retrieved information as a function of a user's estimated knowledge
US6178419B1 (en) 1996-07-31 2001-01-23 British Telecommunications Plc Data access system
US6182066B1 (en) 1997-11-26 2001-01-30 International Business Machines Corp. Category processing of query topics and electronic document content topics
WO2001046868A2 (en) 1999-12-22 2001-06-28 Accenture Llp A method for a graphical user interface search filter generator
US6324650B1 (en) 1998-03-16 2001-11-27 John W.L. Ogilvie Message content protection and conditional disclosure
US20020016910A1 (en) 2000-02-11 2002-02-07 Wright Robert P. Method for secure distribution of documents over electronic networks
US20020099598A1 (en) 2001-01-22 2002-07-25 Eicher, Jr. Daryl E. Performance-based supply chain management system and method with metalerting and hot spot identification
US20020111847A1 (en) 2000-12-08 2002-08-15 Word Of Net, Inc. System and method for calculating a marketing appearance frequency measurement
US6484068B1 (en) 2001-07-24 2002-11-19 Sony Corporation Robot apparatus and method for controlling jumping of robot device
US20020174230A1 (en) 2001-05-15 2002-11-21 Sony Corporation And Sony Electronics Inc. Personalized interface with adaptive content presentation
US20020178381A1 (en) 2001-05-22 2002-11-28 Trend Micro Incorporated System and method for identifying undesirable content in responses sent in reply to a user request for content
US20030014402A1 (en) 1999-06-25 2003-01-16 Sealand Michael D. System and method for transacting retrieval of real estate property listings using a remote client interfaced over an information network
US20030014633A1 (en) 2001-07-12 2003-01-16 Gruber Thomas Robert Method and system for secure, authorized e-mail based transactions
US6510432B1 (en) 2000-03-24 2003-01-21 International Business Machines Corporation Methods, systems and computer program products for archiving topical search results of web servers
US6513031B1 (en) 1998-12-23 2003-01-28 Microsoft Corporation System for improving search area selection
US6532459B1 (en) 1998-12-15 2003-03-11 Berson Research Corp. System for finding, identifying, tracking, and correcting personal information in diverse databases
US20030069874A1 (en) 1999-05-05 2003-04-10 Eyal Hertzog Method and system to automate the updating of personal information within a personal information management application and to synchronize such updated personal information management applications
US20030093260A1 (en) 2001-11-13 2003-05-15 Koninklijke Philips Electronics N.V. Apparatus and method for program selection utilizing exclusive and inclusive metadata searches
US20030135725A1 (en) 2002-01-14 2003-07-17 Schirmer Andrew Lewis Search refinement graphical user interface
US20030147536A1 (en) 2002-02-05 2003-08-07 Andivahis Dimitrios Emmanouil Secure electronic messaging system requiring key retrieval for deriving decryption keys
US6611825B1 (en) 1999-06-09 2003-08-26 The Boeing Company Method and system for text mining using multidimensional subspaces
US20030164849A1 (en) 2002-03-01 2003-09-04 Iparadigms, Llc Systems and methods for facilitating the peer review process
US20030172014A1 (en) 2000-09-01 2003-09-11 Chris Quackenbush System and method for online valuation and analysis
US20030208388A1 (en) 2001-03-07 2003-11-06 Bernard Farkas Collaborative bench mark based determination of best practices
US20030229668A1 (en) 2002-06-07 2003-12-11 Malik Dale W. Systems and methods for delivering time sensitive messages over a distributed network
US6678690B2 (en) 2000-06-12 2004-01-13 International Business Machines Corporation Retrieving and ranking of documents from database description
US20040019846A1 (en) 2002-07-24 2004-01-29 Xerox Corporation System and method for managing document retention of shared documents
US20040019584A1 (en) 2002-03-18 2004-01-29 Greening Daniel Rex Community directory
US20040032420A1 (en) 2002-08-13 2004-02-19 Allen Bradley J. Interactive benchmarking system
US20040063111A1 (en) 2000-08-25 2004-04-01 Toshikazu Shiba Method for protecting personal information
US20040078363A1 (en) 2001-03-02 2004-04-22 Takahiko Kawatani Document and information retrieval method and apparatus
US20040082839A1 (en) 2002-10-25 2004-04-29 Gateway Inc. System and method for mood contextual data output
US20040088308A1 (en) 2002-08-16 2004-05-06 Canon Kabushiki Kaisha Information analysing apparatus
US20040093414A1 (en) 2002-08-26 2004-05-13 Orton Kevin R. System for prevention of undesirable Internet content
US6754874B1 (en) 2002-05-31 2004-06-22 Deloitte Development Llc Computer-aided system and method for evaluating employees
US20040122926A1 (en) 2002-12-23 2004-06-24 Microsoft Corporation, Redmond, Washington. Reputation system for web services
US6766316B2 (en) 2001-01-18 2004-07-20 Science Applications International Corporation Method and system of ranking and clustering for document indexing and retrieval
US20040153466A1 (en) 2000-03-15 2004-08-05 Ziff Susan Janette Content development management system and method
US6775677B1 (en) 2000-03-02 2004-08-10 International Business Machines Corporation System, method, and program product for identifying and describing topics in a collection of electronic documents
US20040169678A1 (en) 2002-11-27 2004-09-02 Oliver Huw Edward Obtaining user feedback on displayed items
US20040267717A1 (en) 2003-06-27 2004-12-30 Sbc, Inc. Rank-based estimate of relevance values
US20050005168A1 (en) 2003-03-11 2005-01-06 Richard Dick Verified personal information database
US20050050009A1 (en) 2003-08-18 2005-03-03 Martha Gardner Method and system for assessing and optimizing crude selection
US20050071632A1 (en) 2003-09-25 2005-03-31 Pauker Matthew J. Secure message system with remote decryption service
US20050114313A1 (en) 2003-11-26 2005-05-26 Campbell Christopher S. System and method for retrieving documents or sub-documents based on examples
US20050160062A1 (en) 2004-01-16 2005-07-21 Howard W. B. Method to report personal security information about a person
US20050177457A1 (en) 2001-11-29 2005-08-11 Sheltz Steven P. Computerized method for the solicitation and sales of transactions
US20050177559A1 (en) 2004-02-03 2005-08-11 Kazuo Nemoto Information leakage source identifying method
US20050203795A1 (en) 2004-03-11 2005-09-15 Kristin Witzenburg Method for providing discounted media placement and marketing services to a plurality of advertisers
US20050216443A1 (en) 2000-07-06 2005-09-29 Streamsage, Inc. Method and system for indexing and searching timed media information based upon relevance intervals
US20050234877A1 (en) 2004-04-08 2005-10-20 Yu Philip S System and method for searching using a temporal dimension
US20050251536A1 (en) 2004-05-04 2005-11-10 Ralph Harik Extracting information from Web pages
US20050256866A1 (en) 2004-03-15 2005-11-17 Yahoo! Inc. Search system and methods with integration of user annotations from a trust network
US6968333B2 (en) 2000-04-02 2005-11-22 Tangis Corporation Soliciting information based on a computer user's context
US20060004716A1 (en) 2004-07-01 2006-01-05 Microsoft Corporation Presentation-level content filtering for a search result
US6985896B1 (en) 1999-02-03 2006-01-10 Perttunen Cary D Browsing methods, articles and apparatus
US20060015942A1 (en) 2002-03-08 2006-01-19 Ciphertrust, Inc. Systems and methods for classification of messaging entities
US20060026593A1 (en) 2004-07-30 2006-02-02 Microsoft Corporation Categorizing, voting and rating community threads
US20060047725A1 (en) 2004-08-26 2006-03-02 Bramson Steven J Opt-in directory of verified individual profiles
US20060042483A1 (en) 2004-09-02 2006-03-02 Work James D Method and system for reputation evaluation of online users in a social networking scheme
US20060074920A1 (en) 2002-02-13 2006-04-06 Marcus Wefers Method, software application and system for providing benchmarks
US7028026B1 (en) 2002-05-28 2006-04-11 Ask Jeeves, Inc. Relevancy-based database retrieval and display techniques
US20060116896A1 (en) 2004-08-12 2006-06-01 Fowler James F User-maintained contact information data system
US20060123348A1 (en) 2000-04-07 2006-06-08 Ross Brian D System and method for facilitating the pre-publication peer review process
US20060149708A1 (en) 2002-11-11 2006-07-06 Lavine Steven D Search method and system and system using the same
US7076558B1 (en) 2002-02-27 2006-07-11 Microsoft Corporation User-centric consent management system and method
US20060152504A1 (en) 2005-01-11 2006-07-13 Levy James A Sequential retrieval, sampling, and modulated rendering of database or data net information using data stream from audio-visual media
US20060161524A1 (en) 2005-01-14 2006-07-20 Learning Technologies, Inc. Reputation based search
US20060173828A1 (en) 2005-02-01 2006-08-03 Outland Research, Llc Methods and apparatus for using personal background data to improve the organization of documents retrieved in response to a search query
US20060174343A1 (en) 2004-11-30 2006-08-03 Sensory Networks, Inc. Apparatus and method for acceleration of security applications through pre-filtering
US20060190475A1 (en) 2004-12-20 2006-08-24 Norman Shi Group polling for consumer review
US20060212931A1 (en) 2005-03-02 2006-09-21 Markmonitor, Inc. Trust evaluation systems and methods
US7117207B1 (en) 2002-09-11 2006-10-03 George Mason Intellectual Properties, Inc. Personalizable semantic taxonomy-based search agent
US20060242554A1 (en) 2005-04-25 2006-10-26 Gather, Inc. User-driven media system in a computer network
US7130777B2 (en) 2003-11-26 2006-10-31 International Business Machines Corporation Method to hierarchical pooling of opinions from multiple sources
US20060253458A1 (en) 2005-05-03 2006-11-09 Dixon Christopher J Determining website reputations using automatic testing
US20060253580A1 (en) 2005-05-03 2006-11-09 Dixon Christopher J Website reputation product architecture
US20060253578A1 (en) 2005-05-03 2006-11-09 Dixon Christopher J Indicating website reputations during user interactions
US20060253582A1 (en) 2005-05-03 2006-11-09 Dixon Christopher J Indicating website reputations within search results
US20060253423A1 (en) 2005-05-07 2006-11-09 Mclane Mark Information retrieval system and method
US20060253584A1 (en) * 2005-05-03 2006-11-09 Dixon Christopher J Reputation of an entity associated with a content item
US20060253583A1 (en) 2005-05-03 2006-11-09 Dixon Christopher J Indicating website reputations based on website handling of personal information
US20060271524A1 (en) 2005-02-28 2006-11-30 Michael Tanne Methods of and systems for searching by incorporating user-entered information
US20060287980A1 (en) 2005-06-21 2006-12-21 Microsoft Corporation Intelligent search results blending
US20060294086A1 (en) 2005-06-28 2006-12-28 Yahoo! Inc. Realtime indexing and search in large, rapidly changing document collections
US20070027707A1 (en) 2005-08-01 2007-02-01 Murray Frank H System and methods for interactive selection of a reviewer of media content
US20070073660A1 (en) 2005-05-05 2007-03-29 Daniel Quinlan Method of validating requests for sender reputation information
US20070078670A1 (en) 2005-09-30 2007-04-05 Dave Kushal B Selecting high quality reviews for display
US20070101419A1 (en) 2005-10-31 2007-05-03 Dawson Colin S Apparatus, system, and method for providing electronically accessible personal information
US20070112760A1 (en) 2005-11-15 2007-05-17 Powerreviews, Inc. System for dynamic product summary based on consumer-contributed keywords
US20070121843A1 (en) 2005-09-02 2007-05-31 Ron Atazky Advertising and incentives over a social network
US20070121596A1 (en) 2005-08-09 2007-05-31 Sipera Systems, Inc. System and method for providing network level and nodal level vulnerability protection in VoIP networks
US20070124297A1 (en) 2005-11-29 2007-05-31 John Toebes Generating search results based on determined relationships between data objects and user connections to identified destinations
US20070130126A1 (en) 2006-02-17 2007-06-07 Google Inc. User distributed search results
US20070136430A1 (en) 2005-12-13 2007-06-14 Microsoft Corporation Delivery confirmation for e-mail
US20070150562A1 (en) 2001-10-12 2007-06-28 Stull Edward L System and method for data quality management and control of heterogeneous data sources
US20070192423A1 (en) 2006-02-04 2007-08-16 Karlson Bruce L Document reminder system
US7289971B1 (en) 1996-07-22 2007-10-30 O'neil Kevin P Personal information security and exchange tool
US20070288468A1 (en) 2006-06-09 2007-12-13 Ebay Inc. Shopping context engine
US20070294281A1 (en) 2006-05-05 2007-12-20 Miles Ward Systems and methods for consumer-generated media reputation management
US20080015928A1 (en) 2006-07-11 2008-01-17 Grayboxx, Inc. Business rating method
US20080021890A1 (en) 2004-10-29 2008-01-24 The Go Daddy Group, Inc. Presenting search engine results based on domain name related reputation
US20080033781A1 (en) 2006-07-18 2008-02-07 Jonah Holmes Peretti System and method for online product promotion
US20080065472A1 (en) 2003-12-15 2008-03-13 Edward Patrick Method and apparatus for automatically performing an online content distribution campaign
US20080071602A1 (en) 2006-08-31 2008-03-20 Yahoo! Inc. Enhanced user reviews
US20080077517A1 (en) 2006-09-22 2008-03-27 Robert Grove Sappington Reputation, Information & Communication Management
US20080077577A1 (en) 2006-09-27 2008-03-27 Byrne Joseph J Research and Monitoring Tool to Determine the Likelihood of the Public Finding Information Using a Keyword Search
US20080082687A1 (en) 2006-09-28 2008-04-03 Ryan Kirk Cradick Method, system, and computer program product for implementing collaborative correction of online content
US20080104030A1 (en) 2006-10-27 2008-05-01 Yahoo! Inc., A Delaware Corporation System and Method for Providing Customized Information Based on User's Situation Information
US20080109245A1 (en) 2006-11-03 2008-05-08 Sezwho Inc. Method and system for managing domain specific and viewer specific reputation on online communities
US20080109491A1 (en) 2006-11-03 2008-05-08 Sezwho Inc. Method and system for managing reputation profile on online communities
US20080133488A1 (en) * 2006-11-22 2008-06-05 Nagaraju Bandaru Method and system for analyzing user-generated content
US20080165972A1 (en) 2007-01-08 2008-07-10 I-Fax.Com Inc. Method and system for encrypted email communication
US20080215589A1 (en) 2006-11-10 2008-09-04 Getingate, Inc. System, Method, and Computer-Readable Medium for Collection and Distribution of User-Supplied Comments Associated with Network and Local Content
US20080281807A1 (en) 2007-05-11 2008-11-13 Siemens Aktiengesellschaft Search engine
US20080288277A1 (en) 2006-01-10 2008-11-20 Mark Joseph Fasciano Methods for encouraging charitable social networking
US20080288276A1 (en) 2007-05-18 2008-11-20 Xenosurvey, Inc. Method, Process and System for Survey Data Acquisition and Analysis
US20080306899A1 (en) 2007-06-07 2008-12-11 Gregory Michelle L Methods, apparatus, and computer-readable media for analyzing conversational-type data
US20090012828A1 (en) 2007-03-09 2009-01-08 Commvault Systems, Inc. Computer systems and methods for workflow automation
US20090070325A1 (en) 2007-09-12 2009-03-12 Raefer Christopher Gabriel Identifying Information Related to a Particular Entity from Electronic Sources
US7519562B1 (en) * 2005-03-31 2009-04-14 Amazon Technologies, Inc. Automatic identification of unreliable user ratings
US20090100005A1 (en) 2007-10-12 2009-04-16 Microsoft Corporation Mapping network addresses to geographical locations
US20090106236A1 (en) 2007-07-25 2009-04-23 Us News R&R, Llc Method for scoring products, services, institutions, and other items
US20090119268A1 (en) 2007-11-05 2009-05-07 Nagaraju Bandaru Method and system for crawling, mapping and extracting information associated with a business using heuristic and semantic analysis
US20090119258A1 (en) 2007-11-05 2009-05-07 William Petty System and method for content ranking and reviewer selection
US20090157667A1 (en) * 2007-12-12 2009-06-18 Brougher William C Reputation of an Author of Online Content
US7552068B1 (en) 2000-03-02 2009-06-23 Amazon Technologies, Inc. Methods and systems of obtaining consumer reviews
US20090177691A1 (en) * 2008-01-03 2009-07-09 Gary Manfredi Multi-level reputation based recommendation system and method
US20090177988A1 (en) 2008-01-08 2009-07-09 International Business Machines Corporation Generating data queries using a graphical selection tree
US20090193011A1 (en) * 2008-01-25 2009-07-30 Sasha Blair-Goldensohn Phrase Based Snippet Generation
US20090193328A1 (en) * 2008-01-25 2009-07-30 George Reis Aspect-Based Sentiment Summarization
US7600017B2 (en) 2000-10-11 2009-10-06 Buzzmetrics, Ltd. System and method for scoring electronic messages
US20090265251A1 (en) 2007-11-30 2009-10-22 Nearbynow Systems and Methods for Searching a Defined Area
US20090265332A1 (en) * 2008-04-18 2009-10-22 Biz360 Inc. System and Methods for Evaluating Feature Opinions for Products, Services, and Entities
US20090282019A1 (en) * 2008-05-12 2009-11-12 Threeall, Inc. Sentiment Extraction from Consumer Reviews for Providing Product Recommendations
US20090281870A1 (en) * 2008-05-12 2009-11-12 Microsoft Corporation Ranking products by mining comparison sentiment
US7631032B1 (en) 1998-01-30 2009-12-08 Net-Express, Ltd. Personalized internet interaction by adapting a page format to a user record
US20090307762A1 (en) 2008-06-05 2009-12-10 Chorus Llc System and method to create, save, and display web annotations that are selectively shared within specified online communities
US7634810B2 (en) 2004-12-02 2009-12-15 Microsoft Corporation Phishing detection, prevention, and notification
US20090319342A1 (en) 2008-06-19 2009-12-24 Wize, Inc. System and method for aggregating and summarizing product/topic sentiment
US7653646B2 (en) 2001-05-14 2010-01-26 Ramot At Tel Aviv University Ltd. Method and apparatus for quantum clustering
US7664669B1 (en) 1999-11-19 2010-02-16 Amazon.Com, Inc. Methods and systems for distributing information within a dynamically defined community
US20100076968A1 (en) 2008-05-27 2010-03-25 Boyns Mark R Method and apparatus for aggregating and presenting data associated with geographic locations
US20100100950A1 (en) 2008-10-20 2010-04-22 Roberts Jay B Context-based adaptive authentication for data and services access in a network
US20100121849A1 (en) * 2008-11-13 2010-05-13 Buzzient, Inc. Modeling social networks using analytic measurements of online social media content
US20100169317A1 (en) * 2008-12-31 2010-07-01 Microsoft Corporation Product or Service Review Summarization Using Attributes
US20100198839A1 (en) 2009-01-30 2010-08-05 Sujoy Basu Term extraction from service description documents
US7779360B1 (en) 2007-04-10 2010-08-17 Google Inc. Map user interface
US20100211308A1 (en) 2009-02-19 2010-08-19 Microsoft Corporation Identifying interesting locations
US7792816B2 (en) 2007-02-01 2010-09-07 Icosystem Corporation Method and system for fast, generic, online and offline, multi-source text analysis and visualization
US20100250515A1 (en) 2009-03-24 2010-09-30 Mehmet Kivanc Ozonat Transforming a description of services for web services
US7809602B2 (en) 2006-08-31 2010-10-05 Opinionlab, Inc. Computer-implemented system and method for measuring and reporting business intelligence based on comments collected from web page users using software associated with accessed web pages
US20100257184A1 (en) 2006-12-20 2010-10-07 Victor David Uy Method and apparatus for scoring electronic documents
US7813986B2 (en) 2005-03-25 2010-10-12 The Motley Fool, Llc System, method, and computer program product for scoring items based on user sentiment and for determining the proficiency of predictors
US20100262454A1 (en) 2009-04-09 2010-10-14 SquawkSpot, Inc. System and method for sentiment-based text classification and relevancy ranking
US20100262601A1 (en) 2009-04-08 2010-10-14 Dumon Olivier G Methods and systems for assessing the quality of an item listing
US20100313252A1 (en) 2009-06-08 2010-12-09 Erie Trouw System, method and apparatus for creating and using a virtual layer within a web browsing environment
US20100325107A1 (en) 2008-02-22 2010-12-23 Christopher Kenton Systems and methods for measuring and managing distributed online conversations
US7870025B2 (en) 2001-09-20 2011-01-11 Intuit Inc. Vendor comparison, advertising and switching
US20110016118A1 (en) 2009-07-20 2011-01-20 Lexisnexis Method and apparatus for determining relevant search results using a matrix framework
US20110047035A1 (en) 2009-08-18 2011-02-24 Gidwani Bahar N Systems, Methods, and Media for Evaluating Companies Based On Social Performance
US20110078049A1 (en) 2009-09-30 2011-03-31 Muhammad Faisal Rehman Method and system for exposing data used in ranking search results
US20110099036A1 (en) 2009-10-26 2011-04-28 Patrick Sarkissian Systems and methods for offering, scheduling, and coordinating follow-up communications regarding test drives of motor vehicles
US20110112901A1 (en) 2009-05-08 2011-05-12 Lance Fried Trust-based personalized offer portal
US7962461B2 (en) 2004-12-14 2011-06-14 Google Inc. Method and system for finding and aggregating reviews for a product
US20110153551A1 (en) 2007-01-31 2011-06-23 Reputationdefender, Inc. Identifying and Changing Personal Information
US7970872B2 (en) 2007-10-01 2011-06-28 Accenture Global Services Limited Infrastructure for parallel programming of clusters of machines
US20110173056A1 (en) 2009-07-14 2011-07-14 D Alessio Dennis Computerized systems and processes for promoting businesses
US20110251977A1 (en) 2010-04-13 2011-10-13 Michal Cialowicz Ad Hoc Document Parsing
US20110270705A1 (en) 2010-04-29 2011-11-03 Cheryl Parker System and Method for Geographic Based Data Visualization and Extraction
US20110296179A1 (en) 2010-02-22 2011-12-01 Christopher Templin Encryption System using Web Browsers and Untrusted Web Servers
US20120023332A1 (en) 2010-07-23 2012-01-26 Anchorfree, Inc. System and method for private social networking
US20120059848A1 (en) 2010-09-08 2012-03-08 Yahoo! Inc. Social network based user-initiated review and purchase related information and advertising
US8135669B2 (en) 2005-10-13 2012-03-13 Microsoft Corporation Information access with usage-driven metadata feedback
US20120066233A1 (en) 2010-09-11 2012-03-15 Chandana Hiranjith Fonseka System and methods for mapping user reviewed and rated websites to specific user activities
US8170958B1 (en) 2009-01-29 2012-05-01 Intuit Inc. Internet reputation manager
US8185531B2 (en) 2008-07-24 2012-05-22 Nahava Inc. Method and apparatus for partitioning high-dimension vectors for use in a massive index tree
US20120130917A1 (en) 2010-11-24 2012-05-24 Nils Forsblom Adjustable priority retailer ranking system
US20120197950A1 (en) 2011-01-30 2012-08-02 Umeshwar Dayal Sentiment cube
US20120197903A1 (en) * 2011-01-31 2012-08-02 Yue Lu Objective-function based sentiment
US20120197816A1 (en) * 2011-01-27 2012-08-02 Electronic Entertainment Design And Research Product review bias identification and recommendations
US8255248B1 (en) 2006-07-20 2012-08-28 Intuit Inc. Method and computer program product for obtaining reviews of businesses from customers
US20120221479A1 (en) 2011-02-25 2012-08-30 Schneck Iii Philip W Web site, system and method for publishing authenticated reviews
US20120226627A1 (en) 2011-03-04 2012-09-06 Edward Ming-Yu Yang System and method for business reputation scoring
US20120245924A1 (en) * 2011-03-21 2012-09-27 Xerox Corporation Customer review authoring assistant
US20120260209A1 (en) * 2011-04-11 2012-10-11 Credibility Corp. Visualization Tools for Reviewing Credibility and Stateful Hierarchical Access to Credibility
US20120260201A1 (en) 2011-04-07 2012-10-11 Infosys Technologies Ltd. Collection and analysis of service, product and enterprise soft data
US20120304027A1 (en) * 2009-10-05 2012-11-29 Ross John Stenfort Sending failure information from a solid state drive (ssd) to a host device
US20120303419A1 (en) 2011-05-24 2012-11-29 Oracle International Corporation System providing automated feedback reminders
US20120323842A1 (en) 2011-05-16 2012-12-20 Izhikevich Eugene M System and methods for growth, peer-review, and maintenance of network collaborative resources
US20130007014A1 (en) 2011-06-29 2013-01-03 Michael Benjamin Selkowe Fertik Systems and methods for determining visibility and reputation of a user on the internet
US8352405B2 (en) 2011-04-21 2013-01-08 Palo Alto Research Center Incorporated Incorporating lexicon knowledge into SVM learning to improve sentiment classification
US8356025B2 (en) * 2009-12-09 2013-01-15 International Business Machines Corporation Systems and methods for detecting sentiment-based topics
US20130085804A1 (en) 2011-10-04 2013-04-04 Adam Leff Online marketing, monitoring and control for merchants
US8417713B1 (en) * 2007-12-05 2013-04-09 Google Inc. Sentiment detection as a ranking signal for reviewable entities
US8438469B1 (en) * 2005-09-30 2013-05-07 Google Inc. Embedded review and rating information
US20130124653A1 (en) 2011-11-16 2013-05-16 Loopa Llc Searching, retrieving, and scoring social media
US8498990B2 (en) * 2005-04-14 2013-07-30 Yosi Heber System and method for analyzing, generating suggestions for, and improving websites
US20130218640A1 (en) 2012-01-06 2013-08-22 David S. Kidder System and method for managing advertising intelligence and customer relations management data

Family Cites Families (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6854007B1 (en) 1998-09-17 2005-02-08 Micron Technology, Inc. Method and system for enhancing reliability of communication with electronic messages
US6633851B1 (en) 1999-10-01 2003-10-14 B-50.Com, Llc Systems and methods for generating custom reports based on point-of-sale data
US6901406B2 (en) 1999-12-29 2005-05-31 General Electric Capital Corporation Methods and systems for accessing multi-dimensional customer data
US7130808B1 (en) 1999-12-29 2006-10-31 The Product Engine, Inc. Method, algorithm, and computer program for optimizing the performance of messages including advertisements in an interactive measurable medium
US20020169835A1 (en) 2000-12-30 2002-11-14 Imarcsgroup.Com,Llc E-mail communications system, method and program
US8744904B2 (en) 2001-05-31 2014-06-03 Goldman, Sachs & Co. Employee performance monitoring system
US7076533B1 (en) 2001-11-06 2006-07-11 Ihance, Inc. Method and system for monitoring e-mail and website behavior of an e-mail recipient
US7072947B1 (en) 2001-11-06 2006-07-04 Ihance, Inc. Method and system for monitoring e-mail and website behavior of an e-mail recipient
US7444658B1 (en) 2001-12-14 2008-10-28 At&T Intellectual Property I, L.P. Method and system to perform content targeting
US20040215479A1 (en) 2003-01-31 2004-10-28 Exacttarget, Llc Dynamic content electronic mail marketing system and method
US20070016435A1 (en) 2004-08-05 2007-01-18 William Bevington Visualization tool
US20060064502A1 (en) 2004-09-22 2006-03-23 Transaxtions Llc Using Popular IDs To Sign On Creating A Single ID for Access
US8452667B1 (en) 2004-10-28 2013-05-28 Netwaiter, LLC System and method for online management of restaurant orders
US20060143066A1 (en) 2004-12-23 2006-06-29 Hermann Calabria Vendor-driven, social-network enabled review syndication system
US20060200459A1 (en) 2005-03-03 2006-09-07 The E-Firm Tiered access to integrated rating system
US20060253537A1 (en) 2005-05-04 2006-11-09 Ragy Thomas Method and system for providing automated email optimization
US7827052B2 (en) 2005-09-30 2010-11-02 Google Inc. Systems and methods for reputation management
US7558769B2 (en) 2005-09-30 2009-07-07 Google Inc. Identifying clusters of similar reviews and displaying representative reviews from multiple clusters
US7996252B2 (en) 2006-03-02 2011-08-09 Global Customer Satisfaction System, Llc Global customer satisfaction system
US20070294124A1 (en) 2006-06-14 2007-12-20 John Charles Crotts Hospitality performance index
US8862591B2 (en) * 2006-08-22 2014-10-14 Twitter, Inc. System and method for evaluating sentiment
US7979302B2 (en) 2006-10-17 2011-07-12 International Business Machines Corporation Report generation method and system
US20080104059A1 (en) 2006-11-01 2008-05-01 Dininginfo Llc Restaurant review search system and method for finding links to relevant reviews of selected restaurants through the internet by use of an automatically configured, sophisticated search algorithm
US7917754B1 (en) 2006-11-03 2011-03-29 Intuit Inc. Method and apparatus for linking businesses to potential customers through a trusted source network
US20080120411A1 (en) 2006-11-21 2008-05-22 Oliver Eberle Methods and System for Social OnLine Association and Relationship Scoring
US20080183561A1 (en) 2007-01-26 2008-07-31 Exelate Media Ltd. Marketplace for interactive advertising targeting events
US20080189190A1 (en) 2007-02-01 2008-08-07 Jeff Ferber Proxy server and api extension for online stores
US20080215571A1 (en) 2007-03-01 2008-09-04 Microsoft Corporation Product review search
US7996210B2 (en) 2007-04-24 2011-08-09 The Research Foundation Of The State University Of New York Large-scale sentiment analysis
US20080312988A1 (en) 2007-06-14 2008-12-18 Akzo Nobel Coatings International B.V. Performance rating of a business
KR100928324B1 (en) 2007-10-02 2009-11-25 주식회사 아이브이넷 Operation method of frame buffer memory for recovering compressed video and decoding device suitable for this
US20090265307A1 (en) 2008-04-18 2009-10-22 Reisman Kenneth System and method for automatically producing fluent textual summaries from multiple opinions
US20090319359A1 (en) 2008-06-18 2009-12-24 Vyrl Mkt, Inc. Social behavioral targeting based on influence in a social network
US8024324B2 (en) 2008-06-30 2011-09-20 International Business Machines Corporation Information retrieval with unified search using multiple facets
EP2297685A1 (en) 2008-07-04 2011-03-23 Yogesh Chunilal Rathod Methods and systems for brands social networks (bsn) platform
US20100064246A1 (en) 2008-09-11 2010-03-11 Scott Gluck Method and system for interfacing and dissemination of election-related information
US20100106557A1 (en) 2008-10-24 2010-04-29 Novell, Inc. System and method for monitoring reputation changes
US20100153181A1 (en) 2008-12-11 2010-06-17 Georgia Tech Research Corporation Systems and methods for providing information services
US10764748B2 (en) 2009-03-26 2020-09-01 Qualcomm Incorporated Apparatus and method for user identity authentication in peer-to-peer overlay networks
US8315895B1 (en) 2009-10-05 2012-11-20 Intuit Inc. Method and system for obtaining review updates within a review and rating system
US20110137705A1 (en) * 2009-12-09 2011-06-09 Rage Frameworks, Inc., Method and system for automated content analysis for a business organization
US8676597B2 (en) 2009-12-28 2014-03-18 General Electric Company Methods and systems for mapping healthcare services analytics for volume and trends
US9760802B2 (en) 2010-01-27 2017-09-12 Ebay Inc. Probabilistic recommendation of an item
US20110209072A1 (en) 2010-02-19 2011-08-25 Naftali Bennett Multiple stream internet poll
US8370278B2 (en) * 2010-03-08 2013-02-05 Microsoft Corporation Ontological categorization of question concepts from document summaries
US8738418B2 (en) 2010-03-19 2014-05-27 Visa U.S.A. Inc. Systems and methods to enhance search data with transaction based data
US20110231225A1 (en) 2010-03-19 2011-09-22 Visa U.S.A. Inc. Systems and Methods to Identify Customers Based on Spending Patterns
US20110307307A1 (en) 2010-06-09 2011-12-15 Akram Benmbarek Systems and methods for location based branding
US8504486B1 (en) 2010-09-17 2013-08-06 Amazon Technologies, Inc. Collection and provision of long-term customer reviews
US20120191546A1 (en) 2011-01-25 2012-07-26 Digital River, Inc. Email Strategy Templates System and Method
US20120215584A1 (en) 2011-02-18 2012-08-23 Leapset, Inc. Tracking off-line commerce and online activity
US9202200B2 (en) 2011-04-27 2015-12-01 Credibility Corp. Indices for credibility trending, monitoring, and lead generation
US8838438B2 (en) * 2011-04-29 2014-09-16 Cbs Interactive Inc. System and method for determining sentiment from text content
US20120290606A1 (en) * 2011-05-11 2012-11-15 Searchreviews LLC Providing sentiment-related content using sentiment and factor-based analysis of contextually-relevant user-generated data
US9824199B2 (en) 2011-08-25 2017-11-21 T-Mobile Usa, Inc. Multi-factor profile and security fingerprint analysis
US8650143B2 (en) 2011-08-30 2014-02-11 Accenture Global Services Limited Determination of document credibility
US8694413B1 (en) 2011-09-29 2014-04-08 Morgan Stanley & Co. Llc Computer-based systems and methods for determining interest levels of consumers in research work product produced by a research department
US20130085803A1 (en) 2011-10-03 2013-04-04 Adtrak360 Brand analysis
US8880420B2 (en) 2011-12-27 2014-11-04 Grubhub, Inc. Utility for creating heatmaps for the study of competitive advantage in the restaurant marketplace
US8595050B2 (en) 2011-12-27 2013-11-26 Grubhub, Inc. Utility for determining competitive restaurants
US8996425B1 (en) 2012-02-09 2015-03-31 Audible, Inc. Dynamically guided user reviews
US9477749B2 (en) * 2012-03-02 2016-10-25 Clarabridge, Inc. Apparatus for identifying root cause using unstructured data
US8494973B1 (en) 2012-03-05 2013-07-23 Reputation.Com, Inc. Targeting review placement
US8566146B1 (en) 2012-05-10 2013-10-22 Morgan Stanley & Co. Llc Computer-based systems and method for computing a score for contacts of a financial services firm indicative of resources to be deployed by the financial services firm for the contacts to maximize revenue for the financial services firm

Patent Citations (210)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5956693A (en) 1996-07-19 1999-09-21 Geerlings; Huib Computer system for merchant communication to customers
US7289971B1 (en) 1996-07-22 2007-10-30 O'neil Kevin P Personal information security and exchange tool
US6178419B1 (en) 1996-07-31 2001-01-23 British Telecommunications Plc Data access system
US5857179A (en) 1996-09-09 1999-01-05 Digital Equipment Corporation Computer method and apparatus for clustering documents and automatic generation of cluster keywords
US6006218A (en) 1997-02-28 1999-12-21 Microsoft Methods and apparatus for retrieving and/or processing retrieved information as a function of a user's estimated knowledge
US5819258A (en) 1997-03-07 1998-10-06 Digital Equipment Corporation Method and apparatus for automatically generating hierarchical categories from large document collections
US5873081A (en) 1997-06-27 1999-02-16 Microsoft Corporation Document filtering via directed acyclic graphs
US5987457A (en) 1997-11-25 1999-11-16 Acceleration Software International Corporation Query refinement method for searching documents
US6182066B1 (en) 1997-11-26 2001-01-30 International Business Machines Corp. Category processing of query topics and electronic document content topics
US7631032B1 (en) 1998-01-30 2009-12-08 Net-Express, Ltd. Personalized internet interaction by adapting a page format to a user record
US6324650B1 (en) 1998-03-16 2001-11-27 John W.L. Ogilvie Message content protection and conditional disclosure
US6532459B1 (en) 1998-12-15 2003-03-11 Berson Research Corp. System for finding, identifying, tracking, and correcting personal information in diverse databases
US6513031B1 (en) 1998-12-23 2003-01-28 Microsoft Corporation System for improving search area selection
US6985896B1 (en) 1999-02-03 2006-01-10 Perttunen Cary D Browsing methods, articles and apparatus
US20030069874A1 (en) 1999-05-05 2003-04-10 Eyal Hertzog Method and system to automate the updating of personal information within a personal information management application and to synchronize such updated personal information management applications
US6611825B1 (en) 1999-06-09 2003-08-26 The Boeing Company Method and system for text mining using multidimensional subspaces
US20030014402A1 (en) 1999-06-25 2003-01-16 Sealand Michael D. System and method for transacting retrieval of real estate property listings using a remote client interfaced over an information network
US7664669B1 (en) 1999-11-19 2010-02-16 Amazon.Com, Inc. Methods and systems for distributing information within a dynamically defined community
US7778890B1 (en) 1999-11-19 2010-08-17 Amazon Technologies, Inc. Methods and systems for distributing information within a dynamically defined community
WO2001046868A2 (en) 1999-12-22 2001-06-28 Accenture Llp A method for a graphical user interface search filter generator
US20020016910A1 (en) 2000-02-11 2002-02-07 Wright Robert P. Method for secure distribution of documents over electronic networks
US6775677B1 (en) 2000-03-02 2004-08-10 International Business Machines Corporation System, method, and program product for identifying and describing topics in a collection of electronic documents
US7552068B1 (en) 2000-03-02 2009-06-23 Amazon Technologies, Inc. Methods and systems of obtaining consumer reviews
US20040153466A1 (en) 2000-03-15 2004-08-05 Ziff Susan Janette Content development management system and method
US6510432B1 (en) 2000-03-24 2003-01-21 International Business Machines Corporation Methods, systems and computer program products for archiving topical search results of web servers
US6968333B2 (en) 2000-04-02 2005-11-22 Tangis Corporation Soliciting information based on a computer user's context
US20060123348A1 (en) 2000-04-07 2006-06-08 Ross Brian D System and method for facilitating the pre-publication peer review process
US6678690B2 (en) 2000-06-12 2004-01-13 International Business Machines Corporation Retrieving and ranking of documents from database description
US20050216443A1 (en) 2000-07-06 2005-09-29 Streamsage, Inc. Method and system for indexing and searching timed media information based upon relevance intervals
US20040063111A1 (en) 2000-08-25 2004-04-01 Toshikazu Shiba Method for protecting personal information
US20030172014A1 (en) 2000-09-01 2003-09-11 Chris Quackenbush System and method for online valuation and analysis
US7600017B2 (en) 2000-10-11 2009-10-06 Buzzmetrics, Ltd. System and method for scoring electronic messages
US20020111847A1 (en) 2000-12-08 2002-08-15 Word Of Net, Inc. System and method for calculating a marketing appearance frequency measurement
US6766316B2 (en) 2001-01-18 2004-07-20 Science Applications International Corporation Method and system of ranking and clustering for document indexing and retrieval
US20020099598A1 (en) 2001-01-22 2002-07-25 Eicher, Jr. Daryl E. Performance-based supply chain management system and method with metalerting and hot spot identification
US20040078363A1 (en) 2001-03-02 2004-04-22 Takahiko Kawatani Document and information retrieval method and apparatus
US20030208388A1 (en) 2001-03-07 2003-11-06 Bernard Farkas Collaborative bench mark based determination of best practices
US7653646B2 (en) 2001-05-14 2010-01-26 Ramot At Tel Aviv University Ltd. Method and apparatus for quantum clustering
US20020174230A1 (en) 2001-05-15 2002-11-21 Sony Corporation And Sony Electronics Inc. Personalized interface with adaptive content presentation
US20020178381A1 (en) 2001-05-22 2002-11-28 Trend Micro Incorporated System and method for identifying undesirable content in responses sent in reply to a user request for content
US7640434B2 (en) 2001-05-31 2009-12-29 Trend Micro, Inc. Identification of undesirable content in responses sent in reply to a user request for content
US20030014633A1 (en) 2001-07-12 2003-01-16 Gruber Thomas Robert Method and system for secure, authorized e-mail based transactions
US6484068B1 (en) 2001-07-24 2002-11-19 Sony Corporation Robot apparatus and method for controlling jumping of robot device
US7870025B2 (en) 2001-09-20 2011-01-11 Intuit Inc. Vendor comparison, advertising and switching
US20070150562A1 (en) 2001-10-12 2007-06-28 Stull Edward L System and method for data quality management and control of heterogeneous data sources
US20030093260A1 (en) 2001-11-13 2003-05-15 Koninklijke Philips Electronics N.V. Apparatus and method for program selection utilizing exclusive and inclusive metadata searches
US20050177457A1 (en) 2001-11-29 2005-08-11 Sheltz Steven P. Computerized method for the solicitation and sales of transactions
US20030135725A1 (en) 2002-01-14 2003-07-17 Schirmer Andrew Lewis Search refinement graphical user interface
US20030147536A1 (en) 2002-02-05 2003-08-07 Andivahis Dimitrios Emmanouil Secure electronic messaging system requiring key retrieval for deriving decryption keys
US20060074920A1 (en) 2002-02-13 2006-04-06 Marcus Wefers Method, software application and system for providing benchmarks
US7076558B1 (en) 2002-02-27 2006-07-11 Microsoft Corporation User-centric consent management system and method
US20030164849A1 (en) 2002-03-01 2003-09-04 Iparadigms, Llc Systems and methods for facilitating the peer review process
US20060015942A1 (en) 2002-03-08 2006-01-19 Ciphertrust, Inc. Systems and methods for classification of messaging entities
US20040019584A1 (en) 2002-03-18 2004-01-29 Greening Daniel Rex Community directory
US7028026B1 (en) 2002-05-28 2006-04-11 Ask Jeeves, Inc. Relevancy-based database retrieval and display techniques
US6754874B1 (en) 2002-05-31 2004-06-22 Deloitte Development Llc Computer-aided system and method for evaluating employees
US20030229668A1 (en) 2002-06-07 2003-12-11 Malik Dale W. Systems and methods for delivering time sensitive messages over a distributed network
US20040019846A1 (en) 2002-07-24 2004-01-29 Xerox Corporation System and method for managing document retention of shared documents
US20040032420A1 (en) 2002-08-13 2004-02-19 Allen Bradley J. Interactive benchmarking system
US20040088308A1 (en) 2002-08-16 2004-05-06 Canon Kabushiki Kaisha Information analysing apparatus
US20040093414A1 (en) 2002-08-26 2004-05-13 Orton Kevin R. System for prevention of undesirable Internet content
US7117207B1 (en) 2002-09-11 2006-10-03 George Mason Intellectual Properties, Inc. Personalizable semantic taxonomy-based search agent
US20040082839A1 (en) 2002-10-25 2004-04-29 Gateway Inc. System and method for mood contextual data output
US20060149708A1 (en) 2002-11-11 2006-07-06 Lavine Steven D Search method and system and system using the same
US20040169678A1 (en) 2002-11-27 2004-09-02 Oliver Huw Edward Obtaining user feedback on displayed items
US20040122926A1 (en) 2002-12-23 2004-06-24 Microsoft Corporation, Redmond, Washington. Reputation system for web services
US20050005168A1 (en) 2003-03-11 2005-01-06 Richard Dick Verified personal information database
US20040267717A1 (en) 2003-06-27 2004-12-30 Sbc, Inc. Rank-based estimate of relevance values
US20050050009A1 (en) 2003-08-18 2005-03-03 Martha Gardner Method and system for assessing and optimizing crude selection
US20050071632A1 (en) 2003-09-25 2005-03-31 Pauker Matthew J. Secure message system with remote decryption service
US20050114313A1 (en) 2003-11-26 2005-05-26 Campbell Christopher S. System and method for retrieving documents or sub-documents based on examples
US7130777B2 (en) 2003-11-26 2006-10-31 International Business Machines Corporation Method to hierarchical pooling of opinions from multiple sources
US20080065472A1 (en) 2003-12-15 2008-03-13 Edward Patrick Method and apparatus for automatically performing an online content distribution campaign
US20050160062A1 (en) 2004-01-16 2005-07-21 Howard W. B. Method to report personal security information about a person
US20050177559A1 (en) 2004-02-03 2005-08-11 Kazuo Nemoto Information leakage source identifying method
US20050203795A1 (en) 2004-03-11 2005-09-15 Kristin Witzenburg Method for providing discounted media placement and marketing services to a plurality of advertisers
US20050256866A1 (en) 2004-03-15 2005-11-17 Yahoo! Inc. Search system and methods with integration of user annotations from a trust network
US20050234877A1 (en) 2004-04-08 2005-10-20 Yu Philip S System and method for searching using a temporal dimension
US20050251536A1 (en) 2004-05-04 2005-11-10 Ralph Harik Extracting information from Web pages
US20060004716A1 (en) 2004-07-01 2006-01-05 Microsoft Corporation Presentation-level content filtering for a search result
US20060026593A1 (en) 2004-07-30 2006-02-02 Microsoft Corporation Categorizing, voting and rating community threads
US20060116896A1 (en) 2004-08-12 2006-06-01 Fowler James F User-maintained contact information data system
US20060047725A1 (en) 2004-08-26 2006-03-02 Bramson Steven J Opt-in directory of verified individual profiles
US20060042483A1 (en) 2004-09-02 2006-03-02 Work James D Method and system for reputation evaluation of online users in a social networking scheme
US20080021890A1 (en) 2004-10-29 2008-01-24 The Go Daddy Group, Inc. Presenting search engine results based on domain name related reputation
US20060174343A1 (en) 2004-11-30 2006-08-03 Sensory Networks, Inc. Apparatus and method for acceleration of security applications through pre-filtering
US7634810B2 (en) 2004-12-02 2009-12-15 Microsoft Corporation Phishing detection, prevention, and notification
US7962461B2 (en) 2004-12-14 2011-06-14 Google Inc. Method and system for finding and aggregating reviews for a product
US20060190475A1 (en) 2004-12-20 2006-08-24 Norman Shi Group polling for consumer review
US20060152504A1 (en) 2005-01-11 2006-07-13 Levy James A Sequential retrieval, sampling, and modulated rendering of database or data net information using data stream from audio-visual media
US20060161524A1 (en) 2005-01-14 2006-07-20 Learning Technologies, Inc. Reputation based search
US20060173828A1 (en) 2005-02-01 2006-08-03 Outland Research, Llc Methods and apparatus for using personal background data to improve the organization of documents retrieved in response to a search query
US20060271524A1 (en) 2005-02-28 2006-11-30 Michael Tanne Methods of and systems for searching by incorporating user-entered information
US20060212931A1 (en) 2005-03-02 2006-09-21 Markmonitor, Inc. Trust evaluation systems and methods
US7813986B2 (en) 2005-03-25 2010-10-12 The Motley Fool, Llc System, method, and computer program product for scoring items based on user sentiment and for determining the proficiency of predictors
US7519562B1 (en) * 2005-03-31 2009-04-14 Amazon Technologies, Inc. Automatic identification of unreliable user ratings
US8498990B2 (en) * 2005-04-14 2013-07-30 Yosi Heber System and method for analyzing, generating suggestions for, and improving websites
US20060242554A1 (en) 2005-04-25 2006-10-26 Gather, Inc. User-driven media system in a computer network
US20060253580A1 (en) 2005-05-03 2006-11-09 Dixon Christopher J Website reputation product architecture
US20060253458A1 (en) 2005-05-03 2006-11-09 Dixon Christopher J Determining website reputations using automatic testing
US20060253578A1 (en) 2005-05-03 2006-11-09 Dixon Christopher J Indicating website reputations during user interactions
US20060253582A1 (en) 2005-05-03 2006-11-09 Dixon Christopher J Indicating website reputations within search results
US20060253584A1 (en) * 2005-05-03 2006-11-09 Dixon Christopher J Reputation of an entity associated with a content item
US20060253583A1 (en) 2005-05-03 2006-11-09 Dixon Christopher J Indicating website reputations based on website handling of personal information
US20070073660A1 (en) 2005-05-05 2007-03-29 Daniel Quinlan Method of validating requests for sender reputation information
US20060253423A1 (en) 2005-05-07 2006-11-09 Mclane Mark Information retrieval system and method
US20060287980A1 (en) 2005-06-21 2006-12-21 Microsoft Corporation Intelligent search results blending
US20060294086A1 (en) 2005-06-28 2006-12-28 Yahoo! Inc. Realtime indexing and search in large, rapidly changing document collections
US20060294085A1 (en) 2005-06-28 2006-12-28 Rose Daniel E Using community annotations as anchortext
US20070112761A1 (en) 2005-06-28 2007-05-17 Zhichen Xu Search engine with augmented relevance ranking by community participation
US20070027707A1 (en) 2005-08-01 2007-02-01 Murray Frank H System and methods for interactive selection of a reviewer of media content
US20070121596A1 (en) 2005-08-09 2007-05-31 Sipera Systems, Inc. System and method for providing network level and nodal level vulnerability protection in VoIP networks
US20070121843A1 (en) 2005-09-02 2007-05-31 Ron Atazky Advertising and incentives over a social network
US20070078670A1 (en) 2005-09-30 2007-04-05 Dave Kushal B Selecting high quality reviews for display
US8438469B1 (en) * 2005-09-30 2013-05-07 Google Inc. Embedded review and rating information
US8135669B2 (en) 2005-10-13 2012-03-13 Microsoft Corporation Information access with usage-driven metadata feedback
US20070101419A1 (en) 2005-10-31 2007-05-03 Dawson Colin S Apparatus, system, and method for providing electronically accessible personal information
US20070112760A1 (en) 2005-11-15 2007-05-17 Powerreviews, Inc. System for dynamic product summary based on consumer-contributed keywords
US20070124297A1 (en) 2005-11-29 2007-05-31 John Toebes Generating search results based on determined relationships between data objects and user connections to identified destinations
US20070136430A1 (en) 2005-12-13 2007-06-14 Microsoft Corporation Delivery confirmation for e-mail
US20080288277A1 (en) 2006-01-10 2008-11-20 Mark Joseph Fasciano Methods for encouraging charitable social networking
US20070192423A1 (en) 2006-02-04 2007-08-16 Karlson Bruce L Document reminder system
US20070130126A1 (en) 2006-02-17 2007-06-07 Google Inc. User distributed search results
US20070294281A1 (en) 2006-05-05 2007-12-20 Miles Ward Systems and methods for consumer-generated media reputation management
US20070288468A1 (en) 2006-06-09 2007-12-13 Ebay Inc. Shopping context engine
US20080015928A1 (en) 2006-07-11 2008-01-17 Grayboxx, Inc. Business rating method
US20080033781A1 (en) 2006-07-18 2008-02-07 Jonah Holmes Peretti System and method for online product promotion
US8255248B1 (en) 2006-07-20 2012-08-28 Intuit Inc. Method and computer program product for obtaining reviews of businesses from customers
US7809602B2 (en) 2006-08-31 2010-10-05 Opinionlab, Inc. Computer-implemented system and method for measuring and reporting business intelligence based on comments collected from web page users using software associated with accessed web pages
US20080071602A1 (en) 2006-08-31 2008-03-20 Yahoo! Inc. Enhanced user reviews
US20080077517A1 (en) 2006-09-22 2008-03-27 Robert Grove Sappington Reputation, Information & Communication Management
US20080077577A1 (en) 2006-09-27 2008-03-27 Byrne Joseph J Research and Monitoring Tool to Determine the Likelihood of the Public Finding Information Using a Keyword Search
US20080082687A1 (en) 2006-09-28 2008-04-03 Ryan Kirk Cradick Method, system, and computer program product for implementing collaborative correction of online content
US20080104030A1 (en) 2006-10-27 2008-05-01 Yahoo! Inc., A Delaware Corporation System and Method for Providing Customized Information Based on User's Situation Information
US20080109245A1 (en) 2006-11-03 2008-05-08 Sezwho Inc. Method and system for managing domain specific and viewer specific reputation on online communities
US20080109491A1 (en) 2006-11-03 2008-05-08 Sezwho Inc. Method and system for managing reputation profile on online communities
US20080215589A1 (en) 2006-11-10 2008-09-04 Getingate, Inc. System, Method, and Computer-Readable Medium for Collection and Distribution of User-Supplied Comments Associated with Network and Local Content
US20080133488A1 (en) * 2006-11-22 2008-06-05 Nagaraju Bandaru Method and system for analyzing user-generated content
US7930302B2 (en) * 2006-11-22 2011-04-19 Intuit Inc. Method and system for analyzing user-generated content
US20100257184A1 (en) 2006-12-20 2010-10-07 Victor David Uy Method and apparatus for scoring electronic documents
US20080165972A1 (en) 2007-01-08 2008-07-10 I-Fax.Com Inc. Method and system for encrypted email communication
US20110153551A1 (en) 2007-01-31 2011-06-23 Reputationdefender, Inc. Identifying and Changing Personal Information
US7792816B2 (en) 2007-02-01 2010-09-07 Icosystem Corporation Method and system for fast, generic, online and offline, multi-source text analysis and visualization
US20090012828A1 (en) 2007-03-09 2009-01-08 Commvault Systems, Inc. Computer systems and methods for workflow automation
US7779360B1 (en) 2007-04-10 2010-08-17 Google Inc. Map user interface
US20080281807A1 (en) 2007-05-11 2008-11-13 Siemens Aktiengesellschaft Search engine
US20080288276A1 (en) 2007-05-18 2008-11-20 Xenosurvey, Inc. Method, Process and System for Survey Data Acquisition and Analysis
US20080306899A1 (en) 2007-06-07 2008-12-11 Gregory Michelle L Methods, apparatus, and computer-readable media for analyzing conversational-type data
US20090106236A1 (en) 2007-07-25 2009-04-23 Us News R&R, Llc Method for scoring products, services, institutions, and other items
US20090070325A1 (en) 2007-09-12 2009-03-12 Raefer Christopher Gabriel Identifying Information Related to a Particular Entity from Electronic Sources
US7970872B2 (en) 2007-10-01 2011-06-28 Accenture Global Services Limited Infrastructure for parallel programming of clusters of machines
US20090100005A1 (en) 2007-10-12 2009-04-16 Microsoft Corporation Mapping network addresses to geographical locations
US20090119268A1 (en) 2007-11-05 2009-05-07 Nagaraju Bandaru Method and system for crawling, mapping and extracting information associated with a business using heuristic and semantic analysis
US20090119258A1 (en) 2007-11-05 2009-05-07 William Petty System and method for content ranking and reviewer selection
US20090265251A1 (en) 2007-11-30 2009-10-22 Nearbynow Systems and Methods for Searching a Defined Area
US8417713B1 (en) * 2007-12-05 2013-04-09 Google Inc. Sentiment detection as a ranking signal for reviewable entities
US20090157667A1 (en) * 2007-12-12 2009-06-18 Brougher William C Reputation of an Author of Online Content
US20090177691A1 (en) * 2008-01-03 2009-07-09 Gary Manfredi Multi-level reputation based recommendation system and method
US20090177988A1 (en) 2008-01-08 2009-07-09 International Business Machines Corporation Generating data queries using a graphical selection tree
US20090193328A1 (en) * 2008-01-25 2009-07-30 George Reis Aspect-Based Sentiment Summarization
US20090193011A1 (en) * 2008-01-25 2009-07-30 Sasha Blair-Goldensohn Phrase Based Snippet Generation
US20100325107A1 (en) 2008-02-22 2010-12-23 Christopher Kenton Systems and methods for measuring and managing distributed online conversations
US20090265332A1 (en) * 2008-04-18 2009-10-22 Biz360 Inc. System and Methods for Evaluating Feature Opinions for Products, Services, and Entities
US20090282019A1 (en) * 2008-05-12 2009-11-12 Threeall, Inc. Sentiment Extraction from Consumer Reviews for Providing Product Recommendations
US20090281870A1 (en) * 2008-05-12 2009-11-12 Microsoft Corporation Ranking products by mining comparison sentiment
US20100076968A1 (en) 2008-05-27 2010-03-25 Boyns Mark R Method and apparatus for aggregating and presenting data associated with geographic locations
US20090307762A1 (en) 2008-06-05 2009-12-10 Chorus Llc System and method to create, save, and display web annotations that are selectively shared within specified online communities
US20090319342A1 (en) 2008-06-19 2009-12-24 Wize, Inc. System and method for aggregating and summarizing product/topic sentiment
US8185531B2 (en) 2008-07-24 2012-05-22 Nahava Inc. Method and apparatus for partitioning high-dimension vectors for use in a massive index tree
US20100100950A1 (en) 2008-10-20 2010-04-22 Roberts Jay B Context-based adaptive authentication for data and services access in a network
US20100121849A1 (en) * 2008-11-13 2010-05-13 Buzzient, Inc. Modeling social networks using analytic measurements of online social media content
US20100169317A1 (en) * 2008-12-31 2010-07-01 Microsoft Corporation Product or Service Review Summarization Using Attributes
US8170958B1 (en) 2009-01-29 2012-05-01 Intuit Inc. Internet reputation manager
US20100198839A1 (en) 2009-01-30 2010-08-05 Sujoy Basu Term extraction from service description documents
US20100211308A1 (en) 2009-02-19 2010-08-19 Microsoft Corporation Identifying interesting locations
US20100250515A1 (en) 2009-03-24 2010-09-30 Mehmet Kivanc Ozonat Transforming a description of services for web services
US20100262601A1 (en) 2009-04-08 2010-10-14 Dumon Olivier G Methods and systems for assessing the quality of an item listing
US20100262454A1 (en) 2009-04-09 2010-10-14 SquawkSpot, Inc. System and method for sentiment-based text classification and relevancy ranking
US20110112901A1 (en) 2009-05-08 2011-05-12 Lance Fried Trust-based personalized offer portal
US20100313252A1 (en) 2009-06-08 2010-12-09 Erie Trouw System, method and apparatus for creating and using a virtual layer within a web browsing environment
US20110173056A1 (en) 2009-07-14 2011-07-14 D Alessio Dennis Computerized systems and processes for promoting businesses
US20110016118A1 (en) 2009-07-20 2011-01-20 Lexisnexis Method and apparatus for determining relevant search results using a matrix framework
US20110047035A1 (en) 2009-08-18 2011-02-24 Gidwani Bahar N Systems, Methods, and Media for Evaluating Companies Based On Social Performance
US20110078049A1 (en) 2009-09-30 2011-03-31 Muhammad Faisal Rehman Method and system for exposing data used in ranking search results
US20120304027A1 (en) * 2009-10-05 2012-11-29 Ross John Stenfort Sending failure information from a solid state drive (ssd) to a host device
US20110099036A1 (en) 2009-10-26 2011-04-28 Patrick Sarkissian Systems and methods for offering, scheduling, and coordinating follow-up communications regarding test drives of motor vehicles
US8356025B2 (en) * 2009-12-09 2013-01-15 International Business Machines Corporation Systems and methods for detecting sentiment-based topics
US20110296179A1 (en) 2010-02-22 2011-12-01 Christopher Templin Encryption System using Web Browsers and Untrusted Web Servers
US20110251977A1 (en) 2010-04-13 2011-10-13 Michal Cialowicz Ad Hoc Document Parsing
US20110270705A1 (en) 2010-04-29 2011-11-03 Cheryl Parker System and Method for Geographic Based Data Visualization and Extraction
US20120023332A1 (en) 2010-07-23 2012-01-26 Anchorfree, Inc. System and method for private social networking
US20120059848A1 (en) 2010-09-08 2012-03-08 Yahoo! Inc. Social network based user-initiated review and purchase related information and advertising
US20120066233A1 (en) 2010-09-11 2012-03-15 Chandana Hiranjith Fonseka System and methods for mapping user reviewed and rated websites to specific user activities
US20120130917A1 (en) 2010-11-24 2012-05-24 Nils Forsblom Adjustable priority retailer ranking system
US20120197816A1 (en) * 2011-01-27 2012-08-02 Electronic Entertainment Design And Research Product review bias identification and recommendations
US20120197950A1 (en) 2011-01-30 2012-08-02 Umeshwar Dayal Sentiment cube
US8725781B2 (en) * 2011-01-30 2014-05-13 Hewlett-Packard Development Company, L.P. Sentiment cube
US20120197903A1 (en) * 2011-01-31 2012-08-02 Yue Lu Objective-function based sentiment
US20120221479A1 (en) 2011-02-25 2012-08-30 Schneck Iii Philip W Web site, system and method for publishing authenticated reviews
US20120226627A1 (en) 2011-03-04 2012-09-06 Edward Ming-Yu Yang System and method for business reputation scoring
US20120245924A1 (en) * 2011-03-21 2012-09-27 Xerox Corporation Customer review authoring assistant
US20120260201A1 (en) 2011-04-07 2012-10-11 Infosys Technologies Ltd. Collection and analysis of service, product and enterprise soft data
US20120260209A1 (en) * 2011-04-11 2012-10-11 Credibility Corp. Visualization Tools for Reviewing Credibility and Stateful Hierarchical Access to Credibility
US8352405B2 (en) 2011-04-21 2013-01-08 Palo Alto Research Center Incorporated Incorporating lexicon knowledge into SVM learning to improve sentiment classification
US20120323842A1 (en) 2011-05-16 2012-12-20 Izhikevich Eugene M System and methods for growth, peer-review, and maintenance of network collaborative resources
US20120303419A1 (en) 2011-05-24 2012-11-29 Oracle International Corporation System providing automated feedback reminders
US20130007014A1 (en) 2011-06-29 2013-01-03 Michael Benjamin Selkowe Fertik Systems and methods for determining visibility and reputation of a user on the internet
US20130085804A1 (en) 2011-10-04 2013-04-04 Adam Leff Online marketing, monitoring and control for merchants
US20130124653A1 (en) 2011-11-16 2013-05-16 Loopa Llc Searching, retrieving, and scoring social media
US20130218640A1 (en) 2012-01-06 2013-08-22 David S. Kidder System and method for managing advertising intelligence and customer relations management data

Non-Patent Citations (21)

* Cited by examiner, † Cited by third party
Title
Boatwright et al., "Reviewing the Reviewers: The Impact of Individual Film Critics on Box Office Performance", Springer Science+Business Media, LLC, 2007.
Chris Piepho, "Getting Your Business Reviewed", Jul. 1, 2010, blog on smallbusinessshift.com.
Daranyi et al., Svensk Biblioteksforskning; Automated Text Categorization of Bibliographic Records; Boras Academic Digital Archive (BADA); article peer reviewed [on-line], Hogskolan i Boras, vol. 16, Issue 2, pp. 1-14 as paginated or 16-29 as unpaginated of 47 pages, 2007 [retrieved on Nov. 6, 2012].
Jason Falls, "Venuelabs Unveils Klout-like Business Scores", http://www.socialmediaexplorer.com/digital-marketing/venuelabs-unveils-klout-like-business-scores, Nov. 14, 2011.
Kermit Pattison, "Managing an Online Reputation", NYTimes.com, Jul. 30, 2009.
Korfiatis et al., "The Impact of Readability on the Usefulness of Online Product Reviews: A Case Study on an Online Bookstore", Emerging Technologies and Information Systems for the Knowledge Society Lecture Notes in Computer Science vol. 5288, 2008, pp. 423-432.
Lake, Laura, "Google Maps—Is Your Business Listed Accurately?", Sep. 1, 2009, <<http://marketing.about.com/b/2009/09/01/google-maps-are-you-listed-accurately.htm>>, p. 3.
Lake, Laura, "Google Maps-Is Your Business Listed Accurately?", Sep. 1, 2009, >, p. 3.
Lini S. Kabada, "Good-Rep Merchants Seek to Restore Your Online Image", Privacy Journal, 37.5 (March 2011), 1-2, 7.
Liu et al., "Personalized Web Search by Mapping User Queries to Categories," CIKM, '02, McLean, Virginia, Nov. 4-6, 2002, pp. 558-565.
Logi DevNet, "Using Google Map Regions" <<http://devnet.logixml.com/rdPage.aspx?rdReport=Article&dnDocID=1055>>, May 1, 2009. p. 10.
Logi DevNet, "Using Google Map Regions" >, May 1, 2009. p. 10.
Mike Blumenthal, Selected Blogs on Reputation management, from http://blumenthals.com, Mar. 2010.
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority for International Application No. PCT/US2012/043392, mailed Jan. 25, 2013, 10 pages.
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority for International Application No. PCT/US2012/044668, dated Dec. 21, 2012, 11 pages.
Pretschner et al., "Ontology Based Personalized Search," Proc. 11th IEEE International Conference on Tools with Artificial Intelligence, Chicago, Illinois, Nov. 1999, pp. 391-398.
Salz, Peggy Anne, "BooRah Takes Wraps Off New Service & Model; Is the Money in Mobile Search Syndication?", Jun. 10, 2008, <<http://www.mobilegroove.com/boorah-takes-wraps-off-new-service-is-the-money-in-mobile-search-syndication-940>>, p. 5.
Sarah Perez, "Venulabs is Launching VenueRank, A ‘Klout for Storefronts’", <<http://techcrunch.com/2011/11/02/venuelabs-is-launching-venurank-a-klout-for-storefronts/>>, Nov. 2, 2011, p. 3.
Sarah Perez, "Venulabs is Launching VenueRank, A 'Klout for Storefronts'", >, Nov. 2, 2011, p. 3.
Sugiyama et al., "Adaptive Web Search Based on User Profile Constructed Without Any Effort from Users," ACM, New York, NY, May 17-22, 2004, pp. 675-684.
Tull et al., "Marketing Management", Macmillan Publishing Company, New York, 1990, Chapter 15.
ValueVine: Peck, Jason, "Valuevine Connect: Location-Based Analytics", http://socialmediatoday.com/jasonpeck/270429/w valuevine-connect-location-based-analytics, published Feb. 15, 2011; Falls, Jason, "Value Vine Brings Location-Based, Review Site Analytics to Franchise Tool", www.socialmediaexplorer.com/social-media-marketing/valuevine-connect-X launches/, published Feb. 15, 2011.
Venuelabs Press Release, "Introducing VenueRank", http://venuelabs.com/introducing-venuerank/, Nov. 2, 2011.
Venuelabs, "Valuevine Launches Executive Dashboard for Multi-Location Businesses", http://venuelabs.com/valuevine-launches-location-analytics-product/, Feb. 15, 2011.

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120246093A1 (en) * 2011-03-24 2012-09-27 Aaron Stibel Credibility Score and Reporting
US20140046958A1 (en) * 2012-07-10 2014-02-13 Todd Tucker Content management system
US9477704B1 (en) * 2012-12-31 2016-10-25 Teradata Us, Inc. Sentiment expression analysis based on keyword hierarchy
US11423077B2 (en) 2013-04-25 2022-08-23 Trent R. McKenzie Interactive music feedback system
US10102224B2 (en) * 2013-04-25 2018-10-16 Trent R. McKenzie Interactive music feedback system
US11743544B2 (en) 2013-04-25 2023-08-29 Trent R McKenzie Interactive content feedback system
US20140324885A1 (en) * 2013-04-25 2014-10-30 Trent R. McKenzie Color-based rating system
US10795929B2 (en) 2013-04-25 2020-10-06 Trent R. McKenzie Interactive music feedback system
US11003708B2 (en) 2013-04-25 2021-05-11 Trent R. McKenzie Interactive music feedback system
US20140379682A1 (en) * 2013-06-19 2014-12-25 Alibaba Group Holding Limited Comment ranking by search engine
US10242105B2 (en) * 2013-06-19 2019-03-26 Alibaba Group Holding Limited Comment ranking by search engine
US20150356179A1 (en) * 2013-07-15 2015-12-10 Yandex Europe Ag System, method and device for scoring browsing sessions
US20160267071A1 (en) * 2015-03-12 2016-09-15 International Business Machines Corporation Entity Metadata Attached to Multi-Media Surface Forms
US10009297B2 (en) * 2015-03-12 2018-06-26 International Business Machines Corporation Entity metadata attached to multi-media surface forms
US9922352B2 (en) * 2016-01-25 2018-03-20 Quest Software Inc. Multidimensional synopsis generation
WO2017177222A1 (en) * 2016-04-08 2017-10-12 BPU International, Inc. A system and method for searching and matching content over social networks relevant to an individual
US10956429B1 (en) * 2016-09-14 2021-03-23 Compellon Incorporated Prescriptive analytics platform and polarity analysis engine
US11461343B1 (en) * 2016-09-14 2022-10-04 Clearsense Acquisition 1, Llc Prescriptive analytics platform and polarity analysis engine
US10235336B1 (en) * 2016-09-14 2019-03-19 Compellon Incorporated Prescriptive analytics platform and polarity analysis engine
US10831790B2 (en) * 2018-01-25 2020-11-10 International Business Machines Corporation Location based data mining comparative analysis index
US11250037B2 (en) * 2018-01-25 2022-02-15 International Business Machines Corporation Location based data mining comparative analysis index
US11544307B2 (en) * 2018-04-26 2023-01-03 Panasonic Intellectual Property Corporation Of America Personnel selecting device, personnel selecting system, personnel selecting method, and recording medium
US20200151278A1 (en) * 2018-11-13 2020-05-14 Bizhive, Llc Online reputation monitoring and intelligence gathering
US11068758B1 (en) 2019-08-14 2021-07-20 Compellon Incorporated Polarity semantics engine analytics platform
US11663839B1 (en) 2019-08-14 2023-05-30 Clearsense Acquisition 1, Llc Polarity semantics engine analytics platform
US20210342864A1 (en) * 2020-04-30 2021-11-04 Robert Bosch Gmbh System and method for evaluating black-box recommendation systems in infotainment systems
US20230134796A1 (en) * 2021-10-29 2023-05-04 Glipped, Inc. Named entity recognition system for sentiment labeling
US20230289377A1 (en) * 2022-03-11 2023-09-14 Tredence Inc. Multi-channel feedback analytics for presentation generation
US11675790B1 (en) * 2022-04-01 2023-06-13 Meltwater News International Holdings Gmbh Computing company competitor pairs by rule based inference combined with empirical validation

Also Published As

Publication number Publication date
US20220027395A1 (en) 2022-01-27
US11093984B1 (en) 2021-08-17

Similar Documents

Publication Publication Date Title
US20220027395A1 (en) Determining themes
US10997638B1 (en) Industry review benchmarking
Hwang et al. Understanding user experiences of online travel review websites for hotel booking behaviours: An investigation of a dual motivation theory
Purnawirawan et al. A meta-analytic investigation of the role of valence in online reviews
Tang Mine your customers or mine your business: the moderating role of culture in online word-of-mouth reviews
Tsao et al. eWOM persuasiveness: do eWOM platforms and product type matter?
US10505885B2 (en) Intelligent messaging
Moriya et al. Little change seen in part-time employment as a result of the Affordable Care Act
US11397996B2 (en) Social match platform apparatuses, methods and systems
Lee et al. Online reviews of restaurants: expectation-confirmation theory
Jin et al. Making reservations online: The impact of consumer-written and system-aggregated user-generated content (UGC) in travel booking websites on consumers’ behavioral intentions
US20170061344A1 (en) Identifying and mitigating customer churn risk
WO2022046914A1 (en) Three-party recruiting and matching process involving a candidate, referrer, and hiring entity
Herz et al. Authors overestimate their contribution to scientific work, demonstrating a strong bias
Huifeng et al. Temporal effects of online customer reviews on restaurant visit intention: the role of perceived risk
US20170061343A1 (en) Predicting churn risk across customer segments
US20150095121A1 (en) Methods and systems for recommending decision makers in an organization
Sun et al. Role of gender differences on individuals’ responses to electronic word-of-mouth in social interactions
US20190236718A1 (en) Skills-based characterization and comparison of entities
Jing et al. How service-related factors affect the survival of B2T providers: A sentiment analysis approach
Aziz et al. The consequences of rating inflation on platforms: Evidence from a quasi-experiment
Gellatly et al. Group mate absence, dissimilarity, and individual absence: Another look at “monkey see, monkey do”
US11403570B2 (en) Interaction-based predictions and recommendations for applicants
US20200258095A1 (en) Enterprise reputation evaluation
US20210097493A1 (en) Response rate prediction

Legal Events

Date Code Title Description
AS Assignment

Owner name: REPUTATION.COM, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:REHLING, JOHN ANDREW;DIGNAN, THOMAS GERARDO;REEL/FRAME:030022/0235

Effective date: 20130315

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

AS Assignment

Owner name: SILICON VALLEY BANK, AS ADMINISTRATIVE AND COLLATERAL AGENT, CALIFORNIA

Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:REPUTATION.COM, INC.;REEL/FRAME:055793/0913

Effective date: 20210330

Owner name: SILICON VALLEY BANK, AS ADMINISTRATIVE AND COLLATERAL AGENT, CALIFORNIA

Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:REPUTATION.COM, INC.;REEL/FRAME:055793/0933

Effective date: 20210330

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

AS Assignment

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:REPUTATION.COM, INC.;REEL/FRAME:062254/0865

Effective date: 20221230