WO2020159665A1 - Systems and methods for the generation, administration and analysis of user experience tests - Google Patents


Info

Publication number
WO2020159665A1
Authority
WO
WIPO (PCT)
Prior art keywords
participants
participant
recording
study
screener
Prior art date
Application number
PCT/US2020/012218
Other languages
English (en)
Inventor
Xavier Mestres
Alfonso De La Nuez
Albert Recolons
Francesc Del Castillo
Jordi Ibañez
Anna Barba
Andrew Jensen
Original Assignee
Userzoom Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/730,957 external-priority patent/US11934475B2/en
Priority claimed from US16/730,954 external-priority patent/US11068374B2/en
Application filed by Userzoom Technologies, Inc. filed Critical Userzoom Technologies, Inc.
Priority to EP20747572.4A priority Critical patent/EP3918561A4/fr
Publication of WO2020159665A1 publication Critical patent/WO2020159665A1/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data
    • G06Q30/0203Market surveys; Market polls
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3438Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment monitoring of user actions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces

Definitions

  • the present invention relates to systems and methods for the generation of studies that allow for insight generation for the usability of a website.
  • this type of testing is referred to as "User Experience" or merely "UX" testing.
  • the Internet provides new opportunities for business entities to reach customers via web sites that promote and describe their products or services. Often, the appeal of a web site and its ease of use may affect a potential buyer's decision to purchase the product/service.
  • Focus groups are sometimes used to achieve this goal but the process is long, expensive and not reliable, in part, due to the size and demographics of the focus group that may not be representative of the target customer base.
  • the system and methods include the selection of participants, either ones supplied by the user or ones the system provides.
  • when the system provides the participants, a large pool of candidates is screened by a set of basic metrics (age, gender and income) or by advanced query questions that have branched answers. These screener questions may be nested to allow for various participant groups to be generated. After the participants are screened, they may be invited to join the study.
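The nested, branching screener questions described above can be sketched as a small decision tree. This is an illustrative sketch only; the question texts, group names, and data shape are assumptions, not the patented implementation.

```javascript
// Hypothetical nested screener: each answer either routes to a follow-up
// question or assigns the candidate to a participant group (null = screened out).
const screener = {
  question: "What is your age range?",
  branches: {
    "18-34": {
      question: "Do you shop online at least once a month?",
      branches: {
        yes: { group: "young-frequent-shoppers" },
        no: { group: null }, // screened out
      },
    },
    "35-54": { group: "mid-age" },
    "55+": { group: null },
  },
};

// Walk the nested questions using the candidate's answers and return the
// participant group they qualify for (null means screened out).
function classify(node, answers) {
  if (!node.branches) return node.group ?? null;
  const next = node.branches[answers[node.question]];
  return next ? classify(next, answers) : null;
}
```

Because branches can themselves contain branches, arbitrarily deep nesting yields distinct participant groups from one screener.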
  • the study itself may be a card sorting exercise, survey, tree study, click test, basic navigation, or advanced recorded study.
  • a click test generates a 'heat map' when the participant is shown a static image and prompted to undergo a task or asked a question. The location of each click and the speed with which the user clicks on the image are used to generate the heat map.
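One plausible way to aggregate such clicks into a heat map is to bin them into a coarse grid over the image, accumulating click counts and time-to-click per cell. The grid size and field names below are assumptions for illustration, not the disclosed method.

```javascript
// Bin recorded clicks (x, y, and time-to-click in ms) into a gridSize x gridSize
// grid over the static image; each cell accumulates count and total latency.
function buildHeatMap(clicks, imageWidth, imageHeight, gridSize = 10) {
  const cellW = imageWidth / gridSize;
  const cellH = imageHeight / gridSize;
  const cells = {}; // "col,row" -> { count, totalMs }
  for (const { x, y, ms } of clicks) {
    const col = Math.min(gridSize - 1, Math.floor(x / cellW));
    const row = Math.min(gridSize - 1, Math.floor(y / cellH));
    const key = `${col},${row}`;
    const cell = (cells[key] ??= { count: 0, totalMs: 0 });
    cell.count += 1;
    cell.totalMs += ms;
  }
  return cells;
}
```

A renderer would then color each cell by `count` (density) and could derive average latency as `totalMs / count`.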
  • the advanced recorded study can present the user with a survey, navigation task, or any other desired activity.
  • the participant can be recorded (audio and/or video) for downstream analysis. For any navigation aspects of the study the participant’s click flow can also be monitored and used to populate a click-flow branched chart.
  • Recordings may be processed for additional analysis.
  • machine learning may analyze video for eye movements and/or emotion, for example.
  • the analysis can also include transcribing the audio, synchronizing the transcription with the video recording, and allowing for automatic clip generation when portions of the transcription are selected.
  • the transcriptions may also be searched by keyword and tagged/annotated, and these annotations are likewise searchable.
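A minimal sketch of keyword search over synchronized transcript segments follows; hits carry the time range needed for the automatic clip generation mentioned above. The segment shape (`start`, `end`, `text`, `tags`) is an assumption for illustration.

```javascript
// Return transcript segments whose text or tags/annotations match the keyword
// (case-insensitive). Each hit's start/end times delimit a candidate clip.
function searchTranscript(segments, keyword) {
  const needle = keyword.toLowerCase();
  return segments.filter(
    (s) =>
      s.text.toLowerCase().includes(needle) ||
      (s.tags || []).some((t) => t.toLowerCase().includes(needle))
  );
}
```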
  • Validation criteria include the time taken to complete a given action, ending up at a particular URL (or class of URLs), particular question answers, or any combination thereof. These criteria can be classified as either a successful completion of the study or a failed attempt. Additionally, the validation criteria can include a decision by the participant to abandon the study.
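The three validation outcomes can be combined as in this hedged sketch; the criteria object's shape (`maxMs`, `successUrlPrefixes`, `requiredAnswers`) is an illustrative assumption, not the claimed format.

```javascript
// Classify a session as "abandoned", "success", or "failure" using time on
// task, final URL prefix (a class of URLs), and required question answers.
function validateSession(session, criteria) {
  if (session.abandoned) return "abandoned";
  const ok =
    session.elapsedMs <= criteria.maxMs &&
    criteria.successUrlPrefixes.some((p) => session.finalUrl.startsWith(p)) &&
    criteria.requiredAnswers.every(([q, a]) => session.answers[q] === a);
  return ok ? "success" : "failure";
}
```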
  • Figure 1A is an example logical diagram of a system for user experience studies, in accordance with some embodiments.
  • Figure 1B is a second example logical diagram of a system for user experience studies, in accordance with some embodiments.
  • Figure 1C is a third example logical diagram of a system for user experience studies, in accordance with some embodiments.
  • Figure 2 is an example logical diagram of the usability testing system, in accordance with some embodiments.
  • Figure 3A is a flow diagram illustrating an exemplary process of interfacing with potential candidates and pre-screening participants for the usability testing according to an embodiment of the present invention.
  • Figure 3B is a flow diagram of an exemplary process for collecting usability data of a target web site according to an embodiment of the present invention.
  • Figure 3C is a flow diagram of an exemplary process for card sorting studies according to an embodiment of the present invention.
  • Figure 4 is a simplified block diagram of a data processing unit configured to enable a participant to access a web site and track the participant's interaction with the web site according to an embodiment of the present invention.
  • Figure 5 is an example logical diagram of a second substantiation of the usability testing system, in accordance with some embodiments.
  • Figure 6 is a logical diagram of the study generation module, in accordance with some embodiments.
  • Figure 7 is a logical diagram of the recruitment engine, in accordance with some embodiments.
  • Figure 8 is a logical diagram of the study administrator, in accordance with some embodiments.
  • Figure 9 is a logical diagram of the research module, in accordance with some embodiments.
  • Figure 10 is a flow diagram for an example process of user experience testing, in accordance with some embodiments.
  • Figure 11 is a flow diagram for the example process of study generation, in accordance with some embodiments.
  • Figure 12 is a flow diagram for the example process of study administration, in accordance with some embodiments.
  • Figure 13 is a flow diagram for the example process of insight generation, in accordance with some embodiments.
  • Figures 14-24, 25A-25F, 26, 27A-27D, 28A-28B and 29 are example screenshots of some embodiments of the user experience testing system.
  • the present invention relates to enhancements to traditional user experience testing and subsequent insight generation. While such systems and methods may be utilized with any user experience environment, embodiments described in greater detail herein are directed to providing insights into user experiences in an online/webpage environment. Some descriptions of the present systems and methods will also focus nearly exclusively upon the user experience within a retailer's website. This is intentional in order to provide a clear use case and brevity to the disclosure; however, it should be noted that the present systems and methods apply equally well to any situation where a user experience in an online platform is being studied. As such, the focus herein on a retail setting is in no way intended to artificially limit the scope of this disclosure.
  • the following systems and methods are for improvements in natural language processing and actions taken in response to such message exchanges, within conversation systems, and for employment of domain specific assistant systems that leverage these enhanced natural language processing techniques.
  • the goal of the message conversations is to enable a logical dialog exchange with a recipient, where the recipient is not necessarily aware that they are communicating with an automated machine as opposed to a human user. This may be most efficiently performed via a written dialog, such as email, text messaging, chat, etc. However, given the advancement in audio and video processing, it may be entirely possible to have the dialog include audio or video components as well.
  • usability refers to a metric scoring value for judging the ease of use of a target web site.
  • a client refers to a sponsor who initiates and/or finances the usability study.
  • the client may be, for example, a marketing manager who seeks to test the usability of a commercial web site for marketing (selling or advertising) certain products or services.
  • Participants may be a selected group of people who participate in the usability study and may be screened based on a predetermined set of questions.
  • Remote usability testing or remote usability study refers to testing or study in accordance with which participants (using their computers, mobile devices or otherwise) access a target web site in order to provide feedback about the web site's ease of use, connection speed, and the level of satisfaction the participant experiences in using the web site.
  • Unmoderated usability testing refers to communication with test participants without a moderator; e.g., a software, hardware, or combined software/hardware system can automatically gather the participants' feedback and record their responses. The system can test a target web site by asking participants to view the web site, perform test tasks, and answer questions associated with the tasks.
  • FIG. 1A is a simplified block diagram of a user testing platform 100A according to an embodiment.
  • Platform 100A is adapted to test a target web site 110.
  • Platform 100A is shown as including a usability testing system 150 that is in communications with data processing units 120, 190 and 195.
  • Data processing units 120, 190 and 195 may each be a personal computer equipped with a monitor, a handheld device such as a tablet PC, an electronic notebook, a wearable device, a cell phone, or a smart phone.
  • Data processing unit 120 includes a browser 122 that enables a user (e.g., usability test participant) using the data processing unit 120 to access target web site 110.
  • Data processing unit 120 includes, in part, an input device such as a keyboard 125 or a mouse 126, and a participant browser 122.
  • data processing unit 120 may insert a virtual tracking code to target web site 110 in real-time while the target web site is being downloaded to the data processing unit 120.
  • the virtual tracking code may be a proprietary JavaScript code, whereby the run-time data processing unit interprets the code for execution.
  • the tracking code collects participants' activities on the downloaded web page such as the number of clicks, key strokes, keywords, scrolls, time on tasks, and the like over a period of time.
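What the injected tracking code accumulates might look like the buffer below: timestamped interaction events (clicks, keystrokes, scrolls) that are periodically flushed to the usability testing system. This is a generic sketch, not the proprietary JavaScript; the class name, event names, and the `send` callback are assumptions.

```javascript
// Buffer of interaction events recorded on the downloaded page, with a
// per-type summary (number of clicks, keystrokes, scrolls, etc.) and a
// flush() that hands the batch to a caller-supplied sender.
class InteractionTracker {
  constructor(send) {
    this.send = send; // e.g. (batch) => fetch("/collect", { method: "POST", ... })
    this.events = [];
  }
  record(type, detail, timestamp = Date.now()) {
    this.events.push({ type, detail, timestamp });
  }
  // Counts per event type, matching "number of clicks, key strokes, ...".
  summary() {
    return this.events.reduce((acc, e) => {
      acc[e.type] = (acc[e.type] || 0) + 1;
      return acc;
    }, {});
  }
  flush() {
    const batch = this.events.splice(0);
    if (batch.length) this.send(batch);
    return batch.length;
  }
}
```

In a browser, `record` would be wired to DOM listeners (`click`, `keydown`, `scroll`) via `addEventListener`.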
  • Data processing unit 120 simulates the operations performed by the tracking code and is in communication with usability testing system 150 via a communication link 135.
  • Communication link 135 may include a local area network, a metropolitan area network, and a wide area network.
  • Such a communication link may be established through a physical wire or wirelessly.
  • the communication link may be established using an Internet protocol such as the TCP/IP protocol.
  • activities of the participants associated with target web site 110 are collected and sent to usability testing system 150 via communication link 135.
  • data processing unit 120 may instruct a participant to perform predefined tasks on the downloaded web site during a usability test session, in which the participant evaluates the web site based on a series of usability tests.
  • the virtual tracking code e.g., a proprietary JavaScript
  • the usability testing may also include gathering performance data of the target web site such as the ease of use, the connection speed, and the satisfaction of the user experience. Because the web page is modified not on the original web site but on the downloaded version in the participant's data processing unit, usability can be tested on any web site, including competitors' web sites.
  • Data collected by data processing unit 120 may be sent to the usability testing system 150 via communication link 135.
  • usability testing system 150 is further accessible by a client via a client browser 170 running on data processing unit 190.
  • Usability testing system 150 is further accessible by user experience researcher browser 180 running on data processing unit 195.
  • Client browser 170 is shown as being in communications with usability testing system 150 via communication link 175.
  • User experience research browser 180 is shown as being in communications with usability testing system 150 via communications link 185.
  • a client and/or user experience researcher may design one or more sets of questionnaires for screening participants and for testing the usability of a web site. Usability testing system 150 is described in detail below.
  • FIG. 1B is a simplified block diagram of a user testing platform 100B according to another embodiment of the present invention.
  • Platform 100B is shown as including a target web site 110 being tested by one or more participants using a standard web browser 122 running on data processing unit 120 equipped with a display.
  • Participants may communicate with a usability test system 150 via a communication link 135.
  • Usability test system 150 may communicate with a client browser 170 running on a data processing unit 190.
  • usability test system 150 may communicate with user experience researcher browser running on data processing unit 195.
  • data processing unit 120 may include a configuration of multiple single-core or multi-core processors configured to process instructions, collect usability test data (e.g., number of clicks, mouse movements, time spent on each web page, connection speed, and the like), store and transmit the collected data to the usability testing system, and display graphical information to a participant via an input/output device (not shown).
  • FIG. 1C is a simplified block diagram of a user testing platform 100C according to yet another embodiment of the present invention.
  • Platform 100C is shown as including a target web site 130 being tested by one or more participants using a standard web browser 122 running on data processing unit 120 having a display.
  • the target web site 130 is shown as including a tracking program code configured to track actions and responses of participants and send the tracked actions/responses back to the participant's data processing unit 120 through a communication link 115.
  • Communication link 115 may be computer network, a virtual private network, a local area network, a metropolitan area network, a wide area network, and the like.
  • the tracking program is JavaScript code configured to run tasks related to usability testing and send the test/study results back to the participant's data processing unit for display.
  • Data processing unit 120 may collect data associated with the usability of the target web site and send the collected data to the usability testing system 150 via a communication link 135.
  • the testing of the target web site (page) may provide data such as ease of access through the Internet, its attractiveness, ease of navigation, the speed with which it enables a user to complete a transaction, and the like.
  • testing of the target web site provides data such as duration of usage, the number of keystrokes, the user's profile, and the like. It is understood that testing of a web site in accordance with embodiments of the present invention can provide other data and usability metrics. Information collected by the participant’s data processing unit is uploaded to usability testing system 150 via communication link 135 for storage and analysis.
  • FIG. 2 is a simplified block diagram of an exemplary embodiment platform 200 according to one embodiment of the present invention.
  • Platform 200 is shown as including, in part, a usability testing system 150 being in communications with a data processing unit 125 via communications links 135 and 135'.
  • Data processing unit 125 includes, in part, a participant browser 120 that enables a participant to access a target web site 110.
  • Data processing unit 125 may be a personal computer, a handheld device, such as a cell phone, a smart phone or a tablet PC, or an electronic notebook.
  • Data processing unit 125 may receive instructions and program codes from usability testing system 150 and display predefined tasks to participants 120.
  • the instructions and program codes may include a web-based application that instructs participant browser 122 to access the target web site 110.
  • a tracking code is inserted to the target web site 110 that is being downloaded to data processing unit 125.
  • the tracking code may be a JavaScript code that collects participants’ activities on the downloaded target web site such as the number of clicks, key strokes, movements of the mouse, keywords, scrolls, time on tasks and the like performed over a period of time.
  • Data processing unit 125 may send the collected data to usability testing system 150 via communication link 135', which may be a local area network, a metropolitan area network, a wide area network, and the like, and which enables usability testing system 150 to establish communication with data processing unit 125 through a physical wire or wirelessly using a packet data protocol such as the TCP/IP protocol or a proprietary communication protocol.
  • Usability testing system 150 includes a virtual moderator software module running on a virtual moderator server 230 that conducts interactive usability testing with a usability test participant via data processing unit 125 and a research module running on a research server 210 that may be connected to a user research experience data processing unit 195.
  • User experience researcher 181 may create tasks relevant to the usability study of a target web site and provide the created tasks to the research server 210 via a communication link 185.
  • One of the tasks may be a set of questions designed to classify participants into different categories or to prescreen participants.
  • Another task may be, for example, a set of questions to rate the usability of a target web site based on certain metrics such as ease of navigating the web site, connection speed, layout of the web page, ease of finding the products (e.g., the organization of product indexes).
  • Yet another task may be a survey asking participants to press a "yes" or "no" button or write short comments about participants' experiences or familiarity with certain products and their satisfaction with the products. All these tasks can be stored in a study content database 220, which can be retrieved by the virtual moderator module running on virtual moderator server 230 to forward to participants 120.
  • Research module running on research server 210 can also be accessed by a client (e.g., a sponsor of the usability test) 171 who, like user experience researchers 181, can design his or her own questionnaires, since the client has a personal interest in the target web site under study.
  • Client 171 can work together with user experience researchers 181 to create tasks for usability testing.
  • client 171 can modify tasks or lists of questions stored in the study content database 220.
  • client 171 can add or delete tasks or questionnaires in the study content database 220.
  • client 171 may be user experience researcher 181.
  • one of the tasks may be open or closed card sorting studies for optimizing the architecture and layout of the target web site.
  • Card sorting is a technique that shows how online users organize content in their own mind.
  • in an open card sort, participants create their own names for the categories.
  • in a closed card sort, participants are provided with a predetermined set of category names.
  • Client 171 and/or user experience researcher 181 can create a proprietary online card sorting tool that executes card sorting exercises over large groups of participants in a rapid and cost-effective manner.
  • the card sorting exercises may include up to 100 items to sort and up to 12 categories to group.
  • One of the tasks may include categorization criteria, such as asking participants the question "Why do you group these items like this?"
  • Research module on research server 210 may combine card sorting exercises and online questionnaire tools for detailed taxonomy analysis.
  • the card sorting studies are compatible with SPSS applications.
  • the card sorting studies can be assigned randomly to participant 120.
  • User experience (UX) researcher 181 and/or client 171 may decide how many of those card sorting studies each participant is required to complete. For example, user experience researcher 181 may create a card sorting study with 12 tasks, group them in 4 groups of 3 tasks, and arrange that each participant just has to complete one task from each group.
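The assignment scheme just described (12 tasks in 4 groups of 3, one task per group per participant) can be sketched as follows. The injectable `pick` function is an assumption to keep the example deterministic and testable; a production version would simply randomize.

```javascript
// Given task groups, assign each participant exactly one task from each
// group. `pick(n)` returns an index in [0, n); it defaults to random.
function assignTasks(groups, pick = (n) => Math.floor(Math.random() * n)) {
  return groups.map((group) => group[pick(group.length)]);
}

// Example: 12 card-sorting tasks arranged in 4 groups of 3 (names illustrative).
const groups = [
  ["t1", "t2", "t3"],
  ["t4", "t5", "t6"],
  ["t7", "t8", "t9"],
  ["t10", "t11", "t12"],
];
```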
  • communication link 135' may be a distributed computer network and share the same physical connection as communication link 135. This is, for example, the case where data collecting module 260 is located physically close to virtual moderator module 230, or where they share the usability testing system's processing hardware.
  • software modules running on associated hardware platforms will have the same reference numerals as their associated hardware platform.
  • virtual moderator module will be assigned the same reference numeral as the virtual moderator server 230, and likewise data collecting module will have the same reference numeral as the data collecting server 260.
  • Data collecting module 260 may include a sample quality control module that screens and validates the received responses, and eliminates participants who provide incorrect responses, or do not belong to a predetermined profile, or do not qualify for the study.
  • Data collecting module 260 may include a "binning" module that is configured to classify the validated responses and store them into corresponding categories in a behavioral database 270.
  • responses may include gathered web site interaction events such as clicks, keywords, URLs, scrolls, time on task, navigation to other web pages, and the like.
  • virtual moderator server 230 has access to behavioral database 270 and uses the content of the behavioral database to interactively interface with participants 120. Based on data stored in the behavioral database, virtual moderator server 230 may direct participants to other pages of the target web site and further collect their interaction inputs in order to improve the quantity and quality of the collected data and also encourage participants’ engagement.
  • virtual moderator server may eliminate one or more participants based on data collected in the behavioral database. This is the case if the one or more participants provide inputs that fail to meet a predetermined profile.
  • Usability testing system 150 further includes an analytics module 280 that is configured to provide analytics and reporting to queries coming from client 171 or user experience (UX) researcher 181.
  • analytics module 280 is running on a dedicated analytics server that offloads data processing tasks from traditional servers.
  • Analytics server 280 is purpose-built for analytics and reporting and can run queries from client 171 and/or user experience researcher 181 much faster (e.g., 100 times faster) than a conventional server system, regardless of the number of clients making queries or the complexity of the queries.
  • the purpose-built analytics server 280 is designed for rapid query processing and ad hoc analytics and can deliver higher performance at lower cost, and thus provides a competitive advantage in the field of usability testing and reporting and allows a company such as UserZoom (or Xperience Consulting, SL) to get a jump start on its competitors.
  • research module 210, virtual moderator module 230, data collecting module 260, and analytics server 280 are operated in respective dedicated servers to provide higher performance.
  • Client (sponsor) 171 and/or user experience researcher 181 may receive usability test reports by accessing analytics server 280 via respective links 175' and/or 185'.
  • Analytics server 280 may communicate with behavioral database via a two-way communication link 272.
  • study content database 220 may include a hard disk storage or a disk array that is accessed via iSCSI or Fibre Channel over a storage area network.
  • the study content is provided to analytics server 280 via a link 222 so that analytics server 280 can retrieve the study content such as task descriptions, question texts, related answer texts, products by category, and the like, and generate together with the content of the behavioral database 270 comprehensive reports to client 171 and/or user experience researcher 181.
  • Shown in Figure 2 is a connection 232 between virtual moderator server 230 and behavioral database 270.
  • Behavioral database 270 can be a network attached storage server or a storage area network disk array that includes a two-way communication via link 232 with virtual moderator server 230.
  • Behavioral database 270 is operative to support virtual moderator server 230 during the usability testing session. For example, some questions or tasks are interactively presented to the participants based on data collected. It would be advantageous to the user experience researcher to set up specific questions that enhance the usability testing if participants behave a certain way. If a participant decides to go to a certain web page during the study, the virtual moderator server 230 will pop up corresponding questions related to that page; and answers related to that page will be received and screened by data collecting server 260 and categorized in behavioral database server 270.
  • virtual moderator server 230 operates together with data stored in the behavioral database to proceed to the next steps.
  • Virtual moderator server may need to know whether a participant has successfully completed a task or, based on the data gathered in behavioral database 270, present another task to the participant.
  • client 171 and user experience researcher 181 may provide one or more sets of questions associated with a target web site to research server 210 via respective communication link 175 and 185.
  • Research server 210 stores the provided sets of questions in a study content database 220 that may include a mass storage device, a hard disk storage or a disk array being in communication with research server 210 through a two-way interconnection link 212.
  • the study content database may interface with virtual moderator server 230 through a communication link 234 and provides one or more sets of questions to participants via virtual moderator server 230.
  • FIG. 3 A is a flow diagram of an exemplary process of interfacing with potential candidates and prescreening participants for the usability testing according to one embodiment of the present invention.
  • the process starts at step 310.
  • potential candidates for the usability testing may be recruited by email, advertisement banners, pop- ups, text layers, overlays, and the like (step 312).
  • the number of candidates who have accepted the invitation to the usability test is determined at step 314. If the number of candidates reaches a predetermined target number, then other candidates who have signed up late may be prompted with a message thanking them for their interest and noting that they may be considered for a future survey (shown as "quota full" in step 316).
  • the usability testing system further determines whether the participants' browsers comply with a target web browser requirement. For example, user experience researchers or the client may want to study and measure a web site's usability with regard to a specific web browser (e.g., Microsoft Edge) and reject all other browsers. In other cases, only the usability data of a web site related to Opera or Chrome will be collected, and Microsoft Edge or Firefox will be rejected at step 320.
  • participants will be prompted with a welcome message and instructions are presented to participants that, for example, explain how the usability testing will be performed, the rules to be followed, and the expected duration of the test, and the like.
  • one or more sets of screening questions may be presented to collect profile information of the participants.
  • Questions may relate to participants’ experience with certain products, their awareness with certain brand names, their gender, age, education level, income, online buying habits, and the like.
  • the system further eliminates participants based on the collected information. For example, only participants who have used the products under study will be accepted; the rest are screened out (step 328).
  • a quota for participants having a target profile will be determined. For example, half of the participants must be female, and they must have online purchase experience or have purchased products online in recent years.
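A quota gate for the pre-screening flow above (e.g., "half of the participants must be female with online purchase experience") might look like the following sketch. The quota-cell shape and profile field names are assumptions for illustration.

```javascript
// Admit candidates only while the quota cell matching their profile still
// has room; candidates matching no target profile are screened out.
function makeQuotaGate(quotas) {
  const counts = {};
  return function admit(profile) {
    const cell = quotas.find((q) => q.matches(profile));
    if (!cell) return false; // no matching target profile: screen out
    const n = counts[cell.name] || 0;
    if (n >= cell.limit) return false; // quota full for this profile
    counts[cell.name] = n + 1;
    return true;
  };
}
```

In this sketch a candidate whose cell is full is rejected outright; a real system might instead fall through to a less specific cell.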
  • FIG. 3B is a flow diagram of an exemplary process for gathering usability data of a target web site according to an embodiment of the present invention.
  • the target web site under test will be verified whether it includes a proprietary tracking code.
  • the tracking code is a UserZoom JavaScript code that pops up a series of tasks to the pre-screened participants. If the web site under study includes a proprietary tracking code (this corresponds to the scenario shown in Figure 1C), then the process proceeds to step 338. Otherwise, a virtual tracking code will be inserted into the participants' browser at step 336. This corresponds to the scenario described above in Figure 1A.
  • a task is described to participants.
  • the task can be, for example, to ask participants to locate a color printer below a given price.
  • the task may redirect participants to a specific web site such as eBay, HP, or Amazon.com.
  • the progress of each participant in performing the task is monitored by a virtual study moderator at step 342.
  • responses associated with the task are collected and verified against the task quality control rules.
  • the step 344 may be performed by the data collecting module 260 described above and shown in Figure 2.
  • Data collecting module 260 ensures the quality of the received responses before storing them in a behavioral database 270 (Figure 2).
  • Behavioral database 270 may include data that the client and/or user experience researcher want to determine such as how many web pages a participant viewed before selecting a product, how long it took the participant to select the product and complete the purchase, how many mouse clicks and text entries were required to complete the purchase and the like.
  • a number of participants may be screened out (step 346) during step 344 for not complying with the task quality control rules, and/or some participants may be required to complete a series of training sessions provided by the virtual moderator module 230.
  • virtual moderator module 230 determines whether or not participants have completed all tasks successfully. If all tasks are completed successfully (e.g., participants were able to find a web page that contains the color printer under the given price), virtual moderator module 230 will prompt a success questionnaire to participants at step 352. If not, virtual moderator module 230 will prompt an abandon or error questionnaire to the participants who did not complete all tasks successfully, to find out the causes that led to the incompletion. Whether participants have completed all tasks successfully or not, they will be presented with a final questionnaire at step 356.
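The branching at steps 350-356 reduces to a small decision: a success questionnaire on full completion, an abandon/error questionnaire otherwise, and a final questionnaire in both cases. A minimal sketch, with questionnaire names as placeholders:

```python
# Sketch of the questionnaire branching at steps 350-356. The questionnaire
# identifiers are illustrative labels, not names from the patent.
def questionnaires_for(tasks_completed):
    """Given per-task completion flags, return the questionnaires to prompt."""
    prompts = []
    if all(tasks_completed):
        prompts.append("success")          # step 352
    else:
        prompts.append("abandon_or_error")  # probe causes of incompletion
    prompts.append("final")                 # step 356, prompted either way
    return prompts
```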
  • FIG. 3C is a flow diagram of an exemplary process for card sorting studies according to one embodiment of the present invention.
  • participants may be prompted with additional tasks such as card sorting exercises.
  • Card sorting is a powerful technique for assessing how participants or visitors of a target web site group related concepts together based on the degree of similarity or a number of shared characteristics. Card sorting exercises may be time consuming.
  • participants will not be prompted with all tasks, but only with a random subset of the tasks in the card sorting exercise.
  • a card sorting study is created with 12 tasks grouped into 6 groups of 2 tasks. Each participant need only complete one task from each group. It should be appreciated by one of skill in the art that many variations, modifications, and alternatives are possible to randomize the card sorting exercise to save time and cost.
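The grouping above (12 tasks in 6 groups of 2, one task per group per participant) can be sketched as a per-participant random draw. The task names and the seeding are illustrative.

```python
# Sketch of the randomized card-sort assignment: one task is drawn from each
# group for each participant, halving the per-participant workload.
import random

def assign_tasks(task_groups, seed=None):
    """Pick one task from each group for a single participant."""
    rng = random.Random(seed)
    return [rng.choice(group) for group in task_groups]

# 12 hypothetical tasks arranged as 6 groups of 2
groups = [[f"task{2*i+1}", f"task{2*i+2}"] for i in range(6)]
```

Across many participants, every task in a group is still exercised, while each individual completes only 6 of the 12 tasks.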
  • the feedback questionnaire may include one or more survey questions such as a subjective rating of target web site attractiveness, how easy the product can be used, features that participants like or dislike, whether participants would recommend the products to others, and the like.
  • the results of the card sorting exercises will be analyzed against a set of quality control rules, and the qualified results will be stored in the behavioral database 270.
  • the analysis of the results of the card sorting exercise is performed by a dedicated analytics server 280 that provides much higher performance than general-purpose servers, providing higher satisfaction to clients. If participants complete all tasks successfully, the process proceeds to step 368, where all participants will be thanked for their time and/or any reward may be paid out. Otherwise, if participants do not comply or cannot complete the tasks successfully, the process proceeds to step 366, which eliminates the non-compliant participants.
  • FIG. 4 illustrates an example of a suitable data processing unit 400 configured to connect to a target web site, display web pages, gather participant's responses related to the displayed web pages, interface with a usability testing system, and perform other tasks according to an embodiment of the present invention.
  • System 400 is shown as including at least one processor 402, which communicates with a number of peripheral devices via a bus subsystem 404.
  • peripheral devices may include a storage subsystem 406, including, in part, a memory subsystem 408 and a file storage subsystem 410, user interface input devices 412, user interface output devices 414, and a network interface subsystem 416 that may include a wireless communication port.
  • the input and output devices allow user interaction with data processing system 402.
  • Bus system 404 may be any of a variety of bus architectures such as ISA bus, VESA bus, PCI bus and others.
  • Bus subsystem 404 provides a mechanism for enabling the various components and subsystems of the processing device to communicate with each other. Although bus subsystem 404 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple busses.
  • User interface input devices 412 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a barcode scanner, a touch screen incorporated into the display, audio input devices such as voice recognition systems, microphones, and other types of input devices.
  • use of the term input device is intended to include all possible types of devices and ways to input information to processing device.
  • User interface output devices 414 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices.
  • the display subsystem may be a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), or a projection device.
  • output device is intended to include all possible types of devices and ways to output information from the processing device.
  • Storage subsystem 406 may be configured to store the basic programming and data constructs that provide the functionality in accordance with embodiments of the present invention.
  • software modules implementing the functionality of the present invention may be stored in storage subsystem 406. These software modules may be executed by processor(s) 402.
  • Such software modules can include codes configured to access a target web site, codes configured to modify a downloaded copy of the target web site by inserting a tracking code, codes configured to display a list of predefined tasks to a participant, codes configured to gather the participant's responses, and codes configured to cause the participant to participate in card sorting exercises.
  • Storage subsystem 406 may also include codes configured to transmit participant's responses to a usability testing system.
  • Memory subsystem 408 may include a number of memories including a main random access memory (RAM) 418 for storage of instructions and data during program execution and a read only memory (ROM) 420 in which fixed instructions are stored.
  • File storage subsystem 410 provides persistent (non-volatile) storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a Compact Disk Read Only Memory (CD-ROM) drive, an optical drive, removable media cartridges, and other like storage media.
  • the other significant components of the user experience testing system 150 include a study generation module 520, a recruitment engine 530, a study administrator 540 and a research module 550, each of which will be described in greater detail below.
  • Each of the components of the user experience testing systems 150 may be physically or logically coupled, allowing for the output of any given component to be used by the other components as needed.
  • An offline template module 521 provides a system user with templates in a variety of languages (pre-translated templates) for study generation, screener questions and the like, based upon study type. Users are able to save any screener question, study task, etc. for usage again at a later time or in another study.
  • a user may be able to concurrently design an unlimited number of studies, but is limited in the deployment of the studies due to the resource expenditure of participants and computational expense of the study insight generation.
  • a subscription administrator 523 manages the login credentialing, study access and deployment of the created studies for the user.
  • the user is able to have subscriptions that scale in pricing based upon the types of participants involved in a study and the number of studies concurrently deployable by the user/client.
  • the translation engine 525 may include machine translation services for study templates and even allow on the fly question translations.
  • a screener module 527 is configured to allow for the generation of screener questions to narrow the participant pool to only those suited for the given study. This may include basic Boolean expressions with logical conditions to select a particular demographic for the study. However, the screener module 527 may also allow for advanced screener capabilities where screener groups and quotas are defined, allowing for advanced logical conditions to segment participants. For example, the study may wish to include a group of 20 women between the ages of 25-45 and a group of men between the ages of 40-50, as this may more accurately reflect the actual purchasing demographic for a particular retailer. A single set of screening questions would be unable to generate this mix of participants, so the advanced screener interface is utilized to ensure the participants selected meet the user's needs for the particular study.
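The screener-group idea can be sketched as logical conditions paired with quotas. The group definitions mirror the example in the text; the quota of 20 for the men's group and the attribute names are assumptions for illustration.

```python
# Sketch of advanced screener groups: each group pairs a logical condition
# over participant attributes with its own quota, as in the example of
# 20 women aged 25-45 plus a group of men aged 40-50.
screener_groups = [
    {"name": "women_25_45", "quota": 20,
     "match": lambda p: p["gender"] == "female" and 25 <= p["age"] <= 45},
    {"name": "men_40_50", "quota": 20,   # quota assumed for illustration
     "match": lambda p: p["gender"] == "male" and 40 <= p["age"] <= 50},
]

def screen(participant, groups, counts):
    """Place the participant in the first matching group with an open quota."""
    for g in groups:
        if g["match"](participant) and counts.get(g["name"], 0) < g["quota"]:
            counts[g["name"]] = counts.get(g["name"], 0) + 1
            return g["name"]
    return None   # screened out
```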
  • the recruitment engine 530 is responsible for the recruiting and management of participants for the studies.
  • participants are one of three different classes: 1) core panel participants, 2) general panel participants, and 3) client provided participants.
  • the core panel participants are compensated at a greater rate, but must first be vetted for their ability and willingness to provide comprehensive user experience reviews. Significant demographic and personal information can be collected for these core panel participants, which can enable powerful downstream analytics.
  • the core panel vetting engine 531 collects public information automatically for the participants, as well as eliciting information from the participants, to determine whether the individual is a reliable panelist.
  • Traits like honesty and responsiveness may be ascertained by comparing the information derived from public sources to the participant supplied information. Additionally, the participant may provide a video sample of a study. This sample is reviewed for clarity and communication proficiency as part of the vetting process. If a participant is successfully vetted they are then added to a database of available core panelists. Core panelists have an expectation of reduced privacy, and may pre-commit to certain volumes and/or activities.
  • a significantly larger pool of participants is available in the general panel participant pool.
  • This pool of participants may have activities that they are unwilling to engage in (e.g., audio and video recording), and these participants are required to provide less demographic and personal information than core panelists.
  • the general panel participants are generally provided a lower compensation for their time than the core panelists.
  • the general panel participants may be a shared pooling of participants across many user experience and survey platforms. This enables a demographically rich and large pool of individuals to source from.
  • a large panel network 533 manages this general panel participant pool.
  • the user or client may already have a set of participants they wish to use in their testing. For example, if the user experience for an employee benefits portal is being tested, the client will wish to test the study on their own employees rather than the general public.
  • a reimbursement engine 535 is involved with compensating participants for their time (often on a per-study basis). Different studies may be 'worth' differing amounts based upon the requirements (e.g., video recording, surveys, tasks, etc.) or the expected length to completion. Additionally, the compensation between general panelists and core panelists may differ even for the same study. Generally, client-supplied participants are not compensated by the reimbursement engine 535, as the compensation (if any) is directly negotiated between the client and the participants.
  • Turning now to Figure 8, a more detailed view of the study administrator 540 is provided. Unlike many other user experience testing programs, the presently disclosed systems and methods include the ability to record particular activities by the user.
  • a recording enabler 541 allows for the collection of click-flow information, audio collection and even video recording.
  • the recording only occurs during the study in order to preserve participant privacy, and to focus attention on only time periods that will provide insights into the user experience.
  • recording may be disabled to prevent needless data accumulation.
  • Recording only occurs after user acceptance (to prevent running afoul of privacy laws and regulations), and during recording the user may be presented with a clear indication that the session is being recorded. For example, in some embodiments the user may be provided a thumbnail image of the video capture. This provides notice to the user of the video recording, and also indicates video quality and field-of-view information, thereby allowing them to readjust the camera if needed or take other necessary actions (avoiding harsh backlight, increasing ambient lighting, etc.).
  • the screening engine 543 administers the generated screener questions for the study.
  • Screener questions include questions to the potential participants that may qualify or disqualify them from a particular study. For example, in a given study the user may wish to target men between the ages of 21 and 35. Questions regarding age and gender may be used in the screener questions to enable selection of the appropriate participants for the given study. Additionally, based upon the desired participant pool being used, the participants may be pre-screened by the system based upon known demographic data. For the vetted core panelists the amount of personal data known may be significant, thereby focusing in on eligible participants with little to no additional screener questions required. For the general panel population, however, less data is known, and often only the most rudimentary qualifications may be performed automatically. After this qualification filtering of the participants, they may be subjected to the screener questions as discussed above.
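The automatic qualification stage described above can be sketched as a filter over known participant data, applied before any screener questions are asked. The demographic field names are assumptions.

```python
# Sketch of the pre-screening stage: keep only participants whose known
# demographic data already meets every hard requirement, then present the
# survivors with explicit screener questions. Field names are illustrative.
def prescreen(participants, required):
    """Return participants whose known attributes match all requirements."""
    return [p for p in participants
            if all(p.get(k) == v for k, v in required.items())]
```

For core panelists, `required` could carry many attributes and leave few or no screener questions to ask; for general panelists it would typically hold only basics such as age band and gender.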
  • a software development kit (SDK) may be integrated into a client's mobile application to enable interruption of the user experience.
  • the study interceptor 545 manages this interruptive activity.
  • Interruption of the user experience allows for immediate feedback testing or prompts to have the participant do some other activity.
  • the interrupt may be configured to trigger when some event or action is taken, such as the participant visiting a particular URL or meeting a determined threshold (e.g. having two items in their shopping cart).
  • the interruption allows the participant to be either redirected to another parallel user experience, or be prompted to agree to engage in a study or asked to answer a survey or the like.
  • the study may include one or more events to occur in order to validate its successful completion.
  • a task validator 547 tracks these metrics for study completion.
  • task validation falls into three categories: 1) completion of a particular action (such as arriving at a particular URL, URL containing a particular keyword, or the like), 2) completing a task within a time threshold (such as finding a product that meets criteria within a particular time limit), and 3) by question.
  • Questions may include any definition of success the study designer deems relevant. This may include a simple "were you successful in the task?" style question, or a more complex satisfaction question with multiple gradient answers, for example.
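The three validation categories described above can be sketched as independent checks over a task result record. The record fields, the target keyword, and the time limit are illustrative assumptions; a given study might use any one category or a combination.

```python
# Sketch of the three task-validation categories: reaching a URL (or a URL
# containing a keyword), finishing within a time threshold, and answering a
# success question. Field names and defaults are hypothetical.
def validate_task(result, target_keyword="checkout", time_limit=120):
    """Evaluate each validation category independently."""
    return {
        "by_action": target_keyword in result.get("final_url", ""),
        "by_time": result.get("elapsed_seconds", float("inf")) <= time_limit,
        "by_question": bool(result.get("self_reported_success", False)),
    }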
  • the research module 550 is provided in greater detail. Compared to traditional user experience study platforms, the present systems and methods particularly excel at providing timely and accurate insights into a user’s experience, due to these research tools.
  • the research module includes basic functionalities, such as playback of any video or audio recordings by the playback module 551.
  • This module may also include a machine transcription of the audio, which is then time-synchronized to the audio and/or video file. This allows a user to review and search the transcript (using keywords or the like) and immediately be taken to the relevant timing within the recording. Any of the results may be annotated using an annotator 559 as well. This allows, for example, the user to select a portion of the written transcription and provide an annotation relevant to the study results. The system can then automatically use the timing data to generate an edited video/audio clip associated with the annotation. If the user later searches the study results for the annotation, this auto-generated clip may be displayed for viewing.
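The keyword-to-clip lookup can be sketched by attaching start/end timestamps to each transcript segment, so a search hit maps directly to a clip boundary. The segment layout and sample text are assumptions.

```python
# Sketch of time-synchronized transcript search: each segment carries start
# and end timestamps, so a keyword match yields the clip boundaries used for
# playback positioning or automatic clip generation. Data layout is assumed.
def find_clip(transcript, keyword):
    """Return (start, end) of the first segment whose text contains keyword."""
    for seg in transcript:
        if keyword.lower() in seg["text"].lower():
            return (seg["start"], seg["end"])
    return None

transcript = [
    {"start": 0.0, "end": 4.2, "text": "I am looking for a printer"},
    {"start": 4.2, "end": 9.8, "text": "the checkout button is confusing"},
]
```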
  • the clickstream for the participant is recorded and mapped out as a branched tree by the clickstream analyzer 553. This may be aggregated with other participants' results for the study, to provide the user an indication of what any specific participant does to complete the assigned task, or what some aggregated group generally does.
  • the results aggregator 555 likewise combines task validation findings into aggregate numbers for analysis.
  • All results may be searched and filtered by a filtering engine 557 based upon any delineator. For example, a user may desire to know what the pain points of a given task are, and thus filters the results only by participants that failed to complete the task. Trends in the clickstream for these individuals may illustrate common activities that result in failure to complete the task.
  • the filtering may be by any known dimension (not simply success or failure events of a task). For example, during screening or as part of a survey attending the study, income levels, gender, education, age, shopping preferences, etc. may all be discovered. It is also possible that the participant pool includes some of this information in metadata associated with the participant as well. Any of this information may be used to drill down into the results filtering. For example it may be desired to filter for only participants over a certain age. If after a certain age success rates are found to drop off significantly, for example, it may be that the font sizing is too small, resulting in increased difficulty for people with deteriorating eyesight.
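Filtering across arbitrary dimensions can be sketched as a predicate over result records, as in the example of isolating older participants who failed a task. The record fields are illustrative.

```python
# Sketch of dimension-based result filtering: any combination of task
# outcome and demographic fields can be expressed as a predicate.
# Field names ("completed", "age") are hypothetical.
def filter_results(results, predicate):
    """Return only the result records matching the supplied predicate."""
    return [r for r in results if predicate(r)]

results = [
    {"participant": "p1", "completed": True, "age": 34},
    {"participant": "p2", "completed": False, "age": 67},
    {"participant": "p3", "completed": False, "age": 29},
]

# e.g., participants over 60 who failed the task (the font-size hypothesis)
failed_over_60 = filter_results(results, lambda r: not r["completed"] and r["age"] > 60)
```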
  • any of the results may be subject to annotations.
  • Annotations allow for different user reviewers to collectively aggregate insights that they develop by reviewing the results, and allows for filtering and searching for common events in the results.
  • All of the results activities are additionally ripe for machine learning analysis using deep learning.
  • the known demographic information may be fed into a recurrent neural network (RNN) or convolutional neural network (CNN) to identify which features are predictive of a task being completed or not.
  • Even more powerful is the ability for the clickstream to be fed as a feature set into the neural network to identify trends in click flow activity that are problematic or result in a decreased user experience.
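While the document proposes neural networks for this analysis, the underlying idea of finding problematic click-flow transitions can be illustrated with a simple frequency comparison: transitions that appear disproportionately in failed sessions are candidate 'points of confusion'. The session layout and ratio threshold are assumptions.

```python
# Simplified, non-neural sketch of 'points of confusion': page transitions
# that occur far more often in failed sessions than in successful ones.
from collections import Counter

def confusion_points(sessions, min_ratio=2.0):
    """Return transitions over-represented in failed sessions."""
    fail, ok = Counter(), Counter()
    for s in sessions:
        transitions = zip(s["pages"], s["pages"][1:])
        (ok if s["success"] else fail).update(transitions)
    return [t for t, n in fail.items() if n >= min_ratio * max(ok[t], 1)]
```

A trained network consuming the same transition features could learn subtler, combinatorial patterns, but the ratio test captures the core signal the text describes.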
  • FIG. 10 is a flow diagram of the process of user experience study testing, provided generally at 1000. At a high level this process includes three basic stages: the generation of the study (at 1010), the administration of the study (at 1020) and the generation of the study insights (at 1030). The earlier Figures 3A-C touched upon the study administration, and are intended to be considered one embodiment thereof.
  • Figure 11 provides a more detailed flow diagram of the study generation 1010.
  • Study templates may come in alternate languages as well, in some embodiments.
  • Study types generally include basic usability testing, surveys, card sort, tree test, click test, live intercept and advanced user insight research.
  • the basic usability test includes audio and/or video recordings for a relatively small number of participants with feedback.
  • a survey leverages large participant numbers with branched survey questions. Surveys may also include randomization and double blind studies.
  • Card sort, as discussed in great detail previously, includes open or closed card sorting studies. Tree tests assess the ease with which an item is found in a website menu by measuring where users expect to locate specific information.
  • the time taken to find the item, and rate of successful versus unsuccessful queries into different areas of the tree menu are collected as results.
  • Click test measures first impressions and defines success areas on a static image as a heat map graph.
  • the participant is presented with a static image (this may include a mock layout of a website/screenshot of the webpage, an advertising image, an array of images or any other static image) and is presented a text prompt.
  • the text prompt may include questions such as "Which image makes you the hungriest?" or "select the tab where you think deals on televisions are found."
  • the location and time of the user's click on the static image are recorded for the generation of a heat map. Clicks that take longer (indicating a degree of uncertainty on behalf of the participant) are weighted less strongly, whereas an immediate selection indicates that the participant's reaction is more certain.
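The time-based weighting of clicks for the heat map can be sketched with a simple decay function: faster clicks carry more weight. The exponential form and decay constant are assumptions; the document specifies only that slower clicks weigh less.

```python
# Sketch of time-weighted click aggregation for the heat map: an immediate
# click contributes weight 1.0, and weight decays with reaction time.
# The decay constant is an illustrative assumption.
import math

def click_weight(seconds_to_click, decay=0.2):
    """Exponentially decay the click's heat-map weight with reaction time."""
    return math.exp(-decay * seconds_to_click)
```

Each recorded (x, y) click would then add `click_weight(t)` rather than a flat count to the corresponding heat map cell.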
  • the user may be able to define regions on the static image that are considered 'answers' to the prompted question. This may allow for larger scale collection of success versus failure metrics, as well as enabling follow-up activities, such as a survey or additional click test, based upon where the participant clicks on the image.
  • advanced research includes a combination of the other methodologies with logical conditions and task validation, and is the subject of much of the below discussions. Each of these study types includes separate saved template designs.
  • Device type is selected next (at 1120).
  • mobile applications enable SDK integration for user experience interruption, when this study type is desired.
  • the device type is important for determining recording ability/camera capability (e.g., a mobile device will have a forward and reverse camera, whereas a laptop is likely to have only a single recording camera, and a desktop is not guaranteed to have any recording device) and the display type that is particularly well suited for the given device due to screen size constraints and the like.
  • participant types are selected (at 1140).
  • the selection of participants may include a selection by the user to use their own participants, or to rely upon the study system for providing qualified participants. If the study system is providing the participants, a set of screener questions is generated (at 1150). These screener questions may be saved for later usage as a screener profile. The core participants and larger general panel participants may be screened until the study quota is filled.
  • study requirements are set (at 1160). Study requirements may differ based upon the study type that was previously selected. For example, the study questions are set for a survey style study, or advanced research study. In basic usability studies and research studies the task may likewise be defined for the participants. For tree tests the information being sought is defined and the menu uploaded. For click test the static image is selected for usage. Lastly, the success validation is set (at 1170) for the advanced research study.
  • Study implementation begins with screening of the participants (at 1210). This includes initially filtering all possible participants by known demographic or personal information to determine potentially eligible individuals. For example, basic demographic data such as age range, household income and gender may be known for all participants. Additional demographic data such as education level, political affiliation, geography, race, languages spoken, social network connections, etc. may be compiled over time and incorporated into embodiments, when desired.
  • the screener profile may provide basic threshold requirements for these known demographics, allowing the system to immediately remove ineligible participants from the study. The remaining participants may be provided access to the study, or preferentially invited to the study, based upon participant workload, past performance, and study quota numbers.
  • a limited number (less than 30 participants) video recorded study that takes a long time (greater than 20 minutes) may be provided out on an invitation basis to only core panel participants with proven histories of engaging in these kinds of studies.
  • a large survey requiring a thousand participants that is expected to only take a few minutes may be offered to all eligible participants.
  • participant screening ensures that participants are not presented with studies they would never be eligible for based upon their basic demographic data (reducing participant fatigue and frustration), but still enables the user to configure the studies to target a particular participant based upon very specific criteria (e.g., purchasing baby products in the past week for example).
  • the participant may be presented with the study task (at 1230) which, again, depends directly upon the study type. This may include navigating a menu, finding a specific item, locating a URL, answering survey questions, providing an audio feedback, card sorting, clicking on a static image, or some combination thereof. Depending upon the tasks involved, the clickstream and optionally audio and/or video information may be recorded (at 1240).
  • the task completion is likewise validated (at 1250) by determining whether the success criteria for the study are met. This may include task completion within a particular time, locating a specific URL, answering a question, or a combination thereof.
  • Transcription enables searching of the audio recordings by keywords.
  • the transcriptions may be synchronized to the timing of the recording, thus when a portion of the transcription is searched, the recording will be set to the corresponding frames.
  • This allows for easy review of the recording, and allows for automatic clip generation by selecting portions of the transcription to highlight and tag/annotate (at 1330).
  • the corresponding video or audio clip is automatically edited that corresponds to this tag for easy retrieval.
  • the clip can likewise be shared by a public URL for wider dissemination. Any portion of the results, such as survey results and clickstream graphs, may similarly be annotated for simplified review.
  • clickstream data is analyzed (at 1340). This may include the rendering of the clickstream graphical interface showing what various participants did at each stage of their task. As noted before, deep learning neural networks may consume these graphs to identify 'points of confusion', which are transition points that are predictive of a failed outcome.
  • All the results are filterable (at 1350) allowing for complex analysis across any study dimension.
  • machine learning analysis may be employed, with every dimension of the study being a feature, to identify what elements (or combination thereof) are predictive of a particular outcome. This information may be employed to improve the design of subsequent website designs, menus, search results, and the like.
  • video recording also enables additional analysis not previously available, such as eye movement tracking and image analysis techniques.
  • For example, a number of facial recognition tools are available for emotion detection. Key emotions such as anger, frustration, excitement and contentment may be particularly helpful in determining the user's experience. A user who exhibits frustration with a task, yet still completes the study task, may warrant review despite the successful completion. Results of these advanced machine learning techniques may be automatically annotated into the recording for search by a user during analysis.
  • FIGs 14-24, 25A-25F, 26, 27A-27D, 28A-28B and 29, example screenshots of the operation of the user experience study system are provided.
  • an initial study generation site is provided at 1400 which allows the user to select the type of study they wish to generate.
  • the study types have been previously discussed in significant detail, and allow for template selection based upon the user input.
  • Upon selection of the study type, the user is presented with a screen to determine what device type the study will occur on, at 1500 of Figure 15.
  • the device type ensures that the study interface is properly adapted for the screen size and device capabilities.
  • SDK integration enables user experience interruptions.
  • the project/study details are requested from the user, at screen 1600 of Figure 16. This includes assigning an internal and external name for the study, applying relevant labels, inputting notes and other information regarding the goals of the study, and selecting the recording level desired and the participant language. A requirement for consent is also available based upon the jurisdiction the study is deployed in and/or the level of data collected.
  • the participant selection screen 1700 is presented to the user, as seen in Figure 17. Generally the options are for the user to supply their own participant pool, or utilize the pool of participants available to the user experience system. Again, this pool of participants includes the core panel as well as the general panel of participants.
  • a screener navigation link is available. If selected, the user is redirected to a screener interface, seen at 1800 of Figure 18. On the screener interface the user may select questions used to screen potential participants.
  • the questions available include single-answer (radio button) questions, drop-down single-answer questions, multiple-answer (check box) questions, rating-scale questions, and text-answer questions.
  • An example of the generation of a single-answer question is provided in relation to the interface 1900 of Figure 19.
  • the user inputs the question text and possible answers. Randomization events may be selected for the answers to avoid errors associated with answer patterns. The logical result of the answer selection allows for early termination, or branched question sequences.
  • the question(s) may be linked together with logical conditions in order to produce a screener group, as seen in interface 2000 of Figure 20. This allows for the questions to be arranged in "or" and "and" clusters.
  • the user is still requested to provide information regarding segments that they wish to be included in the study, as seen in Figure 21 at interface 2100.
  • the segment is provided a name and a number of desired participants. Additionally, basic data such as location, gender, age and household income may be set to ensure that only eligible participants are presented with the study (or invited to join the study).
  • This interface 2100 is simpler than the more invasive screener questions, and is generally employed in the alternative, when a faster and less involved participant list is needed (multiple participant groups are not generated in this embodiment).
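A minimal sketch of the basic segment screening described above (location, gender, age and household income, plus a desired participant count). All field names and the data format are hypothetical; the patent does not specify an implementation.

```python
# Illustrative segment-eligibility filter; field names are assumptions.

def is_eligible(participant, segment):
    """Return True if a participant matches a segment's basic criteria."""
    return (
        participant["location"] in segment["locations"]
        and (segment["gender"] is None or participant["gender"] == segment["gender"])
        and segment["min_age"] <= participant["age"] <= segment["max_age"]
        and participant["income"] >= segment["min_income"]
    )

def fill_segment(pool, segment):
    """Collect eligible participants up to the desired count for the segment."""
    invited = [p for p in pool if is_eligible(p, segment)]
    return invited[: segment["desired_count"]]

segment = {"locations": {"US"}, "gender": None, "min_age": 18,
           "max_age": 65, "min_income": 0, "desired_count": 2}
pool = [
    {"location": "US", "gender": "f", "age": 30, "income": 50000},
    {"location": "DE", "gender": "m", "age": 40, "income": 60000},
    {"location": "US", "gender": "m", "age": 25, "income": 45000},
]
print(len(fill_segment(pool, segment)))  # 2
```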
  • the user configures the actual tasks or questions involved in the study, as seen in the interface 2200 of Figure 22.
  • the tasks may include navigation tasks (either created from scratch or imported from this or a previous study), as seen, or may include any of the other tasks noted previously by pulling down on the drop-down menu. Other tasks may include click tests, card sorts, surveys, etc.
  • This interface additionally allows the user to indicate the order of the tasks.
  • once the task is created, it is configured via a task configuration interface 2300 of Figure 23. For a navigation task as seen, this may include putting the participant at an initialization URL, or merely beginning from a webpage the participant was at in the previous step. The task is titled and described.
  • the task may state “find and update your contact information.”
  • a taskbar description is also provided, and can include success and abandonment selection options.
  • the taskbar may instead only include abandonment as an option, and success is defined by successfully completing the validation requirements.
  • the user has access to a validation tab for configuring validation requirements, and a recording tab, where video recording options are set.
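Where success is defined by validation requirements rather than a self-reported success button, validation might, for example, check the pages a participant actually visited. This is a hedged sketch under that assumption; the rule format (a URL prefix) is illustrative only.

```python
# Hedged sketch: validating task success by the pages a participant visited.
# The URL-prefix rule format is an assumption for illustration only.

def task_succeeded(visited_urls, success_url_prefix):
    """A task counts as successful if any visited URL matches the rule."""
    return any(url.startswith(success_url_prefix) for url in visited_urls)

visited = ["https://example.com/home", "https://example.com/account/contact"]
print(task_succeeded(visited, "https://example.com/account"))  # True
```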
  • the study may be launched and made available to participants for testing.
  • the results of the testing are compiled in a monitoring interface as seen at 2400 of Figure 24.
  • This interface shows the number of participants initially involved. In this case an original number of 24 participants were invited for the study. Of the 24, seven were excluded from the study due to not meeting the screener question requirements. An additional two individuals exited the study (abandoned it) within the welcome screen. The remaining 15 participants completed the study.
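The participant funnel above (24 invited, 7 screened out, 2 abandoned, 15 completed) reduces to simple arithmetic, sketched here for illustration:

```python
# Simple sketch of the participant funnel arithmetic from the monitoring
# interface example above (numbers taken from the text).

def funnel_summary(invited, screened_out, abandoned):
    """Summarize a study's participant funnel."""
    completed = invited - screened_out - abandoned
    return {
        "invited": invited,
        "screened_out": screened_out,
        "abandoned": abandoned,
        "completed": completed,
        "completion_rate": completed / invited,
    }

summary = funnel_summary(invited=24, screened_out=7, abandoned=2)
print(summary["completed"])  # 15
```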
  • at 2500E of Figure 25E, results of task completion are presented.
  • two tasks were presented to the participants, and 13 of the participants were able to successfully complete the first task, while 14 were able to complete the second task.
  • Each task is then analyzed in greater detail, including effectiveness. As seen at 2500F of Figure 25F, the number of page views, number of clicks required, and (not shown) timing for task completion are likewise presented. In some cases, these results may be provided with confidence intervals predicting the range within which 95% of individuals presented with the task will fall.
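One plausible way to compute such an interval, assuming task completion times are roughly normally distributed, is mean ± 1.96 sample standard deviations. The patent does not specify the method, so this sketch is only one possible interpretation.

```python
import math

# Hedged sketch: under a normal approximation, roughly 95% of individual task
# times fall within mean +/- 1.96 standard deviations.  The interval method
# is an assumption; the patent does not specify one.

def interval_95(times):
    """Return the (low, high) range expected to cover ~95% of individuals."""
    n = len(times)
    mean = sum(times) / n
    var = sum((t - mean) ** 2 for t in times) / (n - 1)  # sample variance
    sd = math.sqrt(var)
    return (mean - 1.96 * sd, mean + 1.96 * sd)

lo, hi = interval_95([30.0, 42.0, 35.0, 50.0, 38.0])
print(round(lo, 1), round(hi, 1))  # 24.2 53.8
```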
  • FIG. 26 provides an example screenshot of a review pane 2600 for the video clips.
  • the left side navigation menu illustrates how the videos can be filtered by tasks, marks, effectiveness, camera options, audio options, videos that have been viewed (or not), videos that have been clipped, and availability.
  • Figure 27A provides an example screenshot 2700A of a first heat map generated for a webpage screenshot image using a click test.
  • the participants were given a prompt of “Where would you click to get keychains?”
  • the click locations are illustrated.
  • the user defined the “keychain” hyperlink as a successful ‘answer’ to the prompt/question.
  • the results from the click test can thus be collected and illustrated in a simple metrics interface as seen in image 2700B of Figure 27B.
  • Figures 27C and 27D provide additional heat map interfaces that capture not only click locations, but also speed of the clicks, as seen at images 2700C and 2700D, respectively.
  • the screenshot of 2700D is a blackout heat map which accentuates the click locations for user visibility.
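Click-location heat maps like those above can be built by binning raw (x, y) click coordinates into a coarse grid; the cell size and input format below are assumptions for illustration.

```python
from collections import Counter

# Illustrative sketch of aggregating click coordinates into grid cells for a
# heat map; the 50-pixel cell size and data format are assumptions.

def click_heatmap(clicks, cell=50):
    """Bin (x, y) click coordinates into cell x cell pixel buckets."""
    return Counter((x // cell, y // cell) for x, y in clicks)

clicks = [(12, 40), (30, 22), (260, 410), (270, 415), (265, 400)]
grid = click_heatmap(clicks)
hottest_cell, count = grid.most_common(1)[0]
print(hottest_cell, count)  # (5, 8) 3
```

The resulting counts per cell can then be rendered as color intensities over the page screenshot, or inverted for a blackout-style view.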
  • Figure 28A provides a screenshot 2800A of a user uploading a tree structure menu into the system for a tree test. The user is then prompted, when shown only the top level of the tree, to find a particular item in the tree structure. In one example tree test the participants are requested to find olive oil. The results of this tree test may be seen in relation to Figure 28B at screenshot 2800B. Here it can be seen, both in terms of percentages and raw numbers of participants, where people looked for the product ‘olive oil’. The majority of participants correctly went to the ‘ingredients’ category and then to ‘oils’.
  • Figure 29 provides an example screenshot of a dendrogram 2900 from a card sorting or tree test activity. Dendrograms provide results in a tree structure that allows for selection of a node within the decision process to determine what portion of total participants falls below the selected node. In this example 77.7% of participants selected luggage storage, express check in, slippers and private bathrooms during the card sort involved in this example.
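Determining what portion of participants falls below a selected dendrogram node, as in the 77.7% example above, amounts to a recursive sum over the subtree; the tree format and numbers below are assumptions for illustration.

```python
# Sketch of computing the share of participants under a dendrogram node.
# The node format ("count" leaves, "children" internal nodes) is assumed.

def participants_below(node):
    """Sum participant counts over all leaves under a node."""
    if "count" in node:          # leaf: participants who grouped an item here
        return node["count"]
    return sum(participants_below(child) for child in node["children"])

tree = {"children": [
    {"count": 7},
    {"children": [{"count": 5}, {"count": 2}]},
]}
total = 18  # hypothetical total participants in the study
share = participants_below(tree) / total
print(round(share * 100, 1))  # 77.8
```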
  • the machine operates as a standalone device or may be connected (e.g., networked) to other machines.
  • the machine may operate in the capacity of a server or a client machine in a client-server network environment or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine may be a server computer, a client computer, a virtual machine, a personal computer (PC), a tablet PC, a laptop computer, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, an iPhone, a Blackberry, a processor, a telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • while the machine-readable medium or machine-readable storage medium is shown in an exemplary embodiment to be a single medium, the terms “machine-readable medium” and “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the terms “machine-readable medium” and “machine-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the presently disclosed technique and innovation.
  • routines executed to implement the embodiments of the disclosure may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as “computer programs.”
  • the computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and when read and executed by one or more processing units or processors in a computer, cause the computer to perform operations to execute elements involving the various aspects of the disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Theoretical Computer Science (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Game Theory and Decision Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present invention relates to systems and methods for generating, administering and analyzing a user experience study. It includes the selection of participants, either those supplied by the user or those supplied by the system. For system-supplied participants, a large panel of participants is screened either by a set of basic metrics (age, gender and income) or by advanced screener questions with branched answers. These screener questions may be nested to allow various participant groups to be created. After participants are screened, they may be invited to join the study. The study itself may be a card sorting exercise, a survey, a tree study, a click test, basic navigation, or an advanced recorded study. Study results may be filtered along various participant dimensions and validation criteria. Video recording analysis includes transcripts to enable searching and automatic clip generation.
PCT/US2020/012218 2019-01-31 2020-01-03 Systèmes et procédés de génération, d'administration et d'analyse de tests d'expérience d'utilisateur WO2020159665A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20747572.4A EP3918561A4 (fr) 2019-01-31 2020-01-03 Systèmes et procédés de génération, d'administration et d'analyse de tests d'expérience d'utilisateur

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201962799646P 2019-01-31 2019-01-31
US62/799,646 2019-01-31
US16/730,954 2019-12-30
US16/730,957 US11934475B2 (en) 2010-05-26 2019-12-30 Advanced analysis of online user experience studies
US16/730,954 US11068374B2 (en) 2010-05-26 2019-12-30 Generation, administration and analysis of user experience testing
US16/730,957 2019-12-30

Publications (1)

Publication Number Publication Date
WO2020159665A1 true WO2020159665A1 (fr) 2020-08-06

Family

ID=71841620

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/012218 WO2020159665A1 (fr) 2019-01-31 2020-01-03 Systèmes et procédés de génération, d'administration et d'analyse de tests d'expérience d'utilisateur

Country Status (2)

Country Link
EP (1) EP3918561A4 (fr)
WO (1) WO2020159665A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3963435A4 (fr) * 2019-04-30 2023-01-25 Userzoom Technologies, Inc. Systems and methods for improvements to user experience testing
EP4014115A4 (fr) * 2019-08-15 2023-11-29 Userzoom Technologies, Inc. Systems and methods for the analysis of user experience testing with AI acceleration
US11909100B2 (en) 2019-01-31 2024-02-20 Userzoom Technologies, Inc. Systems and methods for the analysis of user experience testing with AI acceleration
US11941039B2 (en) 2010-05-26 2024-03-26 Userzoom Technologies, Inc. Systems and methods for improvements to user experience testing

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160217481A1 (en) * 2015-01-27 2016-07-28 Jacqueline Stetson PASTORE Communication system and server for conducting user experience study
US20170228745A1 (en) * 2016-02-09 2017-08-10 UEGroup Incorporated Tools and methods for capturing and measuring human perception and feelings

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140052853A1 (en) * 2010-05-26 2014-02-20 Xavier Mestres Unmoderated Remote User Testing and Card Sorting
AU2015101408A4 (en) * 2015-09-28 2015-11-05 Beehaviour.Net Pty Ltd Method, system and computer program for recording online browsing behaviour


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3918561A4 *


Also Published As

Publication number Publication date
EP3918561A1 (fr) 2021-12-08
EP3918561A4 (fr) 2022-10-19

Similar Documents

Publication Publication Date Title
US11016877B2 (en) Remote virtual code tracking of participant activities at a website
US11544135B2 (en) Systems and methods for the analysis of user experience testing with AI acceleration
US20190123989A1 (en) Unmoderated remote user testing and card sorting
US11941039B2 (en) Systems and methods for improvements to user experience testing
WO2020159665A1 (fr) 2019-01-31 2020-01-03 Systems and methods for the generation, administration and analysis of user experience testing
US20210407312A1 (en) Systems and methods for moderated user experience testing
US20220083896A9 (en) Systems and methods for improved modelling of partitioned datasets
US11709754B2 (en) Generation, administration and analysis of user experience testing
EP3963435A1 (fr) 2022-03-09 Systems and methods for improvements to user experience testing
US11909100B2 (en) Systems and methods for the analysis of user experience testing with AI acceleration
US11934475B2 (en) Advanced analysis of online user experience studies
US20230368226A1 (en) Systems and methods for improved user experience participant selection
US11494793B2 (en) Systems and methods for the generation, administration and analysis of click testing
WO2021030636A1 (fr) 2021-02-18 Systems and methods for the analysis of user experience testing with AI acceleration
US20230090695A1 (en) Systems and methods for the generation and analysis of a user experience score
EP4375912A1 (fr) 2024-05-29 Systems and methods for improved analysis of user experience results
US20240029103A1 (en) AI-Based Advertisement Prediction and Optimization Using Content Cube Methodology
Nguyen Digital marketing analytics guide for the business of an online influencer. Case: Lavendaire, US
Wang et al. Short and sweet: How product quality uncertainty, review length and richness shape review helpfulness
Al Qudah A framework for adaptive personalised e-advertisements
KR20220147320A (ko) 인공지능 ai 알고리즘을 활용한 행사 이벤트 큐레이팅 서비스 솔루션 제공장치 및 제공방법
Lakkaraju An Analytics-based Framework for Engaging Prospective Students in Higher Education Marketing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20747572

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020747572

Country of ref document: EP

Effective date: 20210831