WO2020016646A1 - Method for automating and creating challenges, calls to action, interviews and questions - Google Patents


Info

Publication number
WO2020016646A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
user
users
interface
databases
Prior art date
Application number
PCT/IB2019/000665
Other languages
English (en)
Inventor
Amy Balderson Junod
Original Assignee
Amy Balderson Junod
Priority date
Filing date
Publication date
Priority claimed from US16/178,763 (US20190171653A1)
Application filed by Amy Balderson Junod filed Critical Amy Balderson Junod
Publication of WO2020016646A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/483Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

Definitions

  • the embodiments presented herein relate to automating and creating interview questions that allow an interviewee to control his or her interview by managing the creation of and responses to audio-visual and free-text questions, themes and interviews, documents, photographs and other media both in real time and offline.
  • the embodiments presented herein provide systems and methods for enabling an interviewee to control his or her interview through management of the resultant interview, which includes managing the creation of and responses to asynchronous audio-visual and written questions, themes, interviews, documents, photographs and other media, both in real time online and post-interview off-line.
  • the interviewee can have the ability to control all thematic content associated with his or her interview and may determine the timing of release, audiences, and location of meta-data and storage.
  • the embodiments presented herein enable the automation of responses from the automated interviewer including (1) recording the video, (2) creating meta-data, and (3) enabling the modification of the means and types of interview questions based upon feedback cues received from the interviewee during the interview.
  • feedback cues can include but are not limited to interviewee gestures, tone of voice, spoken words, and body language, and they can encompass future types of internet technology and artificial intelligence.
  • One example embodiment is a method of sourcing content.
  • the example method includes enabling a user to create or deposit content by a subject person in connection with a given topic, and enabling the user to tailor parameters associated with distribution of the content.
  • the parameters include at least an identifier of the subject person in the content and a distribution control indicator selectable to a given number of states, including a zero-distribution state.
  • the method further includes enabling one or more other users to access and contribute to the content via a content collection and distribution channel by selection of the given topic or a different topic linked to the given topic, and facilitating distribution, disablement, or retraction of the content via the distribution channel as a function of at least the distribution control indicator. For example, a user may restrict specific content for release upon his or her death, and limit viewership to a specific group of individuals.
  • the parameters can be stored separately from the content and the content can be replicated across multiple content collection and distribution channels, apps, widgets as a function of the distribution control indicator.
  • the example method can further enable the user or subject person to modify the distribution control indicator to restrict further distribution to a subset of the multiple distribution channels or to disable or retract the content from one or more of the multiple distribution channels.
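The distribution control indicator and per-channel retraction described above can be sketched as a small data structure. This is a minimal illustration only; the class names, the particular set of states, and the channel model are assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field
from enum import Enum

class DistributionState(Enum):
    """Selectable states of the distribution control indicator (assumed set)."""
    ZERO = 0        # zero-distribution state: no channel may carry the content
    RESTRICTED = 1  # limited to a chosen subset of channels
    PUBLIC = 2      # distributable on all linked channels

@dataclass
class DistributionControl:
    """Parameters stored separately from the content itself."""
    subject_id: str                              # identifier of the subject person
    state: DistributionState = DistributionState.ZERO
    channels: set = field(default_factory=set)   # channels allowed to carry the content

    def retract(self, channel: str) -> None:
        """Retract the content from a single distribution channel."""
        self.channels.discard(channel)

    def may_distribute(self, channel: str) -> bool:
        """Distribution is a function of the indicator state and channel subset."""
        if self.state is DistributionState.ZERO:
            return False
        return channel in self.channels

# Example: content replicated to three channels, then retracted from one.
ctrl = DistributionControl(subject_id="user-42",
                           state=DistributionState.RESTRICTED,
                           channels={"app", "widget", "website"})
ctrl.retract("widget")
```

Because the parameters live apart from the content, a single change to the indicator can disable or retract replicas across every channel at once.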
  • the content collection and distribution channel can include a user-created relational database (or via other future means) accessible by the other users.
  • the example method can further include enabling a user or database manager to import the topic or define a related topic in the relational database in a manner that relates the content to the topic or related topic.
  • the topic or related topic can be a question or related inquiries, and the content can be a video clip or multiple video clips that are associated with the inquiries based on a relationship with at least one inquiry.
  • the parameters can include metadata through which the relational database creates the associations.
  • enabling a user to create or deposit content can include duplicating the content at multiple datastore locations. Enabling the user to tailor parameters associated with distribution of the content can include enabling the user to assign current and/or future ownership of the content. For example, prior to or upon death of the user, instructions can be provided regarding ownership and distribution of the content after death of the user.
  • enabling users to access and contribute to the content can include providing one or more universal resource locator (URL) links to website(s) associated with a datastore for the content, or embedding code on a website that is remote from the datastore.
  • Enabling users to access and contribute to the content can include challenging the one or more users to provide input regarding the topic.
  • Enabling users to access and contribute to the content can include engaging other users by sharing questions or themed challenges, for example, on social media.
  • the example method can further include analyzing amalgamated metadata based on the content created and deposited by one or more users, where amalgamated metadata includes at least a topic identification or trending themes.
  • Another example embodiment is a system for providing interactive collaboration over a computer network.
  • the example system includes a content database, an organizational database, an interface, and a processor.
  • the content database is configured to store digital content created by users.
  • the organizational database is configured to store representations of relationships between the digital content stored in the content database and the users.
  • the interface is configured to provide access to the digital content by other users and to enable the other users to provide feedback related to the digital content in a similar format as the digital content.
  • the processor is in communication with the databases and the interface and configured to implement an intermediary layer between the databases and the interface.
  • the intermediary layer is configured to control which users can access the digital content, to store the feedback from the other users in the database in a manner that is associated with the original digital content, and to provide users with controlled access to the feedback.
  • a user accessing the content can be enabled to provide feedback in the form of video, text, or other media.
  • the system can further include a server hosting the databases and the interface, and the interface can include a website accessible by a uniform resource locator (URL) link.
  • the interface can include embedded code on one or more websites that are remote from the databases. Further, the interface can include embedded code on one or more websites that are not visible on the URL, but accessible through an Administrator Dashboard or user account.
  • URL uniform resource locator
  • the databases can be relational databases, or other future technological structures for organizing and distributing content.
  • Feedback to the digital content can be stored in the content database and representations of relationships can be stored in the organizational database in a manner associated with the digital content in a parent-child relationship.
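The parent-child storage of feedback across a content database and an organizational database can be sketched with an in-memory relational database. The two-table schema and column names are simplifying assumptions for illustration.

```python
import sqlite3

# "content" stands in for the content database; "relations" for the
# organizational database that records parent-child relationships
# between original content and feedback to it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE content (id INTEGER PRIMARY KEY, author TEXT, media TEXT);
CREATE TABLE relations (child_id INTEGER, parent_id INTEGER);
""")
conn.execute("INSERT INTO content VALUES (1, 'alice', 'question.mp4')")
conn.execute("INSERT INTO content VALUES (2, 'bob', 'answer.mp4')")
# bob's answer is stored as feedback (child) to alice's question (parent)
conn.execute("INSERT INTO relations VALUES (2, 1)")

def feedback_for(content_id):
    """Return media items stored as feedback (children) of a content item."""
    rows = conn.execute(
        "SELECT c.media FROM content c "
        "JOIN relations r ON c.id = r.child_id "
        "WHERE r.parent_id = ?", (content_id,))
    return [m for (m,) in rows]
```

When the interface requests a content item, the processor can run a query like `feedback_for` to attach the associated feedback before delivering both together.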
  • the representations of relationships can be updated in real time, or asynchronously.
  • the processor can be configured to, when providing the digital content to the interface, access the databases to determine additional digital content that is feedback to the digital content and to provide to the interface the feedback along with the digital content.
  • the interface can be configured to present the digital content to a user along with one or more indications of the feedback.
  • the interface provides access via (i) a uniform resource locator (URL) link to a website associated with the organizational database, which provides information (e.g., identifier tags) used to locate content associated with a specific user and/or content that is stored in the content database, or (ii) embedded code on a website that is remote from the URL.
  • the processor is in communication with the databases and the interface and configured to implement an intermediary layer between the databases and the interface.
  • the intermediary layer is configured to control which users can access the video content, store the video feedback in the database in a parent-child relationship with the video content, and, when providing digital content to the interface, access the databases to determine additional video content that is feedback to the video content.
  • FIG. 1 is an example embodiment of a method.
  • FIG. 2A is an example embodiment of the use of audio processing feedback.
  • FIG. 2B is an example embodiment of the use of visual processing.
  • FIG. 3 is an example embodiment of the search within the question database by topic, author, date, keyword, or tailored search.
  • FIG. 5 is an example embodiment of a user being able to create their own asynchronous interview.
  • FIG. 6 is an example embodiment wherein an end user edits their asynchronous interview by adding existing questions into a video playlist.
  • FIGS. 7A-D are example embodiments wherein interviews may be created by recording questions in sequence, later reorganizing them in a different sequence, or choosing from pre-existing questions to create an interview.
  • FIGS. 8A-B illustrate an example embodiment of the questions database wherein an end-user may choose questions.
  • FIG. 9 is an alternative embodiment of a method of operation.
  • FIG. 10 is an example embodiment wherein an interviewee may follow his or her progress through the interview.
  • FIG. 11 is an example embodiment wherein an interviewee may choose between a variety of thematic interviews to answer.
  • FIGS. 12A-B are example embodiments wherein a user may create questions and widgets, share them on social media, or embed Q&A widgets anywhere on the internet using a uniquely generated embed code.
  • FIG. 13 is an example embodiment wherein an embedded widget displays the question and responses.
  • FIG. 14 is an example embodiment wherein an embedded widget collects, processes, and displays content.
  • FIG. 15A is an example embodiment wherein a system becomes a user-managed relational audiovisual database.
  • the creation of widgets may be individual (one-to-one interaction), one-to-many, or many-to-many.
  • FIG. 15B is an example embodiment wherein a private Q&A widget can collect content from one person or many.
  • FIG. 15C is an example embodiment wherein complete question series can collect content from one or more persons. The embodiments of FIGS. 15A-C all enable a user to create and tailor user-generated relational databases.
  • FIG. 16 is an example embodiment wherein public questions may also be used as one-to-one or one-to-many interactions elsewhere on the internet.
  • FIG. 17 is an example embodiment wherein searchability across the relational database is live across devices.
  • FIG. 18 is an example embodiment wherein a user’s performance may be qualitatively assessed over time.
  • FIG. 20 illustrates an example embodiment of underlying processes and other aspects of a software system stored in memory.
  • FIG. 21 illustrates an example embodiment of how users can interact with the master database.
  • FIG. 22 is a block diagram illustrating a system for providing interactive collaboration over a computer network, according to an example embodiment.
  • FIG. 23 is a flow diagram illustrating a method of sourcing content, according to an example embodiment.
  • widget refers to an instance of an interview that is accessible to users to view and provide feedback.
  • distribution control indicator refers to information that is used by the system to control which users can access which content.
  • a distribution control indicator can be, for example, managed by users and/or by Administrators and Super Administrators.
  • A) Users may create widgets, and content collection and distribution related to widgets may be controlled by users. For example, the user can control the subject theme of the content collected by the widget, and may delete or change the order of content responses and distribution. Individuals may create one or more widgets and each widget is linked to its own content database. Users may control multiple sets of widgets (sets of tagged content and associated databases) that they may place on one or more URLs selected by the user.
  • B) Site Administrators have overall monitoring and control of widget and content placement on and off their own URLs. Site Administrators can control the searchability of users and content.
  • C) Super Administrators have overall monitoring and control of widget and content (collection and distribution) placement on and off all related URLs.
  • a multinational organization may have separate URLs for each country organization and individual brands within each country, each with their own widgets and corresponding databases, linked to the organization’s customizable and searchable databases.
  • the multinational organization’s Super Administrator may amalgamate all of its content, on and off public URLs to analyze and make improvements to the organization’s content collection and distribution strategy.
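The three-tier control hierarchy above (users, Site Administrators, Super Administrators) can be sketched as a simple permission check. The role names follow the text; the exact scoping rules and function signature are assumptions for illustration.

```python
def can_manage(role, widget_owner, widget_url, actor, admin_urls):
    """Illustrative check of whether `actor` holding `role` may manage a
    widget owned by `widget_owner` and placed on `widget_url`.

    - super_admin: monitors and controls widgets on and off all related URLs
    - site_admin:  controls widgets on their own URLs (`admin_urls`)
    - user:        controls only the widgets they created
    """
    if role == "super_admin":
        return True
    if role == "site_admin":
        return widget_url in admin_urls
    return widget_owner == actor
```

In the multinational-organization example, each country brand's URL would appear in its Site Administrator's `admin_urls`, while the Super Administrator's scope spans them all for amalgamated analysis.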
  • FIG. 1 is an embodiment of an example method.
  • an interviewee accesses a question database 101.
  • the question database is generally stored at an off-site location and is accessible via a remote network.
  • the interviewee may be located in, for example, one country, but accesses the question database that is located on a server in another country.
  • the question database has stored thereon at least one question that will be presented to the interviewee.
  • the question or questions may be stored in a variety of formats and computer languages, for example .txt format.
  • questions may exist as audiovisual questions, corresponding text questions, and tags.
  • the audiovisual questions may be stored separately from the text versions of the questions and tags.
  • the text versions may be created by voice to text transcriptions.
  • the interviewee may select which language the question(s) should be presented in.
  • the ability to choose from various languages can be implemented by storing versions of the question(s) in multiple languages, including sign languages, in separate, relational databases that are cross-referenceable.
  • An interviewee may locate a specific question, inquiries, challenges, or calls-to-action using search criteria.
  • the interviewee can be asked a question directly by a non-interviewee who has access to the system.
  • the interviewee may answer any question on the question database, irrespective of if he or she has received the question directly.
  • Asking a question of any interviewee that may be using the system to conduct an interview can occur using a "Question" tool, wherein a non-interviewee can record a question using video means, audio means, text, or display means such as a graphic user interface.
  • search criteria used to locate content can include metadata descriptive information, wherein metadata descriptive information is information relating to characteristics of, in this case, a question, inquiry, challenge or call-to-action and/or responses.
  • Examples of metadata used to locate content from the database include the interviewees who answered the question, the subject matter of the question and responses, the language the question was presented in, the person who created the question, etc. Metadata can be auto-generated through a statistical analysis of content, for example through a word cloud.
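The word-cloud style auto-generation of metadata mentioned above can be sketched as a word-frequency analysis over the transcript. This is a minimal illustration; the stopword list and tag count are assumptions.

```python
from collections import Counter
import re

# Tiny illustrative stopword list; a real deployment would use a fuller one.
STOPWORDS = {"the", "a", "an", "i", "my", "me", "and", "of",
             "to", "in", "was", "with", "her", "his"}

def auto_metadata(transcript, top_n=3):
    """Derive topic tags from a transcript by word frequency --
    the statistical analysis underlying a word cloud."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_n)]

tags = auto_metadata("My grandmother taught me to cook. Cooking with my "
                     "grandmother in her kitchen was the best part of summer.")
```

Tags produced this way can be stored alongside the manually inputted tags, making the response searchable by its dominant themes.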
  • metadata can be created by the person who created the question and/or by the user who responds.
  • the user reviews the various questions to determine if there are questions he desires to answer 103.
  • the interviewee may select the question(s) he desires to answer and place them in a digital shopping-cart-like feature. If there are no questions the interviewee desires to answer, or if the interviewee desires to create personalized questions, the interviewee may access a Question Wizard 105.
  • the question wizard allows the interviewee to type in a question that he would like to ask, and to video record a question the interviewee would like to answer or have answered. Questions created via the question wizard are entered into the Question Database and added to and processed in the interviewee's metadata.
  • the interviewee may customize the question using variables 107.
  • the interviewee may organize the questions in the preferred order of presentation.
  • the interviewee then initiates the interview system 109, wherein the series of selected questions are presented to the interviewee.
  • the questions may be presented visually on a graphic user interface (GUI), for example a computer terminal; the questions may be presented orally, for example by a voice overlay that "reads" the interview question to the interviewee; or the questions may be presented both orally and visually simultaneously.
  • oral presentation of the question occurs by computer-synthesized speech, which "reads" the question to the interviewee.
  • in an example from academia, the complete set of upcoming interview questions is made available to the interviewee so that he or she may pace the interview and understand where they are in the process; for example, the interviewee can determine that the interview is 7 questions long and that he is on question 1 (see FIG. 10).
  • the interviewee answers the questions 109.
  • the interviewee’s responses are then recorded by visual and audio recording equipment available through using their device, such as computer, mobile phone, tablet, etc.
  • the answer and all associated metadata are automatically stored in a remote folder of a remote database associated with the interviewee 111.
  • Each interview performed by an interviewee will be stored as a separate item, whereby the various interviews can be compared to reveal trends in the interviewee’s behavior, speech patterns, overall attention, and length of submitted answers.
  • the interviewee’s data may also be compared and amalgamated across the larger body of content. Qualitative and quantitative progress can then be matched to education competencies, learning requirements, or other educational benefits.
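The comparison of separately stored interviews to reveal trends can be sketched as follows: each interview record keeps simple measurements, and differences across time expose changes in, say, answer length. The record fields and measurements are assumptions for illustration.

```python
# Each stored interview is a separate item with per-interview measurements.
interviews = [
    {"date": "2019-01", "avg_answer_seconds": 22.0, "words": 120},
    {"date": "2019-03", "avg_answer_seconds": 31.5, "words": 180},
    {"date": "2019-06", "avg_answer_seconds": 40.0, "words": 260},
]

def trend(records, key):
    """Return the per-step change in one measurement, oldest interview first.
    Consistently positive values indicate growth over time."""
    values = [r[key] for r in sorted(records, key=lambda r: r["date"])]
    return [round(b - a, 2) for a, b in zip(values, values[1:])]
```

Trends computed this way per interviewee could then be matched against education competencies, or amalgamated across the larger body of content.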
  • the visual and audio recording equipment may be incorporated within the equipment the interviewee is using to conduct the interview presented to him.
  • Each response to a question generates a visual and audio file, combined with relevant text and manually inputted tags.
  • prior to storing the content in the folder, the file can be analyzed 113.
  • Analysis may consist of "breaking down" the file into separate audio, visual, and text components.
  • the purpose of file analysis prior to filing is to gauge the visual cues, emotional cues, cultural cues, and audible cues particular to the interviewee. For example, if the interviewee was presented with an interview question that required an "affirmative" or "negative" response, the interviewee's "body language" will be analyzed visually and compared with the visual responses of other interviewees. The system will then modify future questions based on the "cultural cue." Culture includes differences between national cultures, or regional differences within a larger culture, for example different regions of various countries.
  • as interview questions continue to be provided and answered, and the answers analyzed for visual cues, the interview questions become more automatically tailored and specific 113. Near or at the beginning of the interview, the interviewee should feel as though he or she is speaking with a person from his or her own culture. In a preferred embodiment, modification of the interview questions occurs based on visual cues, audible cues, emotional cues, and other cultural cues. It is believed that interviewees will provide more detailed, introspective interviews as their level of comfort increases because they believe they are speaking with someone who "understands them and their culture."
  • Meta-data can refer to - but is not limited to - any aspect of the interview that characterizes the interview, for example the name of the interviewee, the language the interview was performed in, subjects covered in the interview, the date of the interview, etc.
  • the meta-data is auto-generated by the various cue analyses, for example word cloud generation.
  • Meta-data can be either public, meaning it can be used by others or specifically identified groups of users to categorize an interview, or private, meaning only certain people have access to this information. Examples of private meta-data include storing key elements, chosen by the interviewee, on servers in locations of their choice, for example the USA or Ireland.
  • Private meta-data may be stored in a separate location from their content.
  • the interview is then made available to the public on a remote server 117. Access to the interview can be via website access and the like.
  • FIG. 2A is an embodiment of an aspect of the example embodiments disclosed herein, whereby the audio component of an interview can be analyzed to provide feedback, which in turn can be used to modify the interview questions to better suit an interviewee's culture and emotional state.
  • the feedback is obtained by determining various cues specific to each interviewee.
  • audio information 201 is collected.
  • the audio information is collected following each answer to an interview question.
  • the audio information may be collected by a microphone using the interviewee’s device to perform the interview.
  • the audio information may be collected using a standalone microphone. Audio pickup, or collection of the audio signal, can be obtained using instrumentation suitable for the human voice. Instrumentation suitable for the present system allows categorization of the audio signal, to result in pattern recognition.
  • an example system includes a digital signal processor (DSP) to focus the frequency of the microphone within the human voice range.
  • the example system allows a determination of the audio pattern recognition of the interviewee. Areas of pattern recognition include determining the phonemic 203, prosodic 205, intonation 207, and accent 209 characteristics. In one embodiment, in determining the phonemic characteristic, the example system is suitable for determining the speech sounds of the interviewee. In another embodiment, in determining prosodic characteristics, the system determines the tune and rhythm of speech, including the pitch, loudness, and rhythm the interviewee used during the interview.
  • the pitch of the interviewee is recorded and measured, which may provide insight into the personality and mood of the user.
  • the accent of the interviewee is recorded and measured and integrated into the cultural meta-data master file for the specific interviewee.
  • the language of the interviewee can be identified 211. Identification can occur through a comparison means, whereby the categorized areas of pattern recognition for the interview are compared to previously stored information relating to a human language. The stored information relating to human language forms a baseline. In another embodiment, the interviewee, through the system, can select his or her mother tongue or preferred language.
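The comparison of categorized pattern-recognition results against stored language baselines can be sketched as a nearest-match over feature vectors. The feature layout (phonemic, prosodic, intonation, accent scores), the baseline values, and the use of cosine similarity are all illustrative assumptions.

```python
import math

# Hypothetical per-language baselines: (phonemic, prosodic, intonation, accent)
# scores derived from previously stored information about each human language.
BASELINES = {
    "en": (0.8, 0.4, 0.5, 0.3),
    "fr": (0.5, 0.7, 0.8, 0.6),
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def identify_language(features):
    """Pick the baseline language whose pattern best matches the
    interviewee's categorized audio features."""
    return max(BASELINES, key=lambda lang: cosine(features, BASELINES[lang]))
```

If the match is weak or ambiguous, the system can instead fall back to the interviewee's self-selected mother tongue, as the text also provides.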
  • Data from the categorized pattern recognition and the determination of the language identification are forwarded to determine cue information, including cultural context 235 and emotional context 241.
  • Both the cultural context 235 and emotional context 241 are processed with the visual data, whereby the visual data is obtained via the video recorded during the interview. Processing with the visual data results in a cultural context cue 237 and an emotional context cue 239.
  • the audible data is then used to produce a text transcript 213, wherein it is processed against an existing language database of text 215.
  • the selected language database of text is selected from the identified language of the interviewee 211.
  • the transcribed text 213 may also be used to generate metadata 217.
  • the metadata is utilized for organizing the interview into categories which allows content to be searched and personalize the experience for the user accordingly.
  • the various aspects of the interview including vocabulary 219, context of the use of words 221, intonation employed by the interviewee in the interview 223, background perspective 225, and emotion exhibited during the interview 227 enable analysis and personalization of the experience of and for the user.
  • the results 229 of the feature matching 218 and the results 243 from the emotional cues and cultural cues can be combined 231 to deliver a feedback output 245.
  • the feedback output 245 can be employed to modify future interview questions.
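The combination 231 of the feature-matching results 229 with the cue results 243 into the feedback output 245 can be sketched as a weighted average. The weights, and the choice of a linear combination at all, are assumptions; the patent does not specify how the two result streams are merged.

```python
def combine(feature_match, cue_score, w_match=0.6, w_cue=0.4):
    """Merge the text feature-matching result (229) with the combined
    emotional/cultural cue result (243) into one feedback score (245).
    Both inputs are assumed normalized to [0, 1]."""
    return w_match * feature_match + w_cue * cue_score

# Strong text match, middling cue agreement -> moderately high feedback score.
feedback = combine(feature_match=0.9, cue_score=0.5)
```

A score like this could then drive the selection or rephrasing of the next interview question.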
  • FIG. 2B is an embodiment of the visual processing of the interview to result in feedback that will be useful in modifying future interview questions presented to the interviewee.
  • the video data is analyzed for a variety of aspects relating to the performance of the interviewee in answering the question 249, including 251, for example, cultural context, emotional context, body language, eye movement/eye expression, body movement including head movement and upper body movement, mouth expression, and the like.
  • the video data is also analyzed for body size, skin appearance, environmental cues, racial appearance, sexual references and orientation, hair texture, and body shape.
  • the video data can also be analyzed for a variety of aspects relating to other visual aspects 252, including backgrounds/backdrops (e.g., indoors, outdoors), color and style of clothing, and any other visual clues that can be gleaned from the settings of the interviewer or interviewee.
  • the user may choose to make visible time and date stamps, as well as location mapping.
  • the analyses result in data fragments with corresponding tags, wherein each fragment corresponds to the various visual characteristics derived from the video data.
  • the data fragments are then amalgamated and compared against benchmark data fragments 253.
  • the benchmark data fragments are created from previous analyses of other third-party interviews that have been compiled and analyzed against external cultural, emotional, and body-language research to form "big data" sets.
  • FIG. 3 is an example embodiment wherein an interviewee can use the present system 300 to select at least one interview question from the question database.
  • the interviewee may select from at least one variable (301, 303, 305, 307, 309) for narrowing the various interview questions stored within the question database.
  • All content (writing of questions, manually inputted tags, topics, dates, author, as well as conclusions from visual and audio processing) is searchable metadata 307.
  • a search bar positioned across the top may allow a user to locate their desired interview questions across the entire site.
  • the selection of variables leads to the presentation of interview questions 311 to the interviewee.
  • the interviewee can select variables including theme, category, location, topic, demographic, etc.
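The narrowing of the question database by metadata variables (theme, topic, author, tags, etc.) can be sketched as a filter over records. The field names and matching rule are illustrative assumptions.

```python
# Illustrative question records with searchable metadata fields.
questions = [
    {"id": 1, "topic": "family", "author": "amy", "tags": ["memory", "childhood"]},
    {"id": 2, "topic": "career", "author": "amy", "tags": ["work"]},
    {"id": 3, "topic": "family", "author": "joe", "tags": ["tradition"]},
]

def search(db, **criteria):
    """Return ids of questions matching every criterion; list-valued
    fields (like tags) match on membership."""
    def matches(q):
        for key, wanted in criteria.items():
            value = q.get(key)
            if isinstance(value, list):
                if wanted not in value:
                    return False
            elif value != wanted:
                return False
        return True
    return [q["id"] for q in db if matches(q)]
```

Combining several criteria narrows the presentation of interview questions 311, mirroring the variable selection (301, 303, 305, 307, 309) in FIG. 3.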
  • the interviewee, within the interface, may also choose which questions to answer (thereby creating their own interviews), the order of questions and corresponding answers, make certain questions and answers private, or make certain questions and answers available only to specific groups within their online community.
  • a user may also create a tailored interview and submit this new interview to another user to answer (see e.g., FIG. 15B and 15C).
  • the interviewee desires to create one or more personalized interview questions via the question wizard 400.
  • the user manually writes the question and related tags associated with the content 401.
  • the user may either record directly from the interface or upload an existing video 403.
  • the user may re-record as many times as they wish until they are satisfied with their recording.
  • Once the user presses "submit," the content is entered into the master database.
  • FIG. 5 illustrates an example embodiment wherein the interviewee can create interview questions and multiple-question interviews, including a viewing area 501 and controls 503.
  • recording interviews can include recording both visual and audible questions while inputting relevant text.
  • the recording of the visual and audible questions allows the system to determine visual, audible emotional and cultural cues.
  • Such case analysis provides a feedback mechanism that allows better culturalization of future interview questions.
  • Such culturalization of the interview questions allows the interviewee to feel more comfortable and likely to provide more in-depth, personal responses during the interview.
  • once the interview is created, the user can share it on social media to promote engagement.
  • FIG. 6 is an example embodiment wherein the interviewee can preview interviews in sequence before answering them, for example, presented as a playlist.
  • the interviewee can have full control of creating their content and related information.
  • the interviewee can add documents, audio recordings, photographs, various media, specific asynchronous audio- visual questions and answers, topics, complete or parts of interviews, delete, re-record, change the order of questions, edit, and replace any content with new content.
  • the user may share the interview with one or more users or specific groups and control the timing of the release or request for responses. Further, with users' permission, any user may embed any interview on any website across the internet.
  • FIGS. 7A-D illustrate embodiments wherein users may create audiovisual interviews. Questions may be recorded with corresponding user-generated or automatically- created meta-data. Interviews may be edited, and the sequence of the questions changed.
  • interview questions from a third party are presented to an interviewee on a graphic user interface both visually (in writing) and orally. On the graphic user interface, the interviewee can select additional questions to answer during the interview. Further, users may manage their replies to questions and enable comments, likes, and creation of further metadata, and control privacy, rights, and publishing settings 7D.
  • FIG. 8A is a visual representation of the interview database, whereby an interviewee may select one or several existing interviews to answer.
  • FIG. 8B is a visual representation of the questions database, whereby an interviewee may select one or several existing questions to answer.
  • FIG. 11 is an example embodiment wherein the interviewee may choose from a variety of existing thematic interviews to answer.
  • a user may choose to share their story by selecting between (1) ask a question, (2) answer a question, (3) create an interview, or (4) answer an interview.
  • FIG. 12A illustrates a Q&A "widget creator."
  • the widget creator records audiovisual questions, and enables the user to select categories, write the question and include tags, all of which become searchable metadata.
  • FIG. 12B illustrates generating unique embed codes for each widget or question, enabling following of questions and answers wherever they are embedded across a global network, forming a user-generated virtual, audiovisual relational database.
  • the reply updates on the widget as well as on the main site, and simultaneously across the internet wherever that widget has been embedded. The user may determine whether his or her widget is visible on the main URL.
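A minimal sketch of per-widget embed-code generation, assuming a hypothetical token scheme and a placeholder base URL (`example.invalid`); the disclosure does not specify the code format:

```python
import uuid

def make_embed_code(widget_id, base_url="https://example.invalid/widget"):
    # Hypothetical: each widget gets a unique token so replies can be
    # followed wherever the widget is embedded across a global network.
    token = uuid.uuid4().hex
    snippet = (
        f'<iframe src="{base_url}/{widget_id}?embed={token}" '
        f'width="560" height="315" frameborder="0"></iframe>'
    )
    return token, snippet

token, snippet = make_embed_code("q42")
```

Because every embedded copy carries a unique token, a reply recorded anywhere can be attributed to its widget and propagated back to the main site and all other embeds.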
  • FIG. 13 shows an example embodiment embedded on an external website.
  • the Q&A widget appears as an ordinary video. However, when the user moves his or her cursor over the image of the video to press "play," bubbles appear along the bottom of the video screen, indicating that the video is interactive, with more content or "stories" in addition to the initially-featured video.
  • The "reply" button also appears, indicating to the viewing user that he or she may respond and become an interviewee. A discreet flag may also appear, enabling viewers to report inappropriate content.
  • FIG. 14 illustrates an example form of a widget’s behavior.
  • When the user presses "play," the embedded widget first plays the question, and then each answer plays in chronological (or reverse-chronological) order, according to the user's instruction. Then, when the user presses "reply," he or she may respond to the question. This process may take place anywhere on the web where the widget is embedded.
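The playback behavior can be sketched as a simple ordering function; the `recorded_at` and `clip` field names are illustrative assumptions:

```python
def playback_order(question, answers, reverse=False):
    # The question plays first; answers then play in chronological
    # (or reverse-chronological) order, per the user's instruction.
    ordered = sorted(answers, key=lambda a: a["recorded_at"], reverse=reverse)
    return [question] + [a["clip"] for a in ordered]

answers = [
    {"clip": "answer-2", "recorded_at": 2},
    {"clip": "answer-1", "recorded_at": 1},
]
```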
  • the widget may appear as created by a single user on one website, or as part of a white-labeled platform solution.
  • FIG. 15A illustrates individual user accounts including questions and widgets in separate locations. Questions can appear publicly on the client site, and widgets may be created from questions, and placed anywhere on the web. In this case, when users reply to the embedded widgets, the questions on the client site as well as the embedded widgets update instantly. Payable widgets are not public, and may be created separately from the client site, and embedded anywhere on the web. The user can manage the widget through their personal dashboard.
  • FIG. 15B illustrates an example form of widgets as a user- controlled means of creating audiovisual relational databases by managing his or her widget placement.
  • Payable widgets do not necessarily appear on client websites; rather, they may be placed on the web such that an individual respondent may answer (a one-on-one embodiment), a specific audience on one site may answer (a one-on-many embodiment, placing one widget on one site for an audience comprised of one or more potential users), or many audiences may answer (a many-on-many embodiment, placing the same widget in many places on the web).
  • FIG. 15C illustrates an example form of automated interview as a user-controlled means of creating audiovisual relational databases by managing his or her interview placement.
  • Interviews may appear on client websites or be placed on the web such that an individual respondent may answer (a one-on-one embodiment), a specific audience on one site may answer (a one-on-many embodiment, placing the interview on one site for an audience comprised of one or more potential users), or many audiences may answer (a many-on-many embodiment, placing the same interview simultaneously across many URLs on the web).
  • FIG. 16 illustrates an example client site whereby questions may be created for treatment (replies, sharing) in a public or private environment. Any approved user of the "social media" system may ask or answer any question in the environment. Any question may be shared as a Q&A widget.
  • FIG. 17 illustrates example embodiments across various browsers and devices.
  • FIG. 18 illustrates an example embodiment that presents comparisons of a user’s behavior and performance over time.
  • the system enables responses to the same questions to be compared following training or other intervention to demonstrate qualitative differences in a user’s behavior, speech patterns, overall attention, and length of submitted answers, etc.
  • Statistics may be gathered from these comparisons and displayed as visual graphs to track progress in a certain theme over time. Progress can then be matched to educational competencies, learning requirements, or other benefits.
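One way such per-theme statistics might be computed is sketched below; the session structure and word-count metric are assumptions for illustration, since the disclosure does not fix a particular measure:

```python
def answer_length_progress(sessions):
    # Average answer length (in words) per dated session, suitable for
    # plotting progress in a certain theme over time.
    progress = []
    for session in sessions:
        lengths = [len(answer.split()) for answer in session["answers"]]
        progress.append((session["date"], sum(lengths) / len(lengths)))
    return progress

sessions = [
    {"date": "2019-01", "answers": ["short reply", "ok"]},
    {"date": "2019-06", "answers": ["a much longer considered reply"]},
]
```

The resulting series could then be charted or matched against educational competencies, as the embodiment describes.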
  • FIG. 19 illustrates an example embodiment of how a user logs in to an example platform: either by responding to a question or a widget, or by asking a question.
  • a user is not necessarily obliged to sign in to view content, according to the distribution control indicator of the client user.
  • FIG. 20 illustrates an example embodiment of the underlying processes and other aspects of a software system stored in memory. All aspects of content creation, comparison, and interaction can be amalgamated in the master database.
  • FIG. 21 illustrates an example embodiment of how users can interact with the master database.
  • Each client has its own database, and the master database amalgamates content across all client databases.
  • FIG. 22 is a block diagram illustrating a system for providing interactive collaboration over a computer network, according to an example embodiment.
  • the example system includes a content database 2210, an organizational database 2212, an interface (e.g., 2230 and 2240 via 2220), and a processor 2215.
  • the content database 2210 is configured to store digital content created by at least one user.
  • the interface 2230, 2240 is configured to provide access to the digital content by other users and to enable the other users to provide feedback related to the digital content in a similar format as the digital content.
  • the example system illustrated in FIG. 22 includes a server 2205 hosting a content database 2210 and an organizational database 2212.
  • One example interface is shown as a URL link 2230 on web site 2225.
  • Another example interface 2240 is shown as embedded code 2240 on web site 2235.
  • both web sites 2225 and 2235 are shown as being remote from the databases 2210, 2212.
  • both databases 2210, 2212 are shown as being hosted on a server 2205. In other embodiments, however, either one of the databases 2210, 2212 can be at a different location.
  • the content database 2210 may be at a remote location, or the organizational database 2212 may be at the location of the interface.
  • the databases 2210, 2212 can be, but are not limited to, relational databases.
  • Feedback to the digital content can be stored in the content database 2210, and representations of relationships can be stored in the organizational database 2212 in a manner associated with the digital content in a parent-child relationship.
  • the processor 2215 can be configured to, when providing the digital content to the interface 2230, 2240, access the databases 2210, 2212 to determine additional digital content that is feedback to the digital content and to provide to the interface 2230, 2240 the feedback along with the digital content.
  • the interfaces 2230, 2240 can be configured to present the digital content to a user along with one or more indications of the feedback.
  • the interfaces 2230, 2240 can be configured to display, in one screen view, video playback of the video content and a plurality of indicators of related videos that have been provided by other users as feedback to the video content.
  • the interfaces 2230, 2240 can be configured to display video playback of a related video in response to user selection of a corresponding indicator.
  • the interfaces 2230, 2240 can be configured to cause the intermediary layer 2220 to authenticate a user and to accept new video content from the user as feedback to the video content in response to selection of a reply indicator by the user.
  • the interface 2230, 2240 can make a call to the intermediary layer (API) 2220.
  • the intermediary layer 2220 accesses the databases 2210, 2212 and sends the requested data back to the interface 2230, 2240.
  • the processor 2215 via the intermediary interface 2220, can provide all other content stored in the content database 2210 that is associated with the requested content, in accordance with the relationships represented in the organizational database 2212.
  • the content may be stored in the databases along with unique identifiers, and relationships between the content can be designated by referring to the unique identifiers (e.g., by designating a parent-child relationship using the unique identifiers).
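A minimal sketch of the unique-identifier, parent-child storage scheme, using an in-memory relational database; the table and column names are illustrative, not from the disclosure:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE content (
           id TEXT PRIMARY KEY,                   -- unique identifier
           kind TEXT,                             -- e.g. 'question' or 'reply'
           parent_id TEXT REFERENCES content(id)  -- parent-child link
       )"""
)

def add_item(cid, kind, parent_id=None):
    conn.execute("INSERT INTO content VALUES (?, ?, ?)", (cid, kind, parent_id))

def feedback_for(cid):
    # Mirrors the intermediary layer's role: given requested content,
    # return all associated feedback by following parent-child links.
    rows = conn.execute(
        "SELECT id FROM content WHERE parent_id = ? ORDER BY id", (cid,)
    ).fetchall()
    return [row[0] for row in rows]

add_item("q1", "question")
add_item("r1", "reply", "q1")
add_item("r2", "reply", "q1")
```

When the processor serves `q1` to an interface, it would also supply the replies found through these relationships, so the interface can render the primary video with indicators of its feedback videos.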
  • the interfaces 2230, 2240 can display the content using, for example, an iFrame, and can present the primary video content in a majority of the display, with smaller representations of the related video content presented, for example, along the bottom of the display. If the same widget is featured on both 2230 and 2240 and a response is recorded on either, the intermediary layer 2220 updates interfaces 2230 and 2240 simultaneously.
  • FIG. 23 is a flow diagram illustrating a method 2305 of sourcing content, according to an example embodiment.
  • the example method includes enabling 2305 a user to create and/or deposit content by a subject person in connection with a given topic, and enabling 2310 the user to tailor parameters associated with distribution of the content.
  • the parameters include at least an identifier of the subject person in the content and a distribution control indicator selectable to a given number of states, including a zero-distribution state.
  • the method further includes enabling 2315 one or more other users to access and contribute to the content via a distribution channel by selection of the given topic or a different topic linked to the given topic, and facilitating 2320 distribution, disablement, or retraction of the content via the distribution channel as a function of at least the distribution control indicator.
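The distribution control indicator and its zero-distribution state might be modeled as below; the particular states other than the zero-distribution state are assumptions for illustration, since the method only requires "a given number of states":

```python
from enum import Enum

class Distribution(Enum):
    # Hypothetical states for the distribution control indicator; ZERO
    # corresponds to the zero-distribution state described in the method.
    ZERO = 0     # content withheld entirely
    PRIVATE = 1  # visible only to the content owner
    GROUP = 2    # visible to selected groups
    PUBLIC = 3   # distributed on the channel

def may_distribute(indicator):
    # Distribution, disablement, or retraction of content is a function
    # of at least this indicator.
    return indicator is not Distribution.ZERO
```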
  • the embodiments disclosed herein allow a user, such as a newsroom, to design and record a video question from the newsroom or directly from the scene of breaking news.
  • Embodiments can establish a channel based on the content producer's website: the content producer may launch the opportunity to respond and spread it across the web. The user will then be able to manage the incoming videos to create a compendium of content based on the interviews.
  • the embodiments disclosed herein can create a direct channel between the newsroom or brand owner with their audience, so that fans, followers and viewers can contribute with direct and immediate recordings or uploads of their content.
  • Conference and meeting networking can be continued through face-to-face interaction long after conferences end. Through the embodiments disclosed herein, before, during, and after conferences, the experience may be anticipated and rated, and relationships maintained beyond the event.
  • Spectators at sporting events can record personalized experiences as the spectacles unfold, bringing viewers a unique experience. Sporting events may be experienced in real time, remotely and on-site, in personal ways like never before.
  • the example dashboard of the embodiments disclosed herein can make curating volumes of video easy with the click of a mouse.
  • the embodiments disclosed herein allow for capturing and analyzing facial expressions and body language, and compilation of the information with cultural aspects that can provide researchers a wealth of additional data.
  • the embodiments disclosed herein allow the creation of practice interviews, in which a user may participate prior to conducting a "real" interview.
  • the embodiments disclosed herein can recognize donors by asking them to record their motivation to give. These videos can be featured to motivate other alumni to give.
  • the embodiments disclosed herein can be a catalyst tool behind donors and the project recipients.
  • the embodiments disclosed herein can provide a tool to strengthen links between donor and recipient, as well as a synthesized way of maintaining a master database of video content by topic. Qualitative input can be measured against the cultural database.
  • the embodiments disclosed herein can energize online education programs by providing tailored feedback, collecting input from students in an organized manner through an interface.
  • the embodiments disclosed herein can also set assignments, perform exam verification, record video profiles, video diaries, etc.
  • software for implementing at least a portion of the disclosed systems and methods can be stored as a computer program product, including, for example, a non-transitory computer-readable medium (e.g., a removable storage medium such as one or more DVD-ROMs, CD-ROMs, diskettes, and tapes) that provides at least a portion of the software instructions for the system.
  • a computer program product can be installed by any suitable software installation procedure, as is well known in the art.
  • at least a portion of the software instructions may also be downloaded over a cable, communication and/or wireless connection.
  • the program can be a computer program propagated signal product embodied on a propagated signal on a propagation medium (e.g., a radio wave, an infrared wave, a laser wave, a sound wave, or an electrical wave propagated over a global network such as the Internet, or other network(s)).
  • Such carrier medium or signals can provide at least a portion of the software instructions.
  • the propagated signal can be an analog carrier wave or digital signal carried on the propagated medium.
  • the propagated signal may be a digitized signal propagated over

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to systems and methods for sourcing content, and to systems and methods for providing interactive collaboration over a computer network. An example method includes enabling a user to deposit content by a subject person in connection with a given topic, and enabling the user to tailor parameters associated with distribution of the content. The parameters include at least an identifier of the subject person in the content and a distribution control indicator selectable to a given number of states, including a zero-distribution state. The method further includes enabling another user to access the content via a content collection and distribution channel by selection of the given topic or a different topic linked to the given topic, and facilitating distribution, disablement, or retraction of the content via the content collection and distribution channel as a function of at least the distribution control indicator.
PCT/IB2019/000665 2018-07-17 2019-07-16 Procédé d'automatisation et de création de défis, d'appels à l'action, à des interviews et à des questions WO2020016646A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201816037772A 2018-07-17 2018-07-17
US16/037,772 2018-07-17
US16/178,763 US20190171653A1 (en) 2017-07-17 2018-11-02 Method of automating and creating challenges, calls to action, interviews, and questions
US16/178,763 2018-11-02

Publications (1)

Publication Number Publication Date
WO2020016646A1 true WO2020016646A1 (fr) 2020-01-23

Family

ID=69164309

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2019/000665 WO2020016646A1 (fr) 2018-07-17 2019-07-16 Procédé d'automatisation et de création de défis, d'appels à l'action, à des interviews et à des questions

Country Status (1)

Country Link
WO (1) WO2020016646A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120323891A1 (en) * 2011-06-20 2012-12-20 Conifer Research LLC. Systems and methods for arranging participant interview clips for ethnographic research


Similar Documents

Publication Publication Date Title
Knoblauch et al. Videography: Analysing video data as a ‘focused’ethnographic and hermeneutical exercise
Pennington Coding of non-text data
Crichton et al. Clipping and coding audio files: A research method to enable participant voice
US20140122595A1 (en) Method, system and computer program for providing an intelligent collaborative content infrastructure
Woermann Focusing ethnography: theory and recommendations for effectively combining video and ethnographic research
WO2013049907A1 (fr) Procédé, système et programme d'ordinateur pour fournir une infrastructure de contenu collaborative intelligente
US20220197931A1 (en) Method Of Automating And Creating Challenges, Calls To Action, Interviews, And Questions
US20160048583A1 (en) Systems and methods for automatically activating reactive responses within live or stored video, audio or textual content
US20140013230A1 (en) Interactive video response platform
Whittaker et al. Design and evaluation of systems to support interaction capture and retrieval
Hogervorst The era of the user. Testimonies in the digital age
Thompson Building a specialised audiovisual corpus
Lang et al. Issues in online focus groups: Lessons learned from an empirical study of peer-to-peer filesharing system users
Haapanen Problematising the restoration of trust through transparency: Focusing on quoting
US20170316807A1 (en) Systems and methods for creating whiteboard animation videos
Bywood Testing the retranslation hypothesis for audiovisual translation: the films of Volker Schlöndorff subtitled into English
Schmidt et al. „Multimodality as Challenge: YouTube Data in Linguistic Corpora.“
Carter et al. Tools to support expository video capture and access
WO2020016646A1 (fr) Procédé d'automatisation et de création de défis, d'appels à l'action, à des interviews et à des questions
Lau et al. Collecting qualitative data via video statements in the digital era
Hooffacker Presentation Forms and Multimodal Formats
Baptiste This scholarship is important: experiences in newspaper historical research of African-American voices on radio
Braga et al. Between Institutional Policies and Ethnographic Gazes: Reflections on Audiovisual Practices in a Brazilian Cultural Heritage Registration Process
US20220159344A1 (en) System and method future delivery of content
Ford Trans new wave cinema

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19762837

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19762837

Country of ref document: EP

Kind code of ref document: A1