GB2574587A - System, module and method


Info

Publication number
GB2574587A
Authority
GB
United Kingdom
Prior art keywords
content
data
audio
sub
evaluation
Legal status
Withdrawn
Application number
GB1809300.5A
Other versions
GB201809300D0 (en)
Inventor
Mokades Raphael
Current Assignee
Rare Recruitment Ltd
Original Assignee
Rare Recruitment Ltd
Application filed by Rare Recruitment Ltd
Priority to GB1809300.5A
Publication of GB201809300D0
Priority to PCT/GB2019/051410
Publication of GB2574587A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/10 Office automation; Time management
    • G06Q 10/105 Human resources
    • G06Q 10/1053 Employment or hiring
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Databases & Information Systems (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Detecting variance in input evaluation data comprising: presenting a sub-set of the content of each of a set of data files in sequence to a user, followed by the full set of said content in sequence; displaying a user interface element through which the user can input evaluation data for the presented content; grouping said evaluation data according to aspect descriptors, each descriptor representing an attribute of a subject of the content; storing a first group of evaluation data, input for the sub-set of content having one of said descriptors, and a second group of evaluation data, input for the full set of content having one of said descriptors; retrieving a sub-group of evaluation data comprising said first and second grouped evaluation data; comparing, for each sub-group, the grouped first and second evaluation data and providing variance data; comparing said variance data to a variance threshold and, if the threshold is exceeded, selecting further content for presentation based upon the sub-group for which the variance exceeds the threshold; and determining if a user interface control is operated prior to completion of presentation and, if so, presenting a sub-set of content of a first one of the data files in said sequence.

Description

[Figs. 8a and 8b: process flow diagram (steps S226 to S250). Fig. 8a: store, for each content aspect descriptor, grouped first evaluation data based on evaluation data submitted in relation to content items of the sub-set, and grouped second evaluation data based on evaluation data submitted in relation to content items of the full set. Fig. 8b: retrieve, from data storage, a sub-group of input evaluation data comprising grouped first evaluation data and grouped second evaluation data submitted in relation to content items having a particular aspect descriptor; obtain variance data by subtracting the grouped first evaluation data value from the grouped second evaluation data value for that aspect descriptor; store the variance data; repeat until variance data has been obtained for the last aspect descriptor; compare the variance data for each aspect descriptor to a variance data threshold; if the threshold is exceeded for any aspect descriptor, retrieve additional content from the data storage device and present it via an output device of the client device.]
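The loop of Figs. 8a and 8b can be sketched in a few lines of Python. This is a minimal illustration only, not the patented implementation: the function names, the use of a single grouped score per descriptor, and the absolute-value comparison are all assumptions made for illustration.

```python
# Minimal sketch of the Fig. 8a/8b flow. All names are hypothetical; the
# patent does not prescribe a concrete implementation.

def compute_variances(grouped_first, grouped_second):
    """Variance per aspect descriptor: full-set score minus sub-set score.

    grouped_first  -- {aspect_descriptor: grouped score from audio-only items}
    grouped_second -- {aspect_descriptor: grouped score from audio-visual items}
    """
    return {d: grouped_second[d] - grouped_first[d] for d in grouped_first}

def descriptors_exceeding_threshold(variances, thresholds):
    """Aspect descriptors whose variance exceeds the per-descriptor threshold."""
    return [d for d, v in variances.items() if abs(v) > thresholds.get(d, 0)]

# Example: grouped evaluation data for two socio-economic-status descriptors.
first = {"SES 1": 62.0, "SES 2": 58.0}    # grouped first data (audio-only)
second = {"SES 1": 61.0, "SES 2": 44.0}   # grouped second data (audio-visual)
variances = compute_variances(first, second)
for d in descriptors_exceeding_threshold(variances, {"SES 1": 5, "SES 2": 5}):
    print(f"Variance for {d} exceeds threshold: select further content")
```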
[Figs. 9 to 34: pages displayed on the client device display at various stages during the content evaluation exercise; several pages include an audio or video playback position indicator (e.g. 00:40 / 00:41).]
[Fig. 35: the “Your results” page. The page explains: “As you may have realised, the answers in the audio and video sections are the same. However, most people score the audio clips and the video clips differently, despite the content being exactly the same. When we see a candidate, we don’t just process what they are saying. We also hear accent and intonation. And we see the body language, eye contact, ethnicity, gender and appearance of a candidate. This system looks at differences in your responses to the audio and video clips and, in so doing, identifies how you respond to people from different racial, gender and socioeconomic groups.” Beneath this, “Current scores” bar scales are shown for Ethnicity, Gender and Socioeconomic status (SES 1 and SES 2), each running from 50 on one side through 0 to 50 on the other.]
Intellectual Property Office
Application No. GB1809300.5
RTM
Date: 11 December 2018
The following terms are registered trade marks and should be read as such wherever they occur in this document:
Java, JavaScript, Liberate, FLASH, Perl, MatLab, Pascal, Visual BASIC (Page 31)
Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
SYSTEM, MODULE AND METHOD
FIELD
The present invention relates to a system, module and method and, in particular but not exclusively, to a system and module for, and method of, receiving evaluation of content from a user, where the content is presented to the user at least twice, each time in a content item having a different format.
BACKGROUND
Systems and methods are known that allow a user, or reviewer, to evaluate content presented to them and to input evaluation data relating to the content, e.g. a “reviewer rating”. Different types of content, i.e. content items containing content having attributes that differ from one content item to the next, may be evaluated differently by a reviewer dependent upon the attributes of the content. Similarly, content items containing subjects having different attributes from those of the subjects of other content items may be evaluated differently by a reviewer dependent upon the attributes of the subjects of the content. A reviewer may have a strong preference for content items containing content characterised by, or in which a subject of the content item can be characterised by, for example, first, second and/or third attributes, but a lesser preference for, or even dislike of, content items containing content characterised by, or in which a subject of the content item can be characterised by, for example, fourth, fifth and/or sixth attributes. The types of content, or the subjects of content items, that a reviewer prefers can be determined by identifying the content items that attract positive “ratings” and the attributes of the content, or of the subject, contained in those content items. Similarly, the types of content, or subjects of content items, that a reviewer finds less preferable, or has a bias against, can be determined by identifying the content items that attract less positive, or negative, “ratings”; the attributes of the content, or of the subject, contained in those content items that may have given rise to the less favourable “rating” may then be identified.
Whilst collecting content item evaluation data and determining reviewer content preferences based upon attributes of the content contained in content items, or attributes of the subjects of content items, in the manner described above may be satisfactory, and may continue to be satisfactory for certain operating conditions, the inventor has recognised that evaluation data input by a reviewer in response to a content item, or in response to a subject of a content item, may differ dependent upon the format, or mode of communication, in which the content contained in the content item is presented to the user (e.g. audio only or audio-visual). The inventor has recognised that, for content items containing content having a particular attribute, or in which the subject thereof has a particular attribute, a preference, or otherwise, may not necessarily be recognised if the reviewer is exposed to content items in only one type of format. A preference may become apparent only when a user is exposed to content in more than one type of format (e.g. audio-only as one type of format and audio-visual as another type of format), or in sub-types of a type of format (e.g. WAV, MP3, AAC, etc.).
The present invention has been devised with the foregoing in mind.
SUMMARY
According to an aspect of the invention there is provided a computer system comprising: a data storage device for storing a plurality of data files comprising content for presentation on an output device; and a processing device, said processing device operative to implement a content presentation module, the content presentation module comprising: a data file presentation module to retrieve said data files from said data storage device and to present a sub-set of said content of each one of said plurality of data files in sequence to a user via said output device followed by a full set of said content of each one of said plurality of data files in sequence to a user via said output device responsive to input via a user interface input device of an instruction to initiate presentation of said sequence of data files; a user interface module for configuring said output device to display a user interface element controllable via said user interface input device, and through which a user can input evaluation data representing a user evaluation of content currently being presented to a user, input evaluation data being stored in said data storage device; a grouping module for grouping evaluation data stored in said data storage device according to aspect descriptors of the content for each said subset of said content of each one of said plurality of data files and each said full set of said content of each one of said plurality of data files, each said aspect descriptor representative of an attribute of a subject of the content, and for storing in said data storage device for each one of a plurality of aspect descriptors: grouped first evaluation data, comprising evaluation data input responsive to presentation of said sub-set of said content for one or more data files of said plurality containing content that has one of said plurality of aspect descriptors; and grouped second evaluation data, comprising evaluation data input responsive to presentation of said full set of said content for one or more data files of said plurality containing content that has one of said plurality of aspect descriptors; a retrieval module for retrieving from said data storage device a sub-group of input evaluation data, said sub-group comprising said grouped first and grouped second evaluation data input in relation to one or more data files of said plurality of data files containing content that has one of said plurality of aspect descriptors; a comparator module for comparing, for each sub-group of input evaluation data, retrieved grouped first evaluation data and retrieved grouped second evaluation data and for providing variance data responsive to said comparison, and for storing in said data storage device variance data representing the difference between said grouped first evaluation data and said grouped second evaluation data for each sub-group of input evaluation data; a variance data threshold identification module operative to compare, for each sub-group of input evaluation data, the variance data for the sub-group to a variance data threshold for the sub-group and, if variance data for any one of the sub groups exceeds a corresponding variance data threshold for the sub-group of input evaluation data, to initiate selection of further content for presentation via the output device, the selection based upon the particular at least one sub-group of input evaluation data for which the associated variance data thereof exceeds the variance data threshold for that data set; and a reverse navigation inhibit 
module operative to determine if a user interface control is operated prior to completion of presentation of said plurality of data files in their entirety in an attempt to initiate transition from a data file currently being presented to a user to a previously presented data file and, responsive to determination of operation of said user interface control, to initiate presentation of a sub-set of content of a first one of said plurality of data files in said sequence.
The computer system can be used to provide data representing potential biases to identify whether or not a user presents a bias, or biases, to content containing footage of persons having a particular attribute, or attributes. The system can provide information comprising data, which may indicate whether or not a user exhibits one or more biases and what those biases are, based upon the user-submitted evaluation data stored in the data storage device. From this information, an inference may be made that, because the user exhibits one or more biases to content containing certain aspect descriptors, that user may also exhibit one or more biases to persons possessing attributes represented by those certain aspect descriptors. To inhibit a user from moving the flow of content items presented in a content evaluation exercise back through the sequence, a “back” button of a viewing tool through which the content is presented is inhibited. For example, a user, who is about to input evaluation data in relation to a later content item in the sequence of content items, may wish to revisit an earlier content item in the sequence (for which evaluation data has already been input) in order to revise the evaluation data input earlier, or to try to ensure that the evaluation data to be input for the later content item (e.g. an audio-visual content item) is consistent with the evaluation data input for the earlier counterpart content item (e.g. an audio-only content item). Thus, the system can prevent a user being able to navigate back through a content evaluation exercise so that a proposed “rating” to be input for a later presented content item, in a particular format, cannot be compared to the “rating” input for an earlier reviewed counterpart item (in a different format). That is, “matching” of ratings provided for an earlier reviewed content item and a later reviewed counterpart content item can be prevented by disabling the “back” button functionality, or causing some other action responsive to operation of the “back” button.
To achieve this, the computer system implements the reverse navigation inhibit module, which operates to determine if a user interface control is operated prior to completion of a content evaluation exercise in full, that is, prior to completion of presentation of first all of the audio-only content items and then all of the audio-visual content items in the plurality of data files.
Put another way, a user may, for example, attempt to initiate transition from a content item currently being presented to a previously presented content item. This transition attempt may be initiated by, for example, using a “back” button in the graphical user interface through which the content evaluation exercise is presented on a display. Therefore, transitioning back through content items in the sequence is inhibited by way of the reverse navigation inhibit module, which can monitor for activation of the user interface control (e.g. the “back” button) and, responsive to determination of operation of the user interface control, initiate presentation of the content evaluation exercise from the beginning. That is, when activation of the user interface control is detected, the reverse navigation inhibit module stops the content evaluation exercise, recalls the first content item in the sequence either from cache or from the data storage device, and initiates presentation of the first content item in the sequence.
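A minimal sketch of this behaviour follows, assuming a session object that tracks the user’s position in the presentation sequence. All names are hypothetical; the patent does not prescribe where or how the module is implemented.

```python
class ExerciseSession:
    """Hypothetical session state for one content evaluation exercise."""

    def __init__(self, content_items):
        self.items = content_items  # audio-only items first, then audio-visual
        self.index = 0              # position in the presentation sequence
        self.complete = False

    def handle_control(self, control: str):
        """Apply a user interface control and return the item to present."""
        if control == "next":
            if self.index < len(self.items) - 1:
                self.index += 1
            else:
                self.complete = True
        elif control == "back" and not self.complete:
            # Reverse navigation inhibit: operating "back" before the exercise
            # is complete restarts presentation from the first item in the
            # sequence rather than returning to the previous item.
            self.index = 0
        return self.items[self.index]
```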
Optionally, each one of said data files may comprise audio-visual content. Further optionally, the system may further comprise an audio content extraction module to extract audio content from said data files, wherein said sub-set of said content comprises audio content extracted from each one of said data files and said full set of said content comprises audio-visual content.
Optionally, a portion of said plurality of data files may comprise a first plurality of data files containing audio content only, and another portion of said plurality of data files may comprise a second plurality of data files containing audio-visual content.
Optionally, said sub-set of said content may comprise said first plurality of data files and said full set of said content may comprise said second plurality of data files.
Optionally, each one of said second plurality of data files may correspond to a respective one of said first plurality of data files such that the audio content of a file of said second plurality of data files corresponds to the audio content in a corresponding file of said first plurality of data files.
Optionally, each one of said second plurality of data files may comprise data identifying a corresponding one of the first plurality of data files to which the one of the second plurality of data files corresponds.
Optionally, said audio-visual content may comprise audio-visual footage of an individual providing a spoken statement, and said audio content may comprise audio footage of a same individual providing a same spoken statement.
Optionally, said audio-visual content may comprise audio-visual footage of an individual providing a spoken statement, and said audio content may comprise audio footage of a third party providing a same spoken statement as that of the individual providing the spoken statement in the audio-visual footage.
Optionally, each data file may further comprise content definition data based upon a plurality of types of attribute of the individual delivering the spoken statement, each one of said plurality of types of attribute corresponding to a respective one of said plurality of aspect descriptors.
Optionally, said user interface element may comprise a track bar with a controllable marker, said marker of said track bar controllable via said user interface input device, and further wherein a position of said marker within said track bar may be representative of said user evaluation of content in a data file currently being presented to a user.
Optionally, said track bar may comprise markings defining degrees in a range of potential user evaluation possibilities.
Optionally said markings may be non-numerical.
Optionally, said track bar may comprise markings at extrema thereof. Providing markings only at the extrema of the track bar, and not at intervals over its length, may inhibit attempts to precisely control a position of the marker of the track bar (i.e. to precisely control an evaluation data value). Additionally, markings only at the extrema may encourage a “settling” of the marker rather than a precise locating of the marker by the user.
Optionally, a position of said marker within said track bar may be converted to a numerical value to form said evaluation data.
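For illustration, a marker position might be converted to a value as follows. The 0-100 scale and the pixel-based geometry are assumptions; the patent requires only that marker position be converted to some numerical value.

```python
def marker_to_score(marker_px: float, bar_left_px: float, bar_width_px: float,
                    scale_max: float = 100.0) -> float:
    """Convert a track-bar marker position to a numerical evaluation value.

    The 0-100 scale is an assumption made for illustration.
    """
    fraction = (marker_px - bar_left_px) / bar_width_px
    return max(0.0, min(scale_max, fraction * scale_max))

# e.g. a marker 300 px along a 400 px bar maps to a score of 75.0
print(marker_to_score(300, 0, 400))
```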
Optionally, a total time of the audio-only content items may be such as to prevent recall of what evaluation data was input in relation to each audio-only content item when inputting evaluation data in relation to corresponding audio-visual content items. This may put temporal distance between input of evaluation data in the first part of the exercise (i.e. in relation to the audio-only content items) and input of evaluation data in the second part of the exercise (i.e. in relation to the audio-visual content items), to try to inhibit an ability to recall the exact responses given during the audio-only part of the exercise.
Optionally, the system may be further operative to initiate presentation of additional content responsive to receiving evaluation data input in response to presentation of a final content item in said sub-set. For example, upon completion of an audio-only part of the content evaluation exercise, additional content, e.g. an exercise (such as a distraction exercise), is presented prior to commencement of the audio-visual part of the exercise.
Optionally, the system may further comprise a response time measurement module for measuring a time between commencement of presentation of content contained in one of said sub-set and said full set, and input of evaluation data.
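A response time measurement could be as simple as the following sketch (hypothetical names; the patent does not specify a mechanism):

```python
import time

class ResponseTimer:
    """Measures time from start of content presentation to evaluation input."""

    def presentation_started(self) -> None:
        self._t0 = time.monotonic()

    def evaluation_received(self) -> float:
        # Elapsed seconds between presentation start and evaluation input.
        return time.monotonic() - self._t0
```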
Optionally, the system may further comprise a vocal trait removal module operative to alter at least one aspect of audio content in said sub-set of said content prior to presentation via said output device. The vocal trait removal module may operate so that audio-only content is presented in a “neutral” voice (i.e. is a representation of a statement spoken by an individual who is the subject of a content item and not the actual spoken statement in the individual’s “real” voice). Further optionally, the vocal trait removal module may operate to remove vocal traits (e.g. accent) that might give an indication of at least one of the types of attributes that the subject of the audio-only content item possesses. Yet further optionally, said at least one aspect may comprise: speech flow; loudness; intonation; pitch; and/or intensity of harmonics.
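By way of example only, pitch shifting and loudness normalisation could be applied with the librosa and soundfile libraries. Neither library, nor this particular processing, is specified by the patent, and a full “neutral voice” re-rendering would require more than is shown here; this is a sketch of altering two aspects (pitch and loudness) only.

```python
# One conceivable approach to altering vocal traits before presentation.
import librosa
import numpy as np
import soundfile as sf

def neutralise_audio(in_path, out_path, pitch_steps=-2.0, target_rms=0.1):
    y, sr = librosa.load(in_path, sr=None, mono=True)
    # Shift pitch to mask the speaker's natural pitch.
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=pitch_steps)
    # Normalise loudness so volume differences do not cue evaluations.
    rms = np.sqrt(np.mean(y ** 2))
    if rms > 0:
        y = y * (target_rms / rms)
    sf.write(out_path, y, sr)
```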
Optionally, a variance data threshold value may be zero.
According to another aspect of the present invention, there is provided an assessment module, comprising: a data file presentation module to retrieve a plurality of data files from a data storage device and to present a sub-set of content of each one of said plurality of data files in sequence to a user via an output device followed by a full set of said content of each one of said plurality of data files in sequence to a user via said output device responsive to input received via a user interface input device of an instruction to initiate presentation of said sequence of data files; a user interface module for configuring said output device to display a user interface element controllable via said user interface input device, and through which a user can input evaluation data representing a user evaluation of content currently being presented to a user, input evaluation data being stored in said data storage device; a grouping module for grouping evaluation data stored in said data storage device according to aspect descriptors of the content for each said sub-set of said content of each one of said plurality of data files and each said full set of said content of each one of said plurality of data files, each said aspect descriptor representative of an attribute of a subject of the content, and for storing in said data storage device for each one of a plurality of aspect descriptors: grouped first evaluation data, comprising evaluation data input responsive to presentation of said sub-set of said content for one or more data files of said plurality containing content that has one of said plurality of aspect descriptors; and grouped second evaluation data, comprising evaluation data input responsive to presentation of said full set of said content for one or more data files of said plurality containing content that has one of said plurality of aspect descriptors; a retrieval module for retrieving from said data storage device a sub-group of input evaluation data, said sub-group comprising said grouped first and grouped second evaluation data input in relation to one or more data files of said plurality containing content that has one of said plurality of aspect descriptors; a comparator module for comparing, for each sub-group of input evaluation data, retrieved grouped first evaluation data and retrieved grouped second evaluation data and for providing variance data responsive to said comparison, and for storing in said data storage device variance data representing the difference between said grouped first evaluation data and said grouped second evaluation data for each sub-group of input evaluation data; a variance data threshold identification module operative to compare, for each sub-group of input evaluation data, the variance data for the sub-group to a variance data threshold for the sub-group and, if variance data for any one of the sub-groups exceeds a corresponding variance data threshold for the sub-group of input evaluation data, to initiate selection of further content for presentation via the output device, the selection based upon the particular at least one sub-group of input evaluation data for which the associated variance data thereof exceeds the variance data threshold for that data set; and a reverse navigation inhibit module operative to determine if a user interface control is operated prior to completion of presentation of said plurality of data files in their entirety in an attempt to initiate transition from a data file currently being presented to a user to a previously presented
data file and, responsive to determination of operation of said user interface control, to initiate presentation of a sub-set of content of a first one of said plurality of data files in said sequence.
Optionally, each one of said data files may comprise audio-visual content, and the module may further comprise an audio content extraction module to extract audio content from said data files, wherein said sub-set of said content comprises audio content extracted from each one of said data files and said full set of said content comprises audio-visual content.
Optionally, a portion of said plurality of data files may comprise a first plurality of data files containing audio content only, and another portion of said plurality of data files may comprise a second plurality of data files containing audio-visual content.
Optionally, said sub-set of said content may comprise said first plurality of data files and said full set of said content may comprise said second plurality of data files.
Optionally, each one of said second plurality of data files may correspond to a respective one of said first plurality of data files such that the audio content of a file of said second plurality of data files corresponds to the audio content in a corresponding file of said first plurality of data files.
Optionally, each one of said second plurality of data files may comprise data identifying a corresponding one of the first plurality of data files to which the one of the second plurality of data files corresponds.
Optionally, said audio-visual content may comprise audio-visual footage of an individual providing a spoken statement, and said audio content may comprise audio footage of a same individual providing a same spoken statement.
Optionally, said audio-visual content may comprise audio-visual footage of an individual providing a spoken statement, and said audio content may comprise audio footage of a third party providing a same spoken statement as that of the individual providing the spoken statement in the audio-visual footage.
Optionally, each data file may further comprise content definition data based upon a plurality of types of attribute of the individual delivering the spoken statement, each one of said plurality of types of attribute corresponding to a respective one of said plurality of aspect descriptors.
Optionally, said user interface element may comprise a track bar with a controllable marker, said marker of said track bar controllable via said user interface input device, and further wherein a position of said marker within said track bar may be representative of said user evaluation of content in a data file currently being presented to a user.
Optionally, said track bar may comprise markings defining degrees in a range of potential user evaluation possibilities. Further optionally, said markings may be non-numerical.
Optionally, said track bar may comprise markings at extrema thereof.
Optionally, a position of said marker within said track bar may be converted to a numerical value to form said evaluation data.
Optionally, the module may be further operative to initiate presentation of an exercise responsive to presentation of a final one of said first plurality of data files in said sequence.
Optionally, the module may further comprise a response time measurement module for measuring a time between commencement of presentation of content contained in one of said sub-set and said full set, and input of evaluation data.
Optionally, the module may further comprise a vocal trait removal module operative to alter at least one aspect of audio content in said sub-set of said content prior to presentation via said output device. Further optionally, said at least one aspect may comprise: speech flow; loudness; intonation; pitch; and/or intensity of harmonics.
According to a further aspect of the present invention, there is provided a method comprising the steps of: retrieving a plurality of data files comprising content from a data storage device; presenting a sub-set of content of each one of said plurality of data files in sequence to a user via an output device followed by a full set of said content of each one of said plurality of data files in sequence to a user via said output device responsive to input via a user interface input device of an instruction to initiate presentation of said sequence of data files; configuring said output device to display a user interface element controllable via a user interface input device, and through which a user can input evaluation data representing a user evaluation of content currently being presented to a user; storing input evaluation data in said data storage device; grouping evaluation data stored in said data storage device according to aspect descriptors of the content for each said sub-set of said content of each one of said plurality of data files and each said full set of said content of each one of said plurality of data files, each said aspect descriptor representative of an attribute of a subject of the content; storing in said data storage device for each one of a plurality of aspect descriptors: grouped first evaluation data, comprising evaluation data input responsive to presentation of said sub-set of said content for one or more data files of said plurality containing content that has one of said plurality of aspect descriptors; and grouped second evaluation data, comprising evaluation data input responsive to presentation of said full set of said content for one or more data files of said plurality containing content that has one of said plurality of aspect descriptors; retrieving from said data storage device a sub-group of input evaluation data, said sub-group comprising said grouped first and grouped second evaluation data input in relation to one or more data files of said plurality of data files containing content that has one of said plurality of aspect descriptors; comparing, for each sub-group of input evaluation data, retrieved grouped first evaluation data and retrieved grouped second evaluation data and for providing variance data responsive to said comparison; storing in said data storage device variance data representing the difference between said grouped first evaluation data and said grouped second evaluation data for each sub-group of input evaluation data; comparing, for each sub-group of input evaluation data, the variance data for the sub-group to a variance data threshold for the sub-group and, if variance data for any one of the sub-groups exceeds a corresponding variance data threshold for the sub-group of input evaluation data, initiating selection of further content for presentation via the output device, the selection based upon the particular at least one sub-group of input evaluation data for which the associated variance data thereof exceeds the variance data threshold for that data set; and determining if a user interface control is operated prior to completion of presentation of said plurality of data files in their entirety in an attempt to initiate transition from a data file currently being presented to a user to a previously presented data file and, responsive to determination of operation of said user interface control, initiating presentation of a subset of content of a first one of said plurality of data files in said sequence.
Optionally, each one of said data files may comprise audio-visual content, and the method may further comprise extracting audio content from said data files, wherein said sub-set of said content comprises audio content extracted from each one of said data files and said full set of said content comprises audio-visual content.
Optionally, a portion of said plurality of data files may comprise a first plurality of data files containing audio content only, and another portion of said plurality of data files may comprise a second plurality of data files containing audio-visual content. Further optionally, said sub-set of said content may comprise said first plurality of data files and said full set of said content may comprise said second plurality of data files.
Optionally, each one of said second plurality of data files may correspond to a respective one of said first plurality of data files such that the audio content of a file of said second plurality of data files corresponds to the audio content in a corresponding file of said first plurality of data files. Further optionally, each one of said second plurality of data files may comprise data identifying a corresponding one of the first plurality of data files to which the one of the second plurality of data files corresponds.
Optionally, said audio-visual content may comprise audio-visual footage of an individual providing a spoken statement, and said audio content may comprise audio footage of a same individual providing a same spoken statement.
Optionally, said audio-visual content may comprise audio-visual footage of an individual providing a spoken statement, and said audio content may comprise audio footage of a third party providing a same spoken statement as that of the individual providing the spoken statement in the audio-visual footage.
Optionally, each data file may further comprise content definition data based upon a plurality of types of attribute of the individual delivering the spoken statement, each one of said plurality of types of attribute corresponding to a respective one of said plurality of aspect descriptors.
Optionally, said user interface element may comprise a track bar with a controllable marker, said marker of said track bar controllable via said user interface input device, and further wherein a position of said marker within said track bar may be representative of said user evaluation of content in a data file currently being presented to a user.
Optionally, said track bar may comprise markings defining degrees in a range of potential user evaluation possibilities. Further optionally, said markings may be non-numerical.
Optionally, said track bar may comprise markings at extrema thereof.
Optionally, a position of said marker within said track bar may be converted to a numerical value to form said evaluation data.
Optionally, said method may further comprise initiating presentation of an exercise responsive to receiving evaluation data input in response to presentation of a final content item in said sub-set.
Optionally, said method may further comprise measuring a time between commencement of presentation of content contained in one of said sub-set and said full set, and input of evaluation data.
Optionally, said method may further comprise altering at least one aspect of audio content in said sub-set of said content prior to presentation via said output device. Further optionally, said at least one aspect may comprise: speech flow; loudness; intonation; pitch; and/or intensity of harmonics.
Optionally, a variance data threshold value may be zero.
According to another aspect of the present invention, there is provided a machine readable medium, comprising processor executable instructions executable by a processor to implement a method as described above, and hereinafter.
DESCRIPTION OF THE DRAWINGS
One or more specific embodiments in accordance with aspects of the present invention will be described, by way of example only, and with reference to the following drawings in which:
Fig. 1 illustrates a schematic block diagram of a computer system in accordance with one or more embodiments of the present invention;
Fig. 2 illustrates a schematic block diagram of a content presentation module according to one or more embodiments of the present invention;
Fig. 3 illustrates a schematic block diagram representing a data storage device according to one or more embodiments of the present invention;
Fig. 4 schematically illustrates a content item data file structure for a first set of content items according to one or more embodiments of the present invention;
Fig. 5 schematically illustrates a content item data file structure for a second set of content items according to one or more embodiments of the present invention;
Figs. 6a and 6b schematically illustrate data file structures for data files containing evaluation data submitted by users in response to content in content items;
Fig. 7 schematically illustrates data file structures for data files according to one or more embodiments of the present invention;
Figs. 8a and 8b show a process flow diagram illustrating steps carried out in a method implemented by the computer system according to one or more embodiments of the present invention;
Figs. 9 to 34 illustrate pages displayed on a client device display at various stages during a content evaluation exercise;
Fig. 35 illustrates a user-results page displayed on a client device display to show results of the content evaluation exercise; and
Fig. 36 illustrates a post-results page displayed on a client device display via which presentation of additional content can be initiated, the additional content relevant to a particular user based upon their results in the content evaluation exercise.
DESCRIPTION
In general overview, a system, module and method according to one or more embodiments of the present invention provide for presentation of content items in different formats to a user (e.g. audio content and audio-visual content). The user can provide evaluation data in response to each item of content consumed. The evaluation data may be input during presentation of a content item to the user, or may be input when a content item finishes. However, the evaluation data is input before a next content item in a sequence of content items begins. The evaluation data represents a user evaluation of content currently being presented to a user (e.g. “user feedback”, or a “rating” of the content in the content item).
In one or more embodiments, the audio content comprises audio-only recordings based upon interview candidates providing an answer to a question put to them. In one example, the audio-only content comprises, for each candidate answer, an audio file containing a narrator reading the candidate’s response. That is, the audio files do not contain the voices of the candidates, but that of a narrator who reads the candidate response. This may ensure that a candidate is not identifiable, or that attributes and/or characteristics of a candidate are not identifiable, from the audio file, because the recording in the audio file is not of the candidate’s actual voice, but instead of a third party, i.e. the narrator. The audio-visual content comprises audio-visual recordings of the same interview candidates answering the same question. A user using the system, module and method to undertake an evaluation of content contained in content items hears all narrator-read candidate responses first in a sequence, and is then presented with the audio-visual responses in sequence thereafter. That is, content is presented to the user first in one format (e.g. audio-only), and then again in a different format (e.g. audio-visual). The user evaluation data input in response to the audio-only content items is compared to the user evaluation data input in response to the audio-visual content items. Any difference between the evaluation data input in response to a content item containing audio-only content and the evaluation data input in response to the same content presented in a different format, i.e. an audio-visual content item, may be indicative of a bias that the user may have towards certain aspect descriptors, or characteristics, that the content possesses. An aspect descriptor of the content is representative of an attribute, or characteristic, of a subject of the content item (e.g. an attribute, or characteristic, of a person featuring in the content item). Therefore, a bias against content items containing content having a particular aspect descriptor may be indicative of the user, or reviewer, having a bias against persons having the same, or similar, attributes to a person whose attributes represent the aspect descriptors of the content for which a potential bias has been identified. The difference may arise because the user reacts differently to the content item when they can see the content (and thus the person whose attributes define the aspect descriptors of the content) rather than just hearing it. However, potential biases may be identified only once a larger sample set has been presented to the user, i.e. the user views multiple content items and, consequently, is presented with content items containing content where the aspect descriptors differ from content item to content item: for example, content items containing responses from different persons of differing cultural and/or social environments (i.e. having different attributes, so that the content items are defined by different aspect descriptors). A person’s attributes may comprise, but are not limited to: gender; ethnicity; and socio-economic status.
To try to ensure that a user is presented with a sufficient amount of content so that a meaningful sample set can be obtained from the content items reviewed, there are multiple content items in the audio-only set of the content and multiple content items in the audio-visual set of the content. Providing multiple content items of each format type may also inhibit a user recalling the evaluation data input in response to the earlier counterpart of a content item currently being reviewed. Also, to ensure that the user is presented with both types of content items in one session, the system may inhibit progress being saved. Further, to inhibit a user from moving the flow of content items back through the sequence, a “back” button of a viewing tool through which the content is presented is inhibited. For example, a user, who is about to input evaluation data in relation to a later content item in the sequence of content items, may wish to revisit an earlier content item in the sequence (for which evaluation data has already been input) in order to revise the evaluation data input earlier, or to try to ensure that the evaluation data to be input for the later content item (e.g. an audio-visual content item) is consistent with the evaluation data input for the earlier counterpart content item (e.g. an audio-only content item). Thus, the system can prevent a user being able to navigate back through a content evaluation exercise so that a proposed “rating” to be input for a later presented content item, in a particular format, cannot be compared to the “rating” input for an earlier reviewed counterpart item (in a different format). That is, “matching” of ratings provided for an earlier reviewed content item and a later reviewed counterpart content item can be prevented by disabling the “back” button functionality, or by causing some other action responsive to operation of the “back” button.
Different formats of content items are presented in one reviewing session, i.e. the user rates all content items of a first format first (e.g. audio-only) and then, after a break, which forms part of the session, rates all content items of a second, different, format (e.g. audio-visual). Optionally, the content items of the second format may be presented to the user directly after completion of presentation of the content items of the first format with no break therebetween. Further optionally, the content items of the first format and those of the second format may be presented alternately.
Fig. 1 illustrates a schematic block diagram of a computer system 10 in which one or more embodiments of a content presentation module 12 may operate. The computer system 10 may include multiple client computing systems 14 (“client devices 14”) coupled to a processing device 16 via a network 18 (e.g., a public network such as the Internet, a private network such as a local area network (LAN), or a combination thereof). The network 18 may include the Internet and network connections to the Internet. Optionally, the processing device 16 and the client devices 14 may be located on a common LAN, personal area network (PAN), campus area network (CAN), metropolitan area network (MAN), wide area network (WAN), wireless local area network, cellular network, virtual local area network, or the like. The processing device 16, which may comprise a server computing system 16 (also referred to herein as server 16) may include one or more machines (e.g., one or more server computer systems, routers, gateways) that have processing and storage capabilities to provide the functionality described herein. The server 16 is operative to implement the content presentation module 12. The content presentation module 12 can perform various functions as described herein and include several sub-modules as described in more detail below with respect to Fig. 2.
The content presentation module 12 can be implemented as a part of a content review platform 20, such as, for example, an interviewer training platform, or may be implemented in another content review platform. While many of the examples provided herein are directed to an employment/recruitment context, the principles and features disclosed herein may be equally applied to other contexts, which are therefore also within the scope of this disclosure. For example, the principles and features provided herein may be applied to a job performance evaluation, an evaluation of a sales pitch, an evaluation of an investment pitch, etc.
The content presentation module 12 can be implemented as a standalone module that interfaces with the content review platform 20 or other systems. It should also be noted that, in the illustrated example of Fig. 1, the server 16 implements the content presentation module 12, but one or more of the client devices 14 may also include client modules of the content presentation module 12 that can work in connection with, or independently from, the functionality of the content presentation module 12 as depicted on the server 16.
The client computing systems 14 (also referred to herein as “client devices 14”) may each be a client workstation, a server, a computer, a portable electronic device, an entertainment system configured to communicate over a network, such as a set-top box, a digital receiver, a digital television, a mobile phone, a smart phone, a tablet, or another electronic device. For example, portable electronic devices may include, but are not limited to, cellular phones, portable gaming systems, wearable computing devices or the like. The client devices 14 may have access to the Internet via a firewall, a router or other packet switching devices. The client devices 14 may connect to the server through one or more intervening devices, such as routers, gateways, or other devices. The client devices 14 are variously configured with different functionality and include a browser 22, one or more applications 24, a user interface 26 and an output device, such as a display 28. The client devices 14 may include a microphone and a video camera to record responses as digital interview data (audio-only content and audio-visual content). For example, the client devices 14 may record and store responses provided by interview candidates and/or stream or upload the recorded responses to the server 16 for capture and storage. In one or more embodiments, the client devices 14 access the content review platform 20 via the browser 22 to record responses. The recorded responses comprise content items that may include audio-only content and/or audio-visual content. They may also include digital data, such as code or text, or combinations thereof. In such one or more embodiments, the content review platform 20 is a web-based application or a cloud computing system that presents user interfaces to the client devices 14 via the browser 22.
Similarly, one of the applications 24 can be used to access the content review platform 20. The content review platform 20 can comprise one or more software products that facilitate the content review process. The content 30 is illustrated schematically in more detail in Figs. 4 and 5, but in general overview includes, for each candidate, a content item comprising an audio-only data file comprising a recording of the response of the candidate to a question, which may comprise a narrator reading the candidate’s response (i.e. not containing the candidate’s actual voice), or may be a direct recording of the candidate’s response (i.e. containing the candidate’s actual voice), a counterpart content item comprising an audio-visual data file comprising a recording of the response of the candidate to the same question (e.g. video captured during the interview), data submitted by the candidate before or after the interview (i.e. identifying attributes, or characteristics, of the candidate, such as, for example, ethnicity, gender, socio-economic status, age, etc.), or the like. These attributes are used to form aspect descriptors, which can characterise the content item.
The client devices 14 can be used by a user (e.g. a reviewer or evaluator) to review content and submit evaluation data responsive to the content presented to them. The user can access the content review platform 20 via the browser 22, or the application 24, as described above. The user interfaces presented to the user permit the user to submit evaluation data for each content item and also to receive information regarding any potential biases that the user may have against content items containing content having a particular aspect descriptor, or aspect descriptors. It may follow that, because the user may have a potential bias against content items containing content having a particular aspect descriptor, or aspect descriptors, the user may also have a potential bias against candidates having attributes represented by one or more of those particular aspect descriptors. The potential bias, or biases, are identifiable based upon differences between evaluation data submitted responsive to reviewing the audio-only content items and evaluation data submitted responsive to reviewing the audio-visual content items. The content presentation module 12 can present data representing potential biases (as based on the differences noted above) to identify whether or not a user presents a bias, or biases, to content containing footage of persons having a particular attribute, or attributes. The content presentation module 12 may be able to provide information comprising data, which may indicate whether or not a user exhibits one or more biases and what those biases are, based upon the user-submitted evaluation data stored in the data storage device 32. From this information, an inference may be made that, because the user exhibits one or more biases to content containing certain aspect descriptors, that user may also exhibit one or more biases to persons possessing attributes represented by those certain aspect descriptors.
The data storage device 32 comprises a data repository on a memory device, or optionally one or more data repositories on one or more memory devices. The data storage device 32 comprises a database. Optionally, the data storage device 32 comprises any other organised collection of data. The data storage device 32 stores the content 30, evaluation data 34 and aggregated data 36, which comprises aggregated evaluation data for each type of aspect descriptor for each content item containing content delivered by a person having an attribute represented by that type of aspect descriptor. The data storage device 32 also stores variance data, which comprises, for each type of aspect descriptor, a difference between the evaluation data submitted for those content items comprising audio-only content delivered by persons having an attribute represented by that type of aspect descriptor and the evaluation data submitted for those content items comprising audio-visual content delivered by persons having the same type of attribute represented by that type of aspect descriptor. The data storage device 32 further comprises additional content 40, which may be presented to a user between presentation of content items containing audio-only content and presentation of content items containing audio-visual content, and/or presented to a user after presentation of content items containing audio-visual content. In addition to the audio-only content items and audio-visual content items, the content 30 also includes aspect descriptors, i.e. information representative of attributes of a candidate. These candidate attributes are represented by the aspect descriptors, which are used to define characteristics of the content in a content item. For example, where the candidate has provided explicit information regarding matters such as ethnicity, gender, socio-economic status, race, religion, sexual orientation or disability, that information is stored as aspect descriptors as part of the content 30.
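The stored records might be organised along the following lines. This is a sketch only, with hypothetical field names; the reference numerals from the text above appear as comments.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    """Part of content 30: one candidate response in two formats."""
    candidate_id: str
    audio_file: str            # audio-only (e.g. narrator-read) recording
    video_file: str            # counterpart audio-visual recording
    aspect_descriptors: dict   # e.g. {"gender": ..., "ethnicity": ..., "ses": ...}

@dataclass
class EvaluationRecord:
    """Evaluation data 34: one rating input via the user interface element."""
    user_id: str
    content_item_id: str
    fmt: str                   # "audio" or "audio-visual"
    score: float               # numerical value derived from marker position

@dataclass
class AggregatedScores:
    """Aggregated data 36 plus derived variance data for one descriptor."""
    aspect_descriptor: str
    audio_score: float         # grouped first evaluation data
    video_score: float         # grouped second evaluation data

    @property
    def variance(self) -> float:
        return self.video_score - self.audio_score
```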
In the illustrated example of one or more embodiments, the server 16 implements the content review platform 20, including the content presentation module 12. The server 16 can include web server functionality that facilitates communication between the client devices 14 and the content review platform 20 to review content items (e.g. digital interviews, including recorded responses, as described herein). Optionally, web server functionality may be implemented on a machine other than the machine executing the content review platform 20.
Optionally, the functionality of the content review platform may be implemented by one or more different servers 16.
Fig. 2 is a schematic block diagram of the content presentation module 12 according to one or more embodiments of the present invention. The content presentation module 12 can be implemented as processing logic comprising hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computing system or a dedicated machine), firmware (embedded software), or any combination thereof. In the illustrated example, the content presentation module 12 comprises a data file presentation module 42, which forms part of a graphical user interface (GUI) engine, a user interface module 44, which forms another part of the GUI engine, a grouping module 46, a retrieval module 48, a comparator module 50, a variance data threshold identification module 52, and a reverse navigation inhibit module 54. The content presentation module may also comprise optional modules, comprising at least one of: a vocal trait removal module 56; a response time measurement module 58; and an audio content extraction module 60. These optional components will be described in more detail later.
The components of the content presentation module 12 may represent modules that can be combined together or separated into further modules, in one or more embodiments.
When a user has logged-in to the computer system, the data file presentation module 42 is implemented as part of the content presentation module 12 by the processing device (i.e. server 16) and operates, responsive to an instruction to commence a content evaluation exercise, to retrieve content items (i.e. content item data files) from content 30 stored in the data storage device 32. Retrieved content is presented to a user either via a speaker (not shown in Fig. 1) of client device 14, in the case of audio-only content items, or via a speaker and the display 28 of client device 14, in the case of audio-visual content items. The data file presentation module 42 configures display 28 for presentation of the retrieved content and for presentation of a GUI, via which the user can interact with the content evaluation exercise.
In one or more embodiments, the content comprises a plurality of audio-only content items and a corresponding plurality of audio-visual content items. Each audio-only content item has an audio-visual content item counterpart. That is, speech content of the audio-only content item is the same as that of speech content of the respective audio-visual content item counterpart. The audio-only content items can be considered to be a sub-set of a “full set” of the content.
In an optional arrangement, the content comprises a plurality of audio-visual content items with no audio-only content item counterparts. In such an optional arrangement, audio-only content is obtained by extracting an audio element of each of the audio-visual content items and presenting this via the client device. The audio-visual content is then presented once all extracted audio-only content has been presented. Again, the extracted audio-only content can be considered to be a sub-set of the full set of content. The audio content extraction module 60 may be implemented to effect extraction of audio content from audio-visual content items in such an optional arrangement.
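The specification does not prescribe a particular extraction mechanism. By way of a non-limiting sketch in Python, the audio element might be stripped from an audio-visual content item using an external tool such as ffmpeg; both the use of ffmpeg and the file names are assumptions for illustration only.

    # Illustrative only: the use of ffmpeg and the file names are assumptions.
    import subprocess

    def extract_audio(video_path: str, audio_path: str) -> None:
        """Strip the audio track from an audio-visual content item so it can
        be presented as audio-only content in the first part of the exercise."""
        subprocess.run(
            ["ffmpeg", "-i", video_path,
             "-vn",                      # drop the video stream
             "-acodec", "libmp3lame",    # encode the remaining audio as MP3
             audio_path],
            check=True,
        )

    extract_audio("VA1.mp4", "VA1_audio.mp3")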
The data file presentation module 42 operates to effect presentation of a sub-set of the content (i.e. audio-only content, or extracted audio-only content) of a plurality of content items, or data files, in sequence to the user via output devices (e.g. speakers) of the client device 14. The content for an “audio” part of the content evaluation exercise is obtained from the plurality of data files, which comprise: content items comprising audio-only content (where audio-visual content item data files are also present); or content items comprising audio-visual content (where only audio-visual content files are present), from which an audio-only component is extracted for the purposes of the “audio” part of the exercise.
Following presentation of the sub-set of the content, a full set of the content (i.e. audio-visual content) of the plurality of content items, or data files, is presented in sequence to a user via output devices (e.g. speakers and display 28) of the client device 14.
The instruction to commence a content evaluation exercise is input via a user interface input device of the user interface 26 to initiate presentation of the sequence of data files comprising the audio-only content items followed by the data files comprising the audio-visual content items.
The user interface module 44 is implemented as part of the content presentation module 12 by the processing device (i.e. server 16) and operates to configure the display 28 to display, as part of a displayed GUI, a user interface element controllable via a user interface input device. A user can input evaluation data, which represents a user evaluation of content currently being presented to a user, by manipulating the user interface element using the user interface input device. Input evaluation data is stored in data storage device 32 as either first evaluation data or second evaluation data. First evaluation data comprises evaluation data input in response to content contained in content items presented in a first part of the content evaluation exercise, i.e. the audio-only content items. Second evaluation data comprises evaluation data input in response to content contained in content items presented in a second part of the content evaluation exercise, i.e. the audio-visual content items. The input evaluation data is stored as a value, which represents a user “rating” of the content in a content item. The user interface element will be described in more detail later.
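As a minimal sketch of how input evaluation data might be recorded as first or second evaluation data, the following Python record structure may be assumed; the field names are illustrative and are not taken from the specification.

    # Hypothetical record structure; field names are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class EvaluationRecord:
        user_id: str        # e.g. "A"
        content_item: str   # e.g. "A1" (audio-only) or "VA1" (audio-visual)
        rating: float       # numerical value derived from the user interface element
        part: int           # 1 = first (audio-only) part, 2 = second (audio-visual) part

    # First evaluation data: input during the audio-only part of the exercise.
    record = EvaluationRecord(user_id="A", content_item="A1", rating=47.58, part=1)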
Also implemented as part of the content presentation module 12 by the processing device is the grouping module 46, which operates to group evaluation data input by the user. Input evaluation data is retrieved from data storage device 32 and is grouped based upon the content having a particular aspect descriptor, of which there may be many. Therefore, evaluation data input responsive to reviewing a content item containing content that has a first aspect descriptor is grouped with evaluation data input in relation to all other content items (i.e. data files) containing content having the first aspect descriptor. Likewise, evaluation data input responsive to reviewing content items containing content that has a second aspect descriptor is grouped with evaluation data input in relation to all other content items containing content having the second aspect descriptor. Evaluation data input responsive to reviewing content items containing content that fulfils each one of plural further aspect descriptors is grouped together, for each further aspect descriptor, for all content items containing content from a subject having an attribute represented by that further aspect descriptor.
The evaluation data input responsive to audio-only content and stored in the data storage device 32 as first evaluation data is grouped and stored, in the data storage device 32, separately from evaluation data input responsive to audio-visual content and stored in the data storage device 32 as second evaluation data, and the stored grouped evaluation data is further compartmentalised based upon an aspect descriptor associated with a content item.
Aspect descriptors of content items represent attributes of a person featuring in the content. For example, a person featuring in a content item may be: of a first ethnicity, of a second ethnicity, or of a third ethnicity; of a first gender, or of a second gender; of a first socio-economic status, or of a second socio-economic status. Evaluation data submitted in response to a content item will therefore be aggregated and stored in a particular grouped evaluation data “bucket” if it fulfils a criterion for addition to the “bucket”, e.g. that the person featuring in the content item is male, in which case the content aspect descriptor is “MALE”.
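The grouping operation may be illustrated by the following Python sketch, which assumes each content item is tagged with a list of aspect descriptors; the numerical values are those of the worked example of Figs. 4, 6a and 7 described later.

    # Minimal sketch of the grouping module's behaviour, assuming each
    # content item is tagged with its aspect descriptors.
    from collections import defaultdict

    def group_evaluations(ratings, descriptors_by_item):
        """Sum evaluation data values into one grouped-evaluation
        'bucket' per aspect descriptor."""
        buckets = defaultdict(float)
        for item, rating in ratings:
            for descriptor in descriptors_by_item[item]:
                buckets[descriptor] += rating
        return dict(buckets)

    # Audio-only items "A1" to "A4" all carry the "Ethnicity 1" descriptor.
    grouped_first = group_evaluations(
        [("A1", 47.58), ("A2", 21.51), ("A3", 47.58), ("A4", 49.64)],
        {"A1": ["Ethnicity 1"], "A2": ["Ethnicity 1"],
         "A3": ["Ethnicity 1"], "A4": ["Ethnicity 1"]},
    )
    assert round(grouped_first["Ethnicity 1"], 2) == 166.31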
Also implemented as part of the content presentation module 12 by the processing device is the retrieval module 48, which operates to retrieve from the data storage device 32 a sub-group of the grouped evaluation data. The sub-group of the grouped evaluation data comprises both grouped first evaluation data and grouped second evaluation data aggregated from first evaluation data and second evaluation data that has been input in relation to content items containing content that has a particular one aspect descriptor of the plurality of aspect descriptors. Another sub-group of the grouped evaluation data comprises both grouped first evaluation data and grouped second evaluation data aggregated from first evaluation data and second evaluation data that has been input in relation to content items containing content that has a different particular one aspect descriptor of the plurality of aspect descriptors.
Also implemented as part of the content presentation module 12 by the processing device is the comparator module 50 for comparing, for each sub-group of grouped evaluation data:
grouped first evaluation data, comprising evaluation data input in relation to one or more content items of the plurality of content items, the one or more content items containing audio-only content that has one of the plurality of aspect descriptors; and grouped second evaluation data, comprising evaluation data input in relation to one or more content items of the plurality of content items, the one or more content items containing audio-visual content that has a same one of the plurality of aspect descriptors.
From the comparison, the comparator module 50 obtains variance data, i.e. a value, which represents a difference between the grouped first evaluation data and the grouped second evaluation data for a particular content aspect descriptor. For a different sub-group, the variance data represents a difference between the grouped first evaluation data and the grouped second evaluation data for a different particular aspect descriptor.
The comparator module 50 also operates to store the variance data in the data storage device 32. Variance data is stored for each sub-group of input evaluation data.
The processing device also operates to implement a variance data threshold identification module 52, which operates to compare, for each sub-group of input evaluation data, the variance data for the sub-group to a variance data threshold for the sub-group. If variance data for any one of the sub-groups exceeds a corresponding variance data threshold for the sub-group of input evaluation data, the module 52 operates to initiate selection of additional content 40 (see Fig. 1) for presentation via the output device, the selection based upon the particular at least one sub-group of input evaluation data for which the associated variance data thereof exceeds the variance data threshold for that data set.
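By way of a non-limiting sketch, the comparison and threshold-identification steps might be expressed as follows; the threshold value, and the use of an absolute difference, are assumptions for illustration.

    # Sketch of the comparator and threshold-identification steps.
    def variance(grouped_first, grouped_second):
        """Per-descriptor difference: grouped second (audio-visual) evaluation
        data minus grouped first (audio-only) evaluation data."""
        return {d: grouped_second[d] - grouped_first[d] for d in grouped_first}

    def descriptors_over_threshold(variance_data, thresholds):
        """Aspect descriptors whose variance exceeds the corresponding
        threshold, and for which additional content 40 would be selected.
        The absolute value and the threshold figure are assumptions."""
        return [d for d, v in variance_data.items() if abs(v) > thresholds[d]]

    v = variance({"Ethnicity 1": 166.31}, {"Ethnicity 1": 219.22})
    flagged = descriptors_over_threshold(v, {"Ethnicity 1": 25.0})
    # v["Ethnicity 1"] is 52.91, matching the Fig. 7 example, so "Ethnicity 1"
    # is flagged against the hypothetical threshold of 25.0.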
In a particular example, if the variance data exceeds the variance data threshold for a particular sub-group, then this may be an indication that the user who input the evaluation data has a potential bias against content containing the aspect descriptors associated with the sub-group. This may lead to a conclusion that the user may have a potential bias against persons who possess attributes that are represented by that aspect descriptor.
The processing device also operates to implement a reverse navigation inhibit module 54, which operates to determine if a user interface control is operated prior to completion of a content evaluation exercise in full, that is, prior to presentation of all audio-only content items followed by all audio-visual content items in the plurality of data files.
A user may, for example, attempt to initiate transition from a content item currently being presented to a previously presented content item. This transition attempt may be initiated by, for example, using a “back” button in the graphical user interface through which the content evaluation exercise is presented on the display.
For example, a user, having input evaluation data in relation to a later content item in the sequence of content items, may wish to revisit an earlier content item in the sequence (for which evaluation data has already been input) in order to revise the evaluation data input earlier, or to try to ensure that the evaluation data to be input for the later content item (e.g. an audio-visual content item) is consistent with the evaluation data input for the earlier counterpart content item (e.g. an audio-only content item). Transitioning back through content items in the sequence is inhibited by way of the reverse navigation inhibit module 54, which can monitor for activation of the user interface control (e.g. the “back” button) and, responsive to determination of operation of the user interface control, initiates presentation of the content evaluation exercise from the beginning. That is, when activation of the user interface control is detected, the reverse navigation inhibit module 54 stops the content evaluation exercise, recalls the first content item in the sequence either from cache or from the data storage device 32, and initiates presentation of the first content item in the sequence.
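A minimal sketch of the reverse navigation inhibit behaviour follows, assuming a simple session object whose structure is hypothetical.

    # Illustrative handler; the session structure is an assumption.
    class ContentEvaluationSession:
        def __init__(self, sequence):
            self.sequence = sequence    # ordered content item names
            self.position = 0           # index of the item currently presented

        def on_finish(self):
            """Advance to the next content item in the sequence."""
            self.position += 1

        def on_back_button(self):
            """Reverse navigation inhibit: any attempt to navigate back
            restarts presentation from the first item in the sequence."""
            self.position = 0
            return self.sequence[0]

    session = ContentEvaluationSession(["A1", "A2", "A3"])
    session.on_finish()                        # user completes item "A1"
    assert session.on_back_button() == "A1"    # "back" restarts the exercise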
Fig. 3 illustrates a schematic block diagram representing the data storage device 32 according to one or more embodiments of the present invention. The data storage device 32 may comprise one or more of: a main memory (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), etc.); a static memory (e.g., flash memory, static random access memory (SRAM), etc.), and a database, each of which communicate with each other via a bus.
The data storage device 32 may include a computer-readable storage medium on which is stored one or more sets of instructions (e.g., content presentation module 12) embodying any one or more of the methodologies or functions described herein.
Content 30, comprising content items containing audio-only content and content items containing audio-visual content, is stored in the data storage device 32. Further, additional content 40, comprising, for example, content for presentation between a first part of a content evaluation exercise and a second part of the content evaluation exercise, is stored in data storage device 32. The additional content 40 may also comprise content for presentation after completion of the content evaluation exercise, which may be selected for presentation based upon a determination made by the variance data threshold identification module 52.
Data submitted by each user of the computer system, e.g. user A, B, C, D, etc., is stored in the data storage device 32 separately for each user.
For a first user “A”, first evaluation data, which is input in response to content in content items comprising audio-only content, is denoted by reference numeral 340A, second evaluation data, which is input in response to content in content items comprising audio-visual content, is denoted by reference numeral 342A, grouped evaluation data is denoted by reference numeral 36A, and variance data is denoted by reference numeral 38A.
Although data storage device 32 is illustrated as a single medium in the figure, the term “data storage device” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions and/or the content 30, additional content, evaluation data, grouped evaluation data and variance data. The term “data storage device” shall also be taken to include any medium that is capable of storing a set of instructions for execution by a machine and that causes the machine to perform any one or more of the methodologies of one or more embodiments of the present invention and/or for storing the content 30, additional content, evaluation data, grouped evaluation data and variance data. The term “data storage device” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, magnetic media or other types of mediums for storing the instructions and/or the content 30, additional content, evaluation data, grouped evaluation data and variance data.
Fig. 4 schematically illustrates a content item data file structure for a first set of content items according to one or more embodiments of the present invention. In the figure, each row represents a content item and each column represents a data element of the content item. The first set of content items comprises content items containing audio-only content. Each content item data file comprises data for identifying the particular content item, i.e. a content item name 62, e.g. “A1”. Additionally, each content item data file comprises data representative of the aspect descriptors 64 of the content in the data file. More particularly, these aspect descriptors 64 of the content are representative of attributes of the person whose statement is contained in the content item. These aspect descriptors define the content by, for example, ethnicity, gender and socio-economic status. For content item “A1”, the content is defined as comprising “Ethnicity 1”, “Gender 1” and “Socio-economic status 1” aspect descriptors. In addition to the file name 62 and aspect descriptors 64 defining the content, the content item data file further comprises the content item file 66 itself, which, in the present example, comprises a content item containing audio-only content. The audio-only content item in the illustrated example is in MP3 format, but other suitable formats may comprise, for example, WAV, AAC, etc.
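Rendered as a record, the Fig. 4 row for content item “A1” might look as follows; the key names are illustrative and do not appear in the specification.

    # One row of the Fig. 4 structure as an illustrative record.
    content_item_a1 = {
        "name": "A1",                       # content item name 62
        "aspect_descriptors": {             # aspect descriptors 64
            "ethnicity": "Ethnicity 1",
            "gender": "Gender 1",
            "socio_economic_status": "Socio-economic status 1",
        },
        "file": "A1.mp3",                   # content item file 66 (MP3 format)
    }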
Fig. 5 schematically illustrates a content item data file structure for a second set of content items according to one or more embodiments of the present invention. As in Fig. 4, each row represents a content item and each column represents a data element of the content item. The second set of content items comprises content items containing audio-visual content. Each content item data file comprises data for identifying the particular content item, i.e. a content item name 68, e.g. “VA1”. Additionally, each content item data file comprises data representative of the aspect descriptors 70 of the content in the data file. More particularly, these aspect descriptors 70 of the content are representative of attributes of the person whose statement is contained in the content item. These aspect descriptors define the content by, for example, ethnicity, gender and socio-economic status. For content item “VA1”, the content is defined as comprising “Ethnicity 3”, “Gender 2” and “Socio-economic status 2” aspect descriptors. In addition to the file name 68 and aspect descriptors 70 defining the content, the content item data file further comprises the content item file 72 itself, which, in the present example, comprises a content item containing audio-visual content. The audio-visual content item in the illustrated example is in MPEG-4 format, but other suitable formats may comprise, for example, WMV, FLV, etc. The content item data file further comprises data identifying an audio-only counterpart content item, i.e. a related audio content item name 74, e.g. “A12” for content item “VA1”. The audio-only counterpart content item is the audio-only content item that contains the same spoken statement, by the same person (or narrator-read equivalent), as the spoken statement in the audio-visual content item.
Fig. 6a schematically illustrates a data file structure for data files containing evaluation data submitted by a plurality of users A, B, C, D in response to the first set of content items. Each row represents a data file containing evaluation data input responsive to a particular one content item and each column represents a data element of the data file. Each data file comprises data for identifying a particular content item, i.e. the content item name 62, e.g. “A1”. Additionally, each data file comprises first evaluation data 76, which has been submitted by the user responsive to consuming the content item, and comprises a “rating” of the content in the content item. The evaluation data comprises a numerical value, which represents the user “rating”.
Fig. 6b schematically illustrates a data file structure for data files containing evaluation data submitted by a plurality of users A, B, C, D in response to the second set of content items. Each row represents a data file containing evaluation data input responsive to a particular one content item and each column represents a data element of the data file. Each data file comprises data for identifying a particular content item, i.e. the content item name 68, e.g. “VA1”. Additionally, each data file comprises second evaluation data 78, which has been submitted by the user responsive to consuming the content item, and comprises a “rating” of the content in the content item. The evaluation data comprises a numerical value, which represents the user “rating”. The data file further comprises data identifying an audio-only counterpart content item, i.e. a related audio content item name 74, e.g. “A12” for content item “VA1”.
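The related audio content item name allows second evaluation data to be joined to its audio-only counterpart, as the following sketch illustrates; the rating values shown are hypothetical.

    # Joining second evaluation data to first evaluation data via the
    # related audio content item name 74; the ratings here are hypothetical.
    second_eval = {"VA1": {"rating": 62.10, "related_audio_item": "A12"}}
    first_eval = {"A12": {"rating": 58.30}}

    pairs = {
        item: (first_eval[data["related_audio_item"]]["rating"], data["rating"])
        for item, data in second_eval.items()
    }
    # pairs["VA1"] holds the (audio-only, audio-visual) ratings for the
    # same spoken statement.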
Fig. 7 schematically illustrates data file structures for data files 80 containing aggregated first evaluation data, data files 82 containing aggregated second evaluation data, and data files 84 containing variance data.
For the data files 80, each row represents attributes that a person in a content item may have, e.g. ethnicity, gender, or socio-economic status attributes of the person in a content item, and each column represents a data element of the data file. Each one of the data files 80 comprises data 86 representing an aspect descriptor and aggregated first evaluation data 88 for that aspect descriptor. That is, first evaluation data, i.e. evaluation data that is submitted for all of the first set of content items (audio-only) containing content having a particular aspect descriptor, e.g. “Ethnicity 1”, is grouped, or aggregated. Particularly, the values that represent the evaluation data are summed. With reference to Figs. 4 and 6a, those content items containing content defined by aspect descriptor “Ethnicity 1” are “A1”, “A2”, “A3” and “A4”. The first evaluation data submitted for these content items comprise values “47.58”, “21.51”, “47.58” and “49.64” respectively. As can be seen for the content aspect descriptor “Ethnicity 1” in the first row of the representation of data files 80, the sum of these first evaluation data values is “166.31”.
For the data files 82, each row represents attributes that a person in a content item may have, e.g. ethnicity, gender, or socio-economic status attributes of the person in a content item, and each column represents a data element of the data file. Each one of the data files 82 comprises data 86 representing an aspect descriptor and aggregated second evaluation data 90 for that aspect descriptor. That is, second evaluation data, i.e. evaluation data that is submitted for all of the second set of content items (audio-visual) containing content having a particular aspect descriptor, e.g. “Ethnicity 1”, is grouped, or aggregated. Particularly, the values that represent the evaluation data are summed. With reference to Figs. 5 and 6b, those content items containing content defined by aspect descriptor “Ethnicity 1” are “VA9”, “VA10”, “VA11” and “VA12”. The second evaluation data submitted for these content items are “95.57”, “0.29”, “27.79” and “95.57” respectively. As can be seen for the content aspect descriptor “Ethnicity 1” in the first row of the representation of data files 82, the sum of these second evaluation data values is “219.22”.
The data files 84 are obtained by comparing the grouped first evaluation data 88 of data files 80 with the grouped second evaluation data 90 of data files 82. This is achieved by way of comparator module 50, which compares the values that represent the first evaluation data, by aspect descriptor, with the values that represent the second evaluation data, by aspect descriptor, to obtain variance data. In particular, the variance data is obtained by subtracting a value that represents first evaluation data for a particular content aspect descriptor from a value that represents second evaluation data for the same particular content aspect descriptor. This is done for each content aspect descriptor to obtain variance data for each content aspect descriptor.
For the data files 84, each row represents a content aspect descriptor and each column represents a data element of the data file. Each one of the data files 84 comprises data 86 representing an aspect descriptor and variance data 92 for that aspect descriptor. Referring to data files 80 and 82, the difference between grouped first evaluation data based on first evaluation data submitted by user “A” for those content items containing content having an aspect descriptor of “Ethnicity 1” and grouped second evaluation data based on second evaluation data submitted by user “A” for those content items containing content having an aspect descriptor of “Ethnicity 1” is “52.91”. That is, the variance data for content having content aspect descriptor “Ethnicity 1” for user “A” has a value of “52.91”, i.e. “219.22” minus “166.31”. Values of variance data for other content aspect descriptors are obtained in a similar manner for the user.
Figs. 8a and 8b show a process flow diagram, which illustrates steps carried out in a method 200 implemented by the computer system according to one or more embodiments of the present invention.
When a user has logged-in to the computer system, the data file presentation module 42 of content presentation module 12 sends a request to the data storage device 32 in order to retrieve data files, i.e. content 30 (S202). The content items are presented to the user via the output device of client device 14 in sequence. In the described example, a sub-set of the content is presented first (S206), followed by the full set of content (S214). In such an example, the sub-set comprises audio-only content and a plurality of audio-only content items are presented in sequence first. These audio-only content items are then followed by the full set of content, which comprises audio-visual content. Thus, following presentation of the audio-only content items, the audio-visual content items are presented in sequence. Each of the content items, whether audio-only or audio-visual, is presented with a user interface, i.e. a graphical user interface (GUI) that is displayed on display 28 of the client device 14 and which comprises a plurality of user controllable elements (see, e.g. Fig. 19). The GUI data for configuring the display to display the GUI is located in the data storage device 32 and is retrieved when the data file presentation module 42 retrieves the content data files.
During presentation of each content item, the user interface module 44 of content presentation module 12 operates to configure (S204) the output device to display, as part of the GUI, a user interface element controllable via user interface input device 26 (see, e.g. Figs. 19 and 20). Using the user interface input device 26, a user can effect control of the user interface element to input evaluation data that represents the user’s evaluation of content currently being presented to them (e.g. a rating). Upon receiving (S208, or S216) an instruction based upon a command input by the user using the input device 26, the evaluation data is finalised, based upon user interface element data, and the evaluation data for that user (e.g. user “A”) is communicated to the first evaluation data storage region 340A of data storage device 32 for storage therein (S210), if the content item being reviewed is an audio-only content item. Similarly, if the content items being evaluated comprise audio-visual content, the user-input evaluation data is communicated to the second evaluation data storage region 342A of data storage device 32 for storage therein (S218).
The data file presentation module 42, for presentation of the audio-only content items, monitors the presentation of the content items and receipt of evaluation data in relation to each one and determines if the final audio-only content item has been presented (S212). If not, the process returns to step S206 and a next audio-only content item in the sequence is presented. However, if the final audio-only content item in the sequence has been presented, the data file presentation module 42 implements presentation of a first audio-visual content item in a sequence of audio-visual content items (S214).
Again, the data file presentation module 42, for presentation of the audio-visual content items, monitors the presentation of the content items and receipt of evaluation data in relation to each one and determines if the final audio-visual content item has been presented (S220). If not, the process returns to step S214 and a next audio-visual content item in the sequence is presented. However, if the final audio-visual content item in the sequence has been presented, the data file presentation module 42 operates to notify the grouping module 46 that all content items have been presented and evaluated.
Responsive to such a notification, the grouping module 46 retrieves evaluation data from the data storage device 32 and groups (S222, or S224) the retrieved evaluation data dependent upon content characteristic and type of content item (i.e. audio-only or audio-visual) for which the evaluation data was submitted. With reference to Fig. 7, and to give an example, the evaluation data submitted for all audio-only content items containing content characterised as “Ethnicity 1” will be grouped, and the grouped evaluation data for all content items characterised as this type (i.e. with an aspect descriptor “Ethnicity 1”) is summed and stored (S226) as first evaluation data for “Ethnicity 1” in the grouped evaluation data region 36A in data storage device 32. The evaluation data for each content item is represented as a value and so the grouped evaluation data for content items defined by a particular aspect descriptor comprises the sum of all values for content items containing content having the particular aspect descriptor. For the present example, this is “166.31” (see Fig. 7).
The grouping module 46 groups and sums evaluation data for each aspect descriptor for the audio-only content items (S222), and stores the grouped and summed evaluation data as grouped first evaluation data in grouped evaluation data region 36A of data storage device 32 (S226). Similarly, the grouping module 46 carries out a same operation for each aspect descriptor for the audio-visual content items (S224) and stores the grouped and summed evaluation data for the audio-visual content items as second evaluation data in grouped evaluation data region 36A of data storage device 32 (S228).
Upon grouping of evaluation data for all aspect descriptors and for both types of content item (i.e. audio-only and audio-visual), retrieval module 48 operates to request, from data storage device 32, a sub-group of input evaluation data. The sub-group of input evaluation data comprises grouped and summed first evaluation data and grouped and summed second evaluation data for a particular aspect descriptor, e.g. “Ethnicity 1”. Thus, the grouped first evaluation data for “Ethnicity 1” comprises a value that represents a sum of evaluation data values for all audio-only content items containing content with the aspect descriptor “Ethnicity 1”. Likewise, the grouped second evaluation data for “Ethnicity 1” comprises a value that represents a sum of evaluation data values for all audio-visual content items containing content with the aspect descriptor “Ethnicity 1”.
Responsive to retrieval (S234) of the data for the particular aspect descriptor, the data is communicated, by the retrieval module 48, to comparator module 50, which operates to subtract the grouped first evaluation data for the particular aspect descriptor from the grouped second evaluation data for the particular aspect descriptor to obtain (S236) the variance data. With reference to Fig. 7, and using the example of “Ethnicity 1” as the particular aspect descriptor, the comparator module 50 subtracts “166.31” from “219.22”, giving a result of “52.91” as the variance data 92 for the particular aspect descriptor. The variance data for the particular aspect descriptor is communicated, by the comparator module 50, to the data storage device 32 for storage (S238) in the variance data region 38A thereof.
The comparator module 50 determines if variance data has been calculated for all aspect descriptors (S240). If not, the comparator module 50 proceeds to calculate variance data for a different aspect descriptor (i.e. the process returns to step S236). If variance data has been calculated for all aspect descriptors, the comparator module 50 sends a notification to variance data threshold identification module 52.
Responsive to the notification from the comparator module 50, variance data threshold identification module 52 retrieves variance data for a particular aspect descriptor from the data storage device 32. The variance data threshold identification module 52 operates to compare (S242) the variance data for a particular aspect descriptor to a variance data threshold for the particular aspect descriptor and continues the comparison operation until a determination is made (S244) that a comparison for a final aspect descriptor is completed. If the variance data for a particular aspect descriptor exceeds a variance data threshold for the particular aspect descriptor, the variance data threshold identification module 52 operates to send a request to data storage device 32 to retrieve additional content for presentation (S246). This step may be implemented for each aspect descriptor for which the variance data exceeds the variance data threshold.
The additional content may comprise educational material for the purpose of trying to reduce a variance data value for a particular aspect descriptor. Selection of the additional content for presentation is based upon the particular aspect descriptor for which the variance data threshold is exceeded. If multiple thresholds are exceeded, then additional content may be presented related to each aspect descriptor for which a threshold is exceeded.
When the variance data threshold identification module 52 determines that a comparison for a final aspect descriptor is completed, it then determines if additional content is to be presented (S248). If no additional content is to be presented, the process ends. However, if additional content is to be presented, the variance data threshold identification module 52 initiates presentation (S250) of the additional content at the client device 14.
In addition to the process steps described above, the reverse navigation inhibit module 54 operates, during the evaluation data collection phase of the method, to determine if a user interface control is operated (S230). The user interface control may be, for example, a “back” button of the browser upon which the user interface is displayed (see feature 102 of Fig. 19). If there is no detected operation of the user interface control, the evaluation data collection steps continue (S232). However, if operation of the user interface control is detected, the process returns to presentation of the first content item in the sequence of presentation of the sub-set of content items, i.e. to the presentation of a first audio-only content item. That is, when activation of the user interface control is detected, the reverse navigation inhibit module 54 stops the content evaluation exercise, recalls the first content item in the sequence either from cache or from the data storage device 32, and initiates presentation of the first content item in the sequence. For example, a user, having input evaluation data in relation to a later content item in the sequence of content items, may wish to revisit an earlier content item in the sequence (for which evaluation data has already been input) in order to revise the evaluation data input earlier, or to try to ensure that the evaluation data to be input for the later content item (e.g. an audio-visual content item) is consistent with, or “matches”, the evaluation data input for the earlier counterpart content item (e.g. an audio-only content item). That is, the user may attempt to operate the “back” button in an attempt to initiate transition from a data file (i.e. content item) currently being presented to a previously presented data file. This type of action is prevented because, if the “back” button is operated, the process returns to presentation of the first audio-only content item in the sequence, i.e. to the beginning.
Fig. 9 schematically illustrates an example log-in page displayed by the display of the computer system 10.

Figs. 10 to 14 schematically illustrate example sign-up process pages displayed by the display of the computer system 10 during a new-user sign-up process.

Fig. 15 schematically illustrates an example welcome page displayed by the display of the computer system 10 once a user has logged-in to the content evaluation environment. The display is configured to present user control function “buttons”. Effecting actuation of a “watch now” button initiates presentation of an introductory audio-visual content item, which serves to explain the content evaluation process. The introductory audio-visual content item is stored in the data storage device 32 and is retrieved therefrom for display on the display on actuation of the “watch now” button. Effecting actuation of a “skip” button initiates transition of the content evaluation exercise process to a next stage in which further information is presented regarding the process (see Figs. 16 and 17, which illustrate examples of pages displayed at this stage). Following actuation of a “Start” button displayed on the page represented in Fig. 17, display of an example exercise start page is initiated (see Fig. 18). Actuating the “Go” button displayed on the page represented in Fig. 18 initiates the example exercise. Content for the example exercise is retrieved from the data storage device upon actuation of the “Go” button and is presented to the user via output elements of the client device (e.g. speakers only, or speakers and display).
A page in which a first part of the example is displayed is represented schematically in Fig. 19.
A first part of a user interface element that can be controlled by the user by way of inputting instructions using a user interface input device is represented by reference numeral 94. The first part 94 of the user interface element comprises a “scale” defining a range that constrains the user’s evaluation data input response.
The user interface also comprises a first control element 96, actuation of which can initiate transition between “PLAY” and “PAUSE” modes during presentation of a content item. The user interface also comprises a “progress” bar 98 to represent graphically progress of presentation of a content item versus total content item length. Elapsed time versus total content item length can also be represented by indicator 100.
The user interface also comprises a second control element 102, which comprises a “Back” button. It is actuation of this second control element 102 that reverse navigation inhibit module 54 monitors and initiates appropriate action accordingly if such actuation is detected.
A second part 104 of the user interface element is illustrated in Fig. 20. The second part 104 comprises a cursor element controllable by instructions input using the user interface input device (e.g. a mouse). Movement of the second part 104 is limited to within the bounds of the first part 94 of the user interface element. The second part 104 can be located at the extrema of the first part 94, or at positions between the extrema. The second part 104 can be made to appear in the first part 94 by controlling a cursor on screen, using the user interface input device, to position the screen-cursor within the bounds of the first part 94. When the screen-cursor is located within the bounds of the first part 94, the second part 104 appears.
In the illustrated example, no scale is provided on the user interface element, apart from identifiers at the extrema. However, a scale may be provided in optional arrangements.
The first part 94 and second part 104 of the user interface element, in combination, comprise a track bar (first part) with a controllable marker (second part). The marker of the track bar is controllable by way of instructions input through, and received from, the user interface input device. A position of the marker within the track bar is representative of the user evaluation of content in a content item currently being presented to a user. In particular, the position of the marker within the track bar is converted to a numerical value to form the evaluation data.
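Conversion of the marker position into the numerical evaluation value might proceed as in the following sketch, which assumes a linear mapping onto a 0 to 100 scale; the scale itself is an assumption.

    # Minimal conversion sketch; the 0-100 scale is an assumption.
    def marker_to_rating(marker_px: float, bar_left_px: float,
                         bar_width_px: float) -> float:
        """Convert the marker position within the track bar (first part 94,
        second part 104) into a numerical evaluation data value."""
        fraction = (marker_px - bar_left_px) / bar_width_px
        fraction = min(max(fraction, 0.0), 1.0)   # clamp to the bar's extrema
        return round(100.0 * fraction, 2)

    assert marker_to_rating(475.8, 0.0, 1000.0) == 47.58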
When a user is satisfied with the “rating” of the content that is represented by the position of the marker in the track bar, the user can initiate transition of the process to initiate presentation of a next content item in the exercise by effecting actuation of the “Finish” button displayed in the user interface. Actuation of the “Finish” button effects display of the next content item in the exercise (i.e. as illustrated in Fig. 21).
In the example exercise, the content item represented in Fig. 20 comprises an audio-only content item. However, the content item represented in Fig. 21 comprises an audio-visual content item.
In Fig. 21, the same elements of the user interface are displayed as in Fig. 19, i.e. first part 94 of user interface element, first control element 96, “progress” bar 98, indicator 100 and second control element 102. In addition, visual content 106 (e.g. visual footage of a candidate providing a spoken statement) is also present. The second part 104 of user interface element is not shown in Fig. 21, but is present in Fig. 22.
Again, when the user is satisfied with the “rating” of the content that is represented by the position of the marker in the track bar, the user can initiate transition of the process to initiate completion of the example exercise by effecting actuation of the “Finish” button displayed in the user interface. Actuation of the “Finish” button effects completion of the example exercise and initiates transition of the process to initiate display of a start page for an actual content evaluation exercise (see Fig. 23). Upon selection of a “Start Module” option, a transition is initiated to a next page in the actual content evaluation exercise, which comprises presentation of a first content item in the exercise (see Fig. 24).
In the actual content evaluation exercise illustrated in Figs. 24 to 26 and 32 to 34, twelve different audio-only content items are presented to a user in sequence (Figs. 24 to 26), followed by an exercise in which content of an entirely different type is presented (Figs. 27 to 31), namely a “distraction exercise”, followed by presentation of twelve different audio-visual content items in sequence (Figs. 32 to 34).
Each one of the audio-only content items has an audio-visual content item counterpart. The counterpart audio-visual content items may be in a same sequence as the audio-only content items, but need not be so. The distraction exercise content is presented to the user between presentation of the audio-only content and presentation of the audio-visual content to try to put temporal distance between input of evaluation data in the first part of the exercise and input of evaluation data in the second part of the exercise. This is to try to reduce the likelihood of a user being able to remember the evaluation data input, in the first part, for the audio-only counterpart of an audio-visual content item currently being presented, and thereby to avoid exactly the same evaluation data being input for a content item in the first part and its counterpart in the second part.
The distraction exercise can comprise, as per the examples illustrated in Figs. 27 to 31, presenting the user with one or more “general knowledge” type questions, to which the user must submit a correct answer in order to progress. However, the nature of the content in the distraction exercise is not necessarily important, merely that it provides a temporal “buffer” between the first and second parts of the exercise.
When a final content item in the sequence of content items has evaluation data input by the user in relation thereto, transition to a results page of the exercise is initiated by effecting actuation of a “Finish” button on the final content item page (see Fig. 34). The results page is illustrated in Fig. 35 and displays the variance data obtained by operation of the content presentation module during the exercise in a graphical form, by aspect descriptor. The example variance data represented in Fig. 35 is based upon the example variance data values represented in Fig. 7.
If a variance data threshold (which may optionally be zero in one or more embodiments) is exceeded for any one of the characteristics, as determined by the variance data threshold identification module 52, then additional content is retrieved from the data storage device 32 and is prepared for display to the user. Display of the additional content (see Fig. 36) is initiated by effecting actuation of the “Next” button displayed on the page illustrated in Fig. 34. Fig. 36 illustrates a post-results page displayed on a client device display via which presentation of the additional content can be initiated, the additional content relevant to a particular user based upon their results in the content evaluation exercise.
In one or more optional arrangements, a further operation may be performed by the variance data threshold identification module 52. To give an example, for a user “X”, a grouped first evaluation data value is “+10” (i.e. audio-only content) and a grouped second evaluation data value is “+20” (i.e. audio-visual content) for an aspect descriptor “MALE”. This results in a variance data value of “+10” (i.e. 20 minus 10). An inference may be made that user “X” has a bias in favour of subjects of the content who are male. However, assume that, for the same user, a grouped first evaluation data value is “+10” and a grouped second evaluation data value is “+20” for an aspect descriptor “FEMALE”. This results in a variance data value of “+10” (i.e. 20 minus 10). An inference may be made that user “X” has a bias in favour of subjects of the content who are female. Whilst favouring both males and females is not mutually exclusive, the positive values of the variance data for both the male and female aspect descriptors might instead indicate that user “X” simply prefers content presented in audio-visual format over audio-only format. Should such a situation occur, the variance data threshold identification module 52 may operate to compare non-zero variance data values of aspect descriptors within a same aspect descriptor group. An aspect descriptor group comprises, for example, gender (of which the “MALE” and “FEMALE” aspect descriptors fall within the same aspect descriptor group), ethnicity (of which the “ETHNICITY 1”, “ETHNICITY 2” and “ETHNICITY 3” aspect descriptors fall within the same aspect descriptor group), etc. By comparing variance data values for aspect descriptors within a same aspect descriptor group, user preferences regarding the format (i.e. audio-only or audio-visual) in which content is presented may be effectively disregarded. This may avoid the inference process being influenced by such format preferences, which may otherwise affect the inference of whether or not a person has a bias against persons having particular attributes.
To continue the above example for the case where the variance data threshold identification module 52 carries out the further operation, the variance data value for “MALE” is subtracted from the variance data value for “FEMALE” (i.e. 10 minus 10), giving a secondary variance data value of “0”, from which can be inferred that user X has no bias against either males or females.
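The further operation may be sketched as follows, using the values from the example above.

    # Sketch of the optional secondary comparison within an aspect
    # descriptor group, using the "MALE"/"FEMALE" example values above.
    def secondary_variance(variance_data, group):
        """Difference between variance data values of two aspect descriptors
        in the same aspect descriptor group; a common preference for the
        audio-visual format cancels out."""
        first, second = (variance_data[d] for d in group)
        return second - first

    v = {"MALE": 10.0, "FEMALE": 10.0}
    assert secondary_variance(v, ("MALE", "FEMALE")) == 0.0
    # A secondary variance data value of 0 supports the inference that user
    # "X" has no bias against either males or females.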
In one or more optional arrangements, the content presentation module may also comprise at least one of: vocal trait removal module 56; response time measurement module 58; and audio content extraction module 60.
The vocal trait removal module 56 is operative to modify the audio content of audio-only content items, or the audio content extracted from audio-visual content items (where only audio-visual content items are present), prior to presentation of the audio-only content. The vocal trait removal module 56 may comprise, for example, a voice changer operative to modify data in the audio file to: modify a tone or pitch of a voice of a person who is the subject of the content item; and/or add distortion to the person’s voice; and/or remove harmonics that represent distinguishing characteristics of the person’s voice; or a combination of the above, so that the content item is presented with the person’s voice so modified. The vocal trait removal module 56 may be used to provide audio content containing spoken statements where the content is provided for presentation with the spoken statements in a “neutral” tone, i.e. with distinguishing characteristics such as, for example, pitch, accent, fluency, etc., removed. This may inhibit a user to whom the audio content is presented from being able to ascertain, from the audio content, the gender, age, ethnicity, etc. of the speaker, insofar as these traits are detectable from a person’s voice.
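The specification does not name a particular signal-processing technique or library. As one possible, non-authoritative realisation, a pitch shift might be applied using the librosa library; the library choice, file name and shift amount are all assumptions.

    # One possible realisation only; librosa, the file name and the shift
    # amount are assumptions, not the specified method.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("A1.mp3", sr=None)   # hypothetical audio file
    # Shift the pitch towards a more "neutral" register.
    y_neutral = librosa.effects.pitch_shift(y, sr=sr, n_steps=-3)
    sf.write("A1_neutral.wav", y_neutral, sr)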
The response time measurement module 58 is operative to measure the speed of response of a user when inputting evaluation data (i.e. the user “rating”) responsive to listening to, or viewing, the content. A user may, if they are aware of what they are being tested for, and are aware that they do have a bias, attempt to deceive the computer system by giving what they perceive to be a score/rating that shows them in a more positive light. There may be a hesitancy in such a case compared to the case where a user responds truthfully. Therefore, measuring a speed of response for a user to input evaluation data for a content item currently being reviewed may provide an indicator of a potential bias against content contained in a content item.
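A minimal timing sketch follows; how the measured response time would be interpreted (e.g. any hesitancy threshold) is left open in the specification.

    # Illustrative timing of a user's response; interpretation of the
    # measured time is left open.
    import time

    class ResponseTimer:
        def start(self):
            """Call when presentation of a content item begins."""
            self._t0 = time.monotonic()

        def stop(self) -> float:
            """Call when the user submits evaluation data; returns the
            response time in seconds."""
            return time.monotonic() - self._t0

    timer = ResponseTimer()
    timer.start()
    # ... user reviews the content item and operates the track bar ...
    response_seconds = timer.stop()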
The audio content extraction module 60 is operative to extract audio content from audio-visual content items. The extracted audio-only content can be presented during the first part of the content evaluation exercise, i.e. when audio-only content is presented.
The content presentation module, sub-modules thereof, and other features described herein can be implemented as discrete hardware components or integrated in the functionality of hardware components such as ASICs, FPGAs, DSPs, or similar devices. The content presentation module may implement operations as described herein. In addition, the content presentation module can be implemented as firmware or functional circuitry within hardware devices. Further, the content presentation module can be implemented in any combination of hardware devices and software components.
It will be understood by those skilled in the art that the drawings are merely diagrammatic and that further items of equipment may be required in a commercial apparatus. The position of such ancillary items of equipment forms no part of the present invention and is in accordance with conventional practice in the art.
Insofar as embodiments of the invention described above are implementable, at least in part, using a software-controlled programmable processing device such as a general purpose processor or special-purpose processor, digital signal processor, microprocessor, or other processing device, data processing apparatus or computer system, it will be appreciated that a computer program for configuring a programmable device, apparatus or system to implement methods and apparatus is envisaged as an aspect of the present invention. The computer program may be embodied as any suitable type of code, such as source code, object code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language, such as Liberate, OCAP, MHP, Flash, HTML and associated languages, JavaScript, PHP, C, C++, Python, Node.js, Java, BASIC, Perl, Matlab, Pascal, Visual BASIC, ActiveX, assembly language, machine code, and so forth. A skilled person would readily understand that the term “computer” in its most general sense encompasses programmable devices such as those referred to above, and data processing apparatus and computer systems.
Suitably, the computer program is stored on a carrier medium in machine readable form; for example, the carrier medium may comprise memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disk (DVD), subscriber identity module, tape, cassette, or solid-state memory.
As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
In addition, use of the “a” or “an” are employed to describe elements and components of the invention. This is done merely for convenience and to give a general sense of the invention. This description should be read to include one or at least one and the singular also includes the plural unless it is obvious that it is meant otherwise.
In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.
The scope of the present disclosure includes any novel feature or combination of features disclosed herein either explicitly or implicitly or any generalisation thereof, irrespective of whether or not it relates to the claimed invention or mitigates any or all of the problems addressed by the present invention. The applicant hereby gives notice that new claims may be formulated to such features during prosecution of this application or of any further application derived therefrom. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the claims.

Claims (57)

1. A computer system comprising:
a data storage device for storing a plurality of data files comprising content for presentation on an output device; and a processing device, said processing device operative to implement a content presentation module, the content presentation module comprising:
a data file presentation module to retrieve said data files from said data storage device and to present a sub-set of said content of each one of said plurality of data files in sequence to a user via said output device followed by a full set of said content of each one of said plurality of data files in sequence to a user via said output device responsive to input via a user interface input device of an instruction to initiate presentation of said sequence of data files;
a user interface module for configuring said output device to display a user interface element controllable via said user interface input device, and through which a user can input evaluation data representing a user evaluation of content currently being presented to a user, input evaluation data being stored in said data storage device;
a grouping module for grouping evaluation data stored in said data storage device according to aspect descriptors of the content for each said sub-set of said content of each one of said plurality of data files and each said full set of said content of each one of said plurality of data files, each said aspect descriptor representative of an attribute of a subject of the content, and for storing in said data storage device for each one of a plurality of aspect descriptors:
grouped first evaluation data, comprising evaluation data input responsive to presentation of said sub-set of said content for one or more data files of said plurality containing content that has one of said plurality of aspect descriptors; and grouped second evaluation data, comprising evaluation data input responsive to presentation of said full set of said content for one or more data files of said plurality containing content that has one of said plurality of aspect descriptors;
a retrieval module for retrieving from said data storage device a sub-group of input evaluation data, said sub-group comprising said grouped first and grouped second evaluation data input in relation to one or more data files of said plurality of data files containing content that has one of said plurality of aspect descriptors;
a comparator module for comparing, for each sub-group of input evaluation data, retrieved grouped first evaluation data and retrieved grouped second evaluation data and for providing variance data responsive to said comparison, and for storing in said data storage device variance data representing the difference between said grouped first evaluation data and said grouped second evaluation data for each sub-group of input evaluation data;
a variance data threshold identification module operative to compare, for each sub-group of input evaluation data, the variance data for the sub-group to a variance data threshold for the sub-group and, if variance data for any one of the sub-groups exceeds a corresponding variance data threshold for the sub-group of input evaluation data, to initiate selection of further content for presentation via the output device, the selection based upon the particular at least one sub-group of input evaluation data for which the associated variance data thereof exceeds the variance data threshold for that data set; and a reverse navigation inhibit module operative to determine if a user interface control is operated prior to completion of presentation of said plurality of data files in their entirety in an attempt to initiate transition from a data file currently being presented to a user to a previously presented data file and, responsive to determination of operation of said user interface control, to initiate presentation of a sub-set of content of a first one of said plurality of data files in said sequence.
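To make the data flow of claim 1 concrete, the following minimal Python sketch covers the grouping, comparator and variance data threshold identification stages. It is purely an editorial illustration, not part of the claims: all names (EvaluationRecord, detect_variance, the "subset"/"fullset" labels) are assumptions, and the mean-difference measure is just one plausible way of producing the claimed variance data.

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass
class EvaluationRecord:
    aspect_descriptor: str  # attribute of the content's subject, e.g. an accent type
    phase: str              # "subset" (first presentation) or "fullset" (second)
    score: float            # numerical evaluation derived from the UI element

def detect_variance(records, thresholds):
    """Return the aspect descriptors whose grouped first and second
    evaluation data differ by more than the per-descriptor threshold."""
    groups = defaultdict(lambda: {"subset": [], "fullset": []})
    for record in records:
        groups[record.aspect_descriptor][record.phase].append(record.score)

    flagged = []
    for descriptor, group in groups.items():
        if not group["subset"] or not group["fullset"]:
            continue  # both grouped data sets are needed for a comparison
        variance = abs(mean(group["fullset"]) - mean(group["subset"]))
        if variance > thresholds.get(descriptor, 0.0):
            flagged.append(descriptor)  # basis for selecting further content
    return flagged
```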
2. A system according to claim 1, wherein each one of said data files comprises audio-visual content, and wherein the system further comprises an audio content extraction module to extract audio content from said data files, wherein said sub-set of said content comprises audio content extracted from each one of said data files and said full set of said content comprises audio-visual content.
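The audio content extraction module of claim 2 could, for instance, be realised by invoking the ffmpeg command-line tool, assuming it is installed; the file names and codec choice below are placeholders, not details from the specification.

```python
import subprocess

def extract_audio(video_path: str, audio_path: str) -> None:
    """Strip the video stream from an audio-visual data file, keeping audio only."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path,  # input audio-visual file
         "-vn",                             # drop the video stream
         "-acodec", "pcm_s16le",            # write uncompressed WAV audio
         audio_path],
        check=True,
    )

# extract_audio("statement_01.mp4", "statement_01.wav")
```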
3. A system according to claim 1, wherein a portion of said plurality of data files comprises a first plurality of data files containing audio content only, and another portion of said plurality of data files comprises a second plurality of data files containing audio-visual content.
4. A system according to claim 3, wherein said sub-set of said content comprises said first plurality of data files and said full set of said content comprises said second plurality of data files.
5. A system according to claim 3 or 4, wherein each one of said second plurality of data files corresponds to a respective one of said first plurality of data files such that the audio content of a file of said second plurality of data files corresponds to the audio content in a corresponding file of said first plurality of data files.
6. A system according to claim 5, wherein each one of said second plurality of data files comprises data identifying a corresponding one of the first plurality of data files to which the one of the second plurality of data files corresponds.
7. A system according to any of claims 2 to 6, wherein said audio-visual content comprises audio-visual footage of an individual providing a spoken statement, and said audio content comprises audio footage of a same individual providing a same spoken statement.
8. A system according to any of claims 2 to 6, wherein said audio-visual content comprises audio-visual footage of an individual providing a spoken statement, and said audio content comprises audio footage of a third party providing a same spoken statement as that of the individual providing the spoken statement in the audio-visual footage.
9. A system according to claim 7 or 8, wherein each data file further comprises content definition data based upon a plurality of types of attribute of the individual delivering the spoken statement, each one of said plurality of types of attribute corresponding to a respective one of said plurality of aspect descriptors.
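Claims 5, 6 and 9 together imply a small amount of per-file metadata: a link from each audio-visual file to its audio-only counterpart, and content definition data keyed by attribute type. The hypothetical structure below is one way to hold it; every field name and example value is an assumption made for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataFileRecord:
    file_id: str
    media_path: str
    counterpart_id: Optional[str] = None  # claim 6: identifies the matching audio-only file
    # claim 9: attribute type -> aspect descriptor for the individual in the footage
    content_definition: dict = field(default_factory=dict)

av_file = DataFileRecord(
    file_id="av_07",
    media_path="statements/av_07.mp4",
    counterpart_id="audio_07",
    content_definition={"gender": "female", "accent": "regional"},
)
```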
10. A system according to any of the preceding claims, wherein said user interface element comprises a track bar with a controllable marker, said marker of said track bar controllable via said user interface input device, and further wherein a position of said marker within said track bar is representative of said user evaluation of content in a data file currently being presented to a user.
11. A system according to claim 10, wherein said track bar comprises markings defining degrees in a range of potential user evaluation possibilities.
12. A system according to claim 11, wherein said markings are non-numerical.
13. A system according to any of claims 10 to 12, wherein said track bar comprises markings at extrema thereof.
14. A system according to any of claims 10 to 13, wherein a position of said marker within said track bar is converted to a numerical value to form said evaluation data.
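One way to implement the marker-position-to-numerical-value conversion of claims 10 to 14, assuming the track bar reports its marker as a pixel offset; the pixel bounds and ten-point scale are arbitrary examples.

```python
def marker_to_score(marker_px: float, bar_min_px: float, bar_max_px: float,
                    scale: float = 10.0) -> float:
    """Map the marker position within the track bar to a numerical evaluation."""
    fraction = (marker_px - bar_min_px) / (bar_max_px - bar_min_px)
    return round(min(max(fraction, 0.0), 1.0) * scale, 2)

# marker_to_score(310, 100, 500) -> 5.25
```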
15. A system according to any of the preceding claims, further comprising initiating presentation of an exercise responsive to receiving evaluation data input in response to presentation of a final content item in said sub-set.
16. A system according to any of the preceding claims, further comprising a response time measurement module for measuring a time between commencement of presentation of content contained in one of said sub-set and said full set, and input of evaluation data.
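The response time measurement module of claim 16 might be sketched with a monotonic clock as below; the class and method names are assumptions for illustration.

```python
import time

class ResponseTimer:
    """Measures the interval between commencement of content presentation
    and the input of evaluation data."""

    def start_presentation(self) -> None:
        self._t0 = time.monotonic()  # content starts playing

    def record_evaluation(self) -> float:
        """Seconds elapsed between presentation start and evaluation input."""
        return time.monotonic() - self._t0
```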
17. A system according to claim 2, or any of claims 3 to 16 when dependent upon claim 2, further comprising a vocal trait removal module operative to alter at least one aspect of audio content in said sub-set of said content prior to presentation via said output device.
18. A system according to claim 17, wherein said at least one aspect comprises: speech flow; loudness; intonation; pitch; and/or intensity of harmonics.
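A sketch of how the vocal trait removal module of claims 17 and 18 might alter two of the listed aspects (pitch and loudness), assuming the third-party librosa and soundfile packages; the two-semitone shift is an arbitrary example, and a real deployment would need more careful processing.

```python
import librosa
import soundfile as sf

def neutralise_voice(in_path: str, out_path: str, semitones: float = -2.0) -> None:
    y, sr = librosa.load(in_path, sr=None)                        # load the audio content
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)  # alter pitch
    peak = max(abs(float(y.max())), abs(float(y.min())), 1e-9)
    sf.write(out_path, y / peak, sr)                              # crude loudness levelling
```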
19. A system according to any of the preceding claims, wherein a variance data threshold value is zero.
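The zero threshold of claim 19 makes the comparison maximally sensitive: reusing the detect_variance sketch given after claim 1 (this fragment assumes those definitions), any non-zero difference between the two grouped data sets is flagged.

```python
records = [EvaluationRecord("accent", "subset", 6.0),
           EvaluationRecord("accent", "fullset", 4.5)]
print(detect_variance(records, thresholds={}))  # -> ['accent'], since |4.5 - 6.0| > 0
```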
20. An assessment module, comprising:
a data file presentation module to retrieve a plurality of data files from a data storage device and to present a sub-set of content of each one of said plurality of data files in sequence to a user via an output device followed by a full set of said content of each one of said plurality of data files in sequence to a user via said output device responsive to input received via a user interface input device of an instruction to initiate presentation of said sequence of data files;
a user interface module for configuring said output device to display a user interface element controllable via said user interface input device, and through which a user can input evaluation data representing a user evaluation of content currently being presented to a user, input evaluation data being stored in said data storage device;
a grouping module for grouping evaluation data stored in said data storage device according to aspect descriptors of the content for each said sub-set of said content of each one of said plurality of data files and each said full set of said content of each one of said plurality of data files, each said aspect descriptor representative of an attribute of a subject of the content, and for storing in said data storage device for each one of a plurality of aspect descriptors:
grouped first evaluation data, comprising evaluation data input responsive to presentation of said sub-set of said content for one or more data files of said plurality containing content that has one of said plurality of aspect descriptors; and grouped second evaluation data, comprising evaluation data input responsive to presentation of said full set of said content for one or more data files of said plurality containing content that has one of said plurality of aspect descriptors;
a retrieval module for retrieving from said data storage device a sub-group of input evaluation data, said sub-group comprising said grouped first and grouped second evaluation data input in relation to one or more data files of said plurality of data files containing content that has one of said plurality of aspect descriptors;
a comparator module for comparing, for each sub-group of input evaluation data, retrieved grouped first evaluation data and retrieved grouped second evaluation data and for providing variance data responsive to said comparison, and for storing in said data storage device variance data representing the difference between said grouped first evaluation data and said grouped second evaluation data for each sub-group of input evaluation data;
a variance data threshold identification module operative to compare, for each sub-group of input evaluation data, the variance data for the sub-group to a variance data threshold for the sub-group and, if variance data for any one of the sub-groups exceeds a corresponding variance data threshold for the sub-group of input evaluation data, to initiate selection of further content for presentation via the output device, the selection based upon the particular at least one sub-group of input evaluation data for which the associated variance data thereof exceeds the variance data threshold for that data set; and a reverse navigation inhibit module operative to determine if a user interface control is operated prior to completion of presentation of said plurality of data files in their entirety in an attempt to initiate transition from a data file currently being presented to a user to a previously presented data file and, responsive to determination of operation of said user interface control, to initiate presentation of a sub-set of content of a first one of said plurality of data files in said sequence.
21. A module according to claim 20, wherein each one of said data files comprises audio-visual content and wherein the module further comprises an audio content extraction module to extract audio content from said data files, wherein said sub-set of said content comprises audio content extracted from each one of said data files and said full set of said content comprises audio-visual content.
22. A module according to claim 20, wherein a portion of said plurality of data files comprises a first plurality of data files containing audio content only, and another portion of said plurality of data files comprises a second plurality of data files containing audio-visual content.
23. A module according to claim 22, wherein said sub-set of said content comprises said first plurality of data files and said full set of said content comprises said second plurality of data files.
24. A module according to claim 22 or 23, wherein each one of said second plurality of data files corresponds to a respective one of said first plurality of data files such that the audio content of a file of said second plurality of data files corresponds to the audio content in a corresponding file of said first plurality of data files.
25. A module according to claim 24, wherein each one of said second plurality of data files comprises data identifying a corresponding one of the first plurality of data files to which the one of the second plurality of data files corresponds.
26. A module according to any of claims 21 to 25, wherein said audio-visual content comprises audio-visual footage of an individual providing a spoken statement, and said audio content comprises audio footage of a same individual providing a same spoken statement.
27. A module according to any of claims 21 to 25, wherein said audio-visual content comprises audio-visual footage of an individual providing a spoken statement, and said audio content comprises audio footage of a third party providing a same spoken statement as that of the individual providing the spoken statement in the audio-visual footage.
28. A module according to claim 26 or 27, wherein each data file further comprises content definition data based upon a plurality of types of attribute of the individual delivering the spoken statement, each one of said plurality of types of attribute corresponding to a respective one of said plurality of aspect descriptors.
29. A module according to any of claims 20 to 28, wherein said user interface element comprises a track bar with a controllable marker, said marker of said track bar controllable via said user interface input device, and further wherein a position of said marker within said track bar is representative of said user evaluation of content in a data file currently being presented to a user.
30. A module according to claim 29, wherein said track bar comprises markings defining degrees in a range of potential user evaluation possibilities.
31. A module according to claim 30, wherein said markings are non-numerical.
32. A module according to any of claims 29 to 31, wherein said track bar comprises markings at extrema thereof.
33. A module according to any of claims 29 to 32, wherein a position of said marker within said track bar is converted to a numerical value to form said evaluation data.
34. A module according to any of claims 20 to 33, further comprising initiating presentation of an exercise responsive to receiving evaluation data input in response to presentation of a final content item in said sub-set.
35. A module according to any of claims 20 to 34, further comprising a response time measurement module for measuring a time between commencement of presentation of content contained in one of said sub-set and said full set, and input of evaluation data.
36. A module according to claim 21, or any of claims 22 to 35 when dependent upon claim 21, further comprising a vocal trait removal module operative to alter at least one aspect of audio content in said sub-set of said content prior to presentation via said output device.
37. A module according to claim 36, wherein said at least one aspect comprises: speech flow; loudness; intonation; pitch; and/or intensity of harmonics.
38. A method comprising the steps of:
retrieving a plurality of data files comprising content from a data storage device;
presenting a sub-set of content of each one of said plurality of data files in sequence to a user via an output device followed by a full set of said content of each one of said plurality of data files in sequence to a user via said output device responsive to input via a user interface input device of an instruction to initiate presentation of said sequence of data files;
configuring said output device to display a user interface element controllable via a user interface input device, and through which a user can input evaluation data representing a user evaluation of content currently being presented to a user;
storing input evaluation data in said data storage device;
grouping evaluation data stored in said data storage device according to aspect descriptors of the content for each said sub-set of said content of each one of said plurality of data files and each said full set of said content of each one of said plurality of data files, each said aspect descriptor representative of an attribute of a subject of the content;
storing in said data storage device for each one of a plurality of aspect descriptors:
grouped first evaluation data, comprising evaluation data input responsive to presentation of said sub-set of said content for one or more data files of said plurality containing content that has one of said plurality of aspect descriptors; and grouped second evaluation data, comprising evaluation data input responsive to presentation of said full set of said content for one or more data files of said plurality containing content that has one of said plurality of aspect descriptors;
retrieving from said data storage device a sub-group of input evaluation data, said sub-group comprising said grouped first and grouped second evaluation data input in relation to one or more data files of said plurality of data files containing content that has one of said plurality of aspect descriptors;
comparing, for each sub-group of input evaluation data, retrieved grouped first evaluation data and retrieved grouped second evaluation data and providing variance data responsive to said comparison;
storing in said data storage device variance data representing the difference between said grouped first evaluation data and said grouped second evaluation data for each sub-group of input evaluation data;
comparing, for each sub-group of input evaluation data, the variance data for the sub-group to a variance data threshold for the sub-group and, if variance data for any one of the sub-groups exceeds a corresponding variance data threshold for the sub-group of input evaluation data, initiating selection of further content for presentation via the output device, the selection based upon the particular at least one sub-group of input evaluation data for which the associated variance data thereof exceeds the variance data threshold for that data set; and determining if a user interface control is operated prior to completion of presentation of said plurality of data files in their entirety in an attempt to initiate transition from a data file currently being presented to a user to a previously presented data file and, responsive to determination of operation of said user interface control, initiating presentation of a sub-set of content of a first one of said plurality of data files in said sequence.
39. A method according to claim 38, wherein each one of said data files comprises audio-visual content, and wherein the method further comprises extracting audio content from said data files, wherein said sub-set of said content comprises audio content extracted from each one of said data files and said full set of said content comprises audio-visual content.
40. A method according to claim 38, wherein a portion of said plurality of data files comprises a first plurality of data files containing audio content only, and another portion of said plurality of data files comprises a second plurality of data files containing audio-visual content.
41. A method according to claim 40, wherein said sub-set of said content comprises said first plurality of data files and said full set of said content comprises said second plurality of data files.
42. A method according to claim 40 or 41, wherein each one of said second plurality of data files corresponds to a respective one of said first plurality of data files such that the audio content of a file of said second plurality of data files corresponds to the audio content in a corresponding file of said first plurality of data files.
43. A method according to claim 42, wherein each one of said second plurality of data files comprises data identifying a corresponding one of the first plurality of data files to which the one of the second plurality of data files corresponds.
44. A method according to any of claims 39 to 43, wherein said audio-visual content comprises audio-visual footage of an individual providing a spoken statement, and said audio content comprises audio footage of a same individual providing a same spoken statement.
45. A method according to any of claims 39 to 43, wherein said audio-visual content comprises audio-visual footage of an individual providing a spoken statement, and said audio content comprises audio footage of a third party providing a same spoken statement as that of the individual providing the spoken statement in the audio-visual footage.
46. A method according to claim 44 or 45, wherein each data file further comprises content definition data based upon a plurality of types of attribute of the individual delivering the spoken statement, each one of said plurality of types of attribute corresponding to a respective one of said plurality of aspect descriptors.
47. A method according to any of claims 38 to 46, wherein said user interface element comprises a track bar with a controllable marker, said marker of said track bar controllable via said user interface input device, and further wherein a position of said marker within said track bar is representative of said user evaluation of content in a data file currently being presented to a user.
48. A method according to claim 47, wherein said track bar comprises markings defining degrees in a range of potential user evaluation possibilities.
49. A method according to claim 48, wherein said markings are non-numerical.
50. A method according to any of claims 47 to 49, wherein said track bar comprises markings at extrema thereof.
51. A method according to any of claims 47 to 50, wherein a position of said marker within said track bar is converted to a numerical value to form said evaluation data.
52. A method according to any of claims 38 to 51, further comprising initiating presentation of an exercise responsive to receiving evaluation data input in response to presentation of a final content item in said sub-set.
53. A method according to any of claims 38 to 52, further comprising measuring a time between commencement of presentation of content contained in one of said sub-set and said full set, and input of evaluation data.
54. A method according to claim 39, or any of claims 40 to 53 when dependent upon claim 39, further comprising altering at least one aspect of audio content in said sub-set of said content prior to presentation via said output device.
55. A method according to claim 54, wherein said at least one aspect comprises: speech flow; loudness; intonation; pitch; and/or intensity of harmonics.
56. A method according to any of claims 38 to 55, wherein a variance data threshold value is zero.
57. A machine readable medium, comprising processor executable instructions executable by a processor to implement a method according to any of claims 38 to 56.

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1809300.5A GB2574587A (en) 2018-06-06 2018-06-06 System, module and method
PCT/GB2019/051410 WO2019234388A1 (en) 2018-06-06 2019-05-22 System, module and method

Publications (2)

Publication Number Publication Date
GB201809300D0 (en) 2018-07-25
GB2574587A (en) 2019-12-18

Family

ID=62975493


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)