CN110517085B - Display report generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110517085B
CN110517085B (application CN201910797205.XA)
Authority
CN
China
Prior art keywords
information
user
watching
work
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910797205.XA
Other languages
Chinese (zh)
Other versions
CN110517085A (en)
Inventor
杨育松
王曦光
王晨
王勇
徐峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinhuanet Co ltd
Original Assignee
Xinhuanet Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinhuanet Co ltd filed Critical Xinhuanet Co ltd
Priority to CN201910797205.XA priority Critical patent/CN110517085B/en
Publication of CN110517085A publication Critical patent/CN110517085A/en
Application granted granted Critical
Publication of CN110517085B publication Critical patent/CN110517085B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165: Evaluating the state of mind, e.g. depression, anxiety
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29: Geographical information databases
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201: Market modelling; Market analysis; Collecting market data

Abstract

Embodiments of the present application provide a display report generation method, an electronic device, and a computer-readable storage medium. The method comprises: acquiring emotion change information of a user while viewing works as first information; collecting information on the works viewed by the user as second information; and generating a display report based on the first information, the second information, and their matching relation in time. The display report presents the emotion information the user exhibited for each work. With this method and device, the user's emotion changes while viewing the works can be observed intuitively, the user's degree of preference for each work can be determined directly from those emotion changes, and the user experience is improved.

Description

Display report generation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer and biological detection technologies, and in particular, to a method for generating a presentation report, an electronic device, and a computer-readable storage medium.
Background
The arts add value to an individual's life and to society as a whole, and participation in artistic activities helps individuals and society develop cognitive and emotional processes. Art attracts viewers emotionally and mentally as well as aesthetically. Generally, the value of a work, especially an artwork, is measured by the audience's experience, but that experience is difficult to observe directly.
In the prior art, after the audience views the works, the audience's experience and degree of preference are generally obtained through questionnaires: for example, after the audience views an exhibition, a questionnaire is sent to them, and they fill in which works they liked, from which their preferences are determined. However, this approach cannot intuitively capture the user's emotion changes during the exhibition or the degree of preference for each work, so observing these more intuitively has become a key problem.
Disclosure of Invention
The present application provides a display report generation method and apparatus, an electronic device, and a computer-readable storage medium, which can solve at least one of the above technical problems. The technical solution is as follows:
in a first aspect, the present application provides a method for generating a presentation report, comprising:
acquiring emotion change information of a user in the process of watching a work as first information;
collecting information of the works watched by the user in the process of watching the works as second information;
generating a display report based on the first information, the second information and the matching relation of the first information and the second information in time;
the work display report is used for displaying emotion information corresponding to each work by the user.
In one possible implementation manner, obtaining emotion change information of a user in a process of watching a work includes:
acquiring a physiological electric signal acquired by a physiological sensor in the process of watching a work by a user;
and determining emotion change information of the user in the process of watching the work based on the acquired physiological electric signal.
In another possible implementation manner, before generating a presentation report based on the first information, the second information, and their matching relation in time, the method further includes:
acquiring position information of a user in the process of watching a work from a positioning sensor;
determining, based on the user's position information while viewing the works, the user's standing position information and travel route information;
and marking the user's standing positions on the travel route to obtain third information, the third information being the travel route information labeled with the standing positions.
In another possible implementation manner, generating a presentation report based on the first information, the second information, and a matching relationship between the first information and the second information in time includes:
generating a presentation report based on the following information:
first information; second information; third information; the matching relation of the first information and the second information in time; and the matching relation of the second information and the third information in position.
In another possible implementation manner, the method further includes:
acquiring fourth information, wherein the fourth information comprises multimedia information collected from a third-person perspective while the user views the works, including: behavior information of the user, the content of the works viewed by the user, and environmental information around the user.
In another possible implementation manner, determining emotion change information of the user in the process of watching the work based on the acquired physiological electric signal includes:
and determining the user's emotion change information during viewing based on the acquired physiological electrical signal and the multimedia information collected from a third-person perspective while the user views the works.
In another possible implementation manner, the presentation report is generated based on the following information:
first information; second information; third information; fourth information; the matching relation of the first information and the second information in time; the matching relation of the second information and the third information in position; and the matching relation of the third information and the fourth information in position.
In another possible implementation manner, the method further includes:
and when a user-triggered display operation for the display report is detected, controlling the display report to be displayed.
In another possible implementation manner, the method further includes:
when a trigger operation by the user on any one of the first, second, third, and fourth information on the display report is detected, determining the display information corresponding to the trigger operation in that information and the corresponding display information in each of the other information;
controlling display of the information corresponding to the trigger operation in that information and of the corresponding display information in the other information, so as to realize linked display of the first, second, third, and fourth information on the display report;
wherein the other information is the information, among the first, second, third, and fourth information, other than the one the trigger operation acted on.
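The linked display described above can be sketched as a lookup that, given a trigger on one piece of information (here a work from the second information), resolves the corresponding display information in the others. The index structures and their names are assumptions for illustration:

```python
def linkage_display(trigger_work, time_match, pos_match_route, pos_match_media):
    """On a trigger for one work, return the linked spans of the other
    information so the report can display them synchronously."""
    return {
        "first":  time_match.get(trigger_work),       # emotion, matched in time
        "third":  pos_match_route.get(trigger_work),  # route point, matched in position
        "fourth": pos_match_media.get(trigger_work),  # multimedia, matched in position
    }
```

A trigger on the first, third, or fourth information would use the same indexes in the reverse direction.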
In a second aspect, the present application provides a presentation report generation apparatus, comprising:
the first acquisition module is used for acquiring emotion change information of a user in the process of watching a work as first information;
the acquisition module is used for acquiring the information of the works watched by the user in the process of watching the works as second information;
the generation module is used for generating a display report based on the first information, the second information and the matching relation of the first information and the second information in time;
the display report is used for displaying emotion information corresponding to each work by the user.
In one possible implementation manner, the first obtaining module includes: an acquisition unit and a determination unit, wherein,
the acquisition unit is used for acquiring a physiological electric signal acquired by the physiological sensor in the process of watching a work by a user;
and the determining unit is used for determining emotion change information of the user in the process of watching the work based on the acquired physiological electric signal.
In another possible implementation manner, the apparatus further includes: a second obtaining module, a first determining module and a labeling module, wherein,
the second acquisition module is used for acquiring the position information of the user in the process of watching the works from the positioning sensor;
the first determination module is used for determining, based on the user's position information while viewing the works, the user's standing position information and travel route information;
and the labeling module is used for marking the user's standing positions on the travel route to obtain third information, the third information being the travel route information labeled with the standing positions.
In another possible implementation manner, the generating module is specifically configured to generate the presentation report based on the following information:
first information; second information; third information; the matching relation of the first information and the second information in time; and the matching relation of the second information and the third information in position.
In another possible implementation manner, the apparatus further includes: a third obtaining module, wherein,
a third obtaining module, configured to obtain fourth information, where the fourth information comprises multimedia information collected from a third-person perspective while the user views the works, including: behavior information of the user, the content of the works viewed by the user, and environmental information around the user.
In another possible implementation manner, the determining unit is specifically configured to determine, based on the acquired physiological electrical signal and multimedia information collected from a third person's perspective during the process of the user watching the work, emotion change information of the user during the process of the user watching the work.
In another possible implementation manner, the generating module is specifically configured to generate the presentation report based on the following information:
first information; second information; third information; fourth information; the matching relation of the first information and the second information in time; the matching relation of the second information and the third information in position; and the matching relation of the third information and the fourth information in position.
In another possible implementation manner, the apparatus further includes: a first control display module, wherein,
and the first control display module is used for controlling display of the display report when a user-triggered display operation for the display report is detected.
In another possible implementation manner, the apparatus further includes: a second determining module and a second control display module, wherein,
the second determining module is used for determining, when a trigger operation by the user on any one of the first, second, third, and fourth information on the display report is detected, the display information corresponding to the trigger operation in that information and the corresponding display information in each of the other information;
the second control display module is used for controlling display of the information corresponding to the trigger operation in that information and of the corresponding display information in the other information, so as to realize linked display of the first, second, third, and fourth information on the display report;
wherein the other information is the information, among the first, second, third, and fourth information, other than the one the trigger operation acted on.
In a third aspect, an electronic device is provided, which includes:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the operations corresponding to the presentation report generation method shown in the first aspect or any possible implementation manner of the first aspect.
In a fourth aspect, there is provided a computer-readable storage medium storing at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method for generating a presentation report as shown in the first aspect or any possible implementation manner of the first aspect.
The beneficial effects brought by the technical solutions provided in the present application are as follows:
Compared with the prior art, in which the audience's viewing experience and degree of preference for each work are obtained through questionnaires, the present application generates a display report based on the user's emotion change information while viewing the works and the content of the works viewed. The user's emotion changes during viewing can thus be observed more intuitively from the display report, and the user's degree of preference for each work can be determined directly from the emotion change information, which saves the user questionnaire time and improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic flowchart of a method for generating a display report according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a display report generation apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device for generating a presentation report according to an embodiment of the present application;
fig. 4 is a schematic diagram of travel route information labeled with standing positions in an embodiment of the present application;
fig. 5 is a schematic diagram of emotion changes of a user during a process of viewing a work in an embodiment of the application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary, serve only to explain the present application, and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. These several specific embodiments may be combined with each other below, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
An embodiment of the present application provides a method for generating a presentation report, as shown in fig. 1, the method includes:
step S101, obtaining emotion change information of a user in the process of watching a work as first information.
The works in the embodiments of the present application may be artworks, but are not limited to artworks; other works that users can view also fall within the protection scope of the present application.
For the embodiments of the present application, the process of a user viewing works may include: viewing works at a science and technology exhibition, viewing works at a painting exhibition, or watching a movie. The embodiments of the present application are not limited in this respect. The user's emotion change information during viewing may be as shown in fig. 5.
And step S102, collecting information of the works watched by the user in the process of watching the works as second information.
For example, step S102 may include: collecting the content of each exhibit the user views at a science and technology exhibition; collecting each painting the user views at a painting exhibition; or collecting each frame of image or each clip the user watches in a movie.
For the embodiment of the present application, step S102 may specifically include: information of a work viewed by a user in the process of viewing the work is collected from a first-person perspective.
For example, the user may wear glasses while viewing a work exhibition, the glasses being equipped with an image acquisition device that collects information on each work the user views. Alternatively, this information may be collected in other ways; the application is not limited to collection through an image acquisition device mounted on glasses.
For the embodiments of the present application, step S101 may be performed before, after, or simultaneously with step S102; the embodiments of the present application are not limited in this respect. The execution order of the steps shown in fig. 1 is only an example and does not limit the present application.
Step S103, generating a display report based on the first information, the second information and the matching relation of the first information and the second information in terms of time.
The display report is used for displaying the emotion information corresponding to each work by the user.
In the embodiments of the present application, while the user views the works, the user's emotion changes are collected and determined in real time, so that the content of the works viewed and the emotion change information have a matching relation in time; based on this matching relation, a display report is generated to present the user's emotion changes while viewing each work.
For example, if a user views work 1 from 8:00 to 8:05 and work 2 from 8:06 to 8:10, the emotion change information corresponding to viewing work 1 (8:00 to 8:05) and to viewing work 2 (8:06 to 8:10) can be obtained, and a display report can be generated from it to present the emotion change information corresponding to each of the two works.
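The time matching in this example can be sketched directly with clock times; the function name and data layout are hypothetical:

```python
from datetime import datetime

def emotions_per_work(samples, schedule):
    """samples: list of (datetime, emotion); schedule: work_id -> (start, end).
    Returns the emotion readings recorded while each work was being viewed."""
    return {
        work: [e for t, e in samples if start <= t <= end]
        for work, (start, end) in schedule.items()
    }
```

With the schedule from the example (work 1 at 8:00 to 8:05, work 2 at 8:06 to 8:10), each emotion sample falls into the window of exactly one work.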
Compared with the prior art, in which the audience's viewing experience and degree of preference for each work are obtained through questionnaires, generating the display report from the user's emotion change information and the content of the works viewed means that the user's emotion changes during viewing can be observed more intuitively from the report, and the degree of preference for each work can be determined directly from the emotion change information, which saves the user questionnaire time and improves the user experience.
In a possible implementation manner of the embodiment of the present application, step S101 may specifically include: acquiring a physiological electric signal acquired by a physiological sensor in the process of watching a work by a user; and determining emotion change information of the user in the process of watching the work based on the acquired physiological electric signal.
For the embodiments of the present application, the physiological signal collected by the physiological sensor while the user views the works may include at least one of: an electrocardiogram (ECG) signal, a skin-conductance signal, an electroencephalogram (EEG) signal, and an eye-movement signal.
For the embodiments of the present application, the user may wear physiological sensors while viewing each work, so that the physiological electrical signals (at least one of ECG, skin-conductance, EEG, and eye-movement signals) are acquired during viewing, and the user's emotion change information for each work is determined from those signals.
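A coarse mapping from physiological readings to an emotion label might look like the following. The thresholds and labels are purely illustrative assumptions; the application does not specify the classification algorithm, and a deployed system would use a model trained on the ECG/EEG/skin-conductance/eye-movement data:

```python
def classify_emotion(heart_rate_bpm, skin_conductance_us):
    """Map two physiological readings to a coarse emotion label.
    Thresholds are illustrative only, not part of the described method."""
    if heart_rate_bpm > 100 and skin_conductance_us > 8.0:
        return "excited"   # high arousal on both channels
    if heart_rate_bpm < 60:
        return "calm"      # low heart rate, relaxed state
    return "neutral"
```

Applying such a classifier to each timestamped reading yields the emotion change information used as first information.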
In another possible implementation manner of the embodiment of the present application, before the step S103, the method may further include: step Sa (not shown), step Sb (not shown), and step Sc (not shown), wherein,
and step Sa, obtaining the position information of the user in the process of watching the works from the positioning sensor.
For the embodiments of the present application, the position information may come from a physiological sensor with a positioning function worn by the user, from a separate positioning sensor worn by the user, or from glasses with a positioning function worn during viewing. That is, the positioning sensor may be a separate sensor, integrated with other physiological sensors, or integrated with the glasses; the application is not limited in this respect.
Step Sb, determining the user's standing position information and travel route information while viewing the works based on the user's position information while viewing the works.
For the embodiments of the present application, each user's travel route and standing positions during viewing can be determined from the position information reported by the positioning sensor. If the user is detected to stay at a position longer than a preset time threshold, the position information of that location is recorded as standing position information.
Step Sc, marking the user's standing positions on the travel route to obtain third information.
The third information is the travel route information labeled with the standing positions.
In general, the longer a user stays at a position while viewing works, the more interested the user is in the work at that position; marking standing positions on the travel route therefore helps identify the works the user is interested in.
For example, fig. 4 shows travel route information labeled with standing positions, where the solid dots represent the positions at which the user stood while viewing the works.
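Steps Sb and Sc can be sketched as a pass over the positioning samples that keeps any position whose dwell time exceeds the preset threshold. The 30-second default and the (timestamp, x, y) layout are assumptions for illustration:

```python
def standing_positions(track, dwell_threshold_s=30.0):
    """track: list of (timestamp_s, x, y) positioning samples, in time order.
    A position where the user stays longer than the threshold is recorded
    as a standing position; the threshold value is illustrative."""
    standing = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        if (x0, y0) == (x1, y1) and (t1 - t0) >= dwell_threshold_s:
            standing.append((x0, y0))
    return standing
```

The full track gives the travel route, and the returned points are the standing positions to be marked on it (the third information).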
In the embodiment of the present application, the execution sequence of steps Sa, Sb, and Sc with steps S101 and S102 is not limited.
In another possible implementation manner of the embodiment of the present application, step S103 may specifically include: step S1031 (not shown in the figure), in which,
step S1031, generating a display report based on the following information:
first information; second information; third information; the matching relation of the first information and the second information in time; and the matching relation of the second information and the third information in position.
Specifically, when steps Sa, Sb, and Sc are performed, step S103 may include step S1031.
In the embodiments of the present application, the presentation report generated in step S1031 can display the user's emotion change information during viewing, the content of the works viewed, and the travel route information labeled with standing positions, as well as the association relations among the first, second, and third information.
For the embodiments of the present application, when a user-triggered display instruction for the presentation report is received, the generated report is displayed; when a particular work is shown, the user's emotion change information while viewing that work and the corresponding standing position information can be displayed at the same time.
In another possible implementation manner of the embodiment of the present application, the method further includes: step Sd (not shown in the figure), in which,
and step Sd, acquiring fourth information.
Wherein the fourth information includes multimedia information collected from a third-person perspective while the user views the works.
The multimedia information collected from the third-person perspective includes: behavior information of the user, the content of the works viewed by the user, and environmental information around the user.
For the embodiments of the present application, based on the user's current position located by the positioning sensor, the camera at the corresponding position is controlled to record, so that the user's behavior information, the content of the works viewed, and the surrounding environmental information are collected from a third-person perspective.
For the embodiments of the present application, the behavior information is the user's behavior while viewing the works, for example using a mobile phone or making a call. The behavior information may further include the user's facial expression information.
In another possible implementation manner of the embodiment of the present application, step S101 may specifically include: step S1011 (not shown in the figure), wherein,
Step S1011: determining emotion change information of the user during viewing of the works based on the acquired physiological electric signal and the multimedia information collected from the third-person perspective while the user views the works.
For the embodiment of the application, if the behavior information corresponding to the user while viewing a certain work is, for example, using a mobile phone, it can be determined that the detected emotion change corresponding to that work is not the user's real emotion toward the work. Furthermore, the multimedia information collected from the third-person perspective may include facial expression information of the user while viewing the works. Therefore, emotion change information determined based on both the acquired physiological electric signal and the multimedia information collected from the third-person perspective has higher accuracy.
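The reasoning above — discarding emotion readings recorded while the user's third-person behavior suggests distraction — can be sketched as a simple filter. The sample layout and behavior labels are illustrative assumptions, not part of the patent:

```python
def filter_emotion_samples(samples):
    """Keep only emotion samples recorded while the user was actually
    attending to the work, dropping those logged during distracting
    behaviors observed from the third-person perspective."""
    distracting = {"using_phone", "making_call"}
    return [s for s in samples if s["behavior"] not in distracting]

samples = [
    {"t": 0, "emotion": "calm",    "behavior": "viewing"},
    {"t": 1, "emotion": "excited", "behavior": "using_phone"},  # discarded
    {"t": 2, "emotion": "happy",   "behavior": "viewing"},
]
print([s["emotion"] for s in filter_emotion_samples(samples)])  # ['calm', 'happy']
```

In a fuller implementation, facial expression information from the same multimedia stream could additionally be fused with the physiological signal rather than used only as a filter.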
In another possible implementation manner of the embodiment of the present application, step S1031 may specifically include: step S1031a (not shown in the figure), wherein,
step S1031a, generating a presentation report based on the following information:
first information; second information; third information; fourth information; the matching relation of the first information and the second information in time; the matching relation of the second information and the third information in position; and the matching relation of the third information and the fourth information in position.
For the embodiment of the present application, the generated presentation report may include: the first information (emotion change information of the user while viewing the works), the second information (information of the works viewed by the user), the third information (driving route information marked with standing positions), the fourth information (multimedia information collected from the third-person perspective while the user views the works), the matching relation of the first information and the second information in time, the matching relation of the second information and the third information in position, and the matching relation of the third information and the fourth information in position.
For the embodiment of the application, preset information of each work viewed by the user may also be determined as fifth information based on the second information. The presentation report can then be generated based on the first, second, third, fourth, and fifth information; or based on the first, fifth, third, and fourth information; or based on at least two of the first, second, third, fourth, and fifth information.
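As a rough illustration of how such a report could be assembled from the matching relations, the sketch below joins emotion samples to works by time and works to standing positions by position. All field names and the record layout are assumptions for illustration only:

```python
def build_report(emotions, works, route_stops):
    """Join the three information streams: emotions matched to works in
    time, works matched to standing positions on the route in position."""
    report = []
    for work in works:
        # time match: emotion samples recorded while this work was viewed
        matched_emotions = [e["emotion"] for e in emotions
                            if work["start"] <= e["t"] < work["end"]]
        # position match: the standing position at which the work was viewed
        stop = next((s for s in route_stops
                     if s["position"] == work["position"]), None)
        report.append({"work": work["name"],
                       "emotions": matched_emotions,
                       "standing_position": stop})
    return report

emotions = [{"t": 5, "emotion": "happy"}, {"t": 15, "emotion": "calm"}]
works = [{"name": "Painting A", "start": 0, "end": 10, "position": "P1"}]
stops = [{"position": "P1", "dwell_s": 40}]
print(build_report(emotions, works, stops))
```

The fourth information (third-person multimedia) could be joined to each record in the same way, by matching clip positions against the standing positions.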
For the embodiment of the application, the generated presentation report can be a common electronic report or an electronic report which can be displayed in a linkage manner. The embodiments of the present application are not limited.
In another possible implementation manner of the embodiment of the present application, the method may further include: step Se (not shown in the figure) in which,
Step Se: when a presentation report display operation triggered by the user is detected, controlling the presentation report to be displayed.
With the embodiment of the present application, when a presentation report display operation triggered by a user is detected, the presentation report generated in step S1031a is controlled to be displayed.
For the embodiment of the application, if the presentation report to be displayed is a common presentation report, it can be displayed on any screen. If the presentation report to be displayed supports linked display, linked display is performed in the display modes of step Sf and step Sg.
In another possible implementation manner of the embodiment of the present application, the method may further include: step Sf (not shown in the figure) and step Sg (not shown in the figure), wherein,
Step Sf: when a trigger operation of the user on any one of the first information, the second information, the third information, and the fourth information on the presentation report is detected, determining the display information corresponding to the trigger operation in that information, and the corresponding display information in each of the other pieces of information.
Step Sg: controlling display of the display information corresponding to the trigger operation in that information and the corresponding display information in the other pieces of information, so as to realize linked display of the first information, the second information, the third information, and the fourth information on the presentation report.
The other information is information except any one of the first information, the second information, the third information and the fourth information.
For example, when it is detected that the user triggers the display of any work, the emotion change information of the user while viewing that work, the multimedia information collected from the third-person perspective while the work was viewed, and the corresponding position of the user on the driving route marked with standing positions are determined based on the matching relations, and their display is controlled.
For another example, when the user clicks a certain position on the driving route marked with standing positions, the work information corresponding to that position, the corresponding emotion information of the user, and the multimedia information monitored from the third-person perspective are determined, and their display is controlled.
For the embodiment of the application, displaying the emotion change information of the user while viewing the works, the information of the works viewed, the driving route information marked with standing positions, and the multimedia information collected from the third-person perspective in a linked manner can improve the richness of the displayed presentation report and the convenience of determining the user's emotion changes while viewing each work, thereby improving the user experience.
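The linked display of steps Sf and Sg can be sketched as a lookup over shared report records: triggering any one field surfaces the matched entries of all the others. The record layout and clip names are illustrative assumptions:

```python
# One record per work, already joined via the time/position matching relations.
report = [
    {"work": "Painting A",  "emotion": "happy",
     "position": "P1", "clip": "cam1_0005.mp4"},
    {"work": "Sculpture B", "emotion": "calm",
     "position": "P2", "clip": "cam2_0012.mp4"},
]

def linked_display(field, value):
    """Return every record to display when the user triggers `field == value`,
    so the work, emotion, route position, and clip appear together."""
    return [r for r in report if r[field] == value]

# Clicking position P2 on the route surfaces the matched work, emotion, clip:
print(linked_display("position", "P2"))
```

Triggering a work name (`linked_display("work", "Painting A")`) would symmetrically surface its emotion, route position, and third-person clip.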
The above embodiment introduces the method for generating a presentation report from the perspective of the method flow; the following embodiment introduces the apparatus for generating a presentation report from the perspective of virtual modules or virtual units. The apparatus corresponds to the above method embodiment, and is specifically as follows:
an embodiment of the present application provides a display report generating apparatus, and as shown in fig. 2, the display report generating apparatus 20 may include: a first acquisition module 21, an acquisition module 22, and a generation module 23, wherein,
the first obtaining module 21 is configured to obtain, as the first information, emotion change information of the user during the process of viewing the work.
And the acquisition module 22 is used for acquiring information of the work watched by the user in the process of watching the work as second information.
The generating module 23 is configured to generate a display report based on the first information, the second information, and a matching relationship between the first information and the second information in time.
The display report is used for displaying emotion information corresponding to each work by the user.
In another possible implementation manner of the embodiment of the present application, the first obtaining module 21 includes: an acquisition unit and a determination unit, wherein,
the acquisition unit is used for acquiring a physiological electric signal acquired by the physiological sensor in the process of watching a work by a user;
and the determining unit is used for determining emotion change information of the user in the process of watching the work based on the acquired physiological electric signal.
In another possible implementation manner of the embodiment of the present application, the apparatus 20 further includes: a second obtaining module, a first determining module and a labeling module, wherein,
the second acquisition module is used for acquiring the position information of the user in the process of watching the works from the positioning sensor;
the first determination module is used for determining the standing position information and the driving route information of the user in the process of watching the work based on the position information of the user in the process of watching the work;
and the marking module is used for marking the standing position of the user in the process of watching the work in the driving route information to obtain third information.
And the third information is the driving route information marked with the standing position.
In another possible implementation manner of the embodiment of the present application, the generating module 23 is specifically configured to generate the presentation report based on the following information:
first information; second information; third information; the matching relation of the first information and the second information in time; and the matching relation of the second information and the third information in position.
In another possible implementation manner of the embodiment of the present application, the apparatus 20 further includes: a third obtaining module, wherein,
and the third acquisition module is used for acquiring the fourth information.
Wherein the fourth information includes: the user collects multimedia information from the third person's viewpoint during the process of viewing the work.
Wherein, the multimedia information collected from the third-person perspective includes: behavior information of the user, the content of the works viewed by the user, and environmental information around the user.
In another possible implementation manner of the embodiment of the application, the determining unit is specifically configured to determine emotion change information of the user in the process of watching the work based on the acquired physiological electrical signal and multimedia information acquired from a third person's perspective in the process of watching the work.
In another possible implementation manner of the embodiment of the present application, the generating module 23 is specifically configured to generate the presentation report based on the following information:
first information; second information; third information; fourth information; the matching relation of the first information and the second information in time; the matching relation of the second information and the third information in position; and the matching relation of the third information and the fourth information in position.
In another possible implementation manner of the embodiment of the present application, the apparatus 20 further includes: a first control display module, wherein,
and the first control display module is used for controlling display of the display report when the display operation of the display report triggered by the user is detected.
In another possible implementation manner of the embodiment of the present application, the apparatus 20 further includes: a second determining module and a second control display module, wherein,
the second determining module is configured to, when a trigger operation of the user on the presentation report for any one of the first information, the second information, the third information, and the fourth information is detected, determine the display information corresponding to the trigger operation in that information and the corresponding display information in each of the other pieces of information;
the second control display module is configured to control display of the display information corresponding to the trigger operation in that information and the corresponding display information in the other pieces of information, so as to realize linked display of the first information, the second information, the third information, and the fourth information on the presentation report;
the other information is information except any one of the first information, the second information, the third information and the fourth information.
It should be noted that the terms "first", "second", and the like in the embodiments of the present application are only used to distinguish devices, modules, or units; they neither limit these to being different devices, modules, or units, nor limit the order or interdependence of the functions they perform.
The embodiment of the application provides an apparatus for generating a presentation report. Compared with the prior art, in which the viewing experience of the audience and their degree of fondness for each work are obtained through a questionnaire survey, the embodiment of the application generates the presentation report based on the emotion change information of the user during viewing and the content of the works viewed. The emotion changes of the user can thus be observed more intuitively from the presentation report, and the user's degree of fondness for each work can be determined intuitively, so that the time spent on questionnaire surveys can be saved and the user experience improved.
The display report generation apparatus of this embodiment can execute the display report generation method shown in any of the above embodiments of this application, and the implementation principles thereof are similar and will not be described herein again.
The above embodiments describe the method for generating a presentation report from the perspective of the method flow and the apparatus for generating a presentation report from the perspective of virtual modules or virtual units. The following describes, from the perspective of a physical device structure, an electronic device for executing the above method embodiments, specifically as follows:
an embodiment of the present application provides an electronic device, as shown in fig. 3, an electronic device 3000 shown in fig. 3 includes: a processor 3001 and a memory 3003. The processor 3001 is coupled to the memory 3003, such as via a bus 3002. Optionally, the electronic device 3000 may further comprise a transceiver 3004. It should be noted that the transceiver 3004 is not limited to one in practical applications, and the structure of the electronic device 3000 is not limited to the embodiment of the present application.
The processor 3001 may be a CPU, general purpose processor, DSP, ASIC, FPGA or other programmable logic device, transistor logic device, hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 3001 may also be a combination of computing functions, e.g., comprising one or more microprocessors, a combination of a DSP and a microprocessor, or the like.
Bus 3002 may include a path that conveys information between the aforementioned components. The bus 3002 may be a PCI bus or an EISA bus, etc. The bus 3002 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 3, but this does not mean only one bus or one type of bus.
Memory 3003 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an EEPROM, a CD-ROM or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 3003 is used for storing application program codes for performing the present scheme, and is controlled to be executed by the processor 3001. The processor 3001 is configured to execute application program code stored in the memory 3003 to implement any of the method embodiments shown above.
For the embodiment of the present application, the electronic device 3000 may be a terminal device or a server. The embodiments of the present application are not limited.
An embodiment of the present application provides an electronic device, where the electronic device includes: a memory and a processor; and at least one program stored in the memory which, when executed by the processor, implements the following: a presentation report is generated based on the emotion change information of the user during viewing of the works and the content of the works viewed. The emotion changes of the user can thus be observed more intuitively from the presentation report, and the user's degree of fondness for each work can be determined intuitively, so that the time spent on questionnaire surveys can be saved and the user experience improved.
The present application provides a computer-readable storage medium on which a computer program is stored; when run on a computer, the program enables the computer to execute the corresponding content of the foregoing method embodiments. Compared with the prior art, a presentation report is generated based on the emotion change information of the user during viewing of the works and the content of the works viewed, so that the emotion changes of the user can be observed more intuitively from the presentation report, and the user's degree of fondness for each work can be determined intuitively, thereby saving the time spent on questionnaire surveys and improving the user experience.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be performed at different times, and are not necessarily performed in sequence but may be performed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present invention. It should be noted that, for those skilled in the art, various modifications and refinements can be made without departing from the principle of the present invention, and these modifications and refinements shall also fall within the protection scope of the present invention.

Claims (14)

1. A method for generating a presentation report, comprising:
acquiring emotion change information of a user in the process of watching a work as first information;
collecting information of the works watched by the user in the process of watching the works as second information;
acquiring third information and fourth information, wherein the third information is driving route information marked with a standing position; the fourth information includes: multimedia information collected by a user from a third person's view angle in the process of watching the works; the standing position is a position with the staying time larger than a preset time threshold; the multimedia information collected from the third person's view angle includes: behavior information of the user, content of a work viewed by the user, and environmental information around the user;
generating a display report based on the first information, the second information, the third information, the fourth information, and the matching relationship of the first information and the second information in time, the matching relationship of the second information and the third information in position, and the matching relationship of the third information and the fourth information in position; the display report is used for displaying emotion information corresponding to each work by the user.
2. The method of claim 1, wherein obtaining information about emotional changes of the user during the process of viewing the work comprises:
acquiring a physiological electric signal acquired by a physiological sensor in the process of watching a work by a user;
and determining emotion change information of the user in the process of watching the work based on the acquired physiological electric signal.
3. The method of claim 1 or 2, wherein the obtaining third information comprises:
acquiring position information of a user in the process of watching a work from a positioning sensor;
determining the standing position information and the driving route information of the user in the process of watching the work based on the position information of the user in the process of watching the work;
and marking the standing position of the user in the process of watching the work in the running route information to obtain the third information.
4. The method of claim 3, wherein generating the presentation report based on the first information, the second information and the matching relationship between the first information and the second information in time comprises:
generating a presentation report based on the following information:
first information; second information; third information; the first information and the second information are in a matching relation in time; and the second information and the third information are in a matching relationship on positions.
5. The method of claim 4, wherein determining emotional change information of the user during the viewing of the work based on the obtained physiological electrical signal comprises:
and determining emotion change information of the user in the process of watching the work based on the acquired physiological electric signal and multimedia information acquired from a third person perspective in the process of watching the work by the user.
6. The method of claim 1, further comprising:
and when the display operation of the display report triggered by the user is detected, controlling the display of the display report.
7. The method of claim 1, further comprising:
when a triggering operation of a user on the display report aiming at any one of the first information, the second information, the third information and the fourth information is detected, determining display information corresponding to the triggering operation in any one of the information and display information corresponding to other information respectively;
controlling to display corresponding display information of the trigger operation in any one of the information and display information corresponding to the other information respectively so as to realize linkage display of the first information, the second information, the third information and the fourth information on the display report;
the other information is information other than any one of the first information, the second information, the third information, and the fourth information.
8. An apparatus for generating a presentation report, comprising:
the first acquisition module is used for acquiring emotion change information of a user in the process of watching a work as first information;
the acquisition module is used for acquiring the information of the works watched by the user in the process of watching the works as second information;
the information acquisition module is used for acquiring third information and fourth information, wherein the third information is driving route information marked with a standing position; the fourth information includes: multimedia information collected by a user from a third person's view angle in the process of watching the works; the standing position is a position with the staying time larger than a preset time threshold; the multimedia information collected from the third person's view angle includes: behavior information of the user, content of a work viewed by the user, and environmental information around the user;
a generating module, configured to generate a display report based on the first information, the second information, the third information, the fourth information, and a temporal matching relationship between the first information and the second information, a location matching relationship between the second information and the third information, and a location matching relationship between the third information and the fourth information;
the display report is used for displaying emotion information corresponding to each work by the user.
9. The apparatus of claim 8, wherein the first obtaining module comprises: an acquisition unit and a determination unit, wherein,
the acquisition unit is used for acquiring the physiological electric signal acquired by the physiological sensor in the process of watching the works by the user;
the determining unit is used for determining emotion change information of the user in the process of watching the work based on the acquired physiological electric signal.
10. The apparatus according to claim 8 or 9, wherein the information obtaining module further comprises: a second obtaining module, a first determining module and a labeling module, wherein,
the second acquisition module is used for acquiring the position information of the user in the process of watching the works from the positioning sensor;
the first determination module is used for determining the standing position information and the driving route information of the user in the process of watching the work based on the position information of the user in the process of watching the work;
and the marking module is used for marking the standing position of the user in the process of watching the works in the running route information to obtain the third information.
11. The apparatus of claim 10,
the generation module is specifically configured to generate a presentation report based on the following information:
first information; second information; third information; the first information and the second information are in a matching relation in time; and the second information and the third information are in a matching relationship on positions.
12. The apparatus of claim 8, further comprising: a second determining module and a second control display module, wherein,
the second determining module is configured to, when a trigger operation of a user on the presentation report for any one of the first information, the second information, the third information, and the fourth information is detected, determine display information corresponding to the trigger operation in the any one of the information, and display information corresponding to the trigger operation in other information;
the second control display module is configured to control to display corresponding display information in any one of the information and corresponding display information in the other information respectively, so as to realize linked display of the first information, the second information, the third information, and the fourth information on the display report;
the other information is information other than any one of the first information, the second information, the third information, and the fourth information.
13. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to: execute the method for generating a presentation report according to any one of claims 1 to 7.
14. A computer readable storage medium storing at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a method of generating a presentation report according to any one of claims 1 to 7.
CN201910797205.XA 2019-08-27 2019-08-27 Display report generation method and device, electronic equipment and storage medium Active CN110517085B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910797205.XA CN110517085B (en) 2019-08-27 2019-08-27 Display report generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910797205.XA CN110517085B (en) 2019-08-27 2019-08-27 Display report generation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110517085A CN110517085A (en) 2019-11-29
CN110517085B true CN110517085B (en) 2022-06-07

Family

ID=68627191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910797205.XA Active CN110517085B (en) 2019-08-27 2019-08-27 Display report generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110517085B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104202718A (en) * 2014-08-05 2014-12-10 百度在线网络技术(北京)有限公司 Method and device for providing information for user
CN105615902A (en) * 2014-11-06 2016-06-01 北京三星通信技术研究有限公司 Emotion monitoring method and device
US9788777B1 (en) * 2013-08-12 2017-10-17 The Neilsen Company (US), LLC Methods and apparatus to identify a mood of media
CN108225366A (en) * 2016-12-21 2018-06-29 丰田自动车株式会社 Car-mounted device and route information prompt system
CN108693974A (en) * 2018-05-11 2018-10-23 新华网股份有限公司 Data processing method, system and nonvolatile computer storage media
CN109211261A (en) * 2017-07-06 2019-01-15 新华网股份有限公司 data display method and device
CN109766759A (en) * 2018-12-12 2019-05-17 成都云天励飞技术有限公司 Emotion identification method and Related product

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120233633A1 (en) * 2011-03-09 2012-09-13 Sony Corporation Using image of video viewer to establish emotion rank of viewed video
US20150088542A1 (en) * 2013-09-26 2015-03-26 Be Labs, Llc System and method for correlating emotional or mental states with quantitative data
CN107463874A (en) * 2017-07-03 2017-12-12 华南师范大学 The intelligent safeguard system of Emotion identification method and system and application this method
CN108764010A (en) * 2018-03-23 2018-11-06 姜涵予 Emotional state determines method and device
CN110096613B (en) * 2019-04-12 2021-07-20 北京奇艺世纪科技有限公司 Video recommendation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110517085A (en) 2019-11-29

Similar Documents

Publication Publication Date Title
US20150234457A1 (en) System and method for content provision using gaze analysis
US9934425B2 (en) Collection of affect data from multiple mobile devices
US9204836B2 (en) Sporadic collection of mobile affect data
JP6424357B2 (en) Visual target efficiency measurement device
KR102039427B1 (en) Smart glass
US20140201207A1 (en) Mental state data tagging for data collected from multiple sources
CN112272302A (en) Multimedia resource display method, device, system and storage medium
JP2015528120A (en) Selective enhancement of parts of the display based on eye tracking
EP2954505B1 (en) Adding user-selected mark-ups to a video stream
CN107851324B (en) Information processing system, information processing method, and recording medium
CN111753135B (en) Video display method, device, terminal, server, system and storage medium
US9491507B2 (en) Content providing program, content providing method, and content providing apparatus
KR20130088645A (en) Method for providing advertising using eye-gaze
US20130024775A1 (en) Information processing apparatus, information processing method, and program
KR20140052263A (en) Contents service system, method and apparatus for service contents in the system
CN113553472B (en) Information display method and device, electronic equipment and storage medium
US20130052621A1 (en) Mental state analysis of voters
CN110517085B (en) Display report generation method and device, electronic equipment and storage medium
JP5115763B2 (en) Image processing apparatus, content distribution system, image processing method, and program
CN113255431B (en) Reminding method and device for remote teaching and head-mounted display equipment
CN115407879A (en) Information display method, device, equipment and storage medium
WO2014106216A1 (en) Collection of affect data from multiple mobile devices
WO2017192130A1 (en) Apparatus and method for eye tracking to determine types of disinterested content for a viewer
DE112019002928T5 (en) Terminal device, information processing method and program
CN111107293A (en) 360-degree video recording method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant