CN112887771A - Video evaluation method and device, computer readable medium and electronic equipment - Google Patents


Info

Publication number
CN112887771A
Authority
CN
China
Prior art keywords
video
user
evaluation
data
watching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110115454.3A
Other languages
Chinese (zh)
Inventor
吴彦云
黄顺明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110115454.3A priority Critical patent/CN112887771A/en
Publication of CN112887771A publication Critical patent/CN112887771A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42201Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N21/4415Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/475End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4756End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for rating content, e.g. scoring a recommended movie

Abstract

The disclosure provides a video evaluation method, a video evaluation device, a computer readable medium and an electronic device, and relates to the technical field of data processing. The method comprises the following steps: acquiring biometric data of at least one user while watching a video; calculating an evaluation reference value for each user according to the biometric data; and determining an evaluation result of the video according to the evaluation reference values of the users watching the video. On one hand, evaluating the biometric data collected while users watch the video provides a new evaluation dimension for the video; on the other hand, since the evaluation result is calculated from the users' biometric data rather than from traditional manual scoring, the result is more objective and accurate.

Description

Video evaluation method and device, computer readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a video evaluation method, a video evaluation device, a computer-readable medium, and an electronic device.
Background
With the advent of the electronic information age, people have come to rely on various types of video to obtain information and entertainment. Common types of video include movies, television shows, live broadcasts, short videos, and the like. Faced with a huge number of videos, users often refer to online ratings to decide whether to open a video for viewing. For example, for videos such as movies and television shows, users who have watched them can leave related information by scoring and commenting, for reference by other users.
In the related art, common evaluation methods include the following: (1) weighted scoring based on user click evaluations; (2) weighted scoring based on statistics of a video's watching duration and number of viewers; (3) weighted scoring according to the user's degree of attention and search frequency; (4) converting users' text reviews after viewing into scores. However, these evaluation methods rely on subjective evaluation by the user, so the obtained evaluation result is relatively subjective and cannot objectively reflect the actual situation.
Disclosure of Invention
The present disclosure is directed to a novel video evaluation method, a video evaluation apparatus, a computer-readable medium, and an electronic device, thereby improving the objectivity and accuracy of a video evaluation result at least to some extent.
According to a first aspect of the present disclosure, there is provided a video evaluation method, including: acquiring biometric data of at least one user while watching a video; calculating an evaluation reference value for each user according to the biometric data; and determining an evaluation result of the video according to the evaluation reference value of each user watching the video.
According to a second aspect of the present disclosure, there is provided a video evaluation apparatus including: a data acquisition module for acquiring biometric data of at least one user while watching a video; a characteristic value calculation module for calculating an evaluation reference value of the user according to the biometric data; and a video evaluation module for determining an evaluation result of the video according to the evaluation reference value of the user watching the video.
According to a third aspect of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the above-mentioned method.
According to a fourth aspect of the present disclosure, there is provided an electronic apparatus, comprising:
a processor; and
a memory for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the above-described method.
According to the video evaluation method provided by the embodiments of the disclosure, the evaluation result of a video is determined by collecting users' biometric data while they watch the video. On one hand, evaluating this biometric data provides a new evaluation dimension for the video; on the other hand, since the evaluation result is determined from the biometric data rather than from traditional manual user scoring, the obtained result is more objective and accurate.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which embodiments of the present disclosure may be applied;
FIG. 2 shows a schematic diagram of an electronic device to which embodiments of the present disclosure may be applied;
FIG. 3 schematically illustrates a flow chart of a video evaluation method in an exemplary embodiment of the disclosure;
FIG. 4 schematically illustrates a flow chart of a method of acquiring biometric data in an exemplary embodiment of the disclosure;
FIG. 5 schematically illustrates a recommendation interface in an exemplary embodiment of the disclosure;
FIG. 6 schematically illustrates another recommendation interface diagram in an exemplary embodiment of the present disclosure;
FIG. 7 schematically illustrates a flow chart of another method of acquiring biometric data in an exemplary embodiment of the disclosure;
fig. 8 schematically illustrates a composition diagram of a video evaluation apparatus in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 is a schematic diagram illustrating a system architecture of an exemplary application environment to which a video evaluation method and apparatus according to an embodiment of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few. The terminal devices 101, 102, 103 may be various electronic devices having a data processing function, including but not limited to desktop computers, portable computers, televisions, smart phones, tablet computers, and the like. It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The video evaluation method provided by the embodiment of the present disclosure is generally executed by the server 105, and accordingly, the video evaluation apparatus is generally disposed in the server 105. However, it is easily understood by those skilled in the art that the video evaluation method provided in the embodiment of the present disclosure may also be executed by the terminal devices 101, 102, and 103, and accordingly, the video evaluation apparatus may also be disposed in the terminal devices 101, 102, and 103, which is not particularly limited in this exemplary embodiment. For example, in an exemplary embodiment, the terminal devices 101, 102, and 103 may acquire biometric data of respective users when viewing respective videos, the server 105 acquires the biometric data acquired by the respective terminal devices, and then calculates an evaluation reference value of each user for the biometric data corresponding to each video, thereby determining an evaluation result corresponding to the video.
The exemplary embodiment of the present disclosure provides an electronic device for implementing a video evaluation method, which may be the terminal device 101, 102, 103 or the server 105 in fig. 1. The electronic device includes at least a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the video evaluation method via execution of the executable instructions.
The following takes the mobile terminal 200 in fig. 2 as an example, and exemplifies the configuration of the electronic device. It will be appreciated by those skilled in the art that the configuration of figure 2 can also be applied to fixed type devices, in addition to components specifically intended for mobile purposes. In other embodiments, mobile terminal 200 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. The interfacing relationship between the components is only schematically illustrated and does not constitute a structural limitation of the mobile terminal 200. In other embodiments, the mobile terminal 200 may also interface differently than shown in fig. 2, or a combination of multiple interfaces.
As shown in fig. 2, the mobile terminal 200 may specifically include: a processor 210, an internal memory 221, an external memory interface 222, a Universal Serial Bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor module 280, a display 290, a camera module 291, an indicator 292, a motor 293, a button 294, and a Subscriber Identity Module (SIM) card interface 295. Wherein the sensor module 280 may include an optical heart rate sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, and the like.
Processor 210 may include one or more processing units, such as: the Processor 210 may include an Application Processor (AP), a modem Processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural-Network Processing Unit (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors.
A memory is provided in the processor 210. The memory may store instructions for implementing six modular functions: detection instructions, connection instructions, information management instructions, analysis instructions, data transmission instructions, and notification instructions, and execution is controlled by processor 210.
The mobile terminal 200 implements a display function through the GPU, the display screen 290, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 290 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information. In some embodiments, the mobile terminal 200 may implement the purpose of playing video on the mobile terminal display screen 290 through the GPU, the display screen 290, the application processor, and the like.
The mobile terminal 200 may implement an audio function through the audio module 270, the speaker 271, the receiver 272, the microphone 273, the earphone interface 274, the application processor, and the like. Such as sound playback, recording, etc. in a video.
Optical heart rate sensor 2801 is used to measure heart rate and other biometric indicators by photoplethysmography (PPG). In some embodiments, the optical heart rate sensor 2801 may be disposed inside the mobile terminal 200 and used directly to collect the user's heart rate data while the user is watching a video.
The pressure sensor 2802 is used to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 2802 may be located inside the mobile terminal 200 and used directly to collect the user's blood pressure data while the user is watching a video.
The gyro sensor 2803 may be used to determine a motion gesture of the mobile terminal 200. In addition, sensors with other functions, such as a depth sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc., may be provided in the sensor module 280 according to actual needs.
In the related art, when a video is evaluated, an evaluation dialog box usually pops up after the video finishes playing, and the user can select a corresponding evaluation in the dialog box according to his or her own subjective feeling. For example, for movies, many applications adopt a 10-point scale: after the movie finishes playing, a rating window pops up so that the user can rate the movie according to subjective feeling. However, such a score depends entirely on the user's subjective feeling and cannot provide an objective evaluation.
Based on one or more of the problems described above, the present example embodiment provides a video evaluation method. The video evaluation method may be applied to the server 105, and may also be applied to one or more of the terminal devices 101, 102, and 103, which is not particularly limited in this exemplary embodiment. Referring to fig. 3, the video evaluation method may include the following steps S310 to S330:
in step S310, biometric data of at least one user while watching a video is acquired.
The biometric data of a plurality of users watching the video can be acquired. For example, suppose that 2 users have already watched a video A; the biometric values of each user can be collected while the two users watch video A, respectively, to generate each user's biometric data. The set of the 2 users' biometric data is then used as the biometric data corresponding to video A. In addition, the biometric data may include heart rate data, blood pressure data, pupil contraction data, and the like of the user, which the present disclosure does not particularly limit.
In an exemplary embodiment, obtaining the biometric data of a user while watching a video requires collecting that data in real time during playback. Therefore, the terminal device or server executing the video evaluation method either has a playing function and a biometric collection function itself, or establishes communication connections with a playing device having a playing function and a monitoring device having a biometric collection function, controlling the playing device to play the video and the monitoring device to collect the biometric values.
It should be noted that the playing device and the monitoring device may be disposed in the same terminal device, or may be processed as two terminal devices respectively. The playing device can be, for example, a television, a mobile phone, a tablet computer, etc.; the monitoring device may be, for example, a smart watch, a bracelet, a cell phone, etc.
For example, assuming that a server of a movie theater has both data processing and movie playing functions, the biometric data of moviegoers can be acquired by establishing a communication connection between the server and a monitoring device having a biometric acquisition function. For another example, for a mobile phone application whose server has only a data processing function, the video can be played on the mobile phone and the user's biometric data collected by the mobile phone, with the server finally obtaining the data through its communication connection with the mobile phone.
In an exemplary embodiment, when obtaining the biometric data of at least one user watching the video, the range of users whose data is obtained may first be defined. Specifically, a user range satisfying a set filtering condition may be determined, and then the biometric data of at least one viewer in that range may be obtained. The filtering condition may include a user ID, attribute information of the user (such as location, current membership level, or credit), or the application used when watching the video; the present disclosure does not particularly limit the filtering condition.
For example, for a certain video, a specific target user can be designated as the user range through a user ID, and only the biometric data of that target user while watching the video needs to be acquired. For another example, in another embodiment, users may be filtered by location: all users located in Beijing may be filtered out, and the biometric data of at least one of them while watching the video then acquired with those users as the range.
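The filtering step above can be sketched as follows. This is a non-limiting illustrative sketch, not part of the patent; the record field names ("city", "member_level") are hypothetical stand-ins for the attribute information mentioned in the text.

```python
# Hypothetical sketch: select the users whose records satisfy a filtering
# condition before collecting their viewing data. Field names are illustrative.

def filter_users(users, city=None, min_member_level=None):
    """Return the subset of user records matching the given filter conditions."""
    selected = []
    for user in users:
        if city is not None and user.get("city") != city:
            continue
        if min_member_level is not None and user.get("member_level", 0) < min_member_level:
            continue
        selected.append(user)
    return selected

users = [
    {"id": 1, "city": "Beijing", "member_level": 2},
    {"id": 2, "city": "Shanghai", "member_level": 3},
    {"id": 3, "city": "Beijing", "member_level": 1},
]
print([u["id"] for u in filter_users(users, city="Beijing")])  # [1, 3]
```

In practice such filtering would typically happen as a database query on the server side; the in-memory loop here only illustrates the selection logic.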
It should be noted that, assuming the user range includes 20 users, only some of them may actually have watched the video, so the number of collected biometric data sets may differ from the number of users in the range. Further, when biometric data is acquired for a plurality of videos, the same user may have watched several of them, so multiple sets of biometric data may be acquired for the same user.
Further, in an exemplary embodiment, a user may have watched the same video multiple times. In this case, the biometric data of that user for the video can be generated by letting the most recently acquired data overwrite the earlier data; by averaging or weighted-averaging the data acquired across viewings; or by uploading the data of every viewing as separate biometric data. Beyond these modes, different settings can be made according to actual requirements, and the present disclosure is not limited in this respect.
In an exemplary embodiment, referring to fig. 4, acquiring the biometric data of at least one user while watching a video may include the following steps S410 to S420:
and step S410, controlling the monitoring equipment to monitor the biological characteristic value of each user watching the video in real time when the video is played.
In an exemplary embodiment, when collecting biometric data, a monitoring device having a biometric acquisition function may be required to monitor the biometric values of the user viewing the video. Therefore, before monitoring, it may be detected whether the terminal device or server executing the video evaluation method can itself serve as the monitoring device, or whether a monitoring device having a biometric acquisition function is connected.
In an exemplary embodiment, the terminal device or server executing the video evaluation method may be unable to perform biometric acquisition itself while no monitoring device with a biometric acquisition function is connected. In that case, when no monitoring device is detected, historical connection data for monitoring devices can be obtained and a connection prompt generated from it, so that the user can establish a communication connection with a monitoring device, and the user's biometric values can then be monitored in real time through the connected device.
And step S420, matching the biological characteristic value with the time stamp of the video, and generating biological characteristic data of the user when the user watches the video when the video playing is finished.
The end of video playing may mean that the entire video has finished playing, or that the user has actively paused or terminated playback. It should be noted that when the entire video finishes playing or the user terminates playback, biometric collection may be considered complete; when the user merely pauses playback, playback may later resume, so biometric collection is not considered finished but is paused, resuming when the video continues playing.
In an exemplary embodiment, when a user watches a longer video, the user's biometric values are likely to change with the video content. Therefore, each biometric value can be matched with the timestamp of the video being played, so that the biometric value at any given playback moment can be determined from the resulting biometric data. It should be noted that the monitoring device's real-time monitoring actually acquires data at a fixed frequency; therefore, in some exemplary embodiments, biometric values may be acquired at a set frequency, and the video timestamp of each subsequent acquisition determined from the timestamp of the first acquisition and the acquisition frequency. For example, if a biometric value is acquired once per minute and the first acquisition occurs at the 5th minute of the video, the timestamp of the second acquisition is the 6th minute.
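The timestamp-matching rule above (first timestamp plus acquisition interval) can be sketched as follows; this is an illustrative, non-limiting example, and the sample heart-rate values are made up.

```python
def acquisition_timestamps(first_timestamp_min, interval_min, num_samples):
    """Video timestamps (in minutes) of each biometric acquisition, derived
    from the first acquisition's timestamp and the sampling interval."""
    return [first_timestamp_min + i * interval_min for i in range(num_samples)]

# Example from the text: sampling once per minute, first sample at minute 5.
samples = [72, 75, 74, 80]  # illustrative per-minute heart-rate values
stamps = acquisition_timestamps(5, 1, len(samples))
print(list(zip(stamps, samples)))  # [(5, 72), (6, 75), (7, 74), (8, 80)]
```

Pairing each value with a timestamp this way lets later steps look up the user's biometric value at any playback moment without storing a clock reading per sample.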
In step S320, an evaluation reference value for each user is calculated from the biometric data.
In an exemplary embodiment, after the biometric data corresponding to the video is acquired, the evaluation result of the video may be determined from it. Because the biometric data of a user watching a video is directly related to the video content, it objectively and directly expresses the user's reaction to the video; evaluating the video then only requires converting the biometric data, which changes continuously as the video plays, into a comparable value.
It is noted that in some embodiments the acquired biometric data may come from a plurality of monitoring devices. For example, for a video application server, a video may be played through application software installed on the mobile phones of multiple users; each mobile phone collects its user's biometric data and sends it to the video application server, which then evaluates the video according to the received data to obtain the evaluation result.
In an exemplary embodiment, an evaluation reference value of each user in the biometric data corresponding to the video may be calculated. For example, in some embodiments, the average biometric value of the user during the whole video watching process can be determined as the corresponding evaluation reference value of the user. The above process can be obtained according to the following formula (1):
user evaluation reference value = ∑ (user biometric values) / number of acquisitions    formula (1)
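Formula (1) above, the user's average biometric value over all acquisitions, can be sketched directly; the function name is illustrative, not from the patent.

```python
def evaluation_reference_value(biometric_values):
    """Formula (1): the user's average biometric value over all acquisitions."""
    return sum(biometric_values) / len(biometric_values)

# Four per-minute heart-rate samples collected during one viewing (illustrative).
print(evaluation_reference_value([72, 75, 74, 80]))  # 75.25
```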
In step S330, an evaluation result of the video is determined according to the evaluation reference value of each user viewing the video.
In an exemplary embodiment, after obtaining the evaluation reference value of each user, the evaluation result of the corresponding video may be determined according to the evaluation reference value of the user watching the video. For example, for a certain video, the evaluation reference values of all viewers watching the video may be averaged again to obtain the evaluation result of the video. The above process can be obtained according to the following formula (2):
video evaluation result = ∑ (user evaluation reference values) / number of users watching the video    formula (2)
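The aggregation described above, averaging the evaluation reference values of all viewers of a video, can be sketched as follows; the function name and sample values are illustrative.

```python
def video_evaluation_result(reference_values):
    """Average the evaluation reference values of all users who watched
    the video to obtain the video's evaluation result."""
    return sum(reference_values) / len(reference_values)

# Reference values of three viewers of one video (illustrative).
print(video_evaluation_result([75.0, 68.0, 82.0]))  # 75.0
```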
In addition, when the video belongs to a video set, in order to evaluate a plurality of videos, the obtained biometric data may further carry a label of the video, that is, a label identifying which video the user was watching when the data was acquired. For example, if user A has watched videos 1, 2, and 3, the collected biometric data needs to carry the labels of videos 1, 2, and 3, respectively, so as to distinguish which video each set of biometric data corresponds to.
In an exemplary embodiment, when a video set includes a plurality of videos, after the evaluation result of each video is determined, the videos in the set may be sorted and recommended according to those results. For example, suppose the video set contains 8 videos in total, videos 1 to 8. After the evaluation results of the 8 videos are obtained, the videos may be sorted by evaluation result to obtain the following order: video 1, video 3, video 5, video 6, video 8, video 7, video 2, and video 4. Correspondingly, the videos may be recommended in this order, and the recommendation interface may be as shown in fig. 5.
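The sorting step above is a descending sort by evaluation result. A sketch with hypothetical scores chosen to reproduce the example order:

```python
# Hypothetical evaluation results for the 8 videos in the set.
evaluations = {
    "video 1": 95.0, "video 2": 62.0, "video 3": 91.0, "video 4": 58.0,
    "video 5": 88.0, "video 6": 85.0, "video 7": 64.0, "video 8": 80.0,
}

# Sort the set in descending order of evaluation result for recommendation.
ranking = sorted(evaluations, key=evaluations.get, reverse=True)
print(ranking[:3])  # ['video 1', 'video 3', 'video 5']
```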
In addition, after the evaluation results are obtained, so that the user can see the differences between videos at a glance, the evaluation result may be displayed on the recommendation interface as a label of the video, as shown in fig. 6.
In some embodiments, a highlight segment of the video may also need to be presented in the video recommendation interface or the video detail interface, to show the user part of the video's content. For this purpose, the highlight segment can be determined within the video according to the biometric data corresponding to the video, and displayed in the recommendation interface or the video detail interface as the recommended segment for that video.
In an exemplary embodiment, a preset duration for the recommended segment may be set in advance; the video is segmented according to the preset duration to obtain at least one first video segment, an evaluation reference value is then calculated for each first video segment according to the biometric data of the video, and the first video segment with the largest evaluation reference value is determined as the recommended segment for the video. For example, if the total duration of a video is 6 minutes and the preset duration is 2 minutes, the video is divided into 3 first video segments: minutes 1 to 2, minutes 3 to 4, and minutes 5 to 6. The evaluation reference value of each first video segment is then calculated, and the segment with the maximum value is determined as the recommended segment.
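The equal division into first video segments can be sketched as follows; durations are in seconds, and keeping a shorter tail segment is an assumption the disclosure does not address:

```python
def first_video_segments(total_seconds, preset_seconds):
    """Split the video into consecutive first video segments of the
    preset duration; a shorter tail segment is kept as-is."""
    return [(start, min(start + preset_seconds, total_seconds))
            for start in range(0, total_seconds, preset_seconds)]

# A 6-minute video with a 2-minute preset duration gives 3 segments.
print(first_video_segments(6 * 60, 2 * 60))
# [(0, 120), (120, 240), (240, 360)]
```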
Further, in addition to the above method for determining the recommended segment, at least one second video segment may be obtained by traversing the video with a duration window of the preset duration; an evaluation reference value is then calculated for each second video segment according to the biometric data of the video, and the second video segment with the largest evaluation reference value is determined as the recommended segment for the video. For example, if the total duration of a video is 6 minutes, the window duration is 2 minutes, and the traversal step size is 1 minute, the following 5 second video segments are obtained: minutes 1 to 2, 2 to 3, 3 to 4, 4 to 5, and 5 to 6. The second video segment with the largest evaluation reference value among these 5 segments is then selected as the recommended segment. Segmenting the video by window traversal avoids the problem that, when the video is divided into equal non-overlapping segments, a continuous passage with a large evaluation reference value may be split across two segments, making the resulting recommended segment inaccurate.
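The window traversal can be sketched as a sliding window over the timeline (durations in seconds); dropping windows that would overrun the end of the video is an assumption:

```python
def second_video_segments(total_seconds, window_seconds, step_seconds):
    """Slide a duration window over the video; overlapping windows keep
    a continuous high-interest passage from being split in two."""
    return [(start, start + window_seconds)
            for start in range(0, total_seconds - window_seconds + 1,
                               step_seconds)]

# 6-minute video, 2-minute window, 1-minute step -> the 5 segments
# from the example above.
print(second_video_segments(360, 120, 60))
# [(0, 120), (60, 180), (120, 240), (180, 300), (240, 360)]
```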
In addition, in other embodiments, the recommended segment corresponding to the video may also be obtained by splicing or similar means, which is not particularly limited in this disclosure. For example, several segments with larger evaluation reference values can be selected from the video and spliced together to form the recommended segment corresponding to the video.
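The splicing variant can be sketched by picking the top-scoring segments and returning them in playback order; the choice of k and the scores are illustrative assumptions:

```python
def splice_recommended_segment(segment_scores, k=2):
    """segment_scores: list of ((start, end), evaluation_reference_value).
    Select the k highest-scoring segments and return them in playback
    order, ready to be spliced into one recommended clip."""
    top = sorted(segment_scores, key=lambda item: item[1], reverse=True)[:k]
    return sorted(segment for segment, _ in top)

scores = [((0, 120), 70.0), ((120, 240), 92.0), ((240, 360), 88.0)]
print(splice_recommended_segment(scores))  # [(120, 240), (240, 360)]
```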
It should be noted that, in the above methods for determining a recommended segment, the evaluation reference value of a first or second video segment may be determined from the biometric values acquired while that segment was playing, and may be computed in various ways, such as the average, maximum, or minimum of those values. For example, when the average is used as the evaluation reference value of a segment, the biometric values collected by the monitoring device during playback of the first video segment can be looked up in the biometric data according to the segment's start and end timestamps. Assuming 3 biometric values a, b, and c are found between the start and end timestamps of the first video segment, their average (a + b + c)/3 may be taken as the evaluation reference value of that segment.
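The timestamp lookup and averaging can be sketched as below; the `(timestamp, value)` pair representation of the biometric data is an assumption, since the disclosure does not fix a storage format:

```python
def segment_reference_value(samples, start, end):
    """samples: (timestamp_seconds, biometric_value) pairs matched to the
    video timeline.  Average the values whose timestamps fall between
    the segment's start and end timestamps."""
    values = [v for t, v in samples if start <= t <= end]
    if not values:
        return None  # no acquisitions landed inside this segment
    return sum(values) / len(values)

# Three values a=72, b=78, c=90 fall inside segment [0, 120]:
samples = [(10, 72), (60, 78), (110, 90), (150, 80)]
print(segment_reference_value(samples, 0, 120))  # 80.0
```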
In the following, a method for acquiring a user's biometric data in an embodiment of the present disclosure is described in detail with reference to fig. 7, taking a television as the device playing a movie and a bracelet as the monitoring device detecting the user's heart rate data:
step S710, starting playing the film;
step S720, detecting whether the heart rate data can be directly monitored;
step S730, when the heart rate data cannot be directly monitored, judging whether the television is connected with a bracelet or not;
step S740, when the television has historically connected to the bracelet 1, generating and displaying a prompt for prompting a user to establish the connection between the bracelet 1 and the television and to start the bracelet monitoring function;
step S750, detecting whether the heart rate data can be monitored again;
step S760, monitoring heart rate data of a user in real time when the heart rate data can be directly monitored, and transmitting the heart rate data to a television;
step S770, judging whether the movie is played completely;
step S780, when the movie finishes playing, generating the heart rate data of the user watching the movie, and transmitting it to the television's background service for movie evaluation.
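The flow of steps S710 to S780 can be sketched with stand-in device objects; the `Television` and `Bracelet` interfaces below are hypothetical, since the disclosure describes behavior rather than an API:

```python
class Bracelet:
    """Stand-in for the monitoring bracelet (hypothetical interface)."""
    def __init__(self, bpm=75):
        self.connected = False
        self.bpm = bpm

    def read_bpm(self):
        return self.bpm


class Television:
    """Stand-in for the television playing the movie."""
    def __init__(self, historical_band=None):
        self.band = historical_band  # bracelet from historical connection data

    def can_monitor_directly(self):
        return self.band is not None and self.band.connected


def collect_heart_rate(tv, movie_minutes):
    # S720-S740: if heart rate cannot be monitored directly, prompt the
    # user to reconnect the historically paired bracelet.
    if not tv.can_monitor_directly():
        if tv.band is None:
            return None  # no bracelet was ever connected; give up
        tv.band.connected = True  # stands in for the user accepting the prompt
    # S750-S780: sample once per minute until the movie finishes, then
    # the data would be handed to the evaluation backend.
    return [tv.band.read_bpm() for _ in range(movie_minutes)]


tv = Television(historical_band=Bracelet(bpm=80))
print(collect_heart_rate(tv, 3))  # [80, 80, 80]
```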
In summary, in the exemplary embodiment, on one hand, collecting biometric data for evaluation yields a more objective evaluation result and provides a new dimension along which to evaluate videos; on the other hand, after a video is evaluated based on users' biometric data, videos can be recommended according to the evaluation results, so that the videos that most strongly engage users are pushed first; in addition, the video segments that most engage users can be identified from the biometric data, so as to gauge users' interest in the video content and provide a reference for video content production.
It is noted that the above-mentioned figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Further, referring to fig. 8, the present exemplary embodiment further provides a video evaluation apparatus 800, which includes a data obtaining module 810, a feature value calculating module 820, and a video evaluation module 830. Wherein:
the data acquisition module 810 may be configured to acquire biometric data of at least one user while viewing a video.
The feature value calculation module 820 may be configured to calculate an evaluation reference value of the user from the biometric data.
The video evaluation module 830 may be configured to determine an evaluation result of the video according to an evaluation reference value of a user watching the video.
In an exemplary embodiment, the data obtaining module 810 may be configured to determine a user range according to the filtering condition, and obtain the biometric data of at least one user in the user range while watching the video.
In an exemplary embodiment, the data obtaining module 810 may be configured to, for each user watching a video, control the monitoring device to monitor the biometric value of the user in real time when the video is played; and matching the biological characteristic value with the time stamp of the video, and generating biological characteristic data of the user when the user watches the video when the video is played.
In an exemplary embodiment, the data obtaining module 810 may be configured to obtain historical connection data of a connected monitoring device when the monitoring device is not detected; and generating a connection prompt according to the historical connection data so as to establish communication connection with the monitoring equipment.
In an exemplary embodiment, the video evaluation apparatus 800 may further include a video recommendation module, and the video recommendation module may be configured to obtain an evaluation result of each video in the video set, and to sort and recommend the videos according to the evaluation results.
In an exemplary embodiment, the video recommendation module may be configured to determine a recommended segment corresponding to the video according to the biometric data corresponding to the video.
In an exemplary embodiment, the video recommendation module may be configured to segment a video according to a preset duration to obtain at least one first video segment; and calculating an evaluation reference value corresponding to at least one first video clip according to the biological characteristic data, and determining the first video clip with the maximum evaluation reference value as a recommended clip corresponding to the video.
In an exemplary embodiment, the video recommendation module may be configured to traverse the video through the duration window to obtain at least one second video segment; and calculating an evaluation reference value corresponding to at least one second video clip according to the biological characteristic data, and determining the second video clip with the maximum evaluation reference value as a recommended clip corresponding to the video.
The specific details of each module of the above apparatus have been described in detail in the method section; for any details not disclosed here, refer to the method section, so they are not repeated.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the "exemplary methods" section above of this specification, when the program product is run on the terminal device, for example, any one or more of the steps in fig. 3, fig. 4, and fig. 7 may be performed.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Furthermore, program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (11)

1. A video evaluation method, comprising:
acquiring biological characteristic data of at least one user when watching a video;
calculating an evaluation reference value of each user according to the biological characteristic data;
and determining the evaluation result of the video according to the evaluation reference value of each user watching the video.
2. The method of claim 1, wherein the obtaining biometric data of at least one user while watching a video comprises:
and determining a user range according to the screening conditions, and acquiring the biological characteristic data of at least one user in the user range when watching the video.
3. The method according to claim 1 or 2, wherein the obtaining of the biometric data of the at least one user while watching the video comprises:
for each user watching the video, controlling a monitoring device to monitor the biological characteristic value of the user in real time when the video is played;
and matching the biological characteristic value with the time stamp of the video, and generating biological characteristic data of the user when watching the video at the end of playing the video.
4. The method of claim 3, further comprising:
when the monitoring equipment is not detected, acquiring historical connection data of the connected monitoring equipment;
and generating a connection prompt according to the historical connection data so as to establish communication connection with the monitoring equipment.
5. The method of claim 1, wherein the video belongs to a video set, the method further comprising:
obtaining an evaluation result of each video in the video set;
and sorting and recommending the videos according to the evaluation result.
6. The method of claim 1, further comprising:
and determining a recommended segment corresponding to the video according to the biological feature data corresponding to the video.
7. The method of claim 6, wherein the determining the recommended segment corresponding to the video according to the biometric data corresponding to the video comprises:
segmenting the video according to a preset time length to obtain at least one first video segment;
and calculating an evaluation reference value corresponding to the at least one first video clip according to the biological characteristic data, and determining the first video clip with the maximum evaluation reference value as a recommended clip corresponding to the video.
8. The method of claim 6, wherein the determining the recommended segment corresponding to the video according to the biometric data corresponding to the video comprises:
traversing the video through a duration window to obtain at least one second video segment;
and calculating an evaluation reference value corresponding to the at least one second video clip according to the biological characteristic data, and determining the second video clip with the maximum evaluation reference value as a recommended clip corresponding to the video.
9. A video evaluation apparatus, comprising:
the data acquisition module is used for acquiring biological characteristic data of at least one user when watching a video;
the characteristic value calculating module is used for calculating an evaluation reference value of the user according to the biological characteristic data;
and the video evaluation module is used for determining the evaluation result of the video according to the evaluation reference value of the user watching the video.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
11. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1 to 8 via execution of the executable instructions.
CN202110115454.3A 2021-01-28 2021-01-28 Video evaluation method and device, computer readable medium and electronic equipment Pending CN112887771A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110115454.3A CN112887771A (en) 2021-01-28 2021-01-28 Video evaluation method and device, computer readable medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110115454.3A CN112887771A (en) 2021-01-28 2021-01-28 Video evaluation method and device, computer readable medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN112887771A true CN112887771A (en) 2021-06-01

Family

ID=76052876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110115454.3A Pending CN112887771A (en) 2021-01-28 2021-01-28 Video evaluation method and device, computer readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112887771A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08289327A (en) * 1995-04-17 1996-11-01 Sanyo Electric Co Ltd Video display device
CN101755406A (en) * 2007-03-08 2010-06-23 埃姆申塞公司 A method and system for rating media and events in media based on physiological data
CN102318246A (en) * 2007-03-07 2012-01-11 埃姆申塞公司 A method and system for using coherence of biological responses as a measure of performance of a media
CN102523493A (en) * 2011-12-09 2012-06-27 深圳Tcl新技术有限公司 Method and system for grading television program according to mood
CN104349206A (en) * 2014-11-26 2015-02-11 乐视致新电子科技(天津)有限公司 Method, device and system for processing television information
CN104994409A (en) * 2015-06-30 2015-10-21 北京奇艺世纪科技有限公司 Media data editing method and device
CN105852838A (en) * 2016-03-22 2016-08-17 乐视网信息技术(北京)股份有限公司 Multimedia evaluation method and terminal based on heart rate measurement
CN108319643A (en) * 2017-12-22 2018-07-24 新华网股份有限公司 The evaluating method and system of multimedia messages
CN108376147A (en) * 2018-01-24 2018-08-07 北京览科技有限公司 A kind of method and apparatus for obtaining the evaluation result information of video
CN108882045A (en) * 2017-05-11 2018-11-23 昆山研达电脑科技有限公司 A kind of film scoring apparatus and method based on wearable device
CN110888997A (en) * 2018-09-10 2020-03-17 北京京东尚科信息技术有限公司 Content evaluation method and system and electronic equipment
CN111385657A (en) * 2018-12-28 2020-07-07 广州市百果园信息技术有限公司 Video recommendation method and device, storage medium and computer equipment
CN111723243A (en) * 2020-06-15 2020-09-29 南京领行科技股份有限公司 Action fragment detection method, device, equipment and medium


Similar Documents

Publication Publication Date Title
CN107801096B (en) Video playing control method and device, terminal equipment and storage medium
CA2808910C (en) System and method for measuring audience reaction to media content
US8667519B2 (en) Automatic passive and anonymous feedback system
CN112753228A (en) Techniques for generating media content
US8510156B2 (en) Viewing terminal apparatus, viewing statistics-gathering apparatus, viewing statistics-processing system, and viewing statistics-processing method
CN108632658B (en) Bullet screen display method and terminal
WO2015056742A1 (en) Device for measuring visual efficacy
US10088901B2 (en) Display device and operating method thereof
WO2015029393A1 (en) Information processing device and information processing method
CN104699958A (en) Method and device for recommending menu according to physical status of user
CN111432245B (en) Multimedia information playing control method, device, equipment and storage medium
US20230403422A1 (en) Methods and apparatus for multi-television measurements
CN106341712A (en) Processing method and apparatus of multimedia data
CN103945140A (en) Method and system for generating video captions
US20220036427A1 (en) Method for managing immersion level and electronic device supporting same
JP2006020131A (en) Device and method for measuring interest level
JP5775837B2 (en) Interest degree estimation apparatus, method and program
CN106464971B (en) Method, system and the medium presented on the display device for detecting media content
KR20160054134A (en) Method and apparatus for providing a broadcasting sevice
JP2014238712A (en) Content analysis device, content analysis method and content analysis program
CN112887771A (en) Video evaluation method and device, computer readable medium and electronic equipment
CN108304076B (en) Electronic device, video playing application management method and related product
CN110209539B (en) Test method, terminal equipment and tester
KR20210051349A (en) Electronic device and control method thereof
CN114827651B (en) Information processing method, information processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210601