CN110248241B - Video processing method and related device - Google Patents

Video processing method and related device

Info

Publication number
CN110248241B
Authority
CN
China
Prior art keywords
video
user
variety
eye tracking
character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910503421.9A
Other languages
Chinese (zh)
Other versions
CN110248241A (en)
Inventor
韩世广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910503421.9A
Publication of CN110248241A
Application granted
Publication of CN110248241B
Legal status: Active
Anticipated expiration

Classifications

    • All entries fall under H (Electricity) > H04 (Electric communication technique) > H04N (Pictorial communication, e.g. television) > H04N 21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD]):
    • H04N 21/44008: processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/440281: reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
    • H04N 21/44204: monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • H04N 21/44218: detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N 21/8456: structuring of content by decomposing the content in the time domain, e.g. into time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the application disclose a video processing method and a related apparatus, applied to an electronic device that includes an eye tracking assembly. The method comprises the following steps: when the currently played target video is detected to be a variety show video, determining a plurality of variety show characters in the target video; acquiring eye tracking information of a user through the eye tracking assembly, and determining the user's interest level in each of the plurality of variety show characters according to the eye tracking information; and when it is detected that the playing mode of the target video is switched from a normal mode to a fast-forward mode or an energy-saving mode, performing preset processing on a video clip associated with at least one of the plurality of variety show characters, wherein the preset processing comprises fast playing, and the interest level in each of the at least one variety show character is lower than a preset interest level. By tracking the user's eyes, the embodiments of the application help the user avoid missing clips of interest while watching the video.

Description

Video processing method and related device
Technical Field
The present application relates to the field of mobile terminal technologies, and in particular, to a video processing method and a related apparatus.
Background
With the widespread adoption of mobile terminals such as smartphones, these devices support ever more applications and increasingly powerful functions; they are developing in diversified and personalized directions and have become indispensable electronic products in users' daily lives. When a user watches a variety show video program, their interest level in each main character in the program may differ. Consequently, when the user fast-forwards the program's video, if all clips are played at the fast-forward speed, the user may miss video clips of high interest.
Disclosure of Invention
The embodiments of the application provide a video processing method and a related apparatus, which help prevent a user from missing clips of interest while watching a video.
In a first aspect, embodiments of the present application provide an electronic device comprising an eye tracking assembly, a memory, and a processor, wherein:
the eye tracking assembly is configured to acquire eye tracking information of a user;
the memory is configured to store the eye tracking information;
the processor is configured to: determine a plurality of variety show characters in a target video when the currently played target video is detected to be a variety show video; determine, according to the eye tracking information acquired by the eye tracking assembly, the user's interest level in each of the plurality of variety show characters; and, when it is detected that the playing mode of the target video is switched from the normal mode to a fast-forward mode or an energy-saving mode, perform preset processing on a video clip associated with at least one of the plurality of variety show characters, wherein the preset processing comprises fast playing, and the interest level in each of the at least one variety show character is lower than a preset interest level.
In a second aspect, an embodiment of the present application provides a video processing method applied to an electronic device that includes an eye tracking assembly; the method comprises the following steps:
when the currently played target video is detected to be a variety show video, determining a plurality of variety show characters in the target video;
acquiring eye tracking information of a user through the eye tracking assembly, and determining the user's interest level in each of the plurality of variety show characters according to the eye tracking information;
when it is detected that the playing mode of the target video is switched from a normal mode to a fast-forward mode or an energy-saving mode, performing preset processing on a video clip associated with at least one of the plurality of variety show characters, wherein the preset processing comprises fast playing, and the interest level in each of the at least one variety show character is lower than a preset interest level.
In a third aspect, an embodiment of the present application provides a video processing apparatus applied to an electronic device that includes an eye tracking assembly; the video processing apparatus comprises a processing unit and a communication unit, wherein:
the processing unit is configured to determine a plurality of variety show characters in a currently played target video when the target video is detected to be a variety show video; to notify, through the communication unit, the eye tracking assembly to acquire eye tracking information of a user, and to determine the user's interest level in each of the plurality of variety show characters according to the eye tracking information; and, when it is detected that the playing mode of the target video is switched from the normal mode to a fast-forward mode or an energy-saving mode, to perform preset processing on a video clip associated with at least one of the plurality of variety show characters, wherein the preset processing comprises fast playing, and the interest level in each of the at least one variety show character is lower than a preset interest level.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a controller, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the controller, and the programs include instructions for executing the steps of the method of the second aspect of the embodiments of the present application.
In a fifth aspect, the present application provides a computer-readable storage medium storing a computer program for electronic data exchange, where the computer program causes a computer to perform some or all of the steps described in the method of the second aspect of the present application.
In a sixth aspect, the present application provides a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps described in the method of the second aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiments of the application, the electronic device first determines a plurality of variety show characters in a target video when the currently played target video is detected to be a variety show video; it then acquires eye tracking information of the user through the eye tracking assembly and determines the user's interest level in each of the characters according to that information; finally, when it detects that the playing mode of the target video is switched from the normal mode to a fast-forward mode or an energy-saving mode, it performs preset processing, including fast playing, on the video clips associated with the characters whose interest level is lower than a preset interest level. Because the electronic device collects the user's eye tracking information while a variety show video is watched and derives an interest level for each character from it, only the clips associated with low-interest characters are played quickly in the fast-forward or energy-saving mode. The user is thus less likely to miss clips associated with high-interest characters, which improves the flexibility of video playback and the user's viewing experience.
Drawings
In order to illustrate the embodiments of the present application or the prior-art technical solutions more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a video processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another video processing method provided in the embodiment of the present application;
fig. 4 is a schematic flowchart of another video processing method provided in the embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 6 is a block diagram of functional units of a video processing apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Electronic devices may include various handheld devices, vehicle-mounted devices, wearable devices (e.g., smartwatches, smartbands, pedometers, etc.), computing devices or other processing devices connected to wireless modems, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal Equipment (terminal device), and so forth, having wireless communication capabilities. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure. The electronic device 100 includes a housing 110, a circuit board 120 arranged in the housing 110, and an eye tracking assembly 130 arranged on the housing 110. A processor 121 and a memory 122 are arranged on the circuit board 120; the memory 122 is connected to the processor 121, and the processor 121 is connected to the eye tracking assembly 130; wherein,
the eye tracking assembly 130 is configured to acquire eye tracking information of a user;
the memory 122 is configured to store the eye tracking information;
the processor 121 is configured to: determine a plurality of variety show characters in a target video when the currently played target video is detected to be a variety show video; determine, according to the eye tracking information acquired by the eye tracking assembly, the user's interest level in each of the plurality of variety show characters; and, when it is detected that the playing mode of the target video is switched from the normal mode to a fast-forward mode or an energy-saving mode, perform preset processing on a video clip associated with at least one of the plurality of variety show characters, wherein the preset processing comprises fast playing, and the interest level in each of the at least one variety show character is lower than a preset interest level.
Eye tracking studies the acquisition, modeling and simulation of eye movement information. When a person looks in different directions, the eyes change slightly; the eye tracking assembly captures features related to these changes, for example through image capture or scanning, and by tracking them in real time it can predict the user's state and intent and respond accordingly, so that the device can be controlled with the eyes. The eye tracking assembly mainly comprises an infrared device (such as an infrared sensor) and an image acquisition device (such as a camera). Before using the eye tracking function of the electronic device, the user first enables it so that the assembly is in an available state. The user is then guided through a calibration procedure: the geometric and motion features of the user's eyeballs are collected, the position of the user's gaze point on the screen is computed from them, and it is then checked whether that gaze point matches the position the user was guided to look at, which completes the calibration.
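The calibration check described above can be sketched in Python. This is a minimal illustration, not the patent's implementation: the function name, the coordinates and the 50-pixel tolerance are assumptions, and the gaze-estimation model that produces the predicted point is outside the sketch.

```python
import math

def calibration_ok(predicted_xy, target_xy, tolerance_px=50.0):
    """One calibration step: the gaze point estimated from the user's
    eyeball features should land near the on-screen point the user was
    guided to look at. Tolerance is a hypothetical 50 px by default."""
    dx = predicted_xy[0] - target_xy[0]
    dy = predicted_xy[1] - target_xy[1]
    # Euclidean distance between estimate and guided target.
    return math.hypot(dx, dy) <= tolerance_px

# An estimate 12 px right of and 10 px below the target is well within
# a 50 px tolerance, so this calibration step passes.
print(calibration_ok((512.0, 310.0), (500.0, 300.0)))  # → True
```

In practice the device would repeat this check over several guided targets and only declare calibration complete when all of them pass.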
Referring to fig. 2, fig. 2 is a schematic flowchart of a video processing method according to an embodiment of the present disclosure, applied to an electronic device that includes an eye tracking assembly. As shown in the figure, the video processing method includes:
s201, when detecting that a target video played currently is a variety video, the electronic device determines a plurality of variety characters in the target video.
When the target video currently played by the electronic equipment is detected to be the variety video, a plurality of variety characters in the target video can be determined, wherein the plurality of variety characters are main characters of the variety program, such as a host or characters frequently appearing in the video, and related information of the plurality of variety characters, such as personal data information and the like, can be further acquired.
S202, the electronic device acquires eye tracking information of the user through the eye tracking assembly, and determines the user's interest level in each of the plurality of variety show characters according to the eye tracking information.
If the eye tracking assembly of the electronic device is in an available state, it can acquire the user's eye tracking information, and by processing the acquired information the user's interest level in the variety show characters during playback of the target video can be determined. In addition, facial images of the user can be captured while the target video is watched; combining the user's gaze duration on a character with the corresponding facial expression makes the determined interest level in that character more accurate.
S203, when the electronic device detects that the playing mode of the target video is switched from the normal mode to a fast-forward mode or an energy-saving mode, it performs preset processing on a video clip associated with at least one of the plurality of variety show characters, wherein the preset processing comprises fast playing, and the interest level in each of the at least one variety show character is lower than a preset interest level.
A user may like each of the variety show characters appearing in a program to a different degree: the user tends to watch more clips of the characters they like and has little need for clips of characters they like less. When a user switches the playing mode of a target video from the normal mode to a fast-forward mode, the prior art simply fast-plays the entire video, which may force the user to manually switch back to the normal mode on seeing a favorite clip, or even cause them to miss favorite content. In the embodiments of the application, after the user's interest level in each variety show character is determined, when it is detected that the playing mode of the target video is switched from the normal mode to the fast-forward mode or the energy-saving mode, only the clips of the target video associated with the characters whose interest level is below the preset interest level are given the preset processing. The preset processing may be fast playing; in the energy-saving mode it may also be fast playing and/or volume-reduced playing and/or skipping.
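The mode-dependent clip handling just described can be sketched as follows. This is an illustrative Python sketch only: the `Segment` type, the numeric interest values, the mode strings and the action names are hypothetical, and the choice of skipping (rather than volume-reduced or fast playing) in energy-saving mode is one of the options the text permits.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    character: str   # variety show character the clip is associated with
    start_s: float
    end_s: float

def plan_playback(segments, interest, preset_interest, mode):
    """Return (segment, action) pairs once the user leaves normal mode."""
    plan = []
    for seg in segments:
        if interest[seg.character] < preset_interest:
            # Low-interest clip gets the preset processing: fast play in
            # fast-forward mode; in energy-saving mode this sketch skips
            # the clip, though volume-reduced or fast playing would fit
            # the text equally well.
            action = "fast" if mode == "fast_forward" else "skip"
        else:
            action = "normal"  # high-interest clips keep playing normally
        plan.append((seg, action))
    return plan

segments = [Segment("character 1", 0.0, 60.0), Segment("character 2", 60.0, 180.0)]
interest = {"character 1": 0.2, "character 2": 0.8}
print(plan_playback(segments, interest, preset_interest=0.5, mode="fast_forward"))
```

Only the low-interest clip is marked for fast play; the clip of the character the user likes continues at normal speed, so it is not missed.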
In one possible example, the determining the interest level of the user in each of the multiple art figures according to the eye tracking information includes: acquiring a plurality of fixation points of a video interface aiming at the target video playing process of a user, wherein the fixation points are position points of the eye sight of the user, and the fixation time of the eye sight of the user is larger than a first preset threshold value; determining an integrated art figure corresponding to each gazing point in the plurality of gazing points, and determining the number of the gazing points of each integrated art figure in the plurality of integrated art figures; and determining the interest degree of the user for each comprehensive art figure according to the number of the fixation points of each comprehensive art figure.
According to the obtained eye tracking information, a plurality of gaze points of the user on the video interface can be determined during playback of the target video. Because human eyes perceive a scene progressively from the center of vision outward, the positions the user gazes at while watching the video can be determined through eyeball tracking technology and an associated algorithm. A gaze point is a position on the video at which the user gazes for longer than a first preset threshold, for example longer than 1 second. By obtaining the plurality of gaze points accumulated while the user watches the video and determining the variety character corresponding to each gaze point, that is, treating each gaze point as an indication that the user was gazing at one variety character, the number of gaze points attributed to each variety character can be determined, and the user's interest level in each variety character can then be determined according to that number.
For example, 100 gaze points are obtained through gaze-point sampling, each being a position on the screen at which the user's gaze duration exceeds 1 second. Suppose the target video involves three variety characters: variety character 1, variety character 2, and variety character 3. If 30 of the 100 gaze points indicate that the user was gazing at variety character 1, 50 indicate variety character 2, and 20 indicate variety character 3, the user's interest ranking for the three variety characters is, in order, variety character 2, variety character 1, variety character 3.
As can be seen, in this example, by sampling the user's gaze points during video playback and determining the variety character corresponding to each sampled gaze point, the user's interest level in each variety character can be determined quickly according to the number of gaze points corresponding to that character.
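The gaze-point counting step described above can be sketched as follows. This is an illustrative sketch only: the character labels, the helper name `rank_by_fixation_count`, and the assumption that each fixation has already been mapped to a character are not specified by the patent text.

```python
from collections import Counter

def rank_by_fixation_count(fixations):
    """Rank variety characters by how many fixation points land on them.

    `fixations` is a list of character labels, one per sampled fixation
    point whose gaze duration exceeded the first preset threshold
    (e.g. 1 second). Returns characters from most to least interesting.
    """
    counts = Counter(fixations)
    return [name for name, _ in counts.most_common()]

# The worked example from the text: 100 fixation points split 30/50/20.
sample = (["character 1"] * 30 + ["character 2"] * 50
          + ["character 3"] * 20)
print(rank_by_fixation_count(sample))
# -> ['character 2', 'character 1', 'character 3']
```

The mapping from a raw gaze coordinate to an on-screen character would in practice require per-frame character position data, which the patent does not detail.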
In one possible example, the method further comprises: displaying, on the video interface, the obtained ranking of the user's interest level in each of the plurality of variety characters; and when the interest ranking is inaccurate, acquiring a calibration request for the eye tracking component input by the user, and calibrating the eye tracking component.
After the user's interest level in each variety character is determined, the interest ranking of the variety characters is displayed on the video interface so that the user can see how each variety character has been watched. If the user feels the output interest ranking is wrong, the eyeball tracking component is calibrated, and the user's interest level in each variety character is determined again after calibration.
As can be seen, in this example, by outputting the user's interest ranking for each of the plurality of variety characters, the user can learn his or her interest level in each variety character; meanwhile, when the user doubts the interest ranking, the eyeball tracking component can be calibrated, which improves the accuracy of the eyeball tracking component.
In one possible example, the determining the user's interest level in each of the plurality of variety characters according to the eye tracking information includes: determining, according to the eye tracking information, the gaze duration of the user on each of the plurality of variety characters within a preset duration; and ranking the user's interest in the plurality of variety characters according to the gaze duration corresponding to each variety character, and determining a first variety character whose gaze duration is greater than a second preset threshold and a second variety character whose gaze duration is less than the second preset threshold, wherein the first variety character represents a variety character of high interest among the plurality of variety characters, and the second variety character represents a variety character of low interest among the plurality of variety characters.
The gaze duration of the user on each variety character within the preset duration can be determined according to the eye tracking information, and the user's interest level in each variety character can then be determined from these gaze durations. For example, the eye tracking information of the user within a ten-minute window is acquired, on the assumption that the user gazes at only one variety character at any moment. Analyzing that ten minutes of eye tracking information shows that the user gazed at variety character 1 for 2.5 minutes, at variety character 2 for 4.5 minutes, and at variety character 3 for 3 minutes; the interest ranking is accordingly determined as variety character 2, variety character 3, variety character 1.
According to the gaze duration of each variety character, a first variety character whose gaze duration is greater than the second preset threshold and a second variety character whose gaze duration is less than the second preset threshold are determined among the plurality of variety characters, where the first variety character and the second variety character each comprise at least one variety character. The first variety character is thus a variety character of high interest to the user, and the second variety character a variety character of low interest.
As can be seen, in the present example, according to the user's gaze duration on each variety character within the preset duration, the plurality of variety characters are divided into a first variety character of high interest and a second variety character of low interest, so that the user's interest level in each variety character can be determined.
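The duration-based ranking and threshold split above can be sketched as a small function. The 10-minute window, the per-character durations, and the threshold value of 3 minutes are assumed values taken from or added to the worked example, not requirements of the method.

```python
def split_by_gaze_duration(durations, threshold):
    """Split characters around the second preset threshold.

    `durations` maps character -> total gaze duration (minutes).
    Returns (ranking, first_characters, second_characters), where
    first/second follow the strict inequalities in the text, so a
    character exactly at the threshold falls into neither group.
    """
    ranking = sorted(durations, key=durations.get, reverse=True)
    first = [c for c in ranking if durations[c] > threshold]
    second = [c for c in ranking if durations[c] < threshold]
    return ranking, first, second

durations = {"character 1": 2.5, "character 2": 4.5, "character 3": 3.0}
ranking, first, second = split_by_gaze_duration(durations, threshold=3.0)
print(ranking)  # -> ['character 2', 'character 3', 'character 1']
print(first)    # -> ['character 2']
print(second)   # -> ['character 1']
```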
In one possible example, the performing the preset processing on the video clip associated with at least one of the plurality of variety characters includes: determining a video clip associated with the second variety character in the target video, wherein the video clip comprises a first video clip whose video picture includes only the second variety character and a second video clip whose video picture includes the first variety character and the second variety character simultaneously; and playing at a fast forward speed when the target video is played to the first video clip and the second video clip.
Determining the video clip associated with the second variety character in the target video means determining the plurality of video clips in which the second variety character appears. These clips can be divided into a first video clip in which only the second variety character appears in the picture and a second video clip in which the first and second variety characters appear in the picture simultaneously; the user's interest in the second video clip is higher than in the first video clip.
As can be seen, in this example, once the first video clip and the second video clip in which the second variety character appears have been determined in the target video, when the target video is in the fast forward mode or the energy-saving mode, only the first video clip and the second video clip need to be played quickly; the other video clips of the target video can still be played at normal speed, so the user is prevented from missing the video clips he or she likes to watch.
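The clip classification just described can be sketched as follows. The per-segment sets of on-screen characters are hypothetical annotations; the patent does not specify how they are produced (e.g. by face recognition).

```python
def classify_segments(segments, first_char, second_char):
    """Split the low-interest character's segments into the two groups.

    `segments` is a list of (segment_id, characters_on_screen) pairs.
    Returns (first_clips, second_clips): first_clips show only the
    second (low-interest) variety character; second_clips show the
    first and second variety characters together.
    """
    first_clips, second_clips = [], []
    for seg_id, chars in segments:
        if second_char not in chars:
            continue  # segment unrelated to the low-interest character
        if first_char in chars:
            second_clips.append(seg_id)  # both characters on screen
        else:
            first_clips.append(seg_id)   # only the low-interest character
    return first_clips, second_clips

segments = [("s1", {"A"}), ("s2", {"B"}), ("s3", {"A", "B"})]
print(classify_segments(segments, first_char="A", second_char="B"))
# -> (['s2'], ['s3'])
```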
In one possible example, the playing at a fast forward speed when the target video is played to the first video clip and the second video clip comprises: determining a first speed multiplier corresponding to the fast forward mode; and playing the first video clip at the fast forward speed corresponding to the first speed multiplier, and playing the second video clip after the fast forward speed is reduced to a second speed multiplier, wherein the first speed multiplier is greater than the second speed multiplier, and the second speed multiplier is greater than 1.
Since not only the second variety character but also the first variety character appears in the second video clip, and the first variety character is a variety character the user is interested in, the user's interest in the second video clip is higher than in the first video clip. When the target video is played in the fast forward mode, the first video clip and the second video clip are the clips actually played fast.
A first speed multiplier corresponding to the fast forward mode is determined, and the first video clip is played at the fast forward speed corresponding to that multiplier. When the second video clip is about to be played, the fast forward speed is switched to a second speed multiplier smaller than the first; for example, if the first video clip is played at 2x, the speed can be reduced to 1.5x when the second video clip starts.
As can be seen, in this example, since the video clips associated with the second variety character are further divided into the first video clip and the second video clip, the first video clip is played at the first speed multiplier and the second video clip at the second speed multiplier during fast playback, so the flexibility of video playback is improved in combination with the user's interest in the video.
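The two-tier speed choice can be sketched as a simple dispatch. The 2x and 1.5x multipliers come from the example in the text; the segment-kind labels are assumptions for illustration.

```python
FIRST_MULTIPLIER = 2.0   # first speed multiplier: only the low-interest character
SECOND_MULTIPLIER = 1.5  # second speed multiplier: both characters on screen

def playback_speed(segment_kind):
    """Pick the playback multiplier for the current segment.

    `segment_kind` is 'first' (only the second variety character),
    'second' (first and second characters together), or 'other'
    (any remaining segment, which keeps normal speed).
    """
    if segment_kind == "first":
        return FIRST_MULTIPLIER
    if segment_kind == "second":
        return SECOND_MULTIPLIER
    return 1.0

for kind in ("first", "second", "other"):
    print(kind, playback_speed(kind))
```

In a real player the returned multiplier would be applied to the playback engine (for instance, a property analogous to a media element's playback rate) as segment boundaries are crossed.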
In one possible example, the method further comprises: when the target video is about to finish playing, searching over the network for a video set related to the first variety character, the video set comprising multiple types of videos related to the first variety character; and pushing the videos in the video set to the user when playback of the target video finishes.
When the target video is about to finish playing, a video set related to the first variety character can be searched for over the network. For example, when watching a video on Tencent Video, other videos featuring the main characters of the target video can be seen pushed in a menu.
As can be seen, in this example, when playback of the target video finishes, a video set related to the first variety character the user is interested in can be pushed to the user, and the video set contains multiple types of videos featuring the first variety character, which improves the user's video-watching experience.
Referring to fig. 3, fig. 3 is a schematic flowchart of a video processing method according to an embodiment of the present application, and the video processing method is applied to an electronic device including an eye tracking component. As shown in the figure, the video processing method includes:
S301, when detecting that a currently played target video is a variety video, the electronic device determines a plurality of variety characters in the target video.
S302, the electronic device determines, according to the eye tracking information, the gaze duration of the user on each of the plurality of variety characters within a preset duration.
S303, the electronic device determines the user's interest ranking for the plurality of variety characters according to the gaze duration corresponding to each variety character, and determines a first variety character whose gaze duration is greater than a second preset threshold and a second variety character whose gaze duration is less than the second preset threshold, wherein the first variety character represents a variety character of high interest among the plurality of variety characters, and the second variety character represents a variety character of low interest among the plurality of variety characters.
S304, when detecting that the playing mode of the target video is switched from the normal mode to the fast forward mode or the energy-saving mode, the electronic device determines a video clip associated with the second variety character in the target video, wherein the video clip comprises a first video clip whose video picture includes only the second variety character and a second video clip whose video picture includes the first variety character and the second variety character simultaneously.
S305, when the target video is played to the first video clip and the second video clip, the electronic device plays at a fast forward speed.
It can be seen that, in the embodiment of the present application, the electronic device first determines a plurality of variety characters in the target video when detecting that the currently played target video is a variety-type video, then acquires the user's eye tracking information through the eyeball tracking component and determines the user's interest level in each of the variety characters according to the eye tracking information, and finally performs preset processing on the video clip associated with at least one of the variety characters when detecting that the playing mode of the target video is switched from the normal mode to the fast forward mode or the energy-saving mode, where the preset processing includes fast playing and the interest level of each variety character in the at least one variety character is lower than a preset interest level. Because the electronic device acquires the user's eye tracking information through the eyeball tracking component while the user watches the variety video and determines the user's interest level in each variety character by analyzing that information, only the video clips associated with low-interest variety characters are played quickly when the video is played in the fast forward mode or the energy-saving mode; the user is thus prevented from missing the video clips associated with high-interest variety characters, and both the flexibility of video playback and the user's video-watching experience are improved.
In addition, according to the user's gaze duration on each variety character within the preset duration, the plurality of variety characters are divided into a first variety character of high interest and a second variety character of low interest, so that the user's interest level in each variety character can be determined.
In addition, once the first video clip and the second video clip in which the second variety character appears have been determined in the target video, when the target video is in the fast forward mode or the energy-saving mode, only the first video clip and the second video clip need to be played quickly; the other video clips of the target video can still be played at normal speed, so the user is prevented from missing favorite video clips.
Referring to fig. 4, fig. 4 is a schematic flow chart of a video processing method according to an embodiment of the present application, and the video processing method is applied to an electronic device including a touch display screen, where the touch display screen includes a first display area and a second display area, the first display area does not have a fingerprint identification function, and the second display area has a fingerprint identification function. As shown in the figure, the video processing method includes:
S401, when detecting that a currently played target video is a variety video, the electronic device determines a plurality of variety characters in the target video.
S402, the electronic device determines, according to the eye tracking information, the gaze duration of the user on each of the plurality of variety characters within a preset duration.
S403, the electronic device determines the user's interest ranking for the plurality of variety characters according to the gaze duration corresponding to each variety character, and determines a first variety character whose gaze duration is greater than a second preset threshold and a second variety character whose gaze duration is less than the second preset threshold, wherein the first variety character represents a variety character of high interest among the plurality of variety characters, and the second variety character represents a variety character of low interest among the plurality of variety characters.
S404, when detecting that the playing mode of the target video is switched from the normal mode to the fast forward mode or the energy-saving mode, the electronic device determines a video clip associated with the second variety character in the target video, wherein the video clip comprises a first video clip whose video picture includes only the second variety character and a second video clip whose video picture includes the first variety character and the second variety character simultaneously.
S405, when the target video is played to the first video clip and the second video clip, the electronic device plays at a fast forward speed.
S406, when the target video is about to finish playing, the electronic device searches over the network for a video set related to the first variety character, wherein the video set comprises multiple types of videos related to the first variety character.
S407, the electronic device pushes the videos in the video set to the user when playback of the target video finishes.
It can be seen that, in the embodiment of the present application, the electronic device first determines a plurality of variety characters in the target video when detecting that the currently played target video is a variety-type video, then acquires the user's eye tracking information through the eyeball tracking component and determines the user's interest level in each of the variety characters according to the eye tracking information, and finally performs preset processing on the video clip associated with at least one of the variety characters when detecting that the playing mode of the target video is switched from the normal mode to the fast forward mode or the energy-saving mode, where the preset processing includes fast playing and the interest level of each variety character in the at least one variety character is lower than a preset interest level. Because the electronic device acquires the user's eye tracking information through the eyeball tracking component while the user watches the variety video and determines the user's interest level in each variety character by analyzing that information, only the video clips associated with low-interest variety characters are played quickly when the video is played in the fast forward mode or the energy-saving mode; the user is thus prevented from missing the video clips associated with high-interest variety characters, and both the flexibility of video playback and the user's video-watching experience are improved.
In addition, according to the user's gaze duration on each variety character within the preset duration, the plurality of variety characters are divided into a first variety character of high interest and a second variety character of low interest, so that the user's interest level in each variety character can be determined.
In addition, once the first video clip and the second video clip in which the second variety character appears have been determined in the target video, when the target video is in the fast forward mode or the energy-saving mode, only the first video clip and the second video clip need to be played quickly; the other video clips of the target video can still be played at normal speed, so the user is prevented from missing favorite video clips.
In addition, when playback of the target video finishes, a video set related to the first variety character the user is interested in can be pushed to the user, and the video set contains multiple types of videos featuring the first variety character, which improves the user's video-watching experience.
Consistent with the embodiments shown in fig. 2, fig. 3, and fig. 4, please refer to fig. 5. Fig. 5 is a schematic structural diagram of an electronic device 500 provided in the embodiments of the present application; the electronic device 500 runs one or more application programs and an operating system. As shown in the figure, the electronic device 500 includes a processor 510, a memory 520, a communication interface 530, and one or more programs 521, where the one or more programs 521 are stored in the memory 520 and configured to be executed by the processor 510, and the one or more programs 521 include instructions for performing the following steps:
determining a plurality of variety characters in a target video when detecting that the currently played target video is a variety-type video;
acquiring eye tracking information of a user through the eyeball tracking component, and determining the user's interest level in each of the plurality of variety characters according to the eye tracking information; and
performing, when detecting that the playing mode of the target video is switched from a normal mode to a fast forward mode or an energy-saving mode, preset processing on a video clip associated with at least one of the plurality of variety characters, wherein the preset processing comprises fast playing, and the interest level of each variety character in the at least one variety character is lower than a preset interest level.
It can be seen that, in the embodiment of the present application, the electronic device first determines a plurality of variety characters in the target video when detecting that the currently played target video is a variety-type video, then acquires the user's eye tracking information through the eyeball tracking component and determines the user's interest level in each of the variety characters according to the eye tracking information, and finally performs preset processing on the video clip associated with at least one of the variety characters when detecting that the playing mode of the target video is switched from the normal mode to the fast forward mode or the energy-saving mode, where the preset processing includes fast playing and the interest level of each variety character in the at least one variety character is lower than a preset interest level. Because the electronic device acquires the user's eye tracking information through the eyeball tracking component while the user watches the variety video and determines the user's interest level in each variety character by analyzing that information, only the video clips associated with low-interest variety characters are played quickly when the video is played in the fast forward mode or the energy-saving mode; the user is thus prevented from missing the video clips associated with high-interest variety characters, and both the flexibility of video playback and the user's video-watching experience are improved.
In one possible example, in the determining of the user's interest level in each of the plurality of variety characters according to the eye tracking information, the instructions in the program are specifically configured to perform the following operations: acquiring a plurality of gaze points of the user on the video interface during playback of the target video, wherein a gaze point is a position point at which the gaze duration of the user's eyes is greater than a first preset threshold; determining the variety character corresponding to each of the plurality of gaze points, and determining the number of gaze points of each of the plurality of variety characters; and determining the user's interest level in each variety character according to the number of gaze points of that variety character.
In one possible example, the instructions in the program are specifically configured to perform the following operations: displaying, on the video interface, the obtained ranking of the user's interest level in each of the plurality of variety characters; and when the interest ranking is inaccurate, acquiring a calibration request for the eye tracking component input by the user, and calibrating the eye tracking component.
In one possible example, in the determining of the user's interest level in each of the plurality of variety characters according to the eye tracking information, the instructions in the program are specifically configured to perform the following operations: determining, according to the eye tracking information, the gaze duration of the user on each of the plurality of variety characters within a preset duration; and ranking the user's interest in the plurality of variety characters according to the gaze duration corresponding to each variety character, and determining a first variety character whose gaze duration is greater than a second preset threshold and a second variety character whose gaze duration is less than the second preset threshold, wherein the first variety character represents a variety character of high interest among the plurality of variety characters, and the second variety character represents a variety character of low interest among the plurality of variety characters.
In one possible example, in the performing of the preset processing on the video clip associated with at least one of the plurality of variety characters, the instructions in the program are specifically configured to perform the following operations: determining a video clip associated with the second variety character in the target video, wherein the video clip comprises a first video clip whose video picture includes only the second variety character and a second video clip whose video picture includes the first variety character and the second variety character simultaneously; and playing at a fast forward speed when the target video is played to the first video clip and the second video clip.
In one possible example, in the playing at a fast forward speed when the target video is played to the first video clip and the second video clip, the instructions in the program are specifically configured to perform the following operations: determining a first speed multiplier corresponding to the fast forward mode; and playing the first video clip at the fast forward speed corresponding to the first speed multiplier, and playing the second video clip after the fast forward speed is reduced to a second speed multiplier, wherein the first speed multiplier is greater than the second speed multiplier, and the second speed multiplier is greater than 1.
In one possible example, the instructions in the program are specifically configured to perform the following operations: when the target video is about to finish playing, searching over the network for a video set related to the first variety character, the video set comprising multiple types of videos related to the first variety character; and pushing the videos in the video set to the user when playback of the target video finishes.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above-mentioned functions. Those of skill in the art would readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the electronic device may be divided into functional units according to the above method examples; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present application is schematic and is only a logical function division; there may be other division manners in actual implementation.
Fig. 6 is a block diagram of functional units of an apparatus 600 according to an embodiment of the present application. The video processing apparatus 600 is applied to an electronic device, and the video processing apparatus 600 includes a processing unit 601 and a communication unit 602, where:
the processing unit 601 is configured to determine a plurality of anarchia characters in a target video when the target video currently played is detected to be an anarchia video; the communication unit 602 is used for notifying the eyeball tracking component to acquire eye tracking information of a user and determining the interest degree of the user on each of the multiple art figures according to the eye tracking information; and when the situation that the playing mode of the target video is switched from the normal mode to the fast-forward mode or the energy-saving mode is detected, performing preset processing on a video clip associated with at least one of the multiple heddle characters, wherein the preset processing comprises fast playing, and the interestingness of each heddle character in the at least one heddle character is lower than the preset interestingness.
It can be seen that, in the embodiment of the application, an electronic device first determines a plurality of synthesis characters in a target video when the target video currently played is detected to be a synthesis type video, then acquires eye tracking information of a user through an eyeball tracking component, determines a user interest level of each of the synthesis characters according to the eye tracking information, and finally performs preset processing on a video segment associated with at least one of the synthesis characters when it is detected that a playing mode of the target video is switched from a normal mode to a fast forward mode or an energy saving mode, where the preset processing includes fast playing, and the interest level of each of the synthesis characters in the at least one synthesis character is lower than a preset interest level. The eye tracking information of the user in the process of watching the video is acquired by the electronic equipment through the eyeball tracking component when the user watches the variety video, and the interest degree of the user for each variety character is determined after the eye tracking information is analyzed, so that when the video is played in a fast-forward mode or an energy-saving mode, only the video clip associated with the variety character with low interest degree can be played quickly, the user can be prevented from missing the video clip associated with the variety character with high interest degree, and the flexibility of video playing and the video watching experience of the user are improved.
In one possible example, in the aspect of determining the interest level of the user in each of the multiple art figures according to the eye tracking information, the processing unit 601 is specifically configured to: acquiring a plurality of fixation points of a video interface aiming at the target video playing process of a user, wherein the fixation points are position points of the eye sight of the user, and the fixation time of the eye sight of the user is larger than a first preset threshold value; the comprehensive art figure is used for determining the comprehensive art figure corresponding to each gazing point in the plurality of gazing points, and the number of the gazing points of each comprehensive art figure in the plurality of comprehensive art figures is determined; and determining the interest degree of the user for each comprehensive art figure according to the number of the fixation points of each comprehensive art figure.
In one possible example, the processing unit 601 is further configured to: display, on the video interface, the resulting ranking of the user's interest levels in the plurality of variety show characters; and, when the interest ranking is inaccurate, acquire a correction request for the eye tracking component input by the user, so as to calibrate the eye tracking component.
In one possible example, in the aspect of determining the user's interest level in each of the plurality of variety show characters according to the eye tracking information, the processing unit 601 is specifically configured to: determine, according to the eye tracking information, the gaze duration of the user on each variety show character within a preset period; rank the user's interest levels in the plurality of variety show characters according to the gaze duration corresponding to each character; and determine a first variety show character whose gaze duration is greater than a second preset threshold and a second variety show character whose gaze duration is less than the second preset threshold, where the first variety show character represents a high-interest character among the plurality of variety show characters and the second variety show character represents a low-interest character among them.
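The duration-based ranking and the first/second split can be sketched as below; the function and variable names are illustrative, and ties exactly at the threshold are left unclassified, which the patent does not specify.

```python
def classify_by_gaze_duration(durations, second_threshold):
    """Rank characters by accumulated gaze duration, then split them into a
    high-interest ("first") group and a low-interest ("second") group.

    durations: dict mapping character name -> gaze seconds in the preset period
    second_threshold: the "second preset threshold" on gaze duration
    """
    # Interest ranking: longest-watched character first.
    ranking = sorted(durations, key=durations.get, reverse=True)
    first = [c for c in ranking if durations[c] > second_threshold]
    second = [c for c in ranking if durations[c] < second_threshold]
    return ranking, first, second
```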
In a possible example, in the aspect of performing preset processing on the video segments associated with at least one of the plurality of variety show characters, the processing unit 601 is specifically configured to: determine the video segments associated with the second variety show character in the target video, the segments comprising a first video segment whose video pictures contain only the second variety show character and a second video segment whose video pictures contain both the first variety show character and the second variety show character; and play the target video at a fast forward speed when playback reaches the first video segment and the second video segment.
In a possible example, in the aspect of playing the target video at the fast forward speed when playback reaches the first video segment and the second video segment, the processing unit 601 is specifically configured to: determine a first speed multiple corresponding to the fast forward mode; play the first video segment at the fast forward speed corresponding to the first speed multiple; and play the second video segment after reducing the fast forward speed to a second speed multiple, where the first speed multiple is greater than the second speed multiple and the second speed multiple is greater than 1.
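The two-tier speed rule can be captured in a few lines. This is a sketch under the stated constraint (first multiple > second multiple > 1); the segment-kind labels are illustrative.

```python
def speed_for_segment(segment_kind, first_multiple, second_multiple):
    """Pick a fast forward multiple for a segment.

    "first" segments (only the low-interest character on screen) play at the
    larger multiple; "second" segments (both characters on screen) slow down
    to the smaller multiple, which must still exceed normal (1x) speed.
    """
    assert first_multiple > second_multiple > 1
    return first_multiple if segment_kind == "first" else second_multiple
```

Slowing to the second multiple when the high-interest character shares the screen keeps some of their content watchable while still saving time.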
In one possible example, the processing unit 601 is further configured to: when playback of the target video is about to end, search the network for a video collection related to the first variety show character, the collection comprising multiple types of videos related to the first variety show character; and push the videos in the collection to the user when playback of the target video ends.
The electronic device may further include a storage unit 603; the processing unit 601 and the communication unit 602 may be a controller or a processor, and the storage unit 603 may be a memory.
Embodiments of the present application also provide a computer storage medium that stores a computer program for electronic data exchange, the computer program causing a computer to execute some or all of the steps of any of the methods described in the above method embodiments, where the computer includes a mobile terminal.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising a mobile terminal.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated into one control unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, or a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (9)

1. A video processing method, applied to an electronic device, wherein the electronic device comprises an eye tracking component, the method comprising:
when detecting that a currently played target video is a variety video, determining a plurality of variety show characters in the target video;
acquiring eye tracking information of a user through the eye tracking component, and determining the user's interest level in each of the plurality of variety show characters according to the eye tracking information; and when detecting that a playing mode of the target video is switched from a normal mode to a fast forward mode or an energy saving mode, performing preset processing on a video segment associated with at least one of the plurality of variety show characters, wherein the preset processing comprises fast playing, and the interest level in each character of the at least one variety show character is lower than a preset interest level;
wherein,
the performing preset processing on the video segment associated with at least one of the plurality of variety show characters comprises: determining video segments associated with a second variety show character in the target video, the video segments comprising a first video segment whose video pictures contain only the second variety show character and a second video segment whose video pictures contain both a first variety show character and the second variety show character; and playing the target video at a fast forward speed when playback reaches the first video segment and the second video segment, wherein the first variety show character represents a high-interest character among the plurality of variety show characters, and the second variety show character represents a low-interest character among the plurality of variety show characters;
wherein,
the playing the target video at the fast forward speed when playback reaches the first video segment and the second video segment comprises: determining a first speed multiple corresponding to the fast forward mode; playing the first video segment at the fast forward speed corresponding to the first speed multiple; and playing the second video segment after reducing the fast forward speed to a second speed multiple, wherein the first speed multiple is greater than the second speed multiple, and the second speed multiple is greater than 1.
2. The method of claim 1, wherein the determining the user's interest level in each of the plurality of variety show characters according to the eye tracking information comprises:
acquiring a plurality of gaze points of the user on the video interface during playback of the target video, wherein a gaze point is a position point at which the user's gaze dwells on the screen for longer than a first preset threshold;
determining the variety show character corresponding to each of the plurality of gaze points, and determining the number of gaze points of each of the plurality of variety show characters;
and determining the user's interest level in each variety show character according to the number of gaze points of that character.
3. The method of claim 1, further comprising:
displaying, on a video interface, the resulting ranking of the user's interest levels in the plurality of variety show characters;
and when the interest ranking is inaccurate, acquiring a correction request for the eye tracking component input by the user, and calibrating the eye tracking component.
4. The method of claim 2, further comprising:
displaying, on a video interface, the resulting ranking of the user's interest levels in the plurality of variety show characters;
and when the interest ranking is inaccurate, acquiring a correction request for the eye tracking component input by the user, and calibrating the eye tracking component.
5. The method of claim 1, wherein the determining the user's interest level in each of the plurality of variety show characters according to the eye tracking information comprises:
determining, according to the eye tracking information, the gaze duration of the user on each of the plurality of variety show characters within a preset period;
and determining the user's interest ranking of the plurality of variety show characters according to the gaze duration corresponding to each character, and determining a first variety show character whose gaze duration is greater than a second preset threshold and a second variety show character whose gaze duration is less than the second preset threshold.
6. The method according to any one of claims 1 to 5, further comprising:
when playback of the target video is about to end, searching the network for a video collection related to the first variety show character, wherein the video collection comprises multiple types of videos related to the first variety show character;
and when playback of the target video ends, pushing the videos in the video collection to the user.
7. A video processing apparatus, applied to an electronic device, wherein the electronic device comprises an eye tracking component, and the video processing apparatus comprises a processing unit and a communication unit, wherein:
the processing unit is configured to determine a plurality of variety show characters in a currently played target video when detecting that the target video is a variety video; the communication unit is configured to notify the eye tracking component to acquire eye tracking information of a user; and the processing unit is further configured to determine the user's interest level in each of the plurality of variety show characters according to the eye tracking information, and to perform preset processing on a video segment associated with at least one of the plurality of variety show characters when detecting that a playing mode of the target video is switched from a normal mode to a fast forward mode or an energy saving mode, wherein the preset processing comprises fast playing, and the interest level in each character of the at least one variety show character is lower than a preset interest level;
wherein,
the performing preset processing on the video segment associated with at least one of the plurality of variety show characters comprises: determining video segments associated with a second variety show character in the target video, the video segments comprising a first video segment whose video pictures contain only the second variety show character and a second video segment whose video pictures contain both a first variety show character and the second variety show character; and playing the target video at a fast forward speed when playback reaches the first video segment and the second video segment, wherein the first variety show character represents a high-interest character among the plurality of variety show characters, and the second variety show character represents a low-interest character among the plurality of variety show characters;
wherein,
the playing the target video at the fast forward speed when playback reaches the first video segment and the second video segment comprises: determining a first speed multiple corresponding to the fast forward mode; playing the first video segment at the fast forward speed corresponding to the first speed multiple; and playing the second video segment after reducing the fast forward speed to a second speed multiple, wherein the first speed multiple is greater than the second speed multiple, and the second speed multiple is greater than 1.
8. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-6.
9. A computer-readable storage medium, characterized in that,
a computer program for electronic data exchange is stored, wherein the computer program is executable by a processor to cause a computer to implement the method according to any of claims 1-6.
CN201910503421.9A 2019-06-11 2019-06-11 Video processing method and related device Active CN110248241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910503421.9A CN110248241B (en) 2019-06-11 2019-06-11 Video processing method and related device


Publications (2)

Publication Number Publication Date
CN110248241A CN110248241A (en) 2019-09-17
CN110248241B true CN110248241B (en) 2021-06-04

Family

ID=67886574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910503421.9A Active CN110248241B (en) 2019-06-11 2019-06-11 Video processing method and related device

Country Status (1)

Country Link
CN (1) CN110248241B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858399A (en) * 2019-01-15 2019-06-07 上海理工大学 A kind of hobby method of discrimination and judgement system watched attentively based on expression in the eyes
CN112770182B (en) * 2019-11-05 2022-07-29 腾讯科技(深圳)有限公司 Video playing control method, device, equipment and storage medium
CN110809188B (en) * 2019-12-03 2020-12-25 珠海格力电器股份有限公司 Video content identification method and device, storage medium and electronic equipment
CN111447239B (en) * 2020-04-13 2023-07-04 抖音视界有限公司 Video stream playing control method, device and storage medium
CN111930280A (en) * 2020-07-27 2020-11-13 联想(北京)有限公司 Progress change response method, system and computer storage medium
CN112004117B (en) * 2020-09-02 2023-03-24 维沃移动通信有限公司 Video playing method and device
CN115484498A (en) * 2021-05-31 2022-12-16 华为技术有限公司 Method and device for playing video

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101588493A (en) * 2008-05-23 2009-11-25 波尔图科技有限责任公司 System and method for adaptive segment prefetching of streaming media
CN102710991A (en) * 2011-03-04 2012-10-03 索尼公司 Information processing apparatus, information processing method, and program
CN102934458A (en) * 2011-02-04 2013-02-13 松下电器产业株式会社 Interest estimation device and interest estimation method
CN103621104A (en) * 2011-06-17 2014-03-05 微软公司 Interest-based video streams
CN104423824A (en) * 2013-09-02 2015-03-18 联想(北京)有限公司 Information processing method and device
CN104662894A (en) * 2012-07-31 2015-05-27 谷歌公司 Customized video
CN104731335A (en) * 2015-03-26 2015-06-24 联想(北京)有限公司 Played content adjusting method and electronic equipment
WO2015148276A1 (en) * 2014-03-25 2015-10-01 Microsoft Technology Licensing, Llc Eye tracking enabled smart closed captioning
CN107810638A (en) * 2015-06-24 2018-03-16 汤姆逊许可公司 By the transmission for skipping redundancy fragment optimization order content
CN109155055A (en) * 2016-04-28 2019-01-04 夏普株式会社 Region-of-interest video generation device
CN109388232A (en) * 2017-08-09 2019-02-26 宏碁股份有限公司 Visual utility analysis method and related eyeball tracking device and system


Also Published As

Publication number Publication date
CN110248241A (en) 2019-09-17

Similar Documents

Publication Publication Date Title
CN110248241B (en) Video processing method and related device
CN110221734B (en) Information display method, graphical user interface and terminal
CN107801096B (en) Video playing control method and device, terminal equipment and storage medium
CN110262659B (en) Application control method and related device
US8755782B2 (en) Mobile terminal and method of controlling operation of the mobile terminal
CN111615003B (en) Video playing control method, device, equipment and storage medium
CN110245250A (en) Image processing method and relevant apparatus
CN110231963B (en) Application control method and related device
CN106170094B (en) Live broadcasting method and device for panoramic video
CN110308860B (en) Screen capturing method and related device
CN110780742B (en) Eyeball tracking processing method and related device
EP3461136B1 (en) Video playing method and device
US10102830B2 (en) Method for adjusting screen displaying direction and terminal
CN110650294A (en) Video shooting method, mobile terminal and readable storage medium
WO2016110752A1 (en) Control method and control apparatus for electronic equipment and electronic equipment
WO2016200721A1 (en) Contextual video content adaptation based on target device
CN114302088A (en) Frame rate adjusting method and device, electronic equipment and storage medium
US20170161871A1 (en) Method and electronic device for previewing picture on intelligent terminal
CN108401173A (en) Interactive terminal, method and the computer readable storage medium of mobile live streaming
CN110021062A (en) A kind of acquisition methods and terminal, storage medium of product feature
CN111770374A (en) Video playing method and device
CN107329568A (en) Method of adjustment, device and electronic equipment that panorama is played
CN109284060A (en) Display control method and relevant apparatus
CN112019686A (en) Display method and device and electronic equipment
CN115237314B (en) Information recommendation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant