CN111615003B - Video playing control method, device, equipment and storage medium


Info

Publication number
CN111615003B
Authority
CN
China
Prior art keywords
viewing, video, target video, watching, state information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010476759.2A
Other languages
Chinese (zh)
Other versions
CN111615003A (en)
Inventor
Gao Meng (高萌)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010476759.2A
Publication of CN111615003A
Application granted
Publication of CN111615003B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432Content retrieval operation from a local storage medium, e.g. hard-disk
    • H04N21/4325Content retrieval operation from a local storage medium, e.g. hard-disk by playing back content from the storage medium
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program

Abstract

The embodiments of the present application disclose a video playing control method, device, equipment, and storage medium. The method includes: during playing of a target video, obtaining viewing state information of a viewing object, the viewing state information being used to characterize whether the viewing object is viewing the target video; when it is determined from the viewing state information that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing, marking a starting time point in the target video based on the current playing progress of the target video; and when it is determined from the viewing state information that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing, providing a video playback service for the viewing object, the video playback service including playing the target video based on the starting time point. The method can quickly and accurately provide a video playback service for the user, improving the user experience.

Description

Video playing control method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video playing control method, apparatus, device, and storage medium.
Background
With the rapid development of internet technology, network video platforms have become one of the main media through which users watch videos. Using devices such as mobile phones, computers, and smart televisions, users can watch all kinds of videos, such as dramas, movies, and variety shows, through a network video platform.
In real life, the following situation often occurs: while watching a video, the user leaves partway through without pausing the video currently being played, and upon returning to continue watching, the user has to adjust the playing progress of the video back to where it was before leaving in order to watch the missed video content. At present, the user handles this situation by dragging the playing progress bar or by using the fast-rewind function to move the playing progress back to the point reached before leaving.
However, in the above implementation, the user has to adjust the playing progress manually, and in many cases must drag the progress bar repeatedly, or switch back and forth between fast-rewind and fast-forward several times, before the playing progress reaches the desired position. It can be seen that, with such implementations, it is generally difficult to quickly and accurately return the playing progress to the point where the user left, which falls short in both adjustment efficiency and user experience.
Disclosure of Invention
The embodiments of the present application provide a video playing control method, device, equipment, and storage medium, which can quickly and accurately provide a video playback service for users and improve the user experience.
In view of this, a first aspect of the present application provides a video play control method, the method including:
in the process of playing the target video, obtaining the viewing state information of a viewing object; the viewing state information is used for representing whether the viewing object is viewing the target video;
when the viewing state of the viewing object for the target video is changed from viewing to non-viewing according to the viewing state information, marking a starting time point in the target video based on the current playing progress of the target video;
providing a video playback service for the viewing object when it is determined that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing according to the viewing state information; the video playback service includes playing the target video based on the starting time point.
A second aspect of the present application provides a video play control apparatus, the apparatus comprising:
a state information acquisition module, configured to acquire the viewing state information of a viewing object in the process of playing a target video, where the viewing state information is used for representing whether the viewing object is viewing the target video;
a start point marking module, configured to mark a start point in the target video based on a current playing progress of the target video when it is determined that a viewing state of the viewing object with respect to the target video is changed from viewing to non-viewing according to the viewing state information;
a playback service module, configured to provide a video playback service for the viewing object when it is determined that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing according to the viewing state information; the video playback service includes playing the target video based on the starting time point.
A third aspect of the present application provides a device, the device comprising a processor and a memory:
the memory is used for storing a computer program;
the processor is configured to execute, according to the computer program, the steps of the video playing control method described in the first aspect above.
A fourth aspect of the present application provides a computer-readable storage medium storing a computer program for executing the steps of the video play control method of the first aspect described above.
A fifth aspect of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the video playback control method of the first aspect described above.
From the above technical solutions, the embodiment of the present application has the following advantages:
the embodiment of the present application provides a video playing control method that can obtain the viewing state information of a viewing object in real time during playing of a target video and, according to the viewing state information, monitor whether the viewing state of the viewing object with respect to the target video changes. When it is detected that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing, a starting time point is marked in the target video based on the current playing progress of the target video; when it is detected that the viewing state has changed from not viewing to viewing, a video playback service can be provided for the viewing object based on the above starting time point. Compared with the manual progress-adjustment implementations in the related art, the method provided by the embodiment of the present application can automatically mark the starting time point of the video content missed by the viewing object based on changes in the viewing object's viewing state with respect to the target video, and provide the missed video content for the viewing object based on that starting time point. The whole process requires no manual adjustment of the playing progress by the viewing object, so the viewing object can watch the missed video content quickly and accurately, which greatly improves the user experience.
Drawings
Fig. 1 is an application scenario schematic diagram of a video playing control method provided by an embodiment of the present application;
fig. 2 is a flow chart of a video playing control method according to an embodiment of the present application;
fig. 3 is a schematic diagram of implementation of a video playback service according to an embodiment of the present application;
fig. 4 is a schematic diagram of an implementation principle of a video playback process according to an embodiment of the present application;
fig. 5 is a schematic diagram of an implementation principle of a video recommendation process according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a first video playing control device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a second video playing control device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a third video playing control device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a fourth video playing control device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a fifth video playing control device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a sixth video playing control device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a seventh video playing control device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In order to make the present application better understood by those skilled in the art, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the related art, if a user leaves while a video is playing and, after returning, wants to watch the video content that was played during the absence, the user has to manually drag the playing progress bar or use the fast-rewind function to adjust the playing progress of the video back to the point reached before leaving. These ways of manually adjusting the playing progress often make it difficult to quickly and accurately reach the position the user wants, and the user experience is poor.
To address these problems in the related art, the embodiment of the present application provides a video playing control method that can help a user quickly and accurately watch the video content missed while away, improving the user experience.
Specifically, in the video playing control method provided by the embodiment of the application, in the process of playing the target video, the viewing state information of the viewing object is obtained in real time, and the viewing state information can represent whether the viewing object is viewing the target video; when the viewing state of the viewing object for the target video is changed from viewing to non-viewing according to the viewing state information, marking a starting time point in the target video based on the current playing progress of the target video; when it is determined that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing according to the viewing state information, a video playback service is provided for the viewing object, the video playback service including a service of playing the target video based on the start time point.
Compared with the related-art implementation in which the user adjusts the playing progress manually, the method provided by the embodiment of the present application can monitor, according to the viewing state information of the viewing object, whether the viewing state of the viewing object with respect to the target video changes. When it is detected that the viewing state changes from viewing to not viewing (for example, when it is detected that the user has left), the starting time point of the missed video content is marked based on the current playing progress of the target video; when it is detected that the viewing state changes from not viewing to viewing (for example, when it is detected that the user has returned), a video playback service is automatically provided for the viewing object based on the marked starting time point. The whole process requires no manual adjustment of the playing progress by the viewing object and ensures that the missed video content is provided for the viewing object quickly and accurately, thereby greatly improving the video viewing experience.
It should be understood that the execution subject of the video playing control method provided by the embodiment of the present application may be an electronic device, such as a terminal device or a server. The terminal device may be a device with a video playing function, such as a smartphone, a computer, a smart television, a tablet computer, or a personal digital assistant (PDA). The server may be a server for providing a video playing service, and may be an application server or a Web server; in actual deployment, it may be an independent server, a cluster server, or a cloud server.
To facilitate understanding of the video playing control method provided by the embodiment of the present application, an application scenario of the method is described below, taking a terminal device as the execution subject as an example.
Referring to fig. 1, fig. 1 is an application scenario schematic diagram of a video playing control method according to an embodiment of the present application. As shown in fig. 1, the application scenario includes a terminal device 110 (in the scenario shown in fig. 1, the terminal device 110 is, by way of example, a smart television). The terminal device 110 runs a video playing application, and the viewing object may watch the target video through the video playing application running on the terminal device 110. Further, the terminal device 110 is also provided with a device for collecting viewing state information; for example, assuming that the viewing state information is facial feature information of a viewing object, the terminal device 110 is provided with an image collection device such as a camera, through which it can collect the facial feature information of the viewing object.
In practical applications, if the terminal device 110 detects that the video playing application running therein plays the target video, the acquisition device of the viewing state information may be triggered accordingly, and the viewing state information of the viewing object that views the target video is continuously acquired in real time, where the viewing state information can represent whether the viewing object is viewing the target video. Taking the viewing state information as the facial feature information as an example, the terminal device 110 may continuously collect, in real time, the facial feature information of the viewing object as the viewing state information of the viewing object using the camera provided thereon during the playing of the target video.
Further, the terminal device 110 may monitor the viewing state of the viewing object with respect to the target video according to the viewing state information collected by the terminal device, and when it is monitored that the viewing state of the viewing object with respect to the target video is changed from viewing to non-viewing according to the viewing state information, may mark a start time point in the target video, that is, a start time point of the video content missed by the viewing object, based on the current playing progress of the target video.
Thereafter, if the terminal device 110 monitors that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing according to the viewing state information collected by the terminal device, a corresponding video playback service may be provided for the viewing object based on the starting time point marked in the target video before, for example, the user may be prompted to select to play back the missed video content in the leaving process, and when the user is detected to select to play back the missed video content, the target video is played from the marked starting time point.
It should be understood that the application scenario shown in fig. 1 is merely an example. In practical applications, besides being executed by the terminal device 110, the video playing control method provided by the embodiment of the present application may be executed independently by a server; for example, the terminal device playing the video may upload the viewing state information it collects to the server in real time, and the server then provides a corresponding video playback service for the viewing object based on the video playing control method provided by the embodiment of the present application. The method may also be executed cooperatively by the terminal device and the server; for example, the terminal device monitors the viewing state of the viewing object according to the collected viewing state information, and when a change in the viewing state is detected, the server marks the starting time point or provides the corresponding video playback service. The present application does not limit the application scenario of the video playing control method.
The video playing control method provided by the application is described in detail by the following embodiments.
Referring to fig. 2, fig. 2 is a flowchart of a video playing control method according to an embodiment of the present application. For convenience of description, the following embodiments will be described taking a terminal device as an execution subject. As shown in fig. 2, the video play control method includes the steps of:
step 201: in the process of playing the target video, obtaining the viewing state information of a viewing object; the viewing state information is used to characterize whether the viewing object is viewing the target video.
In practical application, a user can select a video to be watched as a target video through a video playing application running in the terminal equipment, and control the video playing application to play the target video; the target video herein may be any video of any form and any content, and the present application is not limited to this target video. In the process of playing the target video, the terminal equipment can continuously acquire the viewing state information of the viewing object of the target video in real time through corresponding viewing state information acquisition devices, such as cameras, infrared sensors and the like.
It should be noted that the viewing state information in the embodiment of the present application refers to information capable of reflecting whether the viewing object is viewing the target video; according to this information, the viewing state of the viewing object with respect to the target video can be monitored, for example, whether the viewing object has left and is no longer viewing the target video, or whether the viewing object has returned to continue viewing the target video, and so on.
For example, the above viewing state information may include facial feature information of the viewing object. Specifically, the terminal device may collect an image of a preset range (generally, the range from which the screen of the terminal device can be viewed) through the camera, and, when the collected image includes the face of a viewing object, extract the facial feature information of the viewing object from the image by using a face recognition technology as the viewing state information of the viewing object.
In one possible implementation, the terminal device may track the line of sight of the viewing object according to the facial feature information of the viewing object, and determine whether the viewing object is viewing the target video according to the line of sight. Specifically, the terminal device may determine the line-of-sight direction of the viewing object according to the facial feature information; if the line-of-sight direction indicates that the line of sight of the viewing object falls on the screen of the terminal device, the viewing object may be considered to be currently viewing the target video played by the terminal device, and if the line-of-sight direction indicates that the line of sight does not fall on the screen of the terminal device, the viewing object may be considered not to be currently viewing the target video played by the terminal device.
In another possible implementation, the terminal device may determine the face orientation of the viewing object according to the facial feature information of the viewing object, and determine whether the viewing object is viewing the target video based on the face orientation. Specifically, the terminal device may determine the orientation angle of the face of the viewing object according to the facial feature information; if the orientation angle is within a preset angle range, the viewing object may be considered to be currently viewing the target video, and if the orientation angle is outside the preset angle range, the viewing object may be considered not to be currently viewing the target video. It should be appreciated that the preset angle range may be determined according to the configuration of the terminal device's screen and/or the viewing habits of users, and the relationship between the preset angle range and the orientation angle of the viewing object's face can serve as a measure of whether the viewing object is looking at the screen of the terminal device, that is, whether the viewing object is viewing the target video.
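By way of illustration only, the face-orientation variant might look like the following sketch in Python. The yaw and pitch angles are assumed to come from an upstream head-pose estimator (this application does not specify one), and the angle bounds are placeholder values standing in for a preset angle range chosen per the screen configuration and viewing habits.

```python
# A minimal sketch of the face-orientation check, assuming a head-pose
# estimator upstream supplies yaw/pitch angles in degrees. The bounds are
# illustrative placeholders; per the text, a real preset angle range would
# depend on the screen configuration and users' viewing habits.
YAW_RANGE = (-35.0, 35.0)    # left/right rotation considered "facing the screen"
PITCH_RANGE = (-20.0, 20.0)  # up/down rotation considered "facing the screen"

def is_viewing(yaw_deg: float, pitch_deg: float) -> bool:
    """True if the face orientation angle lies within the preset angle range."""
    return (YAW_RANGE[0] <= yaw_deg <= YAW_RANGE[1]
            and PITCH_RANGE[0] <= pitch_deg <= PITCH_RANGE[1])
```

The gaze-based variant would be analogous, testing whether an estimated gaze point lies within the screen bounds rather than testing orientation angles.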
It should be understood that, in practical applications, in the case where the viewing state information includes facial feature information, the terminal device may further determine other relevant information according to the facial feature information of the viewing object, and measure, based on the relevant information, whether the viewing object is viewing the target video by adopting a corresponding processing manner, and the present application does not make any limitation on an implementation manner of measuring, based on the facial feature information, whether the viewing object is viewing the target video.
For example, the viewing state information may include position information of the viewing object. Specifically, the terminal device may collect an image for a preset range (generally, a range in which a screen of the terminal device can be viewed) through the camera, and determine position information of a viewing object based on the collected image, as viewing state information of the viewing object; and/or the terminal device may detect the position information of the viewing object through the infrared sensor as viewing state information of the viewing object.
In one possible implementation manner, the terminal device may determine whether the viewing object is viewing the target video according to a relationship between the position information of the viewing object and a preset viewing range. Specifically, if the position information of the viewing object indicates that the viewing object is within the preset viewing range, the viewing object may be considered to be currently viewing the target video, and if the position information of the viewing object indicates that the viewing object is outside the preset viewing range, the viewing object may be considered to be not currently viewing the target video. It should be understood that the preset viewing range may be determined according to factors such as an arrangement position of the terminal device, a screen configuration of the terminal device, and a viewing habit of the user.
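Again purely as an illustration, a preset viewing range and its membership test might be sketched as follows; the coordinate convention (x as lateral offset from the screen centre, z as distance from the screen, in metres) and the bounds are assumptions, since the text leaves them to the device's placement, screen configuration, and viewing habits.

```python
from dataclasses import dataclass

@dataclass
class PresetViewingRange:
    """Illustrative preset viewing range in front of the screen (metres).

    Hypothetical coordinate system: x is the lateral offset from the screen
    centre, z is the distance from the screen plane.
    """
    max_distance: float = 4.0
    half_width: float = 2.5

    def contains(self, x: float, z: float) -> bool:
        # Inside the range only if close enough and not too far off to the side.
        return 0.0 <= z <= self.max_distance and abs(x) <= self.half_width
```

Each position sample then yields an instantaneous "is viewing" boolean via contains, which can feed the dwell-time logic described later.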
It should be understood that, in practical applications, besides facial feature information and position information of the viewing object may be used as viewing state information, other information capable of reflecting whether the viewing object is looking at the screen of the terminal device may be used as viewing state information, and the present application is not limited in detail herein.
It should be noted that, in a scene where multiple viewing objects jointly view a target video in the same space, in a process of playing the target video, the terminal device may continuously collect respective viewing state information of the multiple viewing objects in real time, that is, collect corresponding viewing state information for each viewing object. For example, in a scenario where a plurality of users watch a target video played on a smart television together in a living room, the smart television may collect viewing state information of each user accordingly while playing the target video.
Optionally, in the case where the viewing state information includes facial feature information, the terminal device may further identify the identity of the viewing object according to the collected viewing state information, and determine the viewing object's video account on the video playing platform. Specifically, the video account corresponding to the facial feature information in the viewing state information may be looked up in a target mapping relationship, which records the correspondence between video accounts and facial feature information, and used as the video account of the viewing object.
In a specific implementation, the viewing object may enter its own facial feature information when registering an account on the video playing platform, and the server of the video playing platform correspondingly constructs the correspondence between the facial feature information and the registered video account and stores it in the target mapping relationship. Alternatively, when the viewing object uses a video playing application provided by the video playing platform, the application may prompt the viewing object to enter its own facial feature information; after detecting that the viewing object has entered its facial feature information, the application transmits it to the server of the video playing platform, and the server correspondingly constructs the correspondence between the facial feature information and the video account currently logged in to the video playing application and stores it in the target mapping relationship.
When the terminal device plays the target video, it may collect the facial feature information of the viewing object as the viewing state information and transmit it to the server of the video playing platform; the server may look up, in the target mapping relationship, the video account corresponding to that facial feature information, and then determine that video account to be the video account of the viewing object.
It should be understood that, in practical application, the server of the video playing platform may also issue the stored target mapping relationship to the terminal device, and the terminal device searches the video account corresponding to the collected facial feature information based on the target mapping relationship, as the video account of the viewing object. The application is not limited in any way herein to the implementation of determining a video account number for a viewing object.
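For illustration, the target mapping relationship can be pictured as a lookup from enrolled facial features to account identifiers. The sketch below assumes facial feature information arrives as a numeric embedding vector; the embedding model, similarity threshold, and helper names are hypothetical and not part of this application.

```python
import numpy as np

# Hypothetical target mapping relationship: video account id -> enrolled
# face embedding (unit-normalized). A real platform would persist this on
# the server; an in-memory dict keeps the sketch self-contained.
target_mapping: dict[str, np.ndarray] = {}

def enroll(account_id: str, embedding: np.ndarray) -> None:
    """Store the correspondence between an account and its facial features."""
    target_mapping[account_id] = embedding / np.linalg.norm(embedding)

def lookup_account(embedding: np.ndarray, threshold: float = 0.6) -> str | None:
    """Return the best-matching account, or None if nothing is close enough."""
    probe = embedding / np.linalg.norm(embedding)
    best_id, best_sim = None, threshold
    for account_id, enrolled in target_mapping.items():
        sim = float(probe @ enrolled)  # cosine similarity of unit vectors
        if sim > best_sim:
            best_id, best_sim = account_id, sim
    return best_id
```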
In a scene that a plurality of viewing objects commonly view a target video in the same space, the terminal device can determine respective video accounts of the plurality of viewing objects according to respective viewing state information (including facial feature information) of the plurality of viewing objects in the above manner, and further distinguish and record the viewing state of each viewing object on the target video based on the video accounts.
It should be noted that, when the execution subject of the video playing control method provided by the embodiment of the present application is a server, the server may obtain, in real time, the viewing state information of the viewing object uploaded by the terminal device. For example, the server may obtain the image collected by the terminal device in real time and, using a face recognition technology, determine the facial feature information of the viewing object from the image as the viewing state information; as another example, the server may obtain the image or infrared sensing information collected by the terminal device in real time and determine the position information of the viewing object from it as the viewing state information.
Step 202: when it is determined that the viewing state of the viewing object with respect to the target video is changed from viewing to non-viewing according to the viewing state information, a start time point is marked in the target video based on the current playing progress of the target video.
In the process of playing the target video, the terminal device may obtain and analyze the collected viewing state information of the viewing object in real time, and monitor the viewing state of the viewing object with respect to the target video in real time based on that information. When the terminal device determines, according to the viewing state information, that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing, it places a mark in the target video based on the playing progress of the target video at that moment; that is, it marks in the target video the starting time point of the video content missed by the viewing object.
It should be understood that the viewing state information collected by the terminal device generally reflects the instantaneous viewing state of the viewing object, and such instantaneous information alone is often unable to objectively and accurately reflect whether the viewing state of the viewing object with respect to the target video has changed. Taking the viewing state information being the facial feature information of the viewing object as an example, suppose the viewing state information collected at a certain moment indicates that the line of sight of the viewing object does not fall on the target video at that moment; the viewing object may merely have glanced away for an instant, and its line of sight may well return to the target video at the next moment. It can be seen that, in general, whether the viewing state of the viewing object with respect to the target video has changed cannot be objectively and accurately determined from the viewing state information at a single moment alone; it is usually necessary to make this determination based on the viewing state information over a period of time.
In one possible implementation, when the viewing state information includes facial feature information of the viewing object, the terminal device may monitor whether the viewing state of the viewing object with respect to the target video changes according to how long the line of sight of the viewing object stays away from or on the target video. That is, the terminal device may determine, according to the viewing state information, whether the line of sight of the viewing object falls on the target video; if it is detected that the line of sight of the viewing object has left the target video and the time during which it does not fall on the target video exceeds a first time threshold, it is determined that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing; conversely, if it is detected that the line of sight of the viewing object has returned to the target video and the time during which it continues to fall on the target video exceeds a second time threshold, it is determined that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing.
Specifically, the terminal device may track the line of sight direction of the viewing object based on the facial feature information of the viewing object in the viewing state information by using a line of sight tracking technique, and if it is determined that the line of sight of the viewing object leaves the target video (i.e., leaves the screen of the terminal device) according to the line of sight direction of the viewing object and the time that does not fall on the target video exceeds the first time threshold, it may be considered that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing. Conversely, if it is determined that the line of sight of the viewing object returns to the target video (i.e., falls back on the screen of the terminal device) according to the line of sight direction of the viewing object, and the time of continuing to fall on the target video exceeds the second time threshold, it can be considered that the viewing state of the viewing object with respect to the target video has changed from non-viewing to viewing.
In another possible implementation, when the viewing state information includes facial feature information of the viewing object, the terminal device may monitor whether the viewing state of the viewing object with respect to the target video is changed according to a stay time of the viewing object's facial orientation. That is, the terminal device may determine, according to the viewing state information, whether the orientation angle of the face of the viewing object is within a preset angle range (that is, an angle range in which the screen of the terminal device can be viewed), and if it is detected that the orientation angle of the face of the viewing object exceeds the preset angle range and the time of continuously exceeding the preset angle range exceeds a first time threshold, determine that the viewing state of the viewing object with respect to the target video is changed from viewing to not viewing; otherwise, if the direction angle of the face of the viewing object is detected to return to the preset angle range and the duration time of the face of the viewing object belonging to the preset angle range exceeds the second time threshold, determining that the viewing state of the viewing object for the target video is changed from non-viewing to viewing.
It should be understood that the first time threshold and the second time threshold may be set according to actual requirements, for example, each set to 3 minutes, and the present application is not limited to the first time threshold and the second time threshold specifically.
In still another possible implementation manner, when the viewing state information includes position information of the viewing object, the terminal device may monitor whether the viewing state of the viewing object with respect to the target video is changed according to a stay time of the viewing object within a preset viewing range. That is, the terminal device may determine whether the viewing object is located in a preset viewing range according to the viewing state information; if the fact that the viewing object leaves the preset viewing range and the duration of the time which is not in the preset viewing range exceeds a third time threshold value is detected, determining that the viewing state of the viewing object for the target video is changed from viewing to non-viewing; if the fact that the viewing object returns to the preset viewing range is detected, and the time continuously in the preset viewing range exceeds a fourth time threshold value, the viewing state of the viewing object for the target video is determined to be changed from non-viewing to viewing.
Specifically, the terminal device may set the preset viewing range in advance according to its own screen configuration, the preset viewing range being the range of positions from which the screen of the terminal device can be viewed. While the terminal device plays the target video, it may collect the position information of the viewing object in real time and monitor, based on that information, whether the viewing object is within the preset viewing range. If it is detected from the viewing state information that the viewing object has left the preset viewing range and the time continuously spent outside the range exceeds the third time threshold, the viewing state of the viewing object with respect to the target video can be considered to have changed from viewing to not viewing. Conversely, if it is detected from the viewing state information that the viewing object has returned to the preset viewing range and the time continuously spent within the range exceeds the fourth time threshold, the viewing state of the viewing object with respect to the target video can be considered to have changed from not viewing to viewing.
It should be understood that the third time threshold and the fourth time threshold may be set according to practical requirements, for example, each set to 4 minutes, and the present application is not limited to the third time threshold and the fourth time threshold specifically.
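All of the dwell-time checks above share one debouncing pattern, whether driven by line of sight or face orientation (first/second time thresholds) or by position (third/fourth time thresholds). A minimal sketch follows, assuming an upstream per-sample boolean "is viewing" signal such as those sketched earlier; the 180-second defaults merely echo the 3-minute example above.

```python
import time

class ViewingStateMonitor:
    """Debounce instantaneous 'is viewing' samples into confirmed changes.

    leave_threshold / return_threshold play the role of the first/second
    (or third/fourth) time thresholds described above: a change of viewing
    state is reported only after the raw signal has stayed flipped for the
    whole threshold duration.
    """

    def __init__(self, leave_threshold: float = 180.0,
                 return_threshold: float = 180.0):
        self.leave_threshold = leave_threshold
        self.return_threshold = return_threshold
        self.viewing = True       # confirmed viewing state
        self._flip_since = None   # when the raw signal first disagreed

    def update(self, raw_is_viewing: bool, now: float | None = None) -> str | None:
        """Feed one sample; return 'left' / 'returned' on a confirmed change."""
        now = time.monotonic() if now is None else now
        if raw_is_viewing == self.viewing:
            self._flip_since = None        # signal agrees again: reset timer
            return None
        if self._flip_since is None:
            self._flip_since = now         # first disagreeing sample
        threshold = (self.leave_threshold if self.viewing
                     else self.return_threshold)
        if now - self._flip_since >= threshold:
            self.viewing = raw_is_viewing  # sustained flip: confirm the change
            self._flip_since = None
            return "left" if not self.viewing else "returned"
        return None
```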
It should be understood that, in practical applications, besides monitoring whether the viewing state of the viewing object with respect to the target video changes according to the facial feature information and the position information of the viewing object, other relevant information may also be used as the viewing state information, and whether the viewing state changes may be monitored based on that information in a corresponding manner.
Optionally, to enhance the video playback effect and ensure that the viewing object can watch the complete missed video content after returning to watch the target video, when it is detected that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing, the terminal device may mark, as the starting time point of the missed video content, a time point that lies before the current playing progress of the target video and is separated from it by a first preset duration.
That is, when marking the starting time point, the terminal device may determine the playing time point of the target video at which the viewing state of the viewing object changed, and then move it earlier by the first preset duration to obtain the starting time point, thereby ensuring that more complete missed video content is provided for the viewing object. It should be understood that the first preset duration may be set according to actual requirements, for example, to 3 minutes; the present application is not specifically limited in this respect.
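A minimal sketch of this padded marking, assuming the playing progress is measured in seconds; the function name and the 180-second default are illustrative, echoing the 3-minute example above.

```python
def mark_starting_time_point(current_progress_s: float,
                             first_preset_duration_s: float = 180.0) -> float:
    """Move the mark earlier by the preset padding, clamped at the video start."""
    return max(0.0, current_progress_s - first_preset_duration_s)
```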
It should be noted that, in a scenario where multiple viewing objects commonly view a target video in the same space, if the terminal device determines, in advance, video accounts of the multiple viewing objects on the video playing platform according to respective viewing state information (including facial feature information) of the multiple viewing objects, when it is monitored that the viewing state of a certain viewing object with respect to the target video changes, the terminal device may record a starting time point marked with respect to the viewing object based on the video account of the viewing object.
That is, the terminal device may monitor the viewing state of the target video for each of the plurality of viewing objects, when it is determined that the viewing state of the viewing object for the target video is changed from viewing to non-viewing according to the viewing state information of the viewing object, mark a starting time point in the target video based on the current playing progress of the target video, and construct an association relationship between the starting time point and the video account number of the viewing object, indicating that the starting time point is marked based on the viewing state of the viewing object for the target video.
To facilitate an understanding of the specific implementation of step 202 in the above scenario, an exemplary description is provided below, taking user A and user B watching the target video together as an example.
When user A and user B watch the target video played by the terminal device together in the same space, the terminal device may collect the facial feature information of user A and of user B, determine user A's video account 1 on the video playing platform according to user A's facial feature information, and determine user B's video account 2 on the video playing platform according to user B's facial feature information. In the process of playing the target video, the terminal device may collect the facial feature information of user A and user B in real time as their respective viewing state information, and monitor, according to that information, whether the viewing state of each of them with respect to the target video changes.
When the terminal device determines, according to the viewing state information of user A, that user A's viewing state with respect to the target video has changed from viewing to not viewing, the terminal device may mark a starting time point in the target video based on the playing progress of the target video at that moment, and set an identification tag corresponding to video account 1 for that starting time point, so that the starting time point is associated with video account 1, indicating that it was marked based on user A's viewing state with respect to the target video.
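Continuing the running sketch, the association between starting time points and video accounts could be kept as a simple map; the names are hypothetical, and mark_starting_time_point is the padding helper sketched above.

```python
# Hypothetical per-account bookkeeping for the multi-viewer case: every
# account whose viewer walks away gets its own starting time point.
starting_points: dict[str, float] = {}

def on_viewer_left(account_id: str, current_progress_s: float) -> None:
    starting_points[account_id] = mark_starting_time_point(current_progress_s)

def on_viewer_returned(account_id: str) -> float | None:
    """Pop and return this viewer's starting time point, if one was marked."""
    return starting_points.pop(account_id, None)
```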
Step 203: providing a video playback service for the viewing object when it is determined that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing according to the viewing state information; the video playback service includes playing the target video based on the starting point in time.
In the process of playing the target video, even after the terminal device determines, according to the viewing state information of the viewing object, that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing, it still continuously collects the viewing state information of the viewing object in real time and monitors the viewing state based on it. When it is determined from the collected viewing state information that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing, for example when the terminal device detects that the viewing object has returned, the terminal device may correspondingly provide a video playback service for the viewing object.
In some embodiments, the terminal device may provide the video playback service for the viewing object based on the starting time point marked in step 202. For example, when the terminal device detects that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing, it may pop up a playback prompt dialog box on the video playing interface to ask whether the viewing object wants to watch the missed content; the playback prompt dialog box includes a video playback control for triggering playback of the missed video content, and if the viewing object is detected to click the video playback control, the playing progress of the target video is adjusted to the starting time point marked in step 202 and the target video is played from that starting time point.
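Gluing the running sketch together, the resume-time flow might be dispatched as below. player.seek and ui.confirm are hypothetical stand-ins for whatever player and UI interfaces the client actually exposes; on_viewer_left and on_viewer_returned are from the per-account sketch above.

```python
def handle_state_change(event: str, account_id: str, player, ui,
                        current_progress_s: float) -> None:
    """Dispatch a confirmed 'left'/'returned' event from the monitor sketch."""
    if event == "left":
        on_viewer_left(account_id, current_progress_s)
    elif event == "returned":
        starting_point = on_viewer_returned(account_id)
        if starting_point is not None and ui.confirm("Replay what you missed?"):
            player.seek(starting_point)  # jump back to the marked time point
```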
It should be noted that, in a scenario where multiple viewing objects watch the target video together in the same space, if the terminal device has determined in advance the video accounts of the multiple viewing objects on the video playing platform according to their respective viewing state information (including facial feature information), and, based on each viewing object's viewing state with respect to the target video, has marked a starting time point associated with the video account of any viewing object that left partway through, then, when it is detected that a viewing object that left has returned, a video playback service may be provided for that viewing object based on the starting time point associated with its video account.
That is, the terminal device may monitor a viewing state of a target video for each of a plurality of viewing objects, respectively, and may provide a video playback service for a viewing object when it is determined that the viewing state of the viewing object for the target video is changed from non-viewing to viewing according to viewing state information of the viewing object, the video playback service including playing the target video based on a start time point associated with a video account of the viewing object.
Take user A and user B watching the target video together in the same space as an example. Suppose user A leaves while the target video is playing, and the terminal device marks in the target video a starting time point associated with user A's video account 1 on the video playing platform. If the terminal device later detects, according to user A's viewing state information, that user A's viewing state with respect to the target video has changed from not viewing to viewing, that is, detects that user A has returned to watch the target video, the terminal device may provide a video playback service for user A. For example, the terminal device may pop up a playback prompt dialog box on the video playing interface to ask user A whether to play back the missed content; the playback prompt dialog box includes a video playback control, and if the video playback control is detected to be clicked, the playing progress of the target video is adjusted to the starting time point associated with video account 1 and the target video is played from that starting time point.
Optionally, when the terminal device determines, according to the viewing state information of the viewing object, that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing, the terminal device may further mark an ending time point in the target video based on the playing progress of the target video at that moment. That is, when the terminal device determines that the viewing state has changed from not viewing to viewing, it may also correspondingly mark an ending time point in the target video, so as to determine the video clip missed by the viewing object based on the starting time point previously marked in the target video and this ending time point. Further, the terminal device may provide a corresponding video playback service based on the video clip between the starting time point and the ending time point.
To enhance the video playback effect and ensure that the video clip determined from the starting time point and the ending time point completely covers the video content missed by the viewing object, when the terminal device detects that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing, it may mark, as the ending time point of the missed video content, a time point that lies after the current playing progress of the target video and is separated from it by a second preset duration.
That is, when marking the ending time point, the terminal device may determine the playing time point of the target video at which the viewing state of the viewing object changed, and then move it later by the second preset duration to obtain the ending time point, thereby ensuring that more complete missed video content is provided for the viewing object. It should be understood that the second preset duration may be set according to actual requirements, for example, to 3 minutes; the present application is not specifically limited in this respect.
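The ending-time analogue of the earlier padding helper, again assuming seconds-based progress; clamping against the total video duration is an added safeguard the text does not spell out.

```python
def mark_ending_time_point(current_progress_s: float,
                           second_preset_duration_s: float = 180.0,
                           video_duration_s: float | None = None) -> float:
    """Move the mark later by the preset padding, clamped to the video length."""
    end = current_progress_s + second_preset_duration_s
    return min(end, video_duration_s) if video_duration_s is not None else end
```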
Illustratively, providing the video playback service for the viewing object based on the video clip between the starting time point and the ending time point may be implemented as follows: a playback prompt dialog box is displayed, the playback prompt dialog box including a video playback control and a playback clip sending control; if the video playback control is detected to be clicked, the playing progress jumps to the starting time point and the video clip is played; if the playback clip sending control is detected to be clicked, the video clip is sent to the video account of the viewing object, the video account of the viewing object being determined by the terminal device according to the facial feature information of the viewing object.
Specifically, the terminal device may determine, in advance, a video account of the viewing object on the video playing platform according to viewing status information (including facial feature information) of the viewing object, mark, when it is detected that the viewing status of the viewing object with respect to the target video changes from viewing to non-viewing, a start time point associated with the video account of the viewing object in the target video, and mark, when it is detected that the viewing status of the viewing object with respect to the target video changes from non-viewing to viewing, an end time point associated with the video account of the viewing object in the target video.
When detecting that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing, the terminal device may also pop up a playback prompt dialog box on the video playing interface to ask whether the viewing object wants to play back the missed video content, the playback prompt dialog box including a video playback control and a playback clip sending control. If the terminal device detects that the video playback control is clicked, it adjusts the playing progress of the target video to the starting time point associated with the viewing object's video account and plays the target video from that starting time point until the playing progress reaches the ending time point associated with that video account, thereby completing playback of the missed video content. If the terminal device detects that the playback clip sending control is clicked, it clips the video segment from the target video based on the starting time point and the ending time point associated with the viewing object's video account and sends that video segment to the viewing object's video account; during this process, the terminal device continues to play the target video.
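Cutting the missed segment out of the target video could be delegated to an off-the-shelf tool. Below is a sketch using the ffmpeg command line, assuming ffmpeg is installed on the device and the marked points are in seconds.

```python
import subprocess

def cut_missed_clip(src_path: str, start_s: float, end_s: float,
                    dst_path: str) -> None:
    """Cut [start_s, end_s] out of the target video via the ffmpeg CLI.

    -ss before -i seeks the input; -t gives the clip duration; -c copy
    avoids re-encoding (ffmpeg may snap the cut to nearby keyframes).
    """
    subprocess.run(
        ["ffmpeg", "-y",
         "-ss", f"{start_s:.3f}",
         "-i", src_path,
         "-t", f"{end_s - start_s:.3f}",
         "-c", "copy",
         dst_path],
        check=True,
    )
```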
An implementation of providing a video playback service, based on the video segment between the starting time point and the ending time point, for a viewing object that leaves halfway is described below by way of example with reference to fig. 3, in a scenario where a plurality of viewing objects view the target video together in the same space.
Still assume that the plurality of viewing objects includes user A and user B, and that the terminal device has determined, from the viewing state information (including facial feature information) of user A, that user A's video account on the video playing platform is video account 1, and likewise that user B's video account is video account 2. Suppose that while the target video is playing, the terminal device detects from user A's viewing state information that user A leaves halfway; at this point it marks a starting time point a based on the current playing progress of the target video and associates it with video account 1. If the terminal device subsequently detects from user A's viewing state information that user A has returned to watch the target video, it marks an ending time point b based on the current playing progress and likewise associates it with video account 1.
When the terminal device detects, according to the viewing state information of user A, that user A has returned to watch the target video, it may pop up a playback prompt dialog box on the video playing interface asking user A whether to review the missed content; the dialog box includes a video playback control labeled "replay now" and a playback segment sending control labeled "send to me". If the "replay now" control is detected to be touched, the playing progress of the target video is adjusted to starting time point a and the target video is played from there; if the "send to me" control is detected to be touched, the video segment between starting time point a and ending time point b is cut out of the target video and sent to video account 1, so that user A can watch the missed segment separately.
It should be noted that, in a scenario where a plurality of viewing objects view the target video together in the same space, since the terminal device can determine each viewing object's video account on the video playing platform from that object's viewing state information, if the terminal device detects that a viewing object leaves halfway while the target video is playing, it may record a starting time point and an ending time point associated with that object's video account and, upon detecting that the object has returned to watch the target video, directly send the video segment between the two time points to the object's video account without playing it back on the shared screen. This avoids disturbing the other viewing objects in the same space who are watching the target video.
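The per-account bookkeeping this implies can be sketched as follows, reusing `mark_start`/`mark_end` from above; the `PlaybackTracker` class is an assumed structure, not the patented implementation:

```python
class PlaybackTracker:
    """Track one missed segment per video account in a shared viewing space."""

    def __init__(self, player, account_service):
        self.player = player
        self.account_service = account_service
        self.pending: dict[str, float] = {}  # account_id -> marked start point

    def on_leave(self, account_id: str) -> None:
        # Mark a starting time point associated with this account only.
        self.pending[account_id] = mark_start(self.player.progress_s())

    def on_return(self, account_id: str) -> None:
        start_s = self.pending.pop(account_id, None)
        if start_s is None:
            return
        end_s = mark_end(self.player.progress_s(), self.player.duration_s())
        # Send the missed segment directly to this viewer's account, without
        # replaying it on the shared screen and disturbing the other viewers.
        clip = self.player.cut_clip(start_s, end_s)
        self.account_service.send_clip(account_id, clip)
```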
In addition, when the viewing state information includes facial feature information of the viewing object, the method provided by the embodiments of the present application may further determine, based on expression recognition, the emotional state of the viewing object while watching each video segment of the target video, and, based on those emotional states, cut from the target video the segments likely to interest the viewing object as video segments to be recommended to that object.
That is, while the target video is playing, the terminal device may determine the current emotional state of the viewing object from the object's current viewing state information and configure a classification label for the currently playing video segment of the target video based on that emotional state; video segments to be recommended can then be cut from the target video according to the classification labels of its segments.
Specifically, while the target video is playing, the terminal device may continuously collect facial feature information of the viewing object in real time, perform expression recognition on it, and determine the emotional state of the viewing object from the recognition result, such as happiness, sadness, anger, or boredom. The terminal device may then configure a classification label for the currently playing video segment according to the viewing object's current emotional state: for example, "premium-like" when the current emotional state is happiness, "premium-sad" when it is sadness or anger, and "uninteresting" when it is boredom. Further, the terminal device may extract from the target video the segments whose classification labels belong to the target classification labels as the video segments to be recommended to the viewing object; for example, if the target classification labels include "premium-like" and "premium-sad", the terminal device extracts the segments labeled "premium-like" and those labeled "premium-sad" as the segments to be recommended.
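A hedged sketch of this emotion-to-label pipeline; the emotion names, label strings, and dictionary layout are illustrative assumptions rather than the concrete scheme of the original:

```python
EMOTION_TO_LABEL = {
    "happiness": "premium-like",
    "sadness": "premium-sad",
    "anger": "premium-sad",
    "boredom": "uninteresting",
}
TARGET_LABELS = {"premium-like", "premium-sad"}


def tag_current_segment(labels: dict[int, str], segment_id: int,
                        emotion: str) -> None:
    """Configure a classification label for the currently playing segment."""
    labels[segment_id] = EMOTION_TO_LABEL.get(emotion, "uninteresting")


def segments_to_recommend(labels: dict[int, str]) -> list[int]:
    """Extract the segments whose labels belong to the target labels."""
    return [seg for seg, label in labels.items() if label in TARGET_LABELS]
```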
It should be understood that, in practical applications, the classification labels may be set according to actual requirements, and the present application does not specifically limit the classification labels configured for video segments.
It should be noted that, in a scenario where a plurality of viewing objects view the target video together in the same space, the terminal device may determine a corresponding video segment to be recommended for each viewing object according to the respective viewing state information (including facial feature information) of the plurality of viewing objects.
That is, while the target video is playing, the terminal device may, for each of the plurality of viewing objects, determine that object's current emotional state from its current viewing state information and configure a personal classification label, attributed to that object's video account, for the currently playing video segment. The terminal device may then, for each viewing object, obtain the personal classification labels under that object's video account and cut from the target video the segments to be recommended for that account according to the correspondence between those personal classification labels and the video segments of the target video.
Take the case where the viewing objects include user A, whose video account is video account 1, and user B, whose video account is video account 2. While user A and user B watch the target video together, the terminal device may determine user A's current emotional state from user A's current viewing state information and configure, for the currently playing segment, a personal classification label attributed to video account 1; that is, when configuring the label it builds an association between the label and video account 1, for example by including the identifier of video account 1 in the label. Similarly, while the target video plays, the terminal device may configure personal classification labels attributed to video account 2 for the currently playing segments in the same way, according to user B's current viewing state information.
When determining the video segments to be recommended for user A, the terminal device may obtain the personal classification labels attributed to video account 1 that were marked on the segments of the target video, and then cut out the segments configured with the target classification labels as the segments to be recommended for user A; similarly, when determining the segments to be recommended for user B, it may obtain the personal classification labels attributed to video account 2 and cut out the segments configured with the target classification labels as the segments to be recommended for user B.
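Extending the sketch above to the multi-viewer case, personal labels can be keyed by video account (again an assumed data layout):

```python
from collections import defaultdict

# account_id -> (segment_id -> personal classification label)
personal_labels: dict[str, dict[int, str]] = defaultdict(dict)


def tag_for_account(account_id: str, segment_id: int, emotion: str) -> None:
    """Configure a personal classification label under one video account."""
    personal_labels[account_id][segment_id] = EMOTION_TO_LABEL.get(
        emotion, "uninteresting")


def recommend_for_account(account_id: str) -> list[int]:
    """Cut the to-be-recommended segments for one account's personal labels."""
    return segments_to_recommend(personal_labels[account_id])
```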
In one possible implementation, the terminal device may recommend the corresponding video segments to the viewing object in the form of a video highlights collection. That is, after the target video finishes playing, the terminal device may recommend to the viewing object the segments that were cut out based on that object's emotional states. Video content is thus recommended in line with the viewing object's preferences, making the recommendation more targeted and personalized.
In another possible implementation, when the target video is not the first video in a continuous video set, for example when it is an episode of a television series other than the first, the terminal device may play the video segments cut from the target video based on the viewing object's emotional states before playing the next video after the target video. For example, when playing the "previously on" recap of a television series, the segments that interested the viewing object in the previous episode (i.e., the segments to be recommended) may be played as the recap content, making the recap more focused on the viewing object and more personalized.
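As a sketch of how such a personalized recap might be assembled, reusing `recommend_for_account` from above (`build_recap` and the clip map are hypothetical names):

```python
def build_recap(account_id: str,
                previous_episode_clips: dict[int, bytes]) -> list[bytes]:
    """Assemble a 'previously on' recap from the previous episode's
    segments that carried this account's target labels."""
    wanted = recommend_for_account(account_id)
    return [previous_episode_clips[seg_id] for seg_id in wanted
            if seg_id in previous_episode_clips]
```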
It should be understood that, in practical applications, the video segments cut based on the viewing object's emotional states may also be applied in other scenarios; the present application does not limit the application scenarios of the segments to be recommended.
The video play control method provided by the embodiments of the present application can monitor, according to the viewing state information of a viewing object, whether the object's viewing state with respect to the target video changes; automatically mark the starting time point of the missed video content based on the current playing progress of the target video when the viewing state changes from viewing to non-viewing; and automatically provide a video playback service for the viewing object based on the marked starting time point when the viewing state changes from non-viewing to viewing. Since no one needs to manually adjust the playing progress for the viewing object, the missed video content can be provided quickly and accurately, improving the video viewing experience.
To aid understanding of the video play control method provided by the embodiments of the present application, the method is described below as a whole, by way of example, with reference to fig. 4 and fig. 5, taking the scenario where user A and user B watch the target video together in the same space.
Fig. 4 is a schematic diagram of the implementation principle of the video playback process. As shown in fig. 4, when user A and user B watch the target video played by the terminal device together, the terminal device may identify the different users by face recognition and bind user A and user B to their corresponding video accounts. Specifically, the terminal device may determine the facial feature information of user A and of user B through face recognition and then, based on a target mapping relationship (which stores the correspondence between users' facial feature information and video accounts), determine user A's video account from user A's facial feature information and user B's video account from user B's facial feature information; user A and user B are thus each bound to their own video account while the target video plays.
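The binding step can be pictured as a nearest-neighbor lookup over enrolled face embeddings; this is a simplified, assumed scheme (real face recognition pipelines differ), with `find_account` and the 0.6 similarity threshold chosen purely for illustration:

```python
import numpy as np


def find_account(face_embedding: np.ndarray,
                 enrolled: dict[str, np.ndarray],
                 threshold: float = 0.6) -> str | None:
    """Return the video account whose enrolled embedding is most similar
    to the observed face, or None if nothing is similar enough."""
    best_id, best_sim = None, threshold
    for account_id, ref in enrolled.items():
        sim = float(np.dot(face_embedding, ref) /
                    (np.linalg.norm(face_embedding) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_id, best_sim = account_id, sim
    return best_id
```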
While the target video is playing, the terminal device may continuously collect the facial feature information of user A and user B in real time as viewing state information, monitor their viewing states with respect to the target video in real time based on that information, and mark ("dot") the corresponding time points in the target video when a viewing state changes. Taking the monitoring of user A's viewing state as an example: if, based on user A's facial feature information, user A's line of sight is detected to have left the target video for a certain duration (for example, 3 minutes), it may be determined that user A's viewing state with respect to the target video has changed from viewing to non-viewing, and a starting time point a is marked in the target video based on the current playing time point; if, based on user A's facial feature information, user A's line of sight is detected to have returned to the target video for a certain duration, it may be determined that user A has returned to watch the target video, and an ending time point b is marked in the target video based on the current playing time point. The video segment between starting time point a and ending time point b is the video content user A missed.
To make the review recommendation more robust, the terminal device may apply a time extension when marking starting time point a and ending time point b: the dotted time is shifted somewhat earlier when the user is recognized as no longer watching the target video, and somewhat later when the user is recognized as having returned to watch it, ensuring that the segment between starting time point a and ending time point b accurately covers the video content the user missed.
The terminal device can then make a review recommendation to the user and respond to the user's operation. Taking the provision of the video playback service for user A as an example: when the terminal device detects that user A has returned to watch the target video, it may display a playback prompt dialog box on the video playing interface, containing a "replay now" control and a "send to me" control. If the user selects "replay now", the terminal device controls the target video to jump to starting time point a and play from there; if the user selects "send to me", the terminal device cuts the video segment between starting time point a and ending time point b and sends it to user A's video account, so that user A can catch up on the missed content later; and if the user performs no operation on the playback prompt dialog box within a preset period (for example, 5 s), the dialog box is dismissed and the target video continues playing normally.
Fig. 5 is a schematic diagram of the implementation principle of the video recommendation process. As shown in fig. 5, the terminal device may bind the user to the corresponding video account based on the user's facial feature information; this is implemented similarly to the corresponding step in fig. 4.
While the target video is playing, the terminal device may collect the user's facial feature information in real time, preprocess it (for example, selecting the most representative facial feature information from several consecutively collected samples), and then use expression recognition on the preprocessed facial feature information to identify the user's current emotional state, such as happiness, sadness, anger, or boredom.
Then, according to the identified current emotional state of the user, a classification label is configured for the currently playing video segment of the target video: for example, "premium-like" when the current emotional state is happiness, "premium-sad" when it is sadness or anger, and "uninteresting" when it is boredom.
Finally, according to the classification labels of the video segments in the target video, the terminal device may cut out the segments configured with the target classification labels (such as "premium-like" and "premium-sad") as the segments to be recommended, and recommend them to the user as a collection after the target video finishes playing.
For the video play control method described above, the present application further provides a corresponding video play control apparatus, so that the method can be applied and realized in practice.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a video play control apparatus 600 corresponding to the video play control method shown in fig. 2 above, the video play control apparatus comprising:
a state information obtaining module 601, configured to obtain viewing state information of a viewing object during a process of playing a target video; the viewing state information is used for representing whether the viewing object is viewing the target video;
a start point marking module 602, configured to mark a start point in the target video based on a current playing progress of the target video when it is determined that the viewing state of the viewing object with respect to the target video is changed from viewing to non-viewing according to the viewing state information;
a playback service module 603 for providing a video playback service for the viewing object when it is determined that the viewing state of the viewing object for the target video is changed from non-viewing to viewing according to the viewing state information; the video playback service includes playing the target video based on the starting point in time.
Alternatively, on the basis of the video play control device shown in fig. 6, the viewing state information includes facial feature information; referring to fig. 7, fig. 7 is a schematic structural diagram of another video playing control device 700 according to an embodiment of the present application, where the device further includes:
the video account determining module 701 is configured to find a video account corresponding to the facial feature information in a target mapping relationship, as a video account of the viewing object; and the corresponding relation between the video account and the facial feature information is recorded in the target mapping relation.
Alternatively, on the basis of the video playing control device shown in fig. 6, in a case where a plurality of viewing objects commonly view the target video in the same space, the state information obtaining module 601 is specifically configured to:
in the process of playing the target video, acquiring the respective watching state information of the plurality of watching objects;
the start point marking module 602 is specifically configured to:
for each of the plurality of viewing objects, when the viewing state of the viewing object for the target video is changed from viewing to non-viewing according to the viewing state information of the viewing object, marking a starting time point in the target video based on the current playing progress of the target video, and constructing an association relationship between the starting time point and a video account of the viewing object;
The playback service module 603 is specifically configured to:
for each of the plurality of viewing objects, providing a video playback service for the viewing object when it is determined that the viewing state of the viewing object for the target video is changed from non-viewing to viewing according to viewing state information of the viewing object; the video playback service includes playing the target video based on a starting point in time associated with the video account of the viewing object.
Alternatively, on the basis of the video play control device shown in fig. 6, the viewing state information includes line-of-sight information; the start point marking module 602 and the playback service module 603 determine a change in the viewing state of the viewing object with respect to the target video from the viewing state information by:
determining, according to the viewing state information, whether the line of sight of the viewing object falls on the target video;

if it is detected that the line of sight of the viewing object has left the target video and the duration for which the line of sight does not fall on the target video exceeds a first time threshold, determining that the viewing state of the viewing object with respect to the target video is changed from viewing to non-viewing;

and if it is detected that the line of sight of the viewing object has returned to the target video and the duration for which it continuously falls on the target video exceeds a second time threshold, determining that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing.
Alternatively, on the basis of the video play control device shown in fig. 6, the viewing state information includes position information; the start point marking module 602 and the playback service module 603 determine a change in viewing state of the viewing object for the target video from the viewing state information by:
determining, according to the viewing state information, whether the viewing object is within a preset viewing range;

if it is detected that the viewing object has left the preset viewing range and the duration for which it is not within the preset viewing range exceeds a third time threshold, determining that the viewing state of the viewing object with respect to the target video is changed from viewing to non-viewing;

and if it is detected that the viewing object has returned to the preset viewing range and the duration for which it remains continuously within the preset viewing range exceeds a fourth time threshold, determining that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing.
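Both variants reduce to the same debounced state machine over a boolean "attending" signal (gaze on the video, or presence inside the viewing range); a minimal sketch, with class and attribute names that are illustrative only:

```python
import time


class ViewingStateMonitor:
    """Debounce a raw attending/not-attending signal into viewing-state
    changes, using the time thresholds described above."""

    def __init__(self, leave_threshold_s: float, return_threshold_s: float):
        self.leave_threshold_s = leave_threshold_s    # first / third threshold
        self.return_threshold_s = return_threshold_s  # second / fourth threshold
        self.viewing = True
        self._since = time.monotonic()  # start of the current disagreement

    def update(self, attending: bool) -> str | None:
        """Feed one sample; return 'left' or 'returned' when the state flips."""
        now = time.monotonic()
        if attending == self.viewing:
            self._since = now  # signal agrees with state: reset the timer
            return None
        threshold = (self.leave_threshold_s if self.viewing
                     else self.return_threshold_s)
        if now - self._since >= threshold:
            self.viewing = attending
            self._since = now
            return "returned" if attending else "left"
        return None
```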
Alternatively, referring to fig. 8, fig. 8 is a schematic structural diagram of another video play control device 800 according to an embodiment of the present application, based on the video play control device shown in fig. 6. The apparatus further comprises:
an ending point marking module 801, configured to mark an ending time point in the target video based on the current playing progress of the target video when it is determined, according to the viewing state information, that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing;
the video playback service further comprises: and providing a video clip based on the start time point and the end time point.
Optionally, on the basis of the video play control device shown in fig. 8, the start point marking module 602 is specifically configured to:
marking a time point which is in front of the current playing progress and is spaced from the current playing progress by a first preset duration in the target video as the starting time point;
the termination point marking module 801 is specifically configured to:
and marking a time point which is after the current playing progress and is spaced from the current playing progress by a second preset time length in the target video as the termination time point.
Optionally, on the basis of the video play control device shown in fig. 8, the playback service module 603 is specifically configured to:
displaying a playback prompt dialog; the playback prompt dialog box comprises a video playback control and a playback fragment sending control;
if the video playback control is detected to be clicked, jumping to the starting time point to play the video clip;
if it is detected that the playback segment sending control is clicked, the video segment is sent to the video account of the viewing object; the video account of the viewing object is determined based on the viewing state information.
Alternatively, on the basis of the video play control device shown in fig. 6, the viewing state information includes facial feature information; referring to fig. 9, fig. 9 is a schematic structural diagram of another video playing control device 900 according to an embodiment of the present application, where the device further includes:
the tag configuration module 901 is configured to determine, during playing the target video, a current emotional state of the viewing object according to current viewing state information of the viewing object; configuring a classification label for a video clip currently played in the target video based on the emotional state;
The video capturing module 902 is configured to capture a video clip to be recommended from the target video according to respective classification labels of video clips in the target video.
Alternatively, on the basis of the video play control device shown in fig. 6, the viewing state information includes facial feature information; in a case where a plurality of viewing objects commonly view the target video in the same space, referring to fig. 10, fig. 10 is a schematic structural diagram of another video playing control device 1000 according to an embodiment of the present application, where the device further includes:
a personal tag configuration module 1001, configured to determine, for each of the plurality of viewing objects, a current emotional state of the viewing object according to current viewing state information of the viewing object during playing the target video; configuring a personal classification label for a video clip currently played in the target video based on the current emotion state of the viewing object, wherein the personal classification label belongs to a video account of the viewing object;
the personal video capturing module 1002 is configured to obtain, for each of the plurality of viewing objects, a personal classification tag under a video account of the viewing object, and capture, from the target video, a video clip to be recommended corresponding to the video account of the viewing object according to a correspondence between the personal classification tag and each video clip in the target video.
Optionally, on the basis of the video play control device shown in fig. 9 or fig. 10, referring to fig. 11, fig. 11 is a schematic structural diagram of another video play control device 1100 provided in an embodiment of the present application, where the device further includes:
the first recommending module 1101 is configured to recommend the corresponding video clip to be recommended to the viewing object after the target video is played.
Optionally, on the basis of the video play control device shown in fig. 9 or fig. 10, when the target video is a non-first video in the continuous video set, referring to fig. 12, fig. 12 is a schematic structural diagram of another video play control device 1200 according to an embodiment of the present application, where the device further includes:
the second recommendation module 1201 is configured to play the corresponding video clip to be recommended before playing the next video of the target video.
The video play control apparatus provided by the embodiments of the present application can monitor, according to the viewing state information of a viewing object, whether the object's viewing state with respect to the target video changes; automatically mark the starting time point of the missed video content based on the current playing progress of the target video when the viewing state changes from viewing to non-viewing; and automatically provide a video playback service for the viewing object based on the marked starting time point when the viewing state changes from non-viewing to viewing. Since no one needs to manually adjust the playing progress for the viewing object, the missed video content can be provided quickly and accurately, improving the video viewing experience.
The embodiments of the present application further provide a device for controlling video playing, which may specifically be a server or a terminal device. The server and the terminal device provided by the embodiments of the present application are described below from the perspective of hardware instantiation.
Referring to fig. 13, fig. 13 is a schematic structural diagram of a server 1300 according to an embodiment of the present application. The server 1300 may vary considerably in configuration or performance and may include one or more central processing units (central processing units, CPU) 1322 (e.g., one or more processors) and memory 1332, one or more storage media 1330 (e.g., one or more mass storage devices) storing applications 1342 or data 1344. Wherein the memory 1332 and storage medium 1330 may be transitory or persistent. The program stored on the storage medium 1330 may include one or more modules (not shown), each module may include a series of instruction operations in a server. Further, the central processor 1322 may be configured to communicate with the storage medium 1330, and execute a series of instruction operations in the storage medium 1330 on the server 1300.
The server 1300 may also include one or more power supplies 1326, one or more wired or wireless network interfaces 1350, one or more input/output interfaces 1358, and/or one or more operating systems 1341, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so forth.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 13.
Wherein CPU 1322 is configured to perform the following steps:
in the process of playing the target video, obtaining the viewing state information of a viewing object; the viewing state information is used for representing whether the viewing object is viewing the target video;
when the viewing state of the viewing object for the target video is changed from viewing to non-viewing according to the viewing state information, marking a starting time point in the target video based on the current playing progress of the target video;
when it is determined that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing according to the viewing state information, providing video playback services for the viewing object; the video playback service includes playing the target video based on the starting point in time.
Optionally, CPU 1322 may be further configured to perform steps of any implementation of the video playback control method provided by an embodiment of the present application.
Referring to fig. 14, fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application. For convenience of explanation, only the portions relevant to the embodiments of the present application are shown; for specific technical details not disclosed, please refer to the method portions of the embodiments of the present application. The terminal may be any terminal device, including a smart phone, a smart TV, a computer, a tablet computer, a personal digital assistant, and the like; a smart TV is taken as an example:
Fig. 14 is a block diagram showing part of the structure of the smart TV related to the terminal provided by an embodiment of the present application. Referring to fig. 14, the smart TV includes: a radio frequency (RF) circuit 1414, a memory 1420, an input unit 1430, a display unit 1440, a sensor 1450, an audio circuit 1460, a wireless fidelity (WiFi) module 1470, a processor 1480, a power supply 1490, and other components. Those skilled in the art will appreciate that the smart TV structure shown in fig. 14 does not limit the smart TV, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The memory 1420 may be used to store software programs and modules, and the processor 1480 performs the various functional applications and data processing of the smart TV by running the software programs and modules stored in the memory 1420. The memory 1420 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function or an image playing function), while the data storage area may store data created through use of the smart TV (such as audio data or a phonebook). In addition, the memory 1420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1480 is the control center of the smart TV; it connects the various parts of the entire smart TV using various interfaces and lines, and performs the various functions of the smart TV and processes data by running or executing the software programs and/or modules stored in the memory 1420 and calling the data stored in the memory 1420. Optionally, the processor 1480 may include one or more processing units; preferably, the processor 1480 may integrate an application processor, which mainly handles the operating system, user interface, and applications, with a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 1480.
In an embodiment of the present application, the processor 1480 included in the terminal further has the following functions:
in the process of playing the target video, obtaining the viewing state information of a viewing object; the viewing state information is used for representing whether the viewing object is viewing the target video;
when the viewing state of the viewing object for the target video is changed from viewing to non-viewing according to the viewing state information, marking a starting time point in the target video based on the current playing progress of the target video;
providing a video playback service for the viewing object when it is determined that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing according to the viewing state information; the video playback service includes playing the target video based on the starting point in time.
Optionally, the processor 1480 is further configured to execute steps of any implementation manner of the video play control method provided by the embodiment of the present application.
The embodiments of the present application also provide a computer readable storage medium storing a computer program for executing any one of the implementations of the video play control method described in the foregoing embodiments.
The embodiments of the present application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform any one of the implementations of a video play control method described in the foregoing embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing a computer program, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of" the following items means any combination of these items, including a single item or any combination of plural items. For example, at least one of a, b, or c may indicate: a; b; c; "a and b"; "a and c"; "b and c"; or "a and b and c", where a, b, and c may each be singular or plural.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (12)

1. A video play control method, the method comprising:
in the process of playing the target video, obtaining the viewing state information of a viewing object; the viewing state information is used for representing whether the viewing object is viewing the target video, the viewing state information includes facial feature information of the viewing object, the target video is not the first video in a continuous video set, wherein in the case that a plurality of viewing objects commonly view the target video in the same space, the obtaining the viewing state information of the viewing object in the process of playing the target video includes: in the process of playing the target video, acquiring the respective watching state information of the plurality of watching objects;
searching, in a target mapping relationship, for a video account corresponding to the facial feature information as the video account of the viewing object, wherein the target mapping relationship records the correspondence between video accounts and facial feature information;
when the viewing state of the viewing object for the target video is changed from viewing to non-viewing according to the viewing state information, marking a starting time point in the target video based on the current playing progress of the target video;
providing a video playback service for the viewing object when it is determined according to the viewing state information that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing; the video playback service includes playing the target video based on the starting time point, wherein the orientation angle of the face of the viewing object is determined according to the facial feature information of the viewing object; if the orientation angle of the face of the viewing object is within a preset angle range, it is determined that the viewing object is currently viewing the target video; if the orientation angle of the face of the viewing object is beyond the preset angle range, it is determined that the viewing object is not currently viewing the target video; and the preset angle range is determined according to the arrangement position of the terminal device, the screen configuration of the terminal device, and the viewing habits of the user;
while the target video is playing, determining the current emotional state of the viewing object according to the current viewing state information of the viewing object, and configuring a classification label for the video clip currently played in the target video based on that emotional state;
intercepting video clips to be recommended from the target video according to respective classification labels of the video clips in the target video;
before playing the next video after the target video, playing the video clips to be recommended cut from the target video as the content of a "previously on" recap;
wherein when the viewing state of the viewing object with respect to the target video is determined to be changed from viewing to non-viewing according to the viewing state information, marking a starting time point in the target video based on the current playing progress of the target video, comprising: when it is determined that the viewing state of the viewing object with respect to the target video is changed from viewing to non-viewing according to the viewing state information of the viewing object for each of the plurality of viewing objects, a starting time point is marked in the target video based on the current playing progress of the target video, and an association relationship between the starting time point and the video account of the viewing object is constructed to provide video playback service for the viewing object based on the starting time point associated with the video account of the viewing object.
2. The method according to claim 1, wherein said providing a video playback service for said viewing object when it is determined from said viewing state information that a viewing state of said viewing object for said target video is changed from non-viewing to viewing, comprises:
For each of the plurality of viewing objects, providing a video playback service for the viewing object when it is determined that the viewing state of the viewing object for the target video is changed from non-viewing to viewing according to viewing state information of the viewing object; the video playback service includes playing the target video based on a starting point in time associated with the video account of the viewing object.
3. The method of claim 1, wherein the change in viewing state of the viewing object for the target video is determined from the viewing state information by:
determining whether the line of sight of the viewing object falls on the target video according to the viewing state information;
if it is detected that the line of sight of the viewing object has left the target video and the duration for which the line of sight does not fall on the target video exceeds a first time threshold, determining that the viewing state of the viewing object with respect to the target video is changed from viewing to non-viewing;

and if it is detected that the line of sight of the viewing object has returned to the target video and the duration for which it continuously falls on the target video exceeds a second time threshold, determining that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing.
4. The method of claim 1, wherein the viewing state information comprises position information; and a change in the viewing state of the viewing object with respect to the target video is determined from the viewing state information by:

determining, according to the viewing state information, whether the viewing object is within a preset viewing range;

if it is detected that the viewing object has left the preset viewing range and the duration for which it is not within the preset viewing range exceeds a third time threshold, determining that the viewing state of the viewing object with respect to the target video is changed from viewing to non-viewing;

and if it is detected that the viewing object has returned to the preset viewing range and the duration for which it remains continuously within the preset viewing range exceeds a fourth time threshold, determining that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing.
5. The method according to any one of claims 1 to 4, wherein when it is determined from the viewing state information that the viewing state of the viewing object for the target video is changed from non-viewing to viewing, before the providing of the video playback service for the viewing object, the method further comprises:
Marking a termination time point in the target video based on the current playing progress of the target video;
the video playback service further comprises:
and providing a video clip based on the start time point and the end time point.
6. The method of claim 5, wherein marking a starting point in the target video based on the current progress of playing the target video comprises:
marking a time point which is in front of the current playing progress and is spaced from the current playing progress by a first preset duration in the target video as the starting time point;
the marking the termination time point in the target video based on the current playing progress of the target video comprises the following steps:
and marking a time point which is after the current playing progress and is spaced from the current playing progress by a second preset time length in the target video as the termination time point.
7. The method of claim 5, wherein the providing video playback services for the viewing object comprises:
displaying a playback prompt dialog; the playback prompt dialog box comprises a video playback control and a playback fragment sending control;
If the video playback control is detected to be clicked, jumping to the starting time point to play the video clip;
if the playback segment sending control is detected to be clicked, the video segment is sent to the video account of the viewing object; the video account of the viewing object is determined based on the viewing state information.
8. The method of claim 1, wherein in the case where a plurality of viewing objects commonly view the target video in the same space, the method further comprises:
while the target video is playing, for each of the plurality of viewing objects, determining the current emotional state of the viewing object according to the current viewing state information of the viewing object, and configuring, for the video clip currently played in the target video, a personal classification label based on the current emotional state of the viewing object, the personal classification label being attributed to the video account of the viewing object;

and, for each of the plurality of viewing objects, obtaining the personal classification labels under the video account of the viewing object, and cutting, from the target video, the video clips to be recommended corresponding to the video account of the viewing object according to the correspondence between the personal classification labels and the video clips in the target video.
9. The method according to claim 1, wherein the method further comprises:
and recommending the corresponding video clips to be recommended to the watching object after the target video is played.
10. A video playback control device, the device comprising:
the state information acquisition module is used for acquiring the viewing state information of the viewing object in the process of playing the target video; the viewing state information is used for representing whether the viewing object is viewing the target video, the viewing state information includes facial feature information of the viewing object, the target video is not the first video in a continuous video set, wherein in the case that a plurality of viewing objects commonly view the target video in the same space, the obtaining the viewing state information of the viewing object in the process of playing the target video includes: in the process of playing the target video, acquiring the respective watching state information of the plurality of watching objects;
a video account determining module, configured to search, in a target mapping relationship, for a video account corresponding to the facial feature information as the video account of the viewing object, wherein the target mapping relationship records the correspondence between video accounts and facial feature information;
A start point marking module, configured to mark a start point in the target video based on a current playing progress of the target video when it is determined that a viewing state of the viewing object with respect to the target video is changed from viewing to non-viewing according to the viewing state information;
a playback service module, configured to provide a video playback service for the viewing object when it is determined according to the viewing state information that the viewing state of the viewing object with respect to the target video is changed from non-viewing to viewing; the video playback service includes playing the target video based on the starting time point, wherein the orientation angle of the face of the viewing object is determined according to the facial feature information of the viewing object; if the orientation angle of the face of the viewing object is within a preset angle range, it is determined that the viewing object is currently viewing the target video; if the orientation angle of the face of the viewing object is beyond the preset angle range, it is determined that the viewing object is not currently viewing the target video; and the preset angle range is determined according to the arrangement position of the terminal device, the screen configuration of the terminal device, and the viewing habits of the user;
a tag configuration module, configured to determine, while the target video is playing, the current emotional state of the viewing object according to the current viewing state information of the viewing object, and configure a classification label for the video clip currently played in the target video based on that emotional state;
The video intercepting module is used for intercepting video clips to be recommended from the target video according to respective classification labels of the video clips in the target video;
a second recommendation module, configured to play, before playing the next video after the target video, the video clips to be recommended cut from the target video as the content of a "previously on" recap;
the starting point marking module is specifically configured to: when it is determined that the viewing state of the viewing object with respect to the target video is changed from viewing to non-viewing according to the viewing state information of the viewing object for each of the plurality of viewing objects, a starting time point is marked in the target video based on the current playing progress of the target video, and an association relationship between the starting time point and the video account of the viewing object is constructed to provide video playback service for the viewing object based on the starting time point associated with the video account of the viewing object.
11. An apparatus comprising a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the video play control method according to any one of claims 1 to 9 according to the computer program.
12. A computer-readable storage medium storing a computer program for executing the video play control method according to any one of claims 1 to 9.
CN202010476759.2A 2020-05-29 2020-05-29 Video playing control method, device, equipment and storage medium Active CN111615003B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010476759.2A CN111615003B (en) 2020-05-29 2020-05-29 Video playing control method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010476759.2A CN111615003B (en) 2020-05-29 2020-05-29 Video playing control method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111615003A CN111615003A (en) 2020-09-01
CN111615003B true CN111615003B (en) 2023-11-03

Family

ID=72201848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010476759.2A Active CN111615003B (en) 2020-05-29 2020-05-29 Video playing control method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111615003B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112261431B (en) * 2020-10-21 2022-01-14 联想(北京)有限公司 Image processing method and device and electronic equipment
CN112866809B (en) * 2020-12-31 2023-06-23 百度在线网络技术(北京)有限公司 Video processing method, device, electronic equipment and readable storage medium
CN112637678A (en) * 2021-03-09 2021-04-09 北京世纪好未来教育科技有限公司 Video playing method, device, storage medium and equipment
CN113038286B (en) * 2021-03-12 2023-08-11 维沃移动通信有限公司 Video playing control method and device and electronic equipment
CN113556611B (en) * 2021-07-20 2022-08-16 上海哔哩哔哩科技有限公司 Video watching method and device
CN113573151B (en) * 2021-09-23 2021-11-23 深圳佳力拓科技有限公司 Digital television playing method and device based on focusing degree value
CN113938748B (en) * 2021-10-15 2023-09-01 腾讯科技(成都)有限公司 Video playing method, device, terminal, storage medium and program product
CN114885201B (en) * 2022-05-06 2024-04-02 林间 Video comparison viewing method, device, equipment and storage medium
CN117692717A (en) * 2024-01-30 2024-03-12 利亚德智慧科技集团有限公司 Breakpoint resume playback processing method and device for light show, storage medium and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103747346A (en) * 2014-01-23 2014-04-23 中国联合网络通信集团有限公司 Multimedia video playing control method and multimedia video player
CN103826160A (en) * 2014-01-09 2014-05-28 广州三星通信技术研究有限公司 Method and device for obtaining video information, and method and device for playing video
CN106303672A (en) * 2016-08-24 2017-01-04 上海卓易科技股份有限公司 Synchronous playback method and device based on recorded video
WO2017113740A1 (en) * 2015-12-29 2017-07-06 乐视控股(北京)有限公司 Face recognition based video recommendation method and device
CN107484021A (en) * 2017-09-27 2017-12-15 广东小天才科技有限公司 Video playing method, system and terminal device
CN107911745A (en) * 2017-11-17 2018-04-13 武汉康慧然信息技术咨询有限公司 TV replay control method
CN108650558A (en) * 2018-05-30 2018-10-12 互影科技(北京)有限公司 Method and device for generating video recap ("previously on") content based on interactive video
CN109842805A (en) * 2019-01-04 2019-06-04 平安科技(深圳)有限公司 Method, device, computer equipment and storage medium for generating video watching hotspots


Also Published As

Publication number Publication date
CN111615003A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN111615003B (en) Video playing control method, device, equipment and storage medium
CN110225369B (en) Video selective playing method, device, equipment and readable storage medium
US20200014979A1 (en) Methods and systems for providing relevant supplemental content to a user device
CN105229629B (en) Method, electronic device and medium for estimating user interest in media content
CN111083512A (en) Switching method and device of live broadcast room, electronic equipment and storage medium
US8699862B1 (en) Synchronized content playback related to content recognition
KR101846756B1 (en) Tv program identification method, apparatus, terminal, server and system
CN109168037B (en) Video playing method and device
US9538251B2 (en) Systems and methods for automatically enabling subtitles based on user activity
CN107247733B (en) Video clip watching popularity analysis method and system
US10019058B2 (en) Information processing device and information processing method
US20160055879A1 (en) Systems and methods for automatically performing media actions based on status of external components
US11630862B2 (en) Multimedia focalization
EP2553937A2 (en) Media fingerprinting for social networking
CN105933772B (en) Interaction method, interaction device and interaction system
CN109189986B (en) Information recommendation method and device, electronic equipment and readable storage medium
CN109389088B (en) Video recognition method, device, machine equipment and computer readable storage medium
CN111444415B (en) Barrage processing method, server, client, electronic equipment and storage medium
US8898259B2 (en) Bookmarking system
US11763720B2 (en) Methods, systems, and media for detecting a presentation of media content on a display device
CN105159524A (en) Interface display method and apparatus
CN104933071A (en) Information retrieval method and corresponding device
CN110636379A (en) Recording method of television watching history, television and computer readable storage medium
CN110958485A (en) Video playing method, electronic equipment and computer readable storage medium
CN114025242A (en) Video processing method, video processing device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40028104
Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant