CN111615003A - Video playing control method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111615003A
CN111615003A (application CN202010476759.2A)
Authority
CN
China
Prior art keywords
viewing, video, watching, target video, state information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010476759.2A
Other languages
Chinese (zh)
Other versions
CN111615003B (en)
Inventor
高萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010476759.2A priority Critical patent/CN111615003B/en
Publication of CN111615003A publication Critical patent/CN111615003A/en
Application granted granted Critical
Publication of CN111615003B publication Critical patent/CN111615003B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432Content retrieval operation from a local storage medium, e.g. hard-disk
    • H04N21/4325Content retrieval operation from a local storage medium, e.g. hard-disk by playing back content from the storage medium
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program

Abstract

The embodiments of the present application disclose a video playing control method, apparatus, device, and storage medium, wherein the method includes: during playback of a target video, acquiring viewing state information of a viewing object, the viewing state information representing whether the viewing object is watching the target video; when it is determined from the viewing state information that the viewing state of the viewing object for the target video has changed from watching to not watching, marking a start time point in the target video based on the current playing progress of the target video; and when it is determined from the viewing state information that the viewing state has changed from not watching to watching, providing a video playback service for the viewing object, the video playback service including playing the target video from the start time point. The method can quickly and accurately provide a video playback service for the user and improves the user's viewing experience.

Description

Video playing control method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for controlling video playback.
Background
With the rapid development of internet technology, online video platforms have become one of the main media through which users watch videos; using devices such as mobile phones, computers, and smart televisions, users can watch all kinds of videos on these platforms, such as TV dramas, movies, and variety shows.
In real-life viewing, the following situation often arises: a user temporarily leaves while a video is playing, and when the user returns to continue watching, the playing progress needs to be adjusted back to the point reached before the user left so that the missed content can be watched. In this situation, the user can drag the playing progress bar or use the fast-rewind function to move the playback position back to where it was before leaving.
However, all of these approaches require the user to adjust the playing progress manually, and in many cases the user must drag the progress bar repeatedly, or switch between fast-forward and fast-rewind several times, before reaching the desired position. It is therefore generally difficult to return quickly and accurately to the playing progress before the user left, which costs the user both adjustment efficiency and viewing experience.
Disclosure of Invention
The embodiment of the application provides a video playing control method, a video playing control device, video playing control equipment and a storage medium, which can quickly and accurately provide video playback service for a user and improve the use experience of the user.
In view of this, a first aspect of the present application provides a video playback control method, including:
during playback of a target video, acquiring viewing state information of a viewing object; the viewing state information is used for representing whether the viewing object is watching the target video;
when it is determined, according to the viewing state information, that the viewing state of the viewing object for the target video changes from watching to not watching, marking a start time point in the target video based on the current playing progress of the target video;
when it is determined, according to the viewing state information, that the viewing state of the viewing object for the target video changes from not watching to watching, providing a video playback service for the viewing object; the video playback service includes playing the target video based on the start time point.
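The three steps of the first aspect amount to a small state machine: watch, mark on leaving, play back on returning. As a minimal sketch of that logic (not the patent's implementation — the class and method names here are illustrative), it could look like this in Python:

```python
class PlaybackController:
    """Tracks one viewer's watching state and marks a start time point
    when that state flips from watching to not watching."""

    def __init__(self):
        self.watching = True          # assume the viewer is watching at start
        self.start_time_point = None  # seconds into the target video

    def on_viewing_state(self, is_watching, current_progress_s):
        """Feed one sample of viewing-state information.

        Returns the marked start time point when the viewer comes back
        (i.e. when a playback service is due), otherwise None.
        """
        marked = None
        if self.watching and not is_watching:
            # watching -> not watching: mark the current playing progress
            self.start_time_point = current_progress_s
        elif not self.watching and is_watching:
            # not watching -> watching: hand the mark to the playback service
            marked = self.start_time_point
            self.start_time_point = None
        self.watching = is_watching
        return marked
```

For example, a sample of `(False, 125.0)` followed later by `(True, 180.0)` marks 125.0 on leaving and returns it on returning, so playback can resume from the marked point.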
A second aspect of the present application provides a video playback control apparatus, including:
the state information acquisition module is used for acquiring the watching state information of a watching object in the process of playing the target video; the viewing state information is used for representing whether the viewing object views the target video;
a starting point marking module, configured to mark a starting time point in the target video based on a current playing progress of the target video when it is determined that the viewing state of the viewing object with respect to the target video changes from viewing to non-viewing according to the viewing state information;
the playback service module is used for providing video playback service for the watching object when the watching state of the watching object for the target video is changed from non-watching to watching according to the watching state information; the video playback service includes playing the target video based on the start time point.
A third aspect of the application provides an apparatus comprising a processor and a memory:
the memory is used for storing a computer program;
the processor is configured to execute the steps of the video playback control method according to the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium for storing a computer program for executing the steps of the video playback control method according to the first aspect.
A fifth aspect of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the video playback control method of the first aspect described above.
According to the technical scheme, the embodiment of the application has the following advantages:
the embodiment of the application provides a video playing control method which acquires the viewing state information of a viewing object in real time while a target video is playing and uses that information to monitor whether the viewing object's viewing state for the target video changes. When the viewing state is detected to change from watching to not watching, a start time point is marked in the target video based on the current playing progress; when the viewing state is detected to change from not watching to watching, a video playback service is provided for the viewing object based on that start time point. Compared with the manual progress adjustment of the related art, the method can automatically mark the start time point of the video content the viewing object missed, based on the change in its viewing state, and then provide the missed content from that point. The viewing object never has to adjust the playing progress by hand, can watch the missed video content quickly and accurately, and thus enjoys a greatly improved user experience.
Drawings
Fig. 1 is a schematic view of an application scenario of a video playing control method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a video playing control method according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating an implementation of a video playback service provided in an embodiment of the present application;
fig. 4 is a schematic diagram illustrating an implementation principle of a video playback process provided in an embodiment of the present application;
fig. 5 is a schematic diagram illustrating an implementation principle of a video recommendation process according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a first video playback control apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a second video playback control apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a third video playback control apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a fourth video playback control apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a fifth video playback control apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a sixth video playback control apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a seventh video playback control apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the related art, if a user leaves while a video is playing and, after returning, wants to watch the video content that played while away, the user must manually drag the playing progress bar or use the fast-rewind function to move the playing progress back to where it was before leaving. With these manual methods it is often difficult to adjust the playing progress quickly and accurately to the position the user wants, so the user experience is poor.
In view of these problems in the related art, the embodiments of the present application provide a video playing control method that helps the user quickly and accurately view the video content missed while away, improving the user experience.
Specifically, in the video playing control method provided in the embodiment of the present application, in the process of playing the target video, the viewing state information of the viewing object is obtained in real time, and the viewing state information can represent whether the viewing object is viewing the target video; when the watching state of the watching object for the target video is determined to be changed from watching to not watching according to the watching state information, marking a starting time point in the target video based on the current playing progress of the target video; when it is determined that the viewing state of the viewing object for the target video is changed from non-viewing to viewing according to the viewing state information, a video playback service is provided for the viewing object, the video playback service including a service for playing the target video on the basis of a start time point.
Compared with the related-art approach in which the user adjusts the playing progress manually, the method provided by the embodiment of the application monitors, according to the viewing state information of the viewing object, whether the viewing object's viewing state for the target video changes. When that state changes from watching to not watching (for example, the user is detected leaving), it automatically marks the start time point of the missed video content based on the current playing progress; when the state changes from not watching to watching (for example, the user is detected returning), it automatically provides a video playback service based on the marked start time point. The viewing object never needs to adjust the playing progress manually, the missed video content is provided quickly and accurately, and the video viewing experience is greatly improved.
It should be understood that the execution subject of the video playing control method provided by the embodiment of the present application may be an electronic device, such as a terminal device or a server. The terminal device may specifically be a device with a video playing function, such as a smart phone, a computer, a smart television, a tablet computer, or a Personal Digital Assistant (PDA). The server may specifically be a server providing a video playing service, such as an application server or a Web server; in actual deployment it may be an independent server, a cluster server, or a cloud server.
In order to facilitate understanding of the video playing control method provided in the embodiment of the present application, an application scenario of the video playing control method is described below by taking an execution subject of the video playing method as a terminal device as an example.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a video playing control method provided in an embodiment of the present application. As shown in fig. 1, the application scenario includes a terminal device 110 (the scenario shown in fig. 1 takes the terminal device 110 as an example of an intelligent electronic device). A video playing application runs in the terminal device 110, and the viewing object can view the target video through the video playing application running in the terminal device 110. Furthermore, the terminal device 110 is also provided with a device for acquiring the viewing state information, for example, assuming that the viewing state information is facial feature information of the viewing object, the terminal device 110 is provided with an image acquisition device such as a camera, and the terminal device 110 can acquire the facial feature information of the viewing object through the image acquisition device.
In practical applications, if it is detected that the video playing application running in the terminal device 110 plays the target video, the terminal device 110 may correspondingly trigger the acquisition device of the viewing state information to continuously acquire, in real time, the viewing state information of the viewing object viewing the target video, where the viewing state information can represent whether the viewing object is viewing the target video. Taking the viewing state information as the facial feature information as an example, the terminal device 110 may continuously collect the facial feature information of the viewing object in real time by using a camera arranged thereon as the viewing state information of the viewing object in the process of playing the target video.
Furthermore, the terminal device 110 may monitor the viewing status of the target video by the viewing object according to the viewing status information collected by the terminal device, and when it is monitored that the viewing status of the target video by the viewing object changes from viewing to non-viewing according to the viewing status information, may mark a starting time point in the target video based on the current playing progress of the target video, that is, mark a starting time point of video content missed by the viewing object.
Thereafter, if the terminal device 110 determines from the collected viewing state information that the viewing object's viewing state for the target video has changed from not watching to watching, it may provide a corresponding video playback service for the viewing object based on the start time point marked in the target video: for example, it may prompt the user to choose whether to play back the video content missed while away, and, upon detecting that the user chooses to do so, play the target video from the marked start time point.
It should be understood that the application scenario shown in fig. 1 is only an example, and in practical applications, in addition to the video playing control method provided by the embodiment of the present application being independently executed by the terminal device 110, the video playing control method provided by the embodiment of the present application may also be independently executed by a server, for example, the terminal device for playing a video may upload the viewing state information collected by the terminal device to the server in real time, and then the server provides a corresponding video playback service for a viewing object based on the video playing control method provided by the embodiment of the present application; the video playing control method provided by the embodiment of the present application may also be executed by the terminal device and the server in cooperation, for example, the terminal device monitors the viewing state of the viewing object according to the viewing state information collected by the terminal device, and when detecting that the viewing state changes, the server marks a starting time point or provides a corresponding video playback service. The application of the video playing control method is not limited herein.
The following describes the video playback control method provided by the present application in detail by embodiments.
Referring to fig. 2, fig. 2 is a schematic flowchart of a video playing control method according to an embodiment of the present application. For convenience of description, the following embodiments are described taking a terminal device as an example of an execution subject. As shown in fig. 2, the video playback control method includes the following steps:
step 201: in the process of playing the target video, acquiring the watching state information of a watching object; the viewing state information is used for representing whether the viewing object views the target video.
In practical application, a user can select a video to be watched as a target video through a video playing application running in terminal equipment, and control the video playing application to play the target video; the target video may be a video with any content in any form, and the application does not limit the target video in any way. In the process of playing the target video, the terminal device can continuously collect the watching state information of the watching object of the target video in real time through the corresponding watching state information collecting device, such as a camera, an infrared sensor and the like.
It should be noted that the viewing state information in the embodiment of the present application refers to information that can reflect whether the viewing object is viewing the target video, and the viewing state of the viewing object with respect to the target video can be monitored according to the viewing state information, for example, whether the viewing object leaves the target video that is not being viewed, whether the viewing object returns to continue viewing the target video, and the like.
Illustratively, the viewing state information may include facial feature information of the viewing object. Specifically, the terminal device may capture images of a preset range (generally, the range from which the screen of the terminal device can be seen) with its camera and, when a captured image contains the viewing object's face, extract the viewing object's facial feature information from the image by face recognition technology and use it as the viewing state information of the viewing object.
In one possible implementation, the terminal device may track the line of sight of the viewing object according to its facial feature information and determine whether the viewing object is watching the target video according to that line of sight. Specifically, the terminal device may determine the gaze direction of the viewing object from the facial feature information: if the gaze direction indicates that the viewing object's line of sight falls on the screen of the terminal device, the viewing object may be considered to be currently watching the target video played on the terminal device; if it indicates that the line of sight does not fall on the screen, the viewing object may be considered not to be currently watching the target video.
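As an illustration of the line-of-sight check, one simple geometric interpretation (an assumption, not spelled out in this application) is to cast the gaze direction as a ray from the eye position and test whether it intersects the screen rectangle:

```python
def gaze_falls_on_screen(eye_pos, gaze_dir, screen_w, screen_h):
    """Check whether a gaze ray hits the screen plane (z = 0).

    eye_pos:  (x, y, z) in screen coordinates, z > 0 in front of the screen
    gaze_dir: (dx, dy, dz) direction of the line of sight
    The screen spans [0, screen_w] x [0, screen_h] at z = 0.
    """
    dx, dy, dz = gaze_dir
    if dz >= 0:                       # looking away from the screen plane
        return False
    t = -eye_pos[2] / dz              # ray parameter where the ray meets z = 0
    hit_x = eye_pos[0] + t * dx
    hit_y = eye_pos[1] + t * dy
    return 0.0 <= hit_x <= screen_w and 0.0 <= hit_y <= screen_h
```

A viewer facing a 1.0 m x 0.6 m screen head-on from 0.6 m away would satisfy the test, while the same viewer glancing sharply sideways would not; the coordinate convention here is illustrative.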
In another possible implementation, the terminal device may determine the face orientation of the viewing object according to its facial feature information and determine, based on that orientation, whether the viewing object is watching the target video. Specifically, the terminal device may determine the orientation angle of the viewing object's face from the facial feature information: if the orientation angle is within a preset angle range, it may be determined that the viewing object is currently watching the target video; if it exceeds the preset angle range, it may be determined that the viewing object is not. It should be understood that the preset angle range may be set according to the screen configuration of the terminal device and/or the user's viewing habits; the relationship between the face's orientation angle and this range thus measures whether the viewing object's line of sight can fall on the terminal device's screen, that is, whether the viewing object is watching the target video.
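The face-orientation variant reduces to an interval test on estimated head-pose angles. The concrete yaw and pitch limits below are placeholder values, since the application leaves the preset angle range to the screen configuration and viewing habits:

```python
def is_watching_by_face_orientation(yaw_deg, pitch_deg,
                                    yaw_range=(-30.0, 30.0),
                                    pitch_range=(-20.0, 20.0)):
    """Treat the viewer as watching when the face orientation angles fall
    inside the preset angle range; 0 degrees means facing the screen."""
    return (yaw_range[0] <= yaw_deg <= yaw_range[1]
            and pitch_range[0] <= pitch_deg <= pitch_range[1])
```

In practice the yaw/pitch estimates would come from a head-pose model applied to the facial feature information; only the threshold logic is sketched here.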
It should be understood that, in practical applications, in the case that the viewing state information includes facial feature information, the terminal device may further determine other related information according to the facial feature information of the viewing object, and adopt a corresponding processing manner to measure whether the viewing object is viewing the target video based on the related information.
Illustratively, the viewing state information may include position information of the viewing object. Specifically, the terminal device may acquire an image for a preset range (generally, a range in which a screen of the terminal device can be viewed) through the camera, and determine position information of a viewing object based on the acquired image as viewing state information of the viewing object; and/or the terminal device may detect position information of the viewing object as viewing state information of the viewing object through the infrared sensor.
In a possible implementation manner, the terminal device may determine whether the viewing object is viewing the target video according to a relationship between the position information of the viewing object and a preset viewing range. Specifically, if the position information of the viewing object indicates that the viewing object is within the preset viewing range, it may be determined that the viewing object is currently viewing the target video, and if the position information of the viewing object indicates that the viewing object is outside the preset viewing range, it may be determined that the viewing object is not currently viewing the target video. It should be understood that the preset viewing range may be determined according to factors such as an arrangement position of the terminal device, a screen configuration of the terminal device, and a viewing habit of the user.
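A sketch of the preset-viewing-range test, under the assumption that the range is a distance-plus-angle cone in front of the screen (the application only says the range depends on the device's placement, screen configuration, and viewing habits, so the shape and limits here are illustrative):

```python
import math

def in_preset_viewing_range(pos, max_distance_m=4.0, half_angle_deg=60.0):
    """pos = (x, y): x is the lateral offset from the screen centre and y
    the distance in front of the screen, both in metres.  The viewer counts
    as watching only inside a cone of half_angle_deg within max_distance_m."""
    x, y = pos
    if y <= 0:                                  # behind or beside the screen plane
        return False
    if math.hypot(x, y) > max_distance_m:       # too far from the screen
        return False
    return math.degrees(math.atan2(abs(x), y)) <= half_angle_deg
```

An infrared sensor or camera would supply `pos`; the function then maps position information to the watching / not-watching decision described above.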
It should be understood that in practical applications, besides the facial feature information and the position information of the viewing object, other information capable of reflecting whether the viewing object is looking at the screen of the terminal device may be used as the viewing state information, and the viewing state information is not specifically limited in this application.
It should be noted that, in a scene in which a plurality of viewing objects view a target video together in the same space, during the process of playing the target video, the terminal device may continuously acquire viewing state information of the plurality of viewing objects in real time, that is, acquire corresponding viewing state information for each viewing object. For example, in a scenario where a plurality of users watch a target video played on a smart tv together in a living room, the smart tv may collect viewing status information of each user accordingly while playing the target video.
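In the shared-screen scenario each viewing object needs its own watching flag and its own marked start point. A hypothetical per-viewer extension of the single-viewer logic (viewer identifiers and the tuple layout are illustrative):

```python
class MultiViewerMonitor:
    """Keeps one independent watching flag and marked start time point per
    viewing object, as in the living-room scenario with several viewers."""

    def __init__(self):
        self.states = {}  # viewer_id -> (watching, marked start time point)

    def update(self, viewer_id, is_watching, progress_s):
        """Feed one viewing-state sample for one viewer; returns the start
        time point to play back from when that viewer returns, else None."""
        watching, mark = self.states.get(viewer_id, (True, None))
        playback_from = None
        if watching and not is_watching:
            mark = progress_s                 # this viewer left: mark the point
        elif not watching and is_watching:
            playback_from, mark = mark, None  # this viewer is back
        self.states[viewer_id] = (is_watching, mark)
        return playback_from
```

One viewer leaving and returning does not disturb the state kept for the others, which is the point of collecting viewing state information per viewing object.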
Optionally, under the condition that the viewing state information includes facial feature information, the terminal device may further identify the identity of the viewing object according to the collected viewing state information, and determine the video account of the viewing object on the video playing platform. Specifically, a video account corresponding to the facial feature information in the viewing status information may be searched in a target mapping relationship, as the video account of the viewing object, where the target mapping relationship records a corresponding relationship between the video account and the facial feature information.
In a specific implementation, the viewing object may enter its facial feature information when registering an account on the video playing platform, and the server of the video playing platform accordingly constructs the correspondence between that facial feature information and the registered video account and stores the correspondence in the target mapping relationship. Alternatively, when the viewing object uses a video playing application provided by the video playing platform, the application may prompt the viewing object to enter its facial feature information; after detecting that the facial feature information has been entered, the application transmits it to the server of the video playing platform, and the server constructs the correspondence between the facial feature information and the video account currently logged in to the application and stores the correspondence in the target mapping relationship.
When the terminal equipment plays the target video, the facial feature information of the watching object can be collected to serve as the watching state information, the facial feature information is transmitted to the server of the video playing platform, the server can search the video account number corresponding to the facial feature information in the target mapping relation, and then the video account number is determined as the video account number of the watching object.
It should be understood that, in practical application, the server of the video playing platform may also issue the target mapping relationship stored therein to the terminal device, and the terminal device searches the video account corresponding to the collected facial feature information based on the target mapping relationship, and uses the video account as the video account of the viewing object. The implementation manner of determining the video account of the viewing object is not limited in any way herein.
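As an illustration of looking up a video account in the target mapping relationship, the match could be sketched as a nearest-neighbor comparison between a collected facial feature vector and enrolled vectors; the account names, feature vectors, and distance threshold below are invented for the example and are not part of the application:

```python
# Hypothetical sketch of the target mapping relationship: collected facial
# features are matched to the registered account with the closest enrolled
# features, subject to a maximum-distance threshold.
import math

# target mapping relationship: video account -> enrolled facial feature vector
target_mapping = {
    "video_account_1": [0.12, 0.80, 0.33],
    "video_account_2": [0.90, 0.10, 0.45],
}

def find_account(features, threshold=0.5):
    """Return the account whose enrolled features are nearest, or None if no
    enrolled vector is within the threshold (i.e., an unknown viewing object)."""
    best, best_dist = None, float("inf")
    for account, enrolled in target_mapping.items():
        dist = math.dist(features, enrolled)  # Euclidean distance
        if dist < best_dist:
            best, best_dist = account, dist
    return best if best_dist <= threshold else None

print(find_account([0.11, 0.79, 0.35]))  # close to account 1 -> "video_account_1"
print(find_account([9.0, 9.0, 9.0]))     # matches nobody -> None
```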
In a scene in which a plurality of viewing objects view a target video together in the same space, the terminal device may determine, according to viewing status information (including facial feature information) of each of the plurality of viewing objects, a video account of each of the plurality of viewing objects in the manner described above, and further distinguish and record the viewing status of each viewing object with respect to the target video based on the video account.
It should be noted that, when the execution subject of the video playing control method provided in the embodiment of the present application is the server, the server may obtain the viewing state information of the viewing object uploaded by the terminal device in real time. For example, the server may obtain images acquired by the terminal device in real time and, by using a face recognition technology, determine the facial feature information of the viewing object from those images as the viewing state information; as another example, the server may obtain the images or infrared sensing information acquired by the terminal device in real time and determine the position information of the viewing object as the viewing state information accordingly.
Step 202: when the watching state of the watching object for the target video is changed from watching to not watching according to the watching state information, a starting time point is marked in the target video based on the current playing progress of the target video.
In the process of playing the target video, the terminal device may acquire and analyze the collected viewing state information of the viewing object in real time, and monitor the viewing state of the viewing object with respect to the target video based on that information. When the terminal device determines from the viewing state information that the viewing state of the viewing object with respect to the target video changes from viewing to not viewing, for example when it detects that the viewing object leaves midway, the terminal device may mark the target video based on the playing progress at that moment, that is, mark in the target video the starting time point of the video content missed by the viewing object.
It should be understood that the viewing state information collected by the terminal device generally reflects only the instantaneous viewing state of the viewing object, which by itself can hardly reflect objectively and accurately whether the viewing state of the viewing object with respect to the target video has changed. Taking facial feature information as an example, suppose the viewing state information collected at a certain moment indicates that the line of sight of the viewing object does not fall on the target video at that moment; the viewing object may merely have glanced away, and its line of sight is likely to return to the target video at the next moment. Therefore, in general, whether the viewing state of the viewing object with respect to the target video has changed cannot be determined objectively and accurately from the viewing state information at a single moment alone, and usually needs to be determined based on the viewing state information over a period of time.
In one possible implementation, when the viewing state information includes facial feature information of the viewing object, the terminal device may monitor whether the viewing state of the viewing object with respect to the target video changes according to a dwell time of a line of sight of the viewing object. That is, the terminal device may determine whether the line of sight of the viewing object falls on the target video according to the viewing state information; if the fact that the sight line of the watching object leaves the target video and the time which does not fall on the target video continuously exceeds a first time threshold value is detected, the watching state of the watching object for the target video can be determined to be changed from watching to not watching; on the contrary, if it is detected that the line of sight of the viewing object returns to the target video and the time of continuously falling on the target video exceeds the second time threshold, it may be determined that the viewing state of the viewing object for the target video is changed from not viewing to viewing.
Specifically, the terminal device may track, by using a gaze tracking technique, a gaze direction of the viewing object based on the facial feature information of the viewing object in the viewing state information, and if it is determined that the gaze of the viewing object leaves the target video (i.e., leaves the screen of the terminal device) according to the gaze direction of the viewing object and the time that the gaze of the viewing object does not continuously fall on the target video exceeds a first time threshold, it may be considered that the viewing state of the viewing object with respect to the target video is changed, and the viewing state is changed from viewing to non-viewing. On the contrary, if it is determined that the sight line of the viewing object returns to the target video (i.e. falls on the screen of the terminal device again) according to the sight line direction of the viewing object, and the time of continuously falling on the target video exceeds the second time threshold, it can be considered that the viewing state of the viewing object for the target video is changed, and the viewing state is changed from non-viewing to viewing.
In another possible implementation manner, when the viewing state information includes facial feature information of the viewing object, the terminal device may monitor whether the viewing state of the viewing object for the target video changes according to a dwell time of the viewing object face orientation. That is, the terminal device may determine, according to the viewing state information, whether the orientation angle of the face of the viewing object is within a preset angle range (which refers to an angle range in which the screen of the terminal device can be viewed), and if it is detected that the orientation angle of the face of the viewing object exceeds the preset angle range and the time continuously exceeding the preset angle range exceeds a first time threshold, determine that the viewing state of the viewing object with respect to the target video is changed from viewing to non-viewing; on the contrary, if it is detected that the orientation angle of the face of the viewing object returns to the preset angle range and the time continuously belonging to the preset angle range exceeds the second time threshold, it is determined that the viewing state of the viewing object for the target video is changed from non-viewing to viewing.
It should be understood that the first time threshold and the second time threshold may be set according to actual requirements, for example, both set to 3 minutes, and the first time threshold and the second time threshold are not specifically limited herein.
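The dwell-time logic shared by these implementations can be sketched as a small state machine that only flips the viewing state after the instantaneous "looking / not looking" signal has persisted past a threshold; the class and method names, the sampling interface, and the 3-minute threshold values are assumptions for illustration:

```python
# Hedged sketch: a viewing-state monitor that ignores brief glances away
# (or brief glances back) shorter than the configured time thresholds.
FIRST_THRESHOLD = 180.0   # seconds away before "viewing" -> "not viewing"
SECOND_THRESHOLD = 180.0  # seconds back before "not viewing" -> "viewing"

class ViewingStateMonitor:
    def __init__(self):
        self.viewing = True          # assume the object is viewing at start
        self.candidate_since = None  # when the opposite signal first appeared

    def update(self, gaze_on_screen: bool, now: float):
        """Feed one sample; return "left"/"returned" on a state change, else None."""
        if gaze_on_screen == self.viewing:
            self.candidate_since = None  # signal agrees with current state
            return None
        if self.candidate_since is None:
            self.candidate_since = now
        threshold = SECOND_THRESHOLD if gaze_on_screen else FIRST_THRESHOLD
        if now - self.candidate_since >= threshold:
            self.viewing = gaze_on_screen
            self.candidate_since = None
            return "returned" if gaze_on_screen else "left"
        return None

m = ViewingStateMonitor()
print(m.update(False, 0.0))    # brief glance away: None
print(m.update(True, 10.0))    # gaze returned quickly: still None
print(m.update(False, 20.0))   # starts looking away again: None
print(m.update(False, 200.0))  # away 180 s continuously -> "left"
```

The same structure applies whether the per-sample signal comes from line-of-sight tracking, face-orientation angles, or the position-based check described below, with the third and fourth time thresholds substituted as appropriate.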
In yet another possible implementation manner, when the viewing state information includes position information of the viewing object, the terminal device may monitor whether the viewing state of the viewing object with respect to the target video changes according to the dwell time of the viewing object within a preset viewing range. That is, the terminal device may determine, according to the viewing state information, whether the viewing object is located within the preset viewing range; if it is detected that the viewing object leaves the preset viewing range and the time continuously outside the preset viewing range exceeds a third time threshold, it is determined that the viewing state of the viewing object with respect to the target video is changed from viewing to not viewing; conversely, if it is detected that the viewing object returns to the preset viewing range and the time continuously within the preset viewing range exceeds a fourth time threshold, it is determined that the viewing state of the viewing object with respect to the target video is changed from not viewing to viewing.
Specifically, the terminal device may set the preset viewing range in advance according to its screen configuration, where the preset viewing range is the position range from which the screen of the terminal device can be viewed. While playing the target video, the terminal device may acquire the position information of the viewing object in real time and monitor, based on that information, whether the viewing object is within the preset viewing range. If it is detected from the viewing state information that the viewing object leaves the preset viewing range, and the time continuously spent outside the preset viewing range exceeds the third time threshold, it can be considered that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing. Conversely, if it is detected from the viewing state information that the viewing object returns to the preset viewing range, and the time continuously spent within the preset viewing range exceeds the fourth time threshold, it can be considered that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing.
It should be understood that the third time threshold and the fourth time threshold may be set according to actual requirements, for example, both set to 4 minutes, and the third time threshold and the fourth time threshold are not specifically limited herein.
It should be understood that, in practical applications, in addition to monitoring whether the viewing state of the viewing object for the target video changes according to the facial feature information and the position information of the viewing object, other related information may be used as the viewing state information, and whether the viewing state of the viewing object for the target video changes is monitored based on the viewing state information in a corresponding manner.
Optionally, in order to enhance the video playback effect and ensure that the viewing object can view the complete missed-viewing video content when returning to view the target video, the terminal device may mark, as the starting time point of the missed-viewing video content, a time point that is before the current playing progress of the target video and is separated from the current playing progress by a first preset time length in the target video when monitoring that the viewing state of the viewing object for the target video is changed from viewing to non-viewing.
That is, when the terminal device marks the start time point, the start time point can be marked by extending the first preset time length forward on the basis of determining the play time point of the target video when the watching state of the watching object to the target video changes, so that more complete overlooked video content can be provided for the watching object. It should be understood that the first preset time period may be set according to actual requirements, for example, set to 3 minutes, and the first preset time period is not specifically limited herein.
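The forward extension of the starting time point could be sketched as follows, with a hypothetical helper name and the 3-minute example value from the text:

```python
# Sketch (helper name assumed): mark the starting time point by extending the
# first preset time length backward from the playing progress at the moment
# the viewing state changes from viewing to not viewing.
FIRST_PRESET_SECONDS = 180  # e.g., 3 minutes; configurable per requirements

def mark_start_point(current_progress_s: float) -> float:
    """Clamp at 0 so the mark never falls before the beginning of the video."""
    return max(0.0, current_progress_s - FIRST_PRESET_SECONDS)

print(mark_start_point(1000.0))  # 820.0: three minutes before the leave point
print(mark_start_point(60.0))    # 0.0: clamped to the start of the video
```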
It should be noted that, in a scene where multiple viewing objects view a target video together in the same space, if the terminal device determines, in advance, video accounts of the multiple viewing objects on the video playing platform according to viewing state information (including facial feature information) of the multiple viewing objects, when it is monitored that a viewing state of a certain viewing object with respect to the target video changes, the terminal device may record a start time point marked for the viewing object based on the video account of the viewing object.
That is, the terminal device may monitor the viewing status of the target video for each of the multiple viewing objects, and when it is determined that the viewing status of the viewing object for the target video changes from viewing to non-viewing according to the viewing status information of a certain viewing object, mark a starting time point in the target video based on the current playing progress of the target video, and construct an association relationship between the starting time point and the video account of the viewing object, indicating that the starting time point is marked based on the viewing status of the viewing object for the target video.
To facilitate understanding of a specific implementation of step 202 in the foregoing scenario, the foregoing implementation is exemplarily described below by taking user a and user B jointly watch a target video as an example.
When a user A and a user B watch a target video played by a terminal device together in the same space, the terminal device can acquire facial feature information of the user A and facial feature information of the user B, determine a video account 1 of the user A on a video playing platform according to the facial feature information of the user A, and determine a video account 2 of the user B on the video playing platform according to the facial feature information of the user B. The terminal equipment can collect the facial feature information of the user A and the user B in real time as the respective watching state information in the process of playing the target video, and monitors whether the watching states of the user A and the user B for the target video are changed or not according to the respective watching state information of the user A and the user B.
When the terminal device determines that the viewing state of the user a for the target video is changed from viewing to non-viewing according to the viewing state information of the user a, the terminal device may mark a starting time point in the target video based on the playing progress of the target video at that time, and set an identification tag corresponding to the video account 1 for the starting time point, so that the starting time point is associated with the video account 1, which indicates that the starting time point is marked based on the viewing state of the user a for the target video.
Step 203: when the watching state of the watching object for the target video is changed from non-watching to watching according to the watching state information, providing a video playback service for the watching object; the video playback service includes playing the target video based on the start time point.
In the process of playing the target video, even after the terminal device determines from the viewing state information that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing, the terminal device continues to collect the viewing state information of the viewing object in real time and to monitor the viewing state of the viewing object based on it. When the terminal device determines from the collected viewing state information that the viewing state of the viewing object with respect to the target video changes from not viewing to viewing, for example when it detects that the viewing object has returned, the terminal device may accordingly provide a video playback service for the viewing object.
In some embodiments, the terminal device may provide video playback services for the viewing object based on its starting point in time marked in step 202. For example, when the terminal device detects that the viewing state of the target video by the viewing object changes from non-viewing to viewing, a playback prompt dialog box may pop up on the video playing interface to inquire whether the viewing object reviews the missed content, where the playback prompt dialog box includes a video playback control for triggering playback of the missed video content, and if it is detected that the viewing object clicks the video playback control, the playing progress of the target video is adjusted to the starting time point marked in step 202, and the target video is played from the starting time point accordingly.
In a scene in which a plurality of viewing objects view a target video together in the same space, if the terminal device determines, in advance, video accounts of the plurality of viewing objects on the video playing platform according to viewing state information (including facial feature information) of the plurality of viewing objects, and correspondingly marks a start time point associated with the video account of the viewing object leaving halfway based on the viewing state of each viewing object for the target video, when it is monitored that the viewing object leaving halfway returns, a video playback service may be provided for the viewing object based on the start time point associated with the video account of the viewing object.
That is, the terminal device may monitor the viewing status of the target video for each of the plurality of viewing objects, and when it is determined from the viewing status information of a certain viewing object that the viewing status of the viewing object for the target video changes from not viewing to viewing, may provide a video playback service for the viewing object, the video playback service including playing the target video based on the start time point associated with the video account of the viewing object.
Still taking the example in which user A and user B watch the target video together in the same space, assume that user A leaves while watching the target video, so the terminal device marks in the target video a starting time point associated with video account 1 of user A on the video playing platform. If the terminal device subsequently determines from the viewing state information of user A that the viewing state of user A with respect to the target video changes from not viewing to viewing, that is, it detects that user A has returned to watch the target video, the terminal device may provide a video playback service for user A. For example, the terminal device may pop up a playback prompt dialog box on the video playing interface to ask whether user A wishes to play back the missed content; the playback prompt dialog box includes a video playback control, and if the video playback control is detected to be clicked, the playing progress of the target video is adjusted to the starting time point associated with video account 1, and the target video is played from that starting time point.
Optionally, when the terminal device determines that the viewing state of the target video of the viewing object is changed from non-viewing to viewing according to the viewing state information of the viewing object, the terminal device may further mark a termination time point in the target video based on the current playing progress of the target video; that is, when the terminal device determines that the viewing state of the viewing object for the target video is changed from non-viewing to viewing, the terminal device may further mark an end time point in the target video accordingly, so as to determine the video segment missed by the viewing object based on the start time point and the end time point marked in the target video before. Further, the terminal device may provide a corresponding video playback service based on the video segment between the start time point and the end time point.
In order to enhance the video playback effect and ensure that the video playback segment determined based on the starting time point and the ending time point can completely cover the video content missed by the watching object, the terminal device may mark, as the ending time point of the missed video content, a time point which is after the current playing progress of the target video and is separated from the current playing progress by a second preset time length in the target video when it is monitored that the watching state of the watching object for the target video is changed from non-watching to watching.
That is, when the terminal device marks the end time point, the end time point may be marked with a second preset duration backwards on the basis of determining the play time point of the target video when the viewing state of the viewing object with respect to the target video changes, so as to ensure that more complete missed-viewing video content is provided for the viewing object. It should be understood that the second preset time period may be set according to actual requirements, for example, set to 3 minutes, and the second preset time period is not specifically limited herein.
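The backward extension of the ending time point, and the resulting missed segment, could be sketched as follows (helper names assumed; the 3-minute value is the example from the text):

```python
# Sketch: mark the ending time point by extending the second preset time
# length past the playing progress at the moment the viewing object returns,
# clamped to the total video length; the missed segment is then the interval
# between the marked starting and ending time points.
SECOND_PRESET_SECONDS = 180  # e.g., 3 minutes; configurable per requirements

def mark_end_point(current_progress_s: float, video_length_s: float) -> float:
    return min(video_length_s, current_progress_s + SECOND_PRESET_SECONDS)

def missed_segment(start_point: float, end_point: float):
    """The video segment to replay, or to intercept and send to the viewing
    object's video account."""
    return (start_point, end_point)

end = mark_end_point(current_progress_s=2000.0, video_length_s=2100.0)
print(missed_segment(820.0, end))  # (820.0, 2100.0): clamped at the video end
```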
Illustratively, the implementation of providing a video playback service for viewing objects based on a video segment between a start time point and an end time point is as follows: and displaying a playback prompt dialog box, wherein the playback prompt dialog box comprises a video playback control and a playback segment sending control, if the video playback control is detected to be clicked, jumping to a starting time point to play the video segment, and if the playback segment sending control is detected to be clicked, sending the video segment to a video account of a watching object, wherein the video account of the watching object is determined by the terminal equipment according to the facial feature information of the watching object.
Specifically, the terminal device may determine, in advance, a video account of the viewing object on the video playing platform according to viewing status information (including facial feature information) of the viewing object, and mark, in the target video, a start time point associated with the video account of the viewing object when it is detected that the viewing status of the viewing object for the target video changes from viewing to non-viewing, and mark, in the target video, an end time point associated with the video account of the viewing object when it is detected that the viewing status of the viewing object for the target video changes from non-viewing to viewing.
When the viewing state of the viewing object for the target video is changed from non-viewing to viewing, the terminal device may further pop up a playback prompt dialog box on the video playing interface to inquire whether the viewing object plays back the video content missed to be viewed, where the playback prompt dialog box includes a video playback control and a playback segment sending control. If the terminal device detects that the video playback control is clicked, the playing progress of the target video is adjusted to the starting time point associated with the video account of the watching object, the target video is played from the starting time point until the playing progress of the target video reaches the ending time point associated with the video account of the watching object, and the playback of the missed-watching video content is completed. If the terminal device detects that the playback segment sending control is clicked, the video segment is intercepted from the target video based on the starting time point and the ending time point which are associated with the video account of the watching object, the video segment is sent to the video account of the watching object, and the terminal device continues to play the target video in the process.
In the following, with reference to fig. 3, an exemplary description is given of an implementation manner of providing a video playback service for a viewing object that leaves halfway based on a video segment between a start time point and an end time point in a scene in which a plurality of viewing objects collectively view a target video in the same space.
Still assume that the plurality of viewing objects include a user a and a user B, and the terminal device has determined that the video account of the user a on the video playing platform is a video account 1 according to the viewing state information (including facial feature information) of the user a, and determined that the video account of the user B on the video playing platform is a video account 2 according to the viewing state information (including facial feature information) of the user B. If the terminal device detects that the user a leaves halfway according to the viewing state information of the user a during the process of playing the target video, the starting time point a is based on the current playing progress mark of the target video and is associated with the video account 1, and if the terminal device subsequently detects that the user a returns to view the target video according to the viewing state information of the user a, the ending time point b is based on the current playing progress mark of the target video and is associated with the video account 1.
When the terminal device detects, according to the viewing state information of user A, that user A has returned to watch the target video, a playback prompt dialog box may pop up on the video playing interface to ask whether user A wishes to review the missed content, where the playback prompt dialog box includes a video playback control "watch back immediately" and a playback segment sending control "send to me separately". If the video playback control is detected to be touched, the playing progress of the target video is adjusted to the starting time point a, and the target video is played from the starting time point a; if the playback segment sending control is detected to be touched, the video segment between the starting time point a and the ending time point b is captured from the target video and sent to video account 1, so that user A can separately watch the missed video segment.
It should be noted that, in a scene where a plurality of viewing objects view a target video together in the same space, since the terminal device may determine the video account of each viewing object on the video playing platform according to the viewing state information of each viewing object, if the terminal device detects that a certain viewing object leaves in the middle of the target video playing process, the terminal device may record a start time point and an end time point associated with the video account of the viewing object correspondingly, and directly send a video segment between the start time point and the end time point to the video account of the viewing object, instead of performing a special playback prompt for the viewing object when detecting that the viewing object returns to the viewing target video. Therefore, the target video watching of other watching objects in the same space can be prevented from being influenced.
In addition, in the case that the viewing state information includes facial feature information of the viewing object, the method provided by the embodiment of the application may further determine, based on an expression recognition technology, an emotional state of the viewing object when the viewing object views each video segment in the target video, and based on this, intercept, from the target video, a video segment that may be interested by the viewing object as a video segment to be recommended for the viewing object.
That is, in the process of playing the target video, the terminal device may determine the current emotional state of the viewing object according to the current viewing state information of the viewing object, and configure a classification tag for the currently played video segment in the target video based on the emotional state; furthermore, the video clips to be recommended can be intercepted from the target video according to the respective classification labels of the video clips in the target video.
Specifically, during the playing of the target video, the terminal device may continuously collect facial feature information of the viewing object in real time, perform expression recognition based on the collected facial feature information, and determine the emotional state of the viewing object, such as joy, sadness, anger, or boredom, from the expression recognition result. The terminal device may then configure a classification label for the currently played video segment of the target video according to the current emotional state of the viewing object: for example, when the current emotional state of the viewing object is joy, the classification label "quality-like" may be configured for the currently played video segment; when the current emotional state is sadness or anger, the classification label "quality-impaired" may be configured; and when the current emotional state is boredom, the classification label "uninteresting" may be configured. Further, the terminal device may intercept, from the target video, the video segments whose classification labels belong to the target classification labels as the video segments to be recommended for the viewing object; for example, assuming the target classification labels include "quality-like" and "quality-impaired", the terminal device may intercept the video segments labeled "quality-like" and the video segments labeled "quality-impaired" from the target video as the video segments to be recommended for the viewing object.
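The tagging-and-interception flow described here could be sketched as follows; the emotion names and classification labels mirror the examples in the text, while the function names, segment identifiers, and data layout are assumptions for illustration:

```python
# Hypothetical sketch: map a recognized emotional state to a classification
# label for the currently played segment, then collect the segments whose
# labels belong to the target classification labels.
EMOTION_TO_LABEL = {
    "joy": "quality-like",
    "sadness": "quality-impaired",
    "anger": "quality-impaired",
    "boredom": "uninteresting",
}
TARGET_LABELS = {"quality-like", "quality-impaired"}

def tag_segment(segment_labels: dict, segment_id: str, emotion: str) -> None:
    """Configure a classification label for the currently played segment."""
    segment_labels[segment_id] = EMOTION_TO_LABEL.get(emotion, "neutral")

def clips_to_recommend(segment_labels: dict) -> list:
    """Intercept the segments whose labels belong to the target labels."""
    return [seg for seg, label in segment_labels.items() if label in TARGET_LABELS]

labels = {}
tag_segment(labels, "segment_1", "joy")      # -> "quality-like"
tag_segment(labels, "segment_2", "boredom")  # -> "uninteresting"
tag_segment(labels, "segment_3", "anger")    # -> "quality-impaired"
print(clips_to_recommend(labels))  # ['segment_1', 'segment_3']
```

In the multi-viewer scenario below, the same structure could simply be kept per video account, yielding the personal classification labels described in the text.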
It should be understood that, in practical applications, the classification tag may be set according to actual requirements, and the application does not specifically limit the classification tag configured for the video clip.
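The emotion-to-tag mapping described above can be sketched roughly as follows; the emotion names, tag strings, and function names are illustrative assumptions for this example only, not identifiers taken from the patent:

```python
# Hypothetical sketch: configure a classification tag for the currently
# played segment based on the recognized emotional state. The mapping
# table below mirrors the examples in the text and is an assumption.
EMOTION_TO_TAG = {
    "joy": "quality-like",
    "sadness": "quality-impaired",
    "anger": "quality-impaired",
    "boredom": "uninteresting",
}

def tag_segment(tags, segment_id, emotion):
    """Attach the tag derived from the current emotion to a segment."""
    tag = EMOTION_TO_TAG.get(emotion)
    if tag is not None:
        tags[segment_id] = tag
    return tags

tags = {}
tag_segment(tags, "seg-01", "joy")
tag_segment(tags, "seg-02", "boredom")
tag_segment(tags, "seg-03", "surprise")  # unmapped emotion: segment stays untagged
```

In practice the mapping could be extended or replaced per deployment, which is consistent with the note above that the classification tags may be set according to actual requirements.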
It should be noted that, in a scene in which multiple viewing objects view a target video together in the same space, the terminal device may determine, according to the viewing state information (including facial feature information) of each of the multiple viewing objects, a corresponding video clip to be recommended for each viewing object accordingly.
That is, in the process of playing the target video, the terminal device may determine, for each of the multiple viewing objects, the current emotional state of the viewing object according to the current viewing state information of the viewing object, and configure, based on that emotional state, a personal classification tag for the video segment currently played in the target video, where the personal classification tag belongs to the video account of the viewing object. Furthermore, the terminal device may acquire, for each of the multiple viewing objects, the personal classification tags under the video account of the viewing object, and intercept, from the target video, the video clips to be recommended corresponding to the video account of the viewing object according to the correspondence between the personal classification tags under the video account of the viewing object and the video clips in the target video.
Still taking the example that the viewing object includes a user a and a user B, the video account of the user a is a video account 1, and the video account of the user B is a video account 2, in the process that the user a and the user B view the target video together, the terminal device may determine the current emotional state of the user a according to the current viewing state information of the user a, and configure a personal classification tag belonging to the video account 1 for the video clip currently played in the target video based on the current emotional state of the user a, that is, when configuring the classification tag for the video clip, construct an association relationship between the classification tag and the video account 1, for example, configure a classification tag including the identifier of the video account 1 for the video clip; similarly, the terminal device may also configure, in the same manner, in the process of playing the target video, the personal classification tag belonging to the video account 2 for the video clip currently played in the target video according to the current viewing state information of the user B.
When determining the video clip to be recommended for the user A, the terminal device may acquire the personal classification tags belonging to the video account 1 that have been labeled for the video clips in the target video, and then intercept the video clips configured with the target classification tags as the video clips to be recommended for the user A; similarly, when determining the video clip to be recommended for the user B, the terminal device may acquire the personal classification tags belonging to the video account 2 that have been labeled for the video clips in the target video, and then intercept the video clips configured with the target classification tags as the video clips to be recommended for the user B.
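The per-account tagging and interception described above can be sketched as follows; the data layout (a per-account dictionary of segment tags) and all names are illustrative assumptions rather than details specified by the patent:

```python
# Hypothetical sketch: personal classification tags are recorded under
# each viewing object's video account, and clips to recommend are the
# segments whose personal tag belongs to the target tags.
def tag_segment_for_account(personal_tags, account_id, segment_id, tag):
    """Record a classification tag for one segment under one video account."""
    personal_tags.setdefault(account_id, {})[segment_id] = tag

def clips_to_recommend(personal_tags, account_id, target_tags):
    """Select the segments whose personal tag is one of the target tags."""
    return [seg for seg, tag in personal_tags.get(account_id, {}).items()
            if tag in target_tags]

store = {}
tag_segment_for_account(store, "account-1", "seg-01", "quality-like")
tag_segment_for_account(store, "account-2", "seg-01", "uninteresting")
```

Because the tags are keyed by account, the same segment can carry different personal tags for user A and user B, matching the scenario where both watch the same target video.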
In a possible implementation manner, the terminal device may recommend a corresponding video clip to be recommended to the viewing object in the form of video highlights. That is, after the target video is played, the terminal device may recommend the video clip to be recommended to the viewing object, which is captured based on the emotional state of the viewing object. Therefore, the video content can be recommended to the watching object more accurately by combining the personal preference of the watching object, so that the video recommendation is more targeted and personalized.
In another possible implementation manner, when the target video is not the first video in a continuous video set, for example, when it is an episode of a television series other than the first episode, the terminal device may play the video clips to be recommended that were intercepted from the target video based on the emotional state of the viewing object before playing the next video after the target video. For example, when the recap at the start of a television series episode is played, the video segments of interest to the viewing object in the previous episode (i.e., the video clips to be recommended) can be played as the recap content, so that the recap both attracts the viewing object's attention and is personalized.
It should be understood that, in practical applications, the to-be-recommended video clip intercepted based on the emotional state of the viewing object may also be applied to other scenes, and the application scenario of the to-be-recommended video clip is not limited in any way herein.
The video playing control method provided by the embodiment of the application may monitor whether the viewing state of the viewing object with respect to the target video changes according to the viewing state information of the viewing object, automatically mark the starting time point of the missed video content based on the current playing progress of the target video when it is detected that the viewing state changes from viewing to non-viewing, and automatically provide a video playback service for the viewing object based on the marked starting time point when it is detected that the viewing state changes from non-viewing to viewing. The whole process requires no manual adjustment of the playing progress by the viewing object, ensures that the missed video content is provided to the viewing object quickly and accurately, and improves the video viewing experience.
In order to further understand the video playing control method provided in the embodiment of the present application, the overall flow of the method is described below in an exemplary manner with reference to fig. 4 and 5, taking a scene in which the user A and the user B watch the target video together in the same space as an example.
Fig. 4 is a schematic diagram of an implementation principle of a video playback process. As shown in fig. 4, when a user a and a user B watch a target video played by a terminal device together, the terminal device may first identify different user subjects through a face recognition technology, and bind the user a and the user B with their respective corresponding video accounts. Specifically, the terminal device may respectively determine facial feature information of the user a and facial feature information of the user B through a face recognition technology, and further determine a video account of the user a according to the facial feature information of the user a and determine a video account of the user B according to the facial feature information of the user B based on a target mapping relationship (in which a corresponding relationship between the facial feature information of the user and the video account is stored); and the subsequent watching data of the user A to the target video is synchronously recorded in the video account of the user A, and the subsequent watching data of the user B to the target video is synchronously recorded in the video account of the user B.
In the process of playing the target video, the terminal device can continuously collect the facial feature information of the user A and the user B in real time as the viewing state information, monitor the viewing states of the user A and the user B with respect to the target video in real time based on the facial feature information, and perform the corresponding dotting processing in the target video when a viewing state changes. Taking the monitoring of the viewing state of the user A as an example: if it is detected, based on the facial feature information of the user A, that the line of sight of the user A has left the target video for a certain time period (for example, 3 minutes), it may be determined that the viewing state of the user A with respect to the target video has changed from viewing to non-viewing, and a starting time point a is marked in the target video based on the current playing time point; if it is then detected, based on the facial feature information of the user A, that the line of sight of the user A has returned to the target video for a certain time period, it may be determined that the user A has returned to watching the target video, and an ending time point b is marked in the target video based on the current playing time point. The video segment between the starting time point a and the ending time point b is the video content missed by the user A.
To make the review segment more accurate, the terminal device may apply a certain time extension when marking the starting time point a and the ending time point b: when recognizing that the user has stopped viewing the target video, the terminal device extends the dotting time appropriately forward, and when recognizing that the user has returned to viewing the target video, it extends the dotting time appropriately backward, thereby ensuring that the video segment between the starting time point a and the ending time point b more completely covers the video content missed by the user.
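The forward and backward extension of the dotted time points can be sketched as two small clamped-offset helpers; the function names and the 5-second extensions are illustrative assumptions (the patent only says "appropriately" extend):

```python
# Hypothetical sketch: extend the start point forward and the end point
# backward by a fixed margin, clamped to the bounds of the video.
def mark_start_point(current_pos, lead_s=5.0):
    """Dot the start a little before the moment non-viewing was recognized."""
    return max(0.0, current_pos - lead_s)

def mark_end_point(current_pos, duration, tail_s=5.0):
    """Dot the end a little after the moment viewing resumed."""
    return min(duration, current_pos + tail_s)
```

Clamping keeps the marked points inside the video even when the state change happens near the beginning or end of playback.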
Furthermore, the terminal device can make a review recommendation to the user and respond accordingly to the user's operation. Taking the provision of the video playback service for the user A as an example: when the terminal device detects that the user A has returned to watching the target video, it may display a playback prompt dialog box on the video playing interface, containing a "review now" control and a "send to me" control. If it detects that the user selects "review now", the terminal device controls the target video to jump to the starting time point a for playing; if it detects that the user selects "send to me", the terminal device intercepts the video segment between the starting time point a and the ending time point b and sends it to the video account of the user A, so that the user A can review the missed video content later by himself; and if the user performs no operation on the playback prompt dialog box within a preset time period (e.g., 5 s), the dialog box is dismissed and the target video continues to play normally.
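The three possible outcomes of the playback prompt dialog can be sketched as a single dispatch function; the action strings and return conventions are illustrative assumptions introduced for this example:

```python
# Hypothetical sketch: decide the player's reaction to the user's choice
# on the playback prompt dialog (or to a timeout with no choice).
def handle_playback_prompt(action, start_point, end_point):
    """Return a (command, payload) pair describing what the player should do."""
    if action == "review_now":           # jump back to the marked start point
        return ("seek", start_point)
    if action == "send_to_me":           # clip the missed segment and send it
        return ("send_clip", (start_point, end_point))
    return ("continue", None)            # timeout: dismiss dialog, keep playing
```

A real implementation would route "send_clip" to the user's bound video account, as described for the user A above.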
Fig. 5 is a schematic diagram of an implementation principle of a video recommendation process. As shown in fig. 5, the terminal device may bind the user with its corresponding video account based on the facial feature information of the user. The implementation of this process is similar to the implementation of the corresponding process in fig. 4.
During the playing of the target video, the terminal device may collect facial feature information of the user in real time and perform preprocessing (e.g., select the most representative facial feature information from a plurality of pieces of facial feature information collected continuously), and then identify the current emotional state of the user, such as joy, sadness, anger, boredom, and the like, based on the facial feature information obtained after preprocessing by using an expression recognition technology.
Furthermore, a classification tag is configured for the currently played video segment in the target video according to the identified current emotional state of the user: for example, when the current emotional state of the viewing object is joy, the classification tag "quality-like" may be configured for the currently played video segment; when the current emotional state is sadness or anger, the classification tag "quality-impaired" may be configured; and when the current emotional state is boredom, the classification tag "uninteresting" may be configured.
Finally, the terminal device can intercept, from the target video, the video clips configured with the target classification tags (such as "quality-like" and "quality-impaired") according to the respective classification tags of the video clips in the target video as the video clips to be recommended, and recommend these video clips to the user as a collection after the target video finishes playing.
For the video playing control method described above, the present application also provides a corresponding video playing control device, so that the video playing control method described above can be applied and implemented in practice.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a video playback control apparatus 600 corresponding to the video playback control method shown in fig. 2, and the video playback control apparatus includes:
a state information obtaining module 601, configured to obtain viewing state information of a viewing object in a process of playing a target video; the viewing state information is used for representing whether the viewing object views the target video;
a starting point marking module 602, configured to mark a starting time point in the target video based on a current playing progress of the target video when it is determined that the viewing state of the viewing object with respect to the target video changes from viewing to non-viewing according to the viewing state information;
a playback service module 603, configured to provide a video playback service for the viewing object when it is determined that the viewing state of the viewing object with respect to the target video changes from non-viewing to viewing according to the viewing state information; the video playback service includes playing the target video based on the start time point.
Alternatively, on the basis of the video playback control apparatus shown in fig. 6, the viewing state information includes facial feature information; referring to fig. 7, fig. 7 is a schematic structural diagram of another video playback control apparatus 700 according to an embodiment of the present application, where the apparatus further includes:
a video account determining module 701, configured to search a video account corresponding to the facial feature information in a target mapping relationship, where the video account is used as a video account of the viewing object; and the target mapping relation records the corresponding relation between the video account and the facial feature information.
Optionally, on the basis of the video playback control apparatus shown in fig. 6, in a case that a plurality of viewing objects view the target video together in the same space, the state information obtaining module 601 is specifically configured to:
in the process of playing the target video, acquiring the viewing state information of each of the plurality of viewing objects;
the starting point marking module 602 is specifically configured to:
for each viewing object in the plurality of viewing objects, when the viewing state of the viewing object for the target video is changed from viewing to non-viewing according to the viewing state information of the viewing object, marking a starting time point in the target video based on the current playing progress of the target video, and constructing an association relationship between the starting time point and a video account of the viewing object;
the playback service module 603 is specifically configured to:
for each viewing object in the plurality of viewing objects, when the viewing state of the viewing object for the target video is changed from non-viewing to viewing according to the viewing state information of the viewing object, providing a video playback service for the viewing object; the video playback service includes playing the target video based on a start time point associated with a video account of the viewing object.
Optionally, on the basis of the video playback control apparatus shown in fig. 6, the viewing state information includes sight line information; the starting point marking module 602 and the playback service module 603 determine a change in the viewing state of the viewing object with respect to the target video according to the viewing state information by:
determining whether the sight line of the viewing object falls on the target video according to the viewing state information;

if it is detected that the sight line of the viewing object leaves the target video and the duration for which the sight line does not fall on the target video exceeds a first time threshold, determining that the viewing state of the viewing object with respect to the target video changes from viewing to non-viewing;

and if it is detected that the sight line of the viewing object returns to the target video and the duration for which the sight line continuously falls on the target video exceeds a second time threshold, determining that the viewing state of the viewing object with respect to the target video changes from non-viewing to viewing.
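As a rough illustration of the two-threshold logic above, the following sketch consumes timestamped gaze samples and reports viewing-state transitions; the class name, threshold values, and sampling interface are illustrative assumptions, not part of the claimed apparatus:

```python
# Hypothetical sketch: a small state machine over gaze samples.
# A transition fires only after the opposite condition has persisted
# longer than the corresponding time threshold.
class ViewingStateMonitor:
    def __init__(self, leave_threshold=3.0, return_threshold=1.0):
        self.leave_threshold = leave_threshold    # first time threshold (s)
        self.return_threshold = return_threshold  # second time threshold (s)
        self.watching = True
        self._since = None  # time when the opposite condition began

    def update(self, t, gaze_on_video):
        """Feed one timestamped sample; return a transition name or None."""
        if self.watching:
            if gaze_on_video:
                self._since = None            # gaze back on video: reset timer
            elif self._since is None:
                self._since = t               # gaze just left: start timer
            elif t - self._since > self.leave_threshold:
                self.watching, self._since = False, None
                return "viewing->not_viewing"
        else:
            if not gaze_on_video:
                self._since = None
            elif self._since is None:
                self._since = t
            elif t - self._since > self.return_threshold:
                self.watching, self._since = True, None
                return "not_viewing->viewing"
        return None
```

The presence-range variant described next can reuse the same structure, with "gaze on video" replaced by "viewing object within the preset viewing range" and the third and fourth time thresholds substituted.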
Optionally, on the basis of the video playback control apparatus shown in fig. 6, the viewing state information includes position information; the starting point marking module 602 and the playback service module 603 determine a change in the viewing state of the viewing object for the target video according to the viewing state information by:
determining whether the viewing object is located within a preset viewing range according to the viewing state information;

if it is detected that the viewing object leaves the preset viewing range and the duration for which it is not within the preset viewing range exceeds a third time threshold, determining that the viewing state of the viewing object with respect to the target video changes from viewing to non-viewing;

and if it is detected that the viewing object returns to the preset viewing range and the duration for which it remains within the preset viewing range exceeds a fourth time threshold, determining that the viewing state of the viewing object with respect to the target video changes from non-viewing to viewing.
Optionally, on the basis of the video playback control apparatus shown in fig. 6, referring to fig. 8, fig. 8 is a schematic structural diagram of another video playback control apparatus 800 provided in this embodiment of the present application. The device also includes:
a termination point marking module 801, configured to mark a termination time point in the target video based on the current playing progress of the target video when it is determined that the viewing state of the viewing object with respect to the target video changes from non-viewing to viewing according to the viewing state information;
the video playback service further comprises: providing a video segment based on the start time point and the end time point.
Optionally, on the basis of the video playback control apparatus shown in fig. 8, the starting point marking module 602 is specifically configured to:
marking a time point which is before the current playing progress and is separated from the current playing progress by a first preset time length in the target video as the starting time point;
the end point marking module 801 is specifically configured to:
and marking a time point which is behind the current playing progress and is separated from the current playing progress by a second preset time length in the target video as the termination time point.
Optionally, on the basis of the video playing apparatus shown in fig. 8, the playback service module 603 is specifically configured to:
displaying a playback prompt dialog; the playback prompt dialog box comprises a video playback control and a playback segment sending control;
if the video playback control is detected to be clicked, jumping to the starting time point to play the video clip;
if the playback segment sending control is detected to be clicked, sending the video segment to the video account of the watching object; the video account number of the viewing object is determined based on the viewing status information.
Alternatively, on the basis of the video playback control apparatus shown in fig. 6, the viewing state information includes facial feature information; referring to fig. 9, fig. 9 is a schematic structural diagram of another video playback control apparatus 900 according to an embodiment of the present application, the apparatus further includes:
a tag configuration module 901, configured to determine, according to the current viewing state information of the viewing object, a current emotional state of the viewing object in the process of playing the target video; configuring a classification label for a video clip currently played in the target video based on the emotional state;
the video capturing module 902 is configured to capture a video segment to be recommended from the target video according to the respective classification label of each video segment in the target video.
Alternatively, on the basis of the video playback control apparatus shown in fig. 6, the viewing state information includes facial feature information; in a case that a plurality of viewing objects collectively view the target video in the same space, referring to fig. 10, fig. 10 is a schematic structural diagram of another video playback control apparatus 1000 provided in an embodiment of the present application, and the apparatus further includes:
a personal tag configuration module 1001, configured to determine, for each of the multiple viewing objects, a current emotional state of the viewing object according to current viewing state information of the viewing object in the process of playing the target video; configuring a personal classification label for a video clip currently played in the target video based on the current emotional state of the watching object, wherein the personal classification label belongs to the video account of the watching object;
the personal video intercepting module 1002 is configured to, for each of the multiple viewing objects, acquire a personal classification tag under a video account of the viewing object, and intercept, from the target video, a video clip to be recommended that corresponds to the video account of the viewing object according to a correspondence between the personal classification tag and each video clip in the target video.
Optionally, on the basis of the video playback control apparatus shown in fig. 9 or fig. 10, referring to fig. 11, fig. 11 is a schematic structural diagram of another video playback control apparatus 1100 provided in the embodiment of the present application, and the apparatus further includes:
the first recommending module 1101 is configured to recommend the video clip to be recommended to the viewing object after the target video is played.
Optionally, on the basis of the video playback control apparatus shown in fig. 9 or fig. 10, when the target video is a non-first video in a continuous video set, referring to fig. 12, fig. 12 is a schematic structural diagram of another video playback control apparatus 1200 provided in this embodiment of the present application, and the apparatus further includes:
and a second recommending module 1201, configured to play the corresponding video clip to be recommended before playing a next video of the target video.
The video playing control device provided by the embodiment of the application can monitor whether the viewing state of the viewing object with respect to the target video changes according to the viewing state information of the viewing object, automatically mark the starting time point of the missed video content based on the current playing progress of the target video when it is detected that the viewing state changes from viewing to non-viewing, and automatically provide a video playback service for the viewing object based on the marked starting time point when it is detected that the viewing state changes from non-viewing to viewing. The whole process requires no manual adjustment of the playing progress by the viewing object, ensures that the missed video content is provided to the viewing object quickly and accurately, and improves the video viewing experience.
The embodiment of the present application further provides a device for controlling video playing, which may specifically be a server or a terminal device; the server and the terminal device provided in the embodiment of the present application will be introduced below from the perspective of hardware implementation.
Referring to fig. 13, fig. 13 is a schematic structural diagram of a server 1300 according to an embodiment of the present disclosure. The server 1300 may vary widely in configuration or performance and may include one or more Central Processing Units (CPUs) 1322 (e.g., one or more processors), memory 1332, and one or more storage media 1330 (e.g., one or more mass storage devices) storing applications 1342 or data 1344. The memory 1332 and the storage medium 1330 may each be transitory or persistent storage. The programs stored on the storage medium 1330 may include one or more modules (not shown), and each module may include a series of instruction operations on the server. Still further, the central processor 1322 may be arranged to communicate with the storage medium 1330 and execute, on the server 1300, the series of instruction operations in the storage medium 1330.
The server 1300 may also include one or more power supplies 1326, one or more wired or wireless network interfaces 1350, one or more input-output interfaces 1358, and/or one or more operating systems 1341, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The steps performed by the server in the above embodiment may be based on the server structure shown in fig. 13.
CPU 1322 is configured to perform the following steps:
in the process of playing the target video, acquiring the watching state information of a watching object; the viewing state information is used for representing whether the viewing object views the target video;
when the watching state of the watching object for the target video is changed from watching to not watching according to the watching state information, marking a starting time point in the target video based on the current playing progress of the target video;
when the watching state of the watching object for the target video is changed from non-watching to watching according to the watching state information, providing a video playback service for the watching object; the video playback service includes playing the target video based on the start time point.
Optionally, CPU 1322 may also be configured to execute the steps of any implementation manner of the video playing control method provided in this embodiment of the application.
Referring to fig. 14, fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application. For convenience of explanation, only the parts related to the embodiments of the present application are shown; details of the specific technology are not disclosed. The terminal can be any terminal device, including a smart phone, a smart television, a computer, a tablet computer, a personal digital assistant, and the like; the following takes a smart television as the terminal by way of example:
fig. 14 is a block diagram illustrating a partial structure of a smart television related to a terminal provided in an embodiment of the present application. Referring to fig. 14, the smart tv includes: radio Frequency (RF) circuitry 1414, memory 1420, input unit 1430, display unit 1440, sensor 1450, audio circuitry 1460, wireless fidelity (WiFi) module 1470, processor 1480, and power supply 1490. Those skilled in the art will appreciate that the smart tv architecture shown in fig. 14 does not constitute a limitation of the smart tv, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
The memory 1420 may be used to store software programs and modules, and the processor 1480 executes various functional applications and data processing of the smart tv by operating the software programs and modules stored in the memory 1420. The memory 1420 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the smart tv, and the like. Further, memory 1420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 1480 is a control center of the smart tv, connects various parts of the entire smart tv using various interfaces and lines, and performs various functions of the smart tv and processes data by running or executing software programs and/or modules stored in the memory 1420 and calling data stored in the memory 1420, thereby performing overall monitoring of the smart tv. Alternatively, the processor 1480 may include one or more processing units; preferably, the processor 1480 may integrate an application processor, which handles primarily operating systems, user interfaces, and applications, among others, with a modem processor, which handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1480.
In the embodiment of the present application, the processor 1480 included in the terminal also has the following functions:
in the process of playing the target video, acquiring the watching state information of a watching object; the viewing state information is used for representing whether the viewing object views the target video;
when the watching state of the watching object for the target video is changed from watching to not watching according to the watching state information, marking a starting time point in the target video based on the current playing progress of the target video;
when the watching state of the watching object for the target video is changed from non-watching to watching according to the watching state information, providing a video playback service for the watching object; the video playback service includes playing the target video based on the start time point.
Optionally, the processor 1480 is further configured to execute the steps of any implementation manner of the video playing control method provided in this embodiment of the application.
The embodiment of the present application further provides a computer-readable storage medium, configured to store a computer program, where the computer program is configured to execute any one implementation manner of the video playing control method described in the foregoing embodiments.
The present application further provides a computer program product including instructions, which when run on a computer, causes the computer to execute any one of the implementation manners of the video playing control method described in the foregoing embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part of it that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing a computer program, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be understood that in the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may each be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single items or plural items. For example, "at least one of a, b, or c" may mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be singular or plural.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. A video playback control method, the method comprising:
during playback of a target video, acquiring viewing state information of a viewing object; the viewing state information is used to indicate whether the viewing object is viewing the target video;
when it is determined from the viewing state information that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing, marking a start time point in the target video based on the current playing progress of the target video;
when it is determined from the viewing state information that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing, providing a video playback service for the viewing object; the video playback service includes playing the target video based on the start time point.
2. The method of claim 1, wherein the viewing state information includes facial feature information, and the method further comprises:
searching a target mapping relationship for a video account corresponding to the facial feature information, and using that video account as the video account of the viewing object; the target mapping relationship records correspondences between video accounts and facial feature information.
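The account lookup in claim 2 can be sketched as a nearest-match query over a stored mapping of facial feature vectors to video accounts. The function name, the Euclidean distance metric, and the match threshold below are all illustrative assumptions; the patent does not specify how the mapping is searched.

```python
def find_account(mapping, features, threshold=0.3):
    """Return the video account whose stored facial features are closest
    to `features`, or None if no stored entry is within `threshold`.

    mapping: list of (account, feature_vector) pairs (the "target mapping
    relationship" of claim 2, in a hypothetical in-memory form).
    """
    best_account, best_dist = None, threshold
    for account, stored in mapping:
        # Euclidean distance between the observed and stored feature vectors.
        dist = sum((a - b) ** 2 for a, b in zip(features, stored)) ** 0.5
        if dist < best_dist:
            best_account, best_dist = account, dist
    return best_account
```

A real system would use an actual face-embedding model and an indexed search rather than a linear scan; this only shows the mapping-lookup step.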
3. The method according to claim 2, wherein, in a case where a plurality of viewing objects view the target video together in the same space, the acquiring viewing state information of the viewing object during playback of the target video comprises:
during playback of the target video, acquiring the viewing state information of each of the plurality of viewing objects;
the marking a start time point in the target video based on the current playing progress of the target video when it is determined from the viewing state information that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing comprises:
for each of the plurality of viewing objects, when it is determined from the viewing state information of that viewing object that its viewing state with respect to the target video has changed from viewing to not viewing, marking a start time point in the target video based on the current playing progress of the target video, and establishing an association between the start time point and the video account of that viewing object;
the providing a video playback service for the viewing object when it is determined from the viewing state information that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing comprises:
for each of the plurality of viewing objects, when it is determined from the viewing state information of that viewing object that its viewing state with respect to the target video has changed from not viewing to viewing, providing a video playback service for that viewing object; the video playback service includes playing the target video based on the start time point associated with the video account of that viewing object.
4. The method of claim 1, wherein the viewing state information includes facial feature information, and a change in the viewing state of the viewing object with respect to the target video is determined from the viewing state information as follows:
determining, from the viewing state information, whether the line of sight of the viewing object falls on the target video;
if it is detected that the line of sight of the viewing object has left the target video and has continuously stayed off the target video for longer than a first time threshold, determining that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing;
if it is detected that the line of sight of the viewing object has returned to the target video and has continuously stayed on the target video for longer than a second time threshold, determining that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing.
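The two time thresholds in claim 4 act as hysteresis: the gaze must stay off the video for more than the first threshold before the state flips to "not viewing", and back on it for more than the second before it flips back. A minimal sketch, with illustrative function and parameter names (`t1`, `t2` standing in for the first and second time thresholds):

```python
def update_gaze_state(watching, gaze_on_video, elapsed_off, elapsed_on,
                      t1=3.0, t2=1.0):
    """Return the new watching flag given how long the gaze has been
    continuously off (elapsed_off) or on (elapsed_on) the video, in seconds.
    Threshold values are arbitrary placeholders."""
    if watching and not gaze_on_video and elapsed_off > t1:
        return False   # viewing -> not viewing
    if not watching and gaze_on_video and elapsed_on > t2:
        return True    # not viewing -> viewing
    return watching    # below threshold: keep the current state
```

Requiring the elapsed time to exceed a threshold prevents a brief glance away (or a momentary gaze-detection glitch) from triggering a state change.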
5. The method of claim 1, wherein the viewing state information includes position information, and a change in the viewing state of the viewing object with respect to the target video is determined from the viewing state information as follows:
determining, from the viewing state information, whether the viewing object is located within a preset viewing range;
if it is detected that the viewing object has left the preset viewing range and has continuously stayed outside the preset viewing range for longer than a third time threshold, determining that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing;
if it is detected that the viewing object has returned to the preset viewing range and has continuously stayed within the preset viewing range for longer than a fourth time threshold, determining that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing.
6. The method according to any one of claims 1 to 5, wherein, when it is determined from the viewing state information that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing, before the providing a video playback service for the viewing object, the method further comprises:
marking an end time point in the target video based on the current playing progress of the target video;
and the video playback service further comprises:
providing a video segment based on the start time point and the end time point.
7. The method of claim 6, wherein the marking a start time point in the target video based on the current playing progress of the target video comprises:
marking, as the start time point, a time point in the target video that precedes the current playing progress by a first preset duration;
and the marking an end time point in the target video based on the current playing progress of the target video comprises:
marking, as the end time point, a time point in the target video that follows the current playing progress by a second preset duration.
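The offsets in claim 7 can be sketched directly: the start point sits a preset duration before the progress at which the viewer looked away, and the end point a preset duration after the progress at which they returned. The function names, default offsets, and the clamping to the video's valid range are illustrative assumptions.

```python
def mark_start_point(progress, pre_offset=5.0):
    """Start point: a first preset duration before the current progress,
    clamped so it never falls before the beginning of the video."""
    return max(0.0, progress - pre_offset)

def mark_end_point(progress, duration, post_offset=5.0):
    """End point: a second preset duration after the current progress,
    clamped so it never falls past the end of the video."""
    return min(duration, progress + post_offset)
```

Backing the start point up slightly gives the returning viewer a little context before the moment they looked away, rather than resuming mid-sentence.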
8. The method of claim 6, wherein the providing a video playback service for the viewing object comprises:
displaying a playback prompt dialog box, the playback prompt dialog box including a video playback control and a playback-segment sending control;
if it is detected that the video playback control is clicked, jumping to the start time point to play the video segment;
if it is detected that the playback-segment sending control is clicked, sending the video segment to the video account of the viewing object, the video account of the viewing object being determined based on the viewing state information.
9. The method of claim 1, wherein the viewing state information includes facial feature information, and the method further comprises:
during playback of the target video, determining the current emotional state of the viewing object according to the current viewing state information of the viewing object, and configuring a classification label for the video segment currently being played in the target video based on the emotional state;
extracting video segments to be recommended from the target video according to the classification labels of the respective video segments in the target video.
10. The method of claim 2, wherein the viewing state information includes facial feature information, and in a case where a plurality of viewing objects view the target video together in the same space, the method further comprises:
during playback of the target video, for each of the plurality of viewing objects, determining the current emotional state of that viewing object according to its current viewing state information, and configuring, based on that emotional state, a personal classification label for the video segment currently being played in the target video, the personal classification label belonging to the video account of that viewing object;
for each of the plurality of viewing objects, acquiring the personal classification labels under the video account of that viewing object, and extracting, from the target video, the video segments to be recommended for that video account according to the correspondence between the personal classification labels and the respective video segments in the target video.
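The per-account labelling in claims 9 and 10 amounts to a mapping from each viewer's video account to the labels of the segments that played while that viewer's emotion was detected, from which per-account recommendation clips are then selected. A minimal sketch; the data structures, emotion labels, and selection rule are all illustrative assumptions.

```python
from collections import defaultdict

def label_segment(labels, account, segment_id, emotion):
    """Record a personal classification label (e.g. 'amused', 'moved') for
    one segment under one viewer's video account."""
    labels[account][segment_id] = emotion

def clips_to_recommend(labels, account, wanted=("amused", "moved")):
    """Return, sorted, the segment ids whose personal label suggests this
    viewer enjoyed them; these would be cut out as recommendation clips."""
    return sorted(seg for seg, emo in labels[account].items() if emo in wanted)
```

Because labels are keyed by account, two viewers watching the same video in the same room each accumulate their own label set and receive different recommended clips.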
11. The method according to claim 8 or 9, further comprising:
after the target video finishes playing, recommending the corresponding video segments to be recommended to the viewing object.
12. The method according to claim 8 or 9, wherein, when the target video is not the first video in a continuous video set, the method further comprises:
playing the corresponding video segments to be recommended before playing the next video after the target video.
13. A video playback control apparatus, characterized in that the apparatus comprises:
a state information acquisition module, configured to acquire viewing state information of a viewing object during playback of a target video; the viewing state information is used to indicate whether the viewing object is viewing the target video;
a start point marking module, configured to mark a start time point in the target video based on the current playing progress of the target video when it is determined from the viewing state information that the viewing state of the viewing object with respect to the target video has changed from viewing to not viewing;
a playback service module, configured to provide a video playback service for the viewing object when it is determined from the viewing state information that the viewing state of the viewing object with respect to the target video has changed from not viewing to viewing; the video playback service includes playing the target video based on the start time point.
14. An apparatus, comprising a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the video playback control method according to any one of claims 1 to 12 in accordance with the computer program.
15. A computer-readable storage medium for storing a computer program for executing the video playback control method according to any one of claims 1 to 12.
CN202010476759.2A 2020-05-29 2020-05-29 Video playing control method, device, equipment and storage medium Active CN111615003B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010476759.2A CN111615003B (en) 2020-05-29 2020-05-29 Video playing control method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111615003A true CN111615003A (en) 2020-09-01
CN111615003B CN111615003B (en) 2023-11-03

Family

ID=72201848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010476759.2A Active CN111615003B (en) 2020-05-29 2020-05-29 Video playing control method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111615003B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103747346A (en) * 2014-01-23 2014-04-23 中国联合网络通信集团有限公司 Multimedia video playing control method and multimedia video player
CN103826160A (en) * 2014-01-09 2014-05-28 广州三星通信技术研究有限公司 Method and device for obtaining video information, and method and device for playing video
CN106303672A (en) * 2016-08-24 2017-01-04 上海卓易科技股份有限公司 A kind of synchronous broadcast method based on recorded broadcast video and device
WO2017113740A1 (en) * 2015-12-29 2017-07-06 乐视控股(北京)有限公司 Face recognition based video recommendation method and device
CN107484021A (en) * 2017-09-27 2017-12-15 广东小天才科技有限公司 A kind of video broadcasting method, system and terminal device
CN107911745A (en) * 2017-11-17 2018-04-13 武汉康慧然信息技术咨询有限公司 TV replay control method
CN108650558A (en) * 2018-05-30 2018-10-12 互影科技(北京)有限公司 The generation method and device of video Previously on Desperate Housewives based on interactive video
CN109842805A (en) * 2019-01-04 2019-06-04 平安科技(深圳)有限公司 Generation method, device, computer equipment and the storage medium of video watching focus


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112261431A (en) * 2020-10-21 2021-01-22 联想(北京)有限公司 Image processing method and device and electronic equipment
CN112866809A (en) * 2020-12-31 2021-05-28 百度在线网络技术(北京)有限公司 Video processing method and device, electronic equipment and readable storage medium
CN112637678A (en) * 2021-03-09 2021-04-09 北京世纪好未来教育科技有限公司 Video playing method, device, storage medium and equipment
CN113038286A (en) * 2021-03-12 2021-06-25 维沃移动通信有限公司 Video playing control method and device and electronic equipment
CN113038286B (en) * 2021-03-12 2023-08-11 维沃移动通信有限公司 Video playing control method and device and electronic equipment
CN113556611A (en) * 2021-07-20 2021-10-26 上海哔哩哔哩科技有限公司 Video watching method and device
CN113573151B (en) * 2021-09-23 2021-11-23 深圳佳力拓科技有限公司 Digital television playing method and device based on focusing degree value
CN113573151A (en) * 2021-09-23 2021-10-29 深圳佳力拓科技有限公司 Digital television playing method and device based on focusing degree value
CN113938748A (en) * 2021-10-15 2022-01-14 腾讯科技(成都)有限公司 Video playing method, device, terminal, storage medium and program product
CN113938748B (en) * 2021-10-15 2023-09-01 腾讯科技(成都)有限公司 Video playing method, device, terminal, storage medium and program product
CN113992992A (en) * 2021-10-25 2022-01-28 深圳康佳电子科技有限公司 Fragmented film viewing processing method and device based on face recognition and smart television
CN113992992B (en) * 2021-10-25 2024-04-19 深圳康佳电子科技有限公司 Fragmentation film watching processing method and device based on face recognition and intelligent television
CN114885201A (en) * 2022-05-06 2022-08-09 林间 Video contrast viewing method, device, equipment and storage medium
CN114885201B (en) * 2022-05-06 2024-04-02 林间 Video comparison viewing method, device, equipment and storage medium
CN117692717A (en) * 2024-01-30 2024-03-12 利亚德智慧科技集团有限公司 Breakpoint continuous broadcasting processing method and device for light show, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN111615003B (en) 2023-11-03


Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40028104

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant