CN110798736B - Video playing method, device, equipment and medium - Google Patents

Video playing method, device, equipment and medium Download PDF

Info

Publication number
CN110798736B
CN110798736B (application CN201911195372.3A)
Authority
CN
China
Prior art keywords
video
coincidence
candidate
image pair
current video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911195372.3A
Other languages
Chinese (zh)
Other versions
CN110798736A
Inventor
李飞
董立强
陈国庆
赵向明
陶淑媛
于灵珊
李有江
晏青云
贠挺
林赛群
赵世奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201911195372.3A priority Critical patent/CN110798736B/en
Publication of CN110798736A publication Critical patent/CN110798736A/en
Application granted granted Critical
Publication of CN110798736B publication Critical patent/CN110798736B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462 Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities

Abstract

The application discloses a video playing method, apparatus, device, and medium in the field of artificial intelligence. The scheme is as follows: during playback of the current video, determine candidate videos whose content overlaps with the current video; determine the content-overlap time period between the current video and each candidate video; according to the overlap time period, select from the candidates a target video that is temporally continuous with the current video; and when the current video finishes playing, continue with the target video. By determining a target video that is related to the current video's content and temporally continuous with it, and playing it after the current video ends, related videos play back-to-back. This solves the problem of playback breaking off when the current video ends and meets the user's need to keep watching related follow-up content.

Description

Video playing method, device, equipment and medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a medium for playing a video.
Background
Video is one of the main forms of internet content dissemination. Video content spans movies, TV series, variety shows, music, and more, and can satisfy a wide range of user needs.
When watching videos, users often choose highlight clips of movies, TV series, or variety shows. Because such a clip is short, playback may end right at an exciting moment, leaving the user unsatisfied. To keep watching related content, the user must search for and filter follow-up videos related to the current short video by themselves, which hurts the user experience.
Disclosure of Invention
The embodiments of the present application provide a video playing method, apparatus, device, and medium, so that a video clip that is related to the current video's content and temporally continuous with it can be played automatically.
The embodiment of the application discloses a video playing method, which comprises the following steps:
during playback of the current video, determining a candidate video whose content overlaps with the current video;
determining a content coincidence time period between the current video and the candidate video;
selecting a target video having time continuity with the current video from the candidate videos according to the content coincidence time period;
and when the current video finishes playing, continuing to play the target video.
The above embodiment has the following advantages or beneficial effects: by determining candidate videos whose content overlaps with the current video, and selecting the target video from them according to the content-overlap time period, the problem that no follow-up video continues after the current video ends is solved. A target video that is related to the current video's content and temporally continuous with it plays automatically when the current video finishes, meeting the user's need to keep watching related videos.
Further, determining a content coincidence time period between the current video and the candidate video comprises:
determining candidate coincidence segments between the current video and the candidate videos and coincidence values of image pairs in the candidate coincidence segments;
and selecting a target coincidence segment from the candidate coincidence segments according to the coincidence value of the image pair in the candidate coincidence segments, and determining the content coincidence time period of the target coincidence segment.
Accordingly, the above-described embodiments have the following advantages or advantageous effects: the coincidence value of the image pair can reflect the coincidence degree between the current video and the candidate video, so that the content coincidence time period of the candidate video and the current video can be accurately determined.
Further, determining candidate coincidence segments between the current video and the candidate video, and the coincidence values of the image pairs in those segments, comprises:
determining correlation values for image pairs between the current video and the candidate video;
for each image pair, taking its candidate adjacent image pair with the largest correlation value as its previous image pair;
taking the sum of the previous image pair's correlation value and the image pair's own correlation value as the image pair's coincidence value;
and generating candidate coincidence segments composed of image pairs linked by this predecessor-successor (up-down) relationship.
Accordingly, the above-described embodiments have the following advantages or advantageous effects: the coincidence value of an image pair is determined from the pair's own correlation and the correlation values of its candidate adjacent pairs, so the coincidence value reflects the connection path of maximum correlation, which makes it convenient to determine the content-overlap time period from the coincidence values.
Further, before the candidate neighboring image pair with the largest correlation value is taken as the previous image pair, the method further includes:
and performing enhancement or attenuation processing on the correlation value of the image pair.
Accordingly, the above-described embodiments have the following advantages or advantageous effects: enhancing or attenuating the correlation values makes the differences between them pronounced, which prevents one key frame associated with several key frames from distorting the determination of the content-overlap time period, and makes the subsequent determination from the coincidence values more accurate.
Further, performing enhancement or attenuation processing on the correlation value of the image pair, including:
if any image pair is associated with the missing frame in the candidate video, performing attenuation processing on the correlation value of the image pair by adopting a first attenuation coefficient;
if the correlation value of any image pair is greater than or equal to the correlation threshold value, the correlation value of the image pair is enhanced by adopting an enhancement coefficient;
if the correlation value of any image pair is smaller than the correlation threshold value, performing attenuation processing on the correlation value of the image pair by adopting a second attenuation coefficient; wherein the second attenuation coefficient is less than the first attenuation coefficient.
Accordingly, the above-described embodiments have the following advantages or advantageous effects: by enhancing correlations at or above the threshold and attenuating correlations below the threshold or associated with missing frames, the differences between correlation values become pronounced; this avoids the distortion caused by one key frame being associated with multiple key frames and makes the subsequent determination of the content-overlap time period more accurate.
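The three rules above can be sketched as a single function. This is an illustrative sketch, not the patented implementation: the concrete coefficient and threshold values are assumptions; only the requirement that the second attenuation coefficient be smaller than the first comes from the method.

```python
# Hedged sketch of the enhance/attenuate rules. Coefficients are
# illustrative; the method only requires second_decay < first_decay.
def adjust_correlation(corr, associates_missing_frame,
                       corr_thresh=0.8, enhance=1.2,
                       first_decay=0.5, second_decay=0.25):
    """Return the adjusted correlation value of one image pair."""
    if associates_missing_frame:
        return corr * first_decay   # pair is associated with a missing frame
    if corr >= corr_thresh:
        return corr * enhance       # strengthen confident matches
    return corr * second_decay      # weaken below-threshold matches harder
```

Because `second_decay < first_decay`, weak matches are attenuated more sharply than pairs touching a missing frame, widening the gap between correlation values.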
Further, selecting a target coincidence segment from the candidate coincidence segments according to the coincidence value of the image pair in the candidate coincidence segments, and determining the content coincidence time period of the target coincidence segment, includes:
taking the candidate coincidence segment containing the image pair with the largest coincidence value as the target coincidence segment, and taking that image pair as the segment's terminating coincident image pair;
and backtracking through the target coincidence segment from the terminating pair, taking the image pair with the smallest coincidence value as the segment's starting coincident image pair, thereby obtaining the content-overlap time period.
Accordingly, the above-described embodiments have the following advantages or advantageous effects: by determining the target coincidence segment, taking the image pair with the maximum coincidence value as the termination coincidence image pair of the target coincidence segment, and obtaining the initial coincidence image pair by backtracking the termination coincidence image pair, the content coincidence time period is accurately determined according to the connection path of the maximum correlation value of the image pair, so that the problem that the determined content coincidence time period is inaccurate because one key frame is associated with a plurality of key frames is solved.
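A minimal sketch of this selection step, assuming the coincidence values along a candidate segment have already been computed; the `(time, time, value)` tuple layout is a hypothetical data format, not from the patent.

```python
# Sketch: pick the pair with the largest coincidence value as the end of
# the target segment, then trace back to the smallest-value pair to find
# the start, yielding the overlap period in both videos.
def overlap_period(segment):
    """segment: list of (t_current, t_candidate, coincidence_value) tuples
    ordered along the connection path (hypothetical layout)."""
    end_idx = max(range(len(segment)), key=lambda i: segment[i][2])
    # backtrack from the terminating pair to the minimum-value pair
    start_idx = min(range(end_idx + 1), key=lambda i: segment[i][2])
    return ((segment[start_idx][0], segment[end_idx][0]),
            (segment[start_idx][1], segment[end_idx][1]))
```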
Further, selecting a target video that is temporally continuous with the current video from the candidate videos according to the content-overlap time period includes:
if the content-overlap time period lies at the tail of the current video and at the head of any candidate video, taking that candidate video as a target video temporally continuous with the current video.
Accordingly, the above-described embodiments have the following advantages or advantageous effects: a candidate video whose head coincides with the tail of the current video is taken as the target video, which ensures that the target video is temporally continuous with the current video and meets the user's need to watch related videos that continue from the current one.
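The head/tail test can be sketched as below; the margin that defines "head" and "tail" is an illustrative assumption.

```python
# Sketch of the "tail-of-current meets head-of-candidate" check.
def is_continuation(overlap_cur, overlap_cand, cur_duration, margin=5.0):
    """overlap_cur / overlap_cand: (start, end) of the overlap in seconds,
    within the current and the candidate video respectively."""
    at_tail_of_current = overlap_cur[1] >= cur_duration - margin
    at_head_of_candidate = overlap_cand[0] <= margin
    return at_tail_of_current and at_head_of_candidate
```

For a 60 s current video, a candidate whose first seconds repeat the current video's last seconds passes the check; the mirror case (overlap at the current video's head) identifies a previous video instead.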
Further, when the current video playing is finished, continuing to play the target video, including:
displaying a continue-play prompt for the target video on the current playback interface before the current video finishes;
when the current video finishes, detecting whether the user has rejected the continue-play prompt;
and if not, continuing to play the target video.
Accordingly, the above-described embodiments have the following advantages or advantageous effects: through the continue-play prompt the user is notified that the next video will play, and by continuing with the target video playback remains continuous; this spares the user from searching for related videos after the current video ends and improves the user experience.
The embodiment of the present application further discloses a video playing device, which includes:
the candidate video determining module is used for determining candidate videos with content overlapped with the current video in the current video playing process;
a content coincidence time period determination module, configured to determine a content coincidence time period between the current video and the candidate video;
the target video selection module is used for selecting a target video which has time continuity with the current video from the candidate videos according to the content coincidence time period;
and the playing module is used for continuously playing the target video when the current video is played.
Further, the content coincidence time period determination module includes:
a candidate coincident segment determining unit, configured to determine a candidate coincident segment between the current video and the candidate video, and a coincident value of an image pair in the candidate coincident segment;
and the target coincidence segment selecting unit is used for selecting a target coincidence segment from the candidate coincidence segments according to the coincidence value of the image pair in the candidate coincidence segments and determining the content coincidence time period of the target coincidence segment.
Further, the candidate coincident segment determination unit includes:
a relevance value determining subunit, configured to determine a relevance value of an image pair in the current video and the candidate video;
a previous image pair determining subunit, configured to determine, for each image pair, a candidate neighboring image pair of the image pair with the largest correlation value as a previous image pair;
a coincidence value determining subunit, configured to determine a sum of the correlation value of the previous image pair and the correlation value of the image pair as a coincidence value of the image pair;
a candidate coincident segment generation subunit operable to generate a candidate coincident segment including an image pair having an upper and lower relationship.
Further, for processing before the candidate neighboring image pair with the largest correlation value is taken as the previous image pair, the apparatus further includes:
and the processing module is used for performing enhancement or attenuation processing on the correlation value of the image pair.
Further, the processing module includes:
the first attenuation unit is used for carrying out attenuation processing on the correlation value of any image pair by adopting a first attenuation coefficient if the image pair is associated with the missing frame in the candidate video;
the enhancement unit is used for enhancing the correlation value of any image pair by adopting an enhancement coefficient if the correlation value of the image pair is greater than or equal to a correlation threshold value;
the second attenuation unit is used for carrying out attenuation processing on the correlation value of any image pair by adopting a second attenuation coefficient if the correlation value of the image pair is smaller than the correlation threshold value; wherein the second attenuation coefficient is less than the first attenuation coefficient.
Further, the target coincidence section selecting unit includes:
a termination coincident image pair determining subunit, configured to use the candidate coincident segment to which the image pair with the largest coincidence value belongs as the target coincident segment, and use the image pair with the largest coincidence value as a termination coincident image pair of the target coincident segment;
and the initial coincident image pair determining subunit is used for backtracking the target coincident segment by the termination coincident image pair, taking the image pair with the minimum coincident value as the initial coincident image pair of the target coincident segment, and obtaining the content coincident time period.
Further, the target video selection module is specifically configured to:
and if the content overlapping time period is positioned at the tail of the current video and at the head of any candidate video, taking the candidate video as a target video having time continuity with the current video.
Further, the playing module includes:
the prompt information display unit is used for displaying the continuous playing prompt information of the target video through a current video playing interface before the current video playing is finished;
the detection unit is used for detecting whether the user performs refusing operation on the continuous playing prompt information when the current video playing is finished;
and the continue-play unit is used for continuing to play the target video if no rejection operation is detected.
The embodiment of the application also discloses an electronic device, which comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method as described in any one of the embodiments of the present application.
Also disclosed in embodiments herein is a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any of the embodiments herein.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flowchart of a video playing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another video playing method provided in an embodiment of the present application;
FIG. 3 is a key frame association diagram provided in accordance with an embodiment of the present application;
fig. 4 is a schematic structural diagram of a video playback device according to an embodiment of the present application;
fig. 5 is a block diagram of an electronic device for implementing a video playing method according to an embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic flowchart of a video playing method according to an embodiment of the present application. The embodiment applies to automatically playing the next video when the current one finishes. Typically, it applies to determining a target video that is related to the current video's content and temporally continuous with it, and automatically playing that target video when the current video finishes. The video playing method disclosed in this embodiment can be executed by a video playing apparatus, which can be implemented in software and/or hardware. Referring to fig. 1, the method includes:
and S110, in the process of playing the current video, determining a candidate video which has content superposition with the current video.
The current video and the candidate videos may be short videos, or complete episodes of a TV series, movie, or program. After a short video ends, the user often wants to watch a continuation related to it; but short videos generally carry no explicit relation or continuity, so a related video cannot be played automatically, and the user can only search for a related video and click to watch it themselves. This embodiment addresses this discontinuity so that a content-related, continuous video plays automatically when the current video finishes.
For example, when two video clips are related, their content usually overlaps: a segment from the earlier clip reappears in the later one, e.g. the beginning of the later clip matches the end of the earlier clip. If a video's content overlaps with the current video, the video is considered related to it and is taken as a candidate video.
Optionally, candidate videos whose content overlaps with the current video are determined from video fingerprint features. A video fingerprint feature comprises image features and audio features: key frames are extracted from the current video and the other videos and image features are computed from them, and audio features are obtained likewise; together these form the video fingerprint. The fingerprints of the current video and the other videos are matched; if a match succeeds, the two videos have overlapping content, and the other video is taken as a candidate video.
In the embodiment of the application, the candidate video of the current video is determined by judging whether the content is overlapped, so that the time continuous video associated with the current video is determined from the candidate video for continuous playing.
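As an illustrative sketch only (not the patented implementation), fingerprint matching can be reduced to comparing per-keyframe feature vectors. The function name, thresholds, and feature format below are all assumptions.

```python
# Sketch: treat two videos as overlapping when enough of their keyframe
# feature vectors ("fingerprints") match by cosine similarity.
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def has_content_overlap(fp_a, fp_b, sim_thresh=0.9, min_matches=3):
    """fp_a, fp_b: lists of keyframe feature vectors for two videos
    (hypothetical format). Audio features could be matched the same way."""
    matches = sum(
        1 for fa in fp_a if any(cosine_sim(fa, fb) >= sim_thresh for fb in fp_b)
    )
    return matches >= min_matches
```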
S120. Determine the content-overlap time period between the current video and the candidate video.
The content-overlap time period comprises the period during which the overlapping content appears in the current video and the period during which it appears in the candidate video. Since a short video typically lasts only a few minutes, later highlights may not be included in the current video, and the user may wish to keep watching them. Therefore, in this embodiment, the content-overlap time period between the current video and the candidate video is determined; from it, one can tell whether the candidate video precedes, duplicates, or follows the current video, which makes it easier to select exactly the video the user wants to watch.
Illustratively, since key frames of the current video and the candidate video were already extracted while determining the overlapping content, the time periods covered by the overlapping content in each video can be determined from the timestamps of the matched key frames.
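A minimal sketch of reading the overlap periods off the matched keyframes' timestamps; the input format is hypothetical.

```python
# Sketch: the overlap period in each video is the span of the matched
# keyframes' timestamps in that video.
def coincidence_period(matched_pairs):
    """matched_pairs: list of (t_current, t_candidate) timestamps (seconds)
    of matched keyframes. Returns the overlap period in each video."""
    cur_times = [t for t, _ in matched_pairs]
    cand_times = [t for _, t in matched_pairs]
    return (min(cur_times), max(cur_times)), (min(cand_times), max(cand_times))
```

E.g. matches at the current video's 50–60 s mark and the candidate's 0–10 s mark place the overlap at the current video's tail and the candidate's head.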
S130. According to the content-overlap time period, select from the candidate videos a target video that is temporally continuous with the current video.
After watching the current video, a user is more likely to want a related, continuous follow-up video than to re-watch similar or identical content that overlaps the current video. Therefore, in this embodiment, the target video that is temporally continuous with the current video is selected from the candidates according to the content-overlap time period, meeting the user's need to keep watching the previous or subsequent video of the current one.
Optionally, selecting a target video that is temporally continuous with the current video from the candidate videos according to the content-overlap time period includes: if the content-overlap time period lies at the tail of the current video and at the head of any candidate video, taking that candidate video as the target video temporally continuous with the current video.
Illustratively, if the content-overlap time period lies at the tail of the current video and at the head of a candidate video, the candidate video continues the current video's content; it is therefore taken as the temporally continuous target video, so the user can keep watching after the current video ends. Taking the candidate video that continues the current video as the target video allows the follow-up video to play automatically, meeting the user's need to watch it.
S140. When the current video finishes playing, continue to play the target video.
To avoid playback breaking off because no other video follows the current one, in this embodiment the target video can either continue automatically when the current video ends, or continue depending on a user interaction. Playing a target video that is related to the current video's content and temporally continuous with it makes playback continuous, improves the user experience, and meets the user's need to keep watching related videos.
Optionally, in this embodiment, if the content-overlap time period lies at the tail of the current video and at the head of a candidate video, that candidate is a subsequent video; if it lies at the head of the current video and at the tail of a candidate video, that candidate is a previous video. Before the current video ends, the user may be shown a choice between continuing with the previous or the subsequent video, so that they can pick whichever interests them. Alternatively, a prompt for the subsequent video is displayed before the current video ends, and the subsequent video plays automatically when it does.
According to the technical scheme of this embodiment, candidate videos whose content overlaps with the current video are determined, and the target video is selected from them according to the content-overlap time period. This solves the problem that no related video continues after the current video ends: a target video that is related to the current video's content and temporally continuous with it is determined and played automatically when the current video finishes, meeting the user's need to keep watching related videos.
Fig. 2 is a schematic flowchart of another video playing method provided in an embodiment of the present application. The present embodiment is an alternative proposed on the basis of the above-described embodiments. Referring to fig. 2, the video playing method provided in this embodiment includes:
s210, in the process of playing the current video, determining a candidate video which has content superposition with the current video.
S220, determining candidate coincident segments between the current video and the candidate videos and coincident values of image pairs in the candidate coincident segments.
Wherein the images are key frames extracted from the current video and the candidate video. A candidate coincident segment may be a content-coincident segment corresponding to any time period in the current video and in the candidate video, and the time positions of the candidate coincident segment within the two videos may differ. The content in a candidate coincident segment is generally the same or similar. An image pair is formed by an image located at a corresponding position in the current video and an image located at a corresponding position in the candidate video within the candidate coincident segment.
Optionally, determining a candidate coincidence segment between the current video and the candidate video and a coincidence value of an image pair in the candidate coincidence segment includes: determining a correlation value of the current video and an image pair in the candidate video; for each image pair, taking the candidate adjacent image pair of the image pair with the largest correlation value as the last image pair; taking the sum of the correlation value of the previous image pair and the correlation value of the image pair as a coincidence value of the image pair; candidate coincident segments are generated that include image pairs having an up-down relationship.
For example, the correlation value between two images may be determined based on the brightness, contrast, and structure of the images, or based on cosine distance, histograms, mutual information, and the like. A candidate adjacent image pair may be an image pair composed of images adjacent to those of the image pair in the current video and the candidate video. Illustratively, for the image pair composed of the 3rd key frame in the current video and the 5th key frame in the candidate video, the candidate adjacent image pairs are the pair of the 2nd key frame in the current video with the 4th key frame in the candidate video, the pair of the 2nd key frame with the 5th key frame, and the pair of the 3rd key frame with the 4th key frame. If the pair composed of the 2nd key frame in the current video and the 4th key frame in the candidate video has the largest correlation value among these, the sum of its correlation value and the correlation value of the pair composed of the 3rd key frame in the current video and the 5th key frame in the candidate video is taken as the coincidence value of that latter pair. In the process of determining the coincidence values, each image pair is connected with a candidate adjacent image pair to form an image-pair connection path, and the segment corresponding to such a path is used as a candidate coincident segment of image pairs having an up-down relationship. Having an up-down relationship means that the candidate adjacent image pair contains a key frame that precedes one of the images of the pair, so that the candidate adjacent image pair and the image pair form a preceding-following relationship.
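The recurrence described above can be sketched as a small dynamic program over image pairs. This is a simplified illustration under stated assumptions: the dict-based representation is assumed, and the accumulated (coincidence) value of a neighbour is used for the comparison, matching the dynamic-matching-matrix worked example later in this embodiment.

```python
def candidate_neighbors(i, j):
    # Candidate adjacent image pairs of pair (i, j): pairs built from the
    # preceding key frame in either or both videos.
    return [(i - 1, j - 1), (i - 1, j), (i, j - 1)]

def coincidence_values(corr):
    """corr[(i, j)] is the correlation value of the image pair formed by
    key frame i of the current video and key frame j of the candidate
    video. Returns the coincidence value of every pair and, for each
    pair, its chosen previous pair on the connection path."""
    coin, previous = {}, {}
    for i, j in sorted(corr):            # row-major order: predecessors first
        best_val, best_pair = 0.0, None
        for p in candidate_neighbors(i, j):
            if p in coin and coin[p] > best_val:
                best_val, best_pair = coin[p], p
        coin[(i, j)] = best_val + corr[(i, j)]
        if best_pair is not None:
            previous[(i, j)] = best_pair   # remember the connection path
    return coin, previous
```

Because a pair with no positive predecessor simply starts a new path, missing or weakly correlated regions naturally break the connection paths apart.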
By determining the coincidence values of the image pairs, the connection path of the image pairs with the largest correlation values can be determined, and the candidate coincident segment and the target coincidence segment can be determined accurately according to this connection path, so as to determine the content coincidence time period.
Optionally, before the candidate neighboring image pair with the largest correlation value of the image pair is taken as the previous image pair, the method further includes: and performing enhancement or attenuation processing on the correlation value of the image pair. Performing enhancement or attenuation processing on the correlation value of the image pair, including: if any image pair is associated with the missing frame in the candidate video, performing attenuation processing on the correlation value of the image pair by adopting a first attenuation coefficient; if the correlation value of any image pair is greater than or equal to the correlation threshold value, the correlation value of the image pair is enhanced by adopting an enhancement coefficient; if the correlation value of any image pair is smaller than the correlation threshold value, performing attenuation processing on the correlation value of the image pair by adopting a second attenuation coefficient; wherein the second attenuation coefficient is less than the first attenuation coefficient.
The correlation threshold may be set according to actual conditions. Specifically, if two key frames in the current video are respectively associated with two key frames in the candidate video, a key frame of the current video lies between those two key frames, and there is no associated key frame between the corresponding two key frames in the candidate video, this indicates that a frame is missing from the candidate video. For example, as shown in fig. 3, suppose 7 key frames are extracted from the current video and n key frames are extracted from the candidate video, and correlation analysis determines that the 2nd, 4th, 5th, and 6th key frames of the current video are respectively associated with the 12th, 13th, 14th, and 15th key frames of the candidate video. Since video frames are continuous, the 3rd key frame of the current video should also be associated with a key frame of the candidate video located after the 12th key frame and before the 13th key frame; because no such key frame exists, the key frame corresponding to the 3rd key frame of the current video is missing from the key frames of the candidate video, and it is a missing frame. By determining the missing frames, the key frames that were not successfully recalled are identified, so that the elements corresponding to the missing frames can be attenuated and their influence on determining the content coincidence time period weakened.
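A simplified sketch of the missing-frame check described above. The association map `assoc` (current-video key-frame index to candidate-video key-frame index) is an assumed representation of the recalled pairs, not part of the original description.

```python
def find_missing_frames(assoc):
    """Return current-video key-frame indices whose counterpart is
    missing from the candidate video's extracted key frames: frames that
    lie between two associated key frames whose candidate counterparts
    leave no room for an extra frame in between."""
    missing = []
    indices = sorted(assoc)
    for a, b in zip(indices, indices[1:]):
        gap_current = b - a - 1                  # frames between a and b
        gap_candidate = assoc[b] - assoc[a] - 1  # free slots between counterparts
        if gap_current > gap_candidate:
            missing.extend(range(a + 1, b))
    return missing
```

With the fig. 3 example (current-video frames 2, 4, 5, 6 associated with candidate frames 12, 13, 14, 15), the 3rd frame is reported as missing.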
If the candidate video has a missing frame, the first attenuation coefficient is used to attenuate the correlation value of the image pair to which the image in the current video corresponding to the missing frame belongs; for example, the correlation value is multiplied by a positive coefficient and then converted into a negative value, and this positive coefficient is the first attenuation coefficient. If any correlation value is greater than or equal to the correlation threshold, an enhancement coefficient is used to enhance it, for example by multiplying by a positive coefficient greater than 1. If any correlation value is smaller than the correlation threshold, a second attenuation coefficient is used to attenuate it; the second attenuation coefficient is smaller than the first attenuation coefficient, and for example the value is multiplied by a positive coefficient smaller than the first attenuation coefficient and then converted into a negative value, this positive coefficient being the second attenuation coefficient. Enhancing or attenuating the correlation values makes them reflect more clearly and distinctly the correlation between each key frame in the current video and each key frame in the candidate video, so that the degree of coincidence of the image pairs can be determined more accurately from the correlation values.
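The three rules can be sketched as a single pass over the correlation values. The concrete coefficient values below are illustrative assumptions; the embodiment only requires that the second attenuation coefficient be smaller than the first.

```python
def adjust_correlations(corr, missing_pairs, threshold,
                        first_decay=0.5, gain=1.2, second_decay=0.2):
    """Enhance or attenuate correlation values.

    corr: dict mapping image pairs to correlation values.
    missing_pairs: pairs associated with a missing frame in the candidate.
    """
    adjusted = {}
    for pair, value in corr.items():
        if pair in missing_pairs:
            # First attenuation: multiply by a positive coefficient,
            # then convert to a negative value.
            adjusted[pair] = -(value * first_decay)
        elif value >= threshold:
            adjusted[pair] = value * gain           # enhancement
        else:
            # Second (weaker) attenuation, also made negative.
            adjusted[pair] = -(value * second_decay)
    return adjusted
```

Making attenuated values negative ensures they pull down any connection path that would otherwise run through a weak or missing match.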
Optionally, the operations of performing enhancement or attenuation processing on the correlation value of the image pair; taking, for each image pair, the candidate adjacent image pair with the largest correlation value as the previous image pair; taking the sum of the correlation value of the previous image pair and the correlation value of the image pair as the coincidence value of the image pair; and generating candidate coincident segments comprising image pairs having an up-down relationship may be embodied as follows:
taking the duration of the current video and the candidate video as the number of rows and columns of the first matrix, and taking the correlation value between the image in the current video and the image in the candidate video as the value of an element in the first matrix; performing enhancement or attenuation treatment on elements in the first matrix to obtain a second matrix; and constructing a dynamic matching matrix of the current video and the candidate video according to the relationship between the elements in the second matrix.
For example, the duration of the current video may be taken as the number of rows of the first matrix and the duration of the candidate video as the number of columns, or the duration of the current video may be taken as the number of columns and the duration of the candidate video as the number of rows. In this embodiment, the numbers of key frames extracted from the current video and the candidate video may also be used as the number of rows and columns of the first matrix. For each element in the first matrix, the correlation value of the current-video key frame and the candidate-video key frame corresponding to that element is taken as the value of the element. In order to enhance the contrast between the correlation values of the current-video and candidate-video key frames, the elements in the first matrix are enhanced or attenuated to obtain a second matrix, so as to represent the differences of the correlation values between key frames more clearly.
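The first-matrix construction can be sketched as follows; the function name and the caller-supplied `correlate` callback are illustrative assumptions.

```python
def build_first_matrix(current_keyframes, candidate_keyframes, correlate):
    """First matrix: rows correspond to key frames of the current video,
    columns to key frames of the candidate video, and each element is
    the correlation value of the corresponding image pair."""
    return [[correlate(cur, cand) for cand in candidate_keyframes]
            for cur in current_keyframes]
```

Any pairwise similarity (e.g. a structural or histogram-based measure, as mentioned above) can be plugged in as `correlate`.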
Optionally, constructing a dynamic matching matrix of the current video and the candidate video according to the relationship between the elements in the second matrix, including: regarding each element in the second matrix, taking the candidate adjacent element with the largest value as the last element; and taking the sum of the value of the last element and the value of the element as a new value of the element to obtain a dynamic matching matrix of the current video and the candidate video.
Wherein the candidate neighboring elements of an element may be the elements at the left position, the top position, and the top-left position of that element in the second matrix. For example, consider the second matrix of this example. Element 4 has no candidate neighboring elements, so its transformed value is 4. For element 7, the candidate neighboring element is the element at the left position, so it is transformed to 4 + 7 = 11. For element 3, the candidate neighboring element is the element at the left position, so it is transformed to 11 + 3 = 14. For element 2, the candidate neighboring element is the element at the top position, so its transformed value is 4 + 2 = 6. For element 6, the element at its left position has the value 6, the element at its top position has the value 11, and the element at its top-left position has the value 4; the maximum of these is 11, so the sum 11 + 6 = 17 becomes the new value at the position of element 6. By analogy, all elements are traversed to obtain the dynamic matching matrix. Obtaining the dynamic matching matrix in this way allows it to reflect the connection paths of the elements with the largest values, so that even when there are missing frames or one key frame corresponds to multiple key frames, the segment with the highest degree of coincidence can still be determined from the dynamic matching matrix, and the content coincidence time period can thus be determined accurately. Since the rows and columns of each element in the dynamic matching matrix correspond to consecutive key frame images in the current video and the candidate video, the elements have a continuous context. The candidate coincident segments of image pairs having an up-down relationship may be the segments corresponding to paths such as 4-6-17-22-34, 4-6-17-25-34, 4-11-17-22-34, and 4-11-14-25-34 in the dynamic matching matrix, that is, segments corresponding to paths formed by connecting any element to the element at its right position or at its bottom position and then repeating this connection for the newly connected element. Through the coincidence values of the image pairs, the coincident segments can be determined clearly and accurately, so that the segment with the highest degree of coincidence can be determined from the candidate coincident segments.
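The element-wise rule can be sketched as follows. The 3x3 second matrix used below is a hypothetical reconstruction consistent with the worked values (4, 11, 14, 6, 17, 25, 22, 34); the bottom-left entry is not given in the example and is chosen arbitrarily.

```python
def dynamic_matching_matrix(second):
    """For each element, add the largest value among its left, top, and
    top-left candidate neighboring elements (0 when absent) to obtain
    the dynamic matching matrix."""
    rows, cols = len(second), len(second[0])
    dm = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            left = dm[i][j - 1] if j > 0 else 0
            top = dm[i - 1][j] if i > 0 else 0
            top_left = dm[i - 1][j - 1] if i > 0 and j > 0 else 0
            dm[i][j] = second[i][j] + max(left, top, top_left)
    return dm

second = [[4, 7, 3],
          [2, 6, 8],
          [1, 5, 9]]   # the value 1 is an arbitrary placeholder
# dynamic_matching_matrix(second) yields
# [[4, 11, 14], [6, 17, 25], [7, 22, 34]]
```

This reproduces the transformed values discussed above: 11, 14, 6, 17, 22, and the maximum coincidence value 34 in the bottom-right corner.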
S230, selecting a target coincidence segment from the candidate coincidence segments according to the coincidence value of the image pair in the candidate coincidence segments, and determining the content coincidence time period of the target coincidence segment.
Due to instability in the key-frame extraction process, key frames may not be extracted comprehensively, so some key frames cannot be recalled. In addition, if several consecutive pictures of a video are similar, one key frame may become associated with multiple key frames among the extracted key frames, which affects the accuracy of the content coincidence time period. Therefore, in this embodiment of the application, the target coincidence segment is determined from the candidate coincident segments according to the coincidence values of the image pairs, which avoids an inaccurate content coincidence time period caused by one key frame being associated with multiple key frames, and allows the content coincidence time period to be determined accurately.
Optionally, selecting a target coincidence segment from the candidate coincidence segments according to the coincidence value of the image pair in the candidate coincidence segments, and determining a content coincidence time period of the target coincidence segment, includes: taking the candidate coincidence segment to which the image pair with the maximum coincidence value belongs as the target coincidence segment, and taking the image pair with the maximum coincidence value as a termination coincidence image pair of the target coincidence segment; and backtracking the target coincidence segment by the termination coincidence image pair, taking the image pair with the minimum coincidence value as the initial coincidence image pair of the target coincidence segment, and obtaining the content coincidence time period.
Illustratively, for the dynamic matching matrix of the above example, if the maximum coincidence value is 34, the candidate coincident segment to which 34 belongs, namely the segment corresponding to 4-11-17-25-34, is taken as the target coincidence segment; the image pair corresponding to 34 is taken as the termination coincident image pair, and the image pair corresponding to 4, the minimum coincidence value in the target coincidence segment, is taken as the initial coincident image pair, so that the time periods corresponding to the initial and termination coincident image pairs are taken as the content coincidence time period. Determining the target coincidence segment from the candidate coincident segments according to the coincidence values of the image pairs, and then determining the content coincidence time period from the target coincidence segment, eliminates the influence of one key frame being associated with multiple key frames and makes the determined content coincidence time period more accurate.
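The backtracking step can be sketched as follows. For simplicity this sketch traces back to the top-left element, which is where the minimum coincidence value of the path sits in the worked example; in general the initial coincident image pair is the pair with the minimum coincidence value along the path.

```python
def target_segment(dm):
    """Locate the termination coincident image pair (the largest
    coincidence value) and trace back through the best candidate
    neighboring elements, returning the path from the initial pair to
    the termination pair as (row, col) positions."""
    rows, cols = len(dm), len(dm[0])
    i, j = max(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: dm[rc[0]][rc[1]])
    path = [(i, j)]
    while (i, j) != (0, 0):
        neighbors = []
        if j > 0:
            neighbors.append((i, j - 1))        # left
        if i > 0:
            neighbors.append((i - 1, j))        # top
        if i > 0 and j > 0:
            neighbors.append((i - 1, j - 1))    # top-left
        i, j = max(neighbors, key=lambda rc: dm[rc[0]][rc[1]])
        path.append((i, j))
    path.reverse()
    return path
```

On the dynamic matching matrix [[4, 11, 14], [6, 17, 25], [7, 22, 34]] this returns the positions of the 4-11-17-25-34 path.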
And S240, selecting a target video having time continuity with the current video from the candidate videos according to the content coincidence time period.
And S250, displaying continuous playing prompt information of the target video through a current video playing interface before the current video playing is finished.
Illustratively, in order to prompt the user that the video related to the current video is continuously played when the current video is played, a continuous playing prompt message of the target video, such as "next segment to be played", is displayed through the current video playing interface before the current video is played, so as to prompt the user of the content to be played.
S260, when the current video playing is finished, detecting whether the user performs refusing operation on the continuous playing prompt message; if not, continuing to play the target video.
Specifically, the prompt message may be, for example, "Refuse to continue playing the video? Yes / No". If the user selects "No" or makes no selection, the user is considered to want to continue watching the target video, and the target video continues to play. If the user selects "Yes", the user does not want to continue watching, and the target video is not played. Through this scheme, the viewing intention of the user can be fully ascertained, so that playback continues when the user needs to keep watching the target video.
According to the embodiment of the application, the candidate coincidence segment between the current video and the candidate video and the coincidence value of the image pair in the candidate coincidence segment are determined; according to the coincidence value of the image pair in the candidate coincidence segment, selecting a target coincidence segment from the candidate coincidence segment, and determining the content coincidence time period of the target coincidence segment, thereby avoiding the problem of inaccurate determined content coincidence time period caused by missing frames or association of one key frame with a plurality of key frames, and further enabling the determined content coincidence time period to be more accurate.
Fig. 4 is a schematic structural diagram of a video playback device according to an embodiment of the present application. Referring to fig. 4, an embodiment of the present application discloses a video playback device 300, where the device 300 includes: a candidate video determination module 301, a content coincidence time period determination module 302, a target video selection module 303, and a play module 304.
The candidate video determining module 301 is configured to determine, in a current video playing process, a candidate video that has content overlapping with a current video;
a content coincidence time period determination module 302, configured to determine a content coincidence time period between the current video and the candidate video;
a target video selecting module 303, configured to select a target video having time continuity with the current video from the candidate videos according to the content overlapping time period;
and the playing module 304 is configured to continue to play the target video when the current video is played.
Further, the content coincidence time period determination module 302 includes:
a candidate coincident segment determining unit, configured to determine a candidate coincident segment between the current video and the candidate video, and a coincident value of an image pair in the candidate coincident segment;
and the target coincidence segment selecting unit is used for selecting a target coincidence segment from the candidate coincidence segments according to the coincidence value of the image pair in the candidate coincidence segments and determining the content coincidence time period of the target coincidence segment.
Further, the candidate coincident segment determination unit includes:
a relevance value determining subunit, configured to determine a relevance value of an image pair in the current video and the candidate video;
a previous image pair determining subunit, configured to determine, for each image pair, a candidate neighboring image pair of the image pair with the largest correlation value as a previous image pair;
a coincidence value determining subunit, configured to determine a sum of the correlation value of the previous image pair and the correlation value of the image pair as a coincidence value of the image pair;
a candidate coincident segment generation subunit operable to generate a candidate coincident segment including an image pair having an upper and lower relationship.
Further, before the candidate neighboring image pair with the largest correlation value is taken as the previous image pair, the method further includes:
and the processing module is used for performing enhancement or attenuation processing on the correlation value of the image pair.
Further, the processing module includes:
the first attenuation unit is used for carrying out attenuation processing on the correlation value of any image pair by adopting a first attenuation coefficient if the image pair is associated with the missing frame in the candidate video;
the enhancement unit is used for enhancing the correlation value of any image pair by adopting an enhancement coefficient if the correlation value of the image pair is greater than or equal to a correlation threshold value;
the second attenuation unit is used for carrying out attenuation processing on the correlation value of any image pair by adopting a second attenuation coefficient if the correlation value of the image pair is smaller than the correlation threshold value; wherein the second attenuation coefficient is less than the first attenuation coefficient.
Further, the target coincidence section selecting unit includes:
a termination coincident image pair determining subunit, configured to use the candidate coincident segment to which the image pair with the largest coincidence value belongs as the target coincident segment, and use the image pair with the largest coincidence value as a termination coincident image pair of the target coincident segment;
and the initial coincident image pair determining subunit is used for backtracking the target coincident segment by the termination coincident image pair, taking the image pair with the minimum coincident value as the initial coincident image pair of the target coincident segment, and obtaining the content coincident time period.
Further, the target video selecting module 303 is specifically configured to:
and if the content overlapping time period is positioned at the tail of the current video and at the head of any candidate video, taking the candidate video as a target video having time continuity with the current video.
Further, the playing module 304 includes:
the prompt information display unit is used for displaying the continuous playing prompt information of the target video through a current video playing interface before the current video playing is finished;
the detection unit is used for detecting whether the user performs refusing operation on the continuous playing prompt information when the current video playing is finished;
and the continued playing unit is used for continuing to play the target video if no refusal operation is detected.
The video playing device provided by the embodiment of the application can execute the video playing method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 5, fig. 5 is a block diagram of an electronic device for implementing a video playing method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 5, the electronic apparatus includes: one or more processors 401, memory 402, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 5, one processor 401 is taken as an example.
Memory 402 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to execute the video playing method provided by the application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the video playback method provided by the present application.
The memory 402, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method of video playing in the embodiments of the present application (e.g., the candidate video determination module 301, the content coincidence time period determination module 302, the target video selection module 303, and the playing module 304 shown in fig. 4). The processor 401 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 402, that is, implements the video playing method in the above-described method embodiment.
The memory 402 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device for video playback, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 402 optionally includes memory located remotely from processor 401, which may be connected to video playback electronics over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the video playing method may further include: an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403 and the output device 404 may be connected by a bus or other means, and fig. 5 illustrates an example of a connection by a bus.
The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the video-playing electronic apparatus, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input devices. The output devices 404 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; the present application is not limited in this respect, as long as the desired results of the technical solutions disclosed herein can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A video playback method, the method comprising:
in the current video playing process, determining a candidate video having content coincidence with the current video;
determining a content coincidence time period between the current video and the candidate video; selecting, from the candidate videos according to the content coincidence time period, a target video having time continuity with the current video;
when the current video playing is finished, continuing to play the target video;
wherein determining a content coincidence time period between the current video and the candidate video comprises:
determining candidate coincidence segments between the current video and the candidate videos, and coincidence values of image pairs in the candidate coincidence segments; each image pair is composed of an image correspondingly located in the current video and an image correspondingly located in the candidate video within the candidate coincidence segment; the coincidence values are used to determine a connection path for the image pair having the greatest correlation value;
and selecting a target coincidence segment from the candidate coincidence segments according to the coincidence value of the image pair in the candidate coincidence segments, and determining the content coincidence time period of the target coincidence segment.
2. The method of claim 1, wherein determining a candidate coincidence segment between the current video and the candidate video, and a coincidence value of an image pair in the candidate coincidence segment, comprises:
determining a correlation value of the current video and an image pair in the candidate video;
for each image pair, taking the candidate adjacent image pair of the image pair having the largest correlation value as the previous image pair; the candidate adjacent image pair is an image pair formed by adjacent images of the images in the current video and the candidate video;
taking the sum of the coincidence value of the previous image pair and the correlation value of the image pair as the coincidence value of the image pair;
generating a candidate coincidence segment comprising the image pairs having a predecessor-successor relationship;
wherein generating the candidate coincidence segment comprising the image pairs having a predecessor-successor relationship comprises:
connecting each image pair with its candidate adjacent image pair to form an image pair connection path;
and taking the segment corresponding to the image pair connection path as a candidate coincidence segment of the image pairs having the predecessor-successor relationship.
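The accumulation and path-linking described in claims 1–2 resemble a dynamic-programming pass over a frame-correlation matrix. Below is a minimal sketch of one possible reading; the function name, the NumPy representation, and the choice of candidate adjacent pairs (diagonal, horizontal, and vertical predecessors) are illustrative assumptions, not details taken from the claims:

```python
import numpy as np

def coincidence_values(corr):
    """Accumulate coincidence values over a frame-correlation matrix.

    corr[i, j] is the correlation between image i of the current video
    and image j of the candidate video. Returns the accumulated
    coincidence value of every image pair and a back-pointer map that
    encodes the image-pair connection paths.
    """
    n, m = corr.shape
    coin = corr.astype(float).copy()  # coincidence value of each pair
    prev = {}                         # chosen previous image pair
    for i in range(n):
        for j in range(m):
            # candidate adjacent image pairs: pairs formed by adjacent
            # images in the current and/or candidate video
            neighbors = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
            neighbors = [p for p in neighbors if p[0] >= 0 and p[1] >= 0]
            if not neighbors:
                continue  # the top-left pair has no previous pair
            best = max(neighbors, key=lambda p: coin[p])
            coin[i, j] = coin[best] + corr[i, j]
            prev[(i, j)] = best
    return coin, prev
```

On a matrix with a strong diagonal, the accumulated values grow along the diagonal, so following the back-pointers from the maximum recovers the matched segment.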
3. The method of claim 2, wherein before taking the candidate adjacent image pair with the largest correlation value as the previous image pair, the method further comprises:
and performing enhancement or attenuation processing on the correlation value of the image pair.
4. The method of claim 3, wherein enhancing or attenuating the correlation value of the image pair comprises:
if any image pair is associated with the missing frame in the candidate video, performing attenuation processing on the correlation value of the image pair by adopting a first attenuation coefficient;
if the correlation value of any image pair is greater than or equal to the correlation threshold value, the correlation value of the image pair is enhanced by adopting an enhancement coefficient;
if the correlation value of any image pair is smaller than the correlation threshold value, performing attenuation processing on the correlation value of the image pair by adopting a second attenuation coefficient; wherein the second attenuation coefficient is less than the first attenuation coefficient.
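Claim 4's three-way adjustment can be sketched as a small scoring helper. The numeric coefficients and threshold below are placeholders invented for illustration; the claim specifies only their ordering (the second attenuation coefficient is smaller than the first):

```python
def adjust_correlation(corr_value, missing_frame,
                       corr_threshold=0.8,
                       first_decay=0.5, enhance=1.2, second_decay=0.3):
    """Enhance or attenuate one image pair's correlation value.

    All numeric coefficients are illustrative placeholders; the claim
    only requires second_decay < first_decay.
    """
    if missing_frame:
        # pair associated with a missing frame in the candidate video
        return corr_value * first_decay
    if corr_value >= corr_threshold:
        # confident match: enhance
        return corr_value * enhance
    # weak match: attenuate more strongly than the missing-frame case
    return corr_value * second_decay
```

The effect is that strong matches dominate the subsequent path search while weak or frame-dropped matches contribute little to the accumulated coincidence values.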
5. The method of claim 1, wherein selecting a target coincidence segment from the candidate coincidence segments according to coincidence values of image pairs in the candidate coincidence segments and determining a content coincidence time period of the target coincidence segment comprises:
taking the candidate coincidence segment to which the image pair with the maximum coincidence value belongs as the target coincidence segment, and taking the image pair with the maximum coincidence value as a termination coincidence image pair of the target coincidence segment;
and backtracking the target coincidence segment from the termination coincidence image pair, taking the image pair with the minimum coincidence value as the initial coincidence image pair of the target coincidence segment, and obtaining the content coincidence time period.
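Claim 5's backtracking step, given the accumulated coincidence values and back-pointers, might look like the following sketch (function and variable names are hypothetical; frame indices stand in for the timestamps that would bound the content coincidence time period):

```python
import numpy as np

def trace_segment(coin, prev):
    """Backtrack from the image pair with the maximum coincidence value
    (the termination pair) along the back-pointers to the start of its
    connection path (the initial pair, where the accumulated value is
    smallest). The two pairs' frame indices bound the coincidence
    period.
    """
    end = tuple(int(k) for k in np.unravel_index(np.argmax(coin), coin.shape))
    path = [end]
    while path[-1] in prev:          # follow back-pointers
        path.append(prev[path[-1]])
    return path[-1], end             # (initial pair, termination pair)
```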
6. The method of claim 1, wherein selecting a target video having temporal continuity with the current video from the candidate videos according to the content coincidence time period comprises:
and if the content overlapping time period is positioned at the tail of the current video and at the head of any candidate video, taking the candidate video as a target video having time continuity with the current video.
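Claim 6 reduces to a simple predicate on where the coincidence period falls. A sketch, assuming the overlap intervals are given in seconds and using illustrative tail/head window sizes that the patent does not specify:

```python
def is_continuation(cur_overlap, cand_overlap, cur_duration,
                    tail_window=5.0, head_window=5.0):
    """True when the content coincidence period sits at the tail of the
    current video and at the head of the candidate video.

    cur_overlap / cand_overlap: (start, end) of the coincidence period
    in each video, in seconds.
    """
    at_tail = cur_duration - cur_overlap[1] <= tail_window
    at_head = cand_overlap[0] <= head_window
    return at_tail and at_head
```

A candidate passing this test repeats the end of the current video at its own beginning, which is the condition for seamless continued playback.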
7. The method of claim 1, wherein continuing to play the target video at the end of the playing of the current video comprises:
displaying a continue-playing prompt for the target video through the current video playing interface before the playing of the current video is finished;
when the playing of the current video is finished, detecting whether the user has performed a rejection operation on the continue-playing prompt;
and if not, continuing to play the target video.
8. A video playback apparatus, comprising:
a candidate video determining module, configured to determine, in the current video playing process, a candidate video having content coincidence with the current video;
a content coincidence time period determination module, configured to determine a content coincidence time period between the current video and the candidate video;
the target video selection module is used for selecting a target video which has time continuity with the current video from the candidate videos according to the content coincidence time period;
a playing module, configured to continue playing the target video when the playing of the current video is finished;
wherein, the content coincidence time period determining module includes:
a candidate coincidence segment determining unit, configured to determine a candidate coincidence segment between the current video and the candidate video, and a coincidence value of an image pair in the candidate coincidence segment; the image pair is composed of an image correspondingly located in the current video and an image correspondingly located in the candidate video within the candidate coincidence segment; the coincidence values are used to determine a connection path for the image pair having the greatest correlation value;
and the target coincidence segment selecting unit is used for selecting a target coincidence segment from the candidate coincidence segments according to the coincidence value of the image pair in the candidate coincidence segments and determining the content coincidence time period of the target coincidence segment.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
CN201911195372.3A 2019-11-28 2019-11-28 Video playing method, device, equipment and medium Active CN110798736B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911195372.3A CN110798736B (en) 2019-11-28 2019-11-28 Video playing method, device, equipment and medium


Publications (2)

Publication Number Publication Date
CN110798736A CN110798736A (en) 2020-02-14
CN110798736B true CN110798736B (en) 2021-04-20

Family

ID=69446659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911195372.3A Active CN110798736B (en) 2019-11-28 2019-11-28 Video playing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN110798736B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111970560B (en) * 2020-07-09 2022-07-22 北京百度网讯科技有限公司 Video acquisition method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108260014A (en) * 2018-04-12 2018-07-06 腾讯科技(上海)有限公司 A kind of video broadcasting method and terminal and storage medium
CN109640129A (en) * 2018-12-12 2019-04-16 北京字节跳动网络技术有限公司 Video recommendation method, device, client device, server and storage medium

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872415A (en) * 2010-05-06 2010-10-27 复旦大学 Video copying detection method being suitable for IPTV
JP6105936B2 (en) * 2011-01-12 2017-03-29 シャープ株式会社 Playback device
CN103297851B (en) * 2013-05-16 2016-04-13 中国科学院自动化研究所 The express statistic of object content and automatic auditing method and device in long video
CN103546698B (en) * 2013-10-31 2016-08-17 广东欧珀移动通信有限公司 A kind of mobile terminal recorded video store method and device
CN103970906B (en) * 2014-05-27 2017-07-04 百度在线网络技术(北京)有限公司 The method for building up and device of video tab, the display methods of video content and device
CN104102723B (en) * 2014-07-21 2017-07-25 百度在线网络技术(北京)有限公司 Search for content providing and search engine
CN104636505A (en) * 2015-03-13 2015-05-20 北京世纪互联宽带数据中心有限公司 Video retrieval method and video retrieval device
CN105163156B (en) * 2015-10-12 2018-05-08 华勤通讯技术有限公司 Video resume method, playback equipment and system
CN106686404B (en) * 2016-12-16 2021-02-02 中兴通讯股份有限公司 Video analysis platform, matching method, and method and system for accurately delivering advertisements
CN108628913A (en) * 2017-03-24 2018-10-09 上海交通大学 The processing method and processing device of video
CN107205161B (en) * 2017-06-30 2019-10-25 Oppo广东移动通信有限公司 A kind of video broadcasting method, device, storage medium and terminal
EP3477956A1 (en) * 2017-10-31 2019-05-01 Advanced Digital Broadcast S.A. System and method for automatic categorization of audio/video content
CN108062377A (en) * 2017-12-12 2018-05-22 百度在线网络技术(北京)有限公司 The foundation of label picture collection, definite method, apparatus, equipment and the medium of label
CN108970091A (en) * 2018-09-14 2018-12-11 郑强 A kind of shuttlecock action-analysing method and system
CN109168022A (en) * 2018-11-05 2019-01-08 北京奇艺世纪科技有限公司 A kind of method, apparatus and electronic equipment for recommending order video
CN109743591B (en) * 2019-01-04 2022-01-25 广州虎牙信息科技有限公司 Method for video frame alignment
CN110290419B (en) * 2019-06-25 2021-11-26 北京奇艺世纪科技有限公司 Video playing method and device and electronic equipment
CN110446065A (en) * 2019-08-02 2019-11-12 腾讯科技(武汉)有限公司 A kind of video recalls method, apparatus and storage medium
CN110490119A (en) * 2019-08-14 2019-11-22 腾讯科技(深圳)有限公司 Repeat video marker method, apparatus and computer readable storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant