CN111741325A - Video playing method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN111741325A
CN111741325A
Authority
CN
China
Prior art keywords
video
information
video segment
segment
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010507770.0A
Other languages
Chinese (zh)
Inventor
李峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Migu Cultural Technology Co Ltd
China Mobile Communications Group Co Ltd
MIGU Video Technology Co Ltd
Original Assignee
Migu Cultural Technology Co Ltd
China Mobile Communications Group Co Ltd
MIGU Video Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Migu Cultural Technology Co Ltd, China Mobile Communications Group Co Ltd, MIGU Video Technology Co Ltd filed Critical Migu Cultural Technology Co Ltd
Priority to CN202010507770.0A
Publication of CN111741325A
Legal status: Pending

Classifications

    • H04N 21/232: Content retrieval operation locally within server, e.g. reading video streams from disk arrays
    • H04N 21/23418: Processing of video elementary streams, involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N 21/4302: Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4312: Generation of visual interfaces for content selection or interaction, involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/47217: End-user interface for manipulating displayed content, for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention relates to the field of video processing and discloses a video playing method and apparatus, an electronic device, and a computer-readable storage medium. In some embodiments of the present application, a video playing method includes: acquiring a first video segment of a video; searching the candidate video segments of a material library for a second video segment whose motion information is similar to that of the first video segment, where the motion information of the first video segment represents the motion trajectory or pose of an object in the first video segment; and playing the first video segment and the second video segment on the same screen. This enables the user to watch other video content related to the video, obtain more information, and enjoy an enhanced viewing experience.

Description

Video playing method and device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of video processing, and in particular, to a video playing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
At present, when a sports event is broadcast live, multiple cameras can synchronously capture footage of the venue from multiple viewpoints, and highlight clips can be produced on site from the multiple video sources for switching and replay.
However, the inventors found that the prior art has at least the following problem: when a match video is played, the highlight replay only replays a moment of the current match, for example a shot on goal, and the user's viewing experience is poor.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
Embodiments of the present invention provide a video playing method and apparatus, an electronic device, and a computer-readable storage medium, so that a user can watch other video content related to a video, obtain more information, and enjoy an enhanced viewing experience.
In order to solve the above technical problem, an embodiment of the present invention provides a video playing method, including the following steps: acquiring a first video segment of a video; searching the candidate video segments of a material library for a second video segment whose motion information is similar to that of the first video segment, where the motion information of the first video segment represents the motion trajectory or pose of an object in the first video segment; and playing the first video segment and the second video segment on the same screen.
An embodiment of the present invention further provides a video playing apparatus, including an acquisition module, a search module, and a playing module. The acquisition module is used for acquiring a first video segment of a video. The search module is used for searching the candidate video segments of a material library for a second video segment whose motion information is similar to that of the first video segment, where the motion information of the first video segment represents the motion trajectory or pose of an object in the first video segment. The playing module is used for playing the first video segment and the second video segment on the same screen.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the video playback method as mentioned in the above embodiments.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the video playing method mentioned in the above embodiments.
Compared with the prior art, the electronic device can search the material library for a second video segment whose motion information is similar to that of the first video segment and play the two segments on the same screen, so that while watching the first video segment the user can synchronously watch other videos related to it, obtain more information, and enjoy an enhanced viewing experience.
In addition, the motion information includes any one of, or any combination of, the action trajectory information, ball trajectory information, and pose information of the first video segment.
In addition, the motion information includes M pieces of information among the action trajectory information, ball trajectory information, and pose information of the first video segment, where M is a positive integer. Searching the candidate video segments of the material library for a second video segment similar in motion information to the first video segment includes: for each candidate video segment, calculating the similarity between each piece of information of the candidate video segment and the corresponding information of the first video segment, and determining the motion-information similarity between the candidate video segment and the first video segment from these per-information similarities; and taking the candidate video segment with the greatest motion-information similarity as the second video segment. Alternatively: for each candidate video segment, calculating the similarity between each piece of information of the candidate video segment and the corresponding information of the first video segment; if the similarity of any piece of information is smaller than that information's similarity threshold, discarding the candidate video segment; and selecting the second video segment from the remaining candidate video segments.
In addition, the motion information may include the action trajectory information, ball trajectory information, and pose information of the first video segment. In that case, searching the candidate video segments of the material library for a second video segment similar in motion information to the first video segment includes: for each candidate video segment, determining a first similarity between the action trajectory information of the candidate video segment and that of the first video segment; determining a second similarity between the pose information of the candidate video segment and that of the first video segment; determining a third similarity between the ball trajectory information of the candidate video segment and that of the first video segment; determining the motion-information similarity between the candidate video segment and the first video segment from the first, second, and third similarities; and taking the candidate video segment with the greatest motion-information similarity as the second video segment.
In addition, before searching the candidate video segments of the material library for a second video segment similar in motion information to the first video segment, the method further includes: determining the scene information of the first video segment; and screening out, from the material library, the video segments whose scene information matches that of the first video segment to serve as the candidate video segments.
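The scene-filtering step above can be sketched as follows. This is a minimal illustration, assuming each library clip carries a precomputed scene label; the clip representation and field names are hypothetical, not part of the patent.

```python
# Minimal sketch of scene filtering: keep only the clips whose scene label
# matches the first segment's. Field names are illustrative assumptions.

def candidate_clips(library, first_scene):
    """Return the clips whose scene information matches the first segment's."""
    return [clip for clip in library if clip["scene"] == first_scene]

library = [
    {"id": 1, "scene": "basketball"},
    {"id": 2, "scene": "football"},
    {"id": 3, "scene": "basketball"},
]
print([c["id"] for c in candidate_clips(library, "basketball")])  # [1, 3]
```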
In addition, playing the first video segment and the second video segment on the same screen includes: determining a first key frame of the first video segment and a second key frame of the second video segment, where the first key frame is the video frame in the first video segment at which the ball separates from the player (the release moment), and the second key frame is the corresponding ball-release frame in the second video segment; performing time-code synchronization of the first and second video segments based on the first and second key frames; and playing the first and second video segments synchronously.
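One way to read the time-code synchronization step is as an offset computation: start each clip at a position such that the two ball-release key frames play at the same instant. The sketch below is an assumption about how this could be done; timestamps are seconds within each clip, and all names are illustrative.

```python
# Hypothetical sketch of key-frame time-code synchronization: choose start
# positions so that both ball-release key frames coincide during playback.

def sync_start(first_key_ts, second_key_ts):
    """Return (first_start, second_start): in-clip positions at which each
    segment should begin so the key frames line up at the same moment."""
    lead = min(first_key_ts, second_key_ts)   # shortest run-up before release
    return first_key_ts - lead, second_key_ts - lead

print(sync_start(4.0, 2.5))  # (1.5, 0.0)
```

With these start positions, both key frames occur 2.5 seconds into playback.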
In addition, playing the first video segment and the second video segment synchronously includes: determining the position information of a first target object in a video frame of the first video segment; determining a first playing area according to the position information of the first target object; determining the position information of a second target object in the video frame of the second video segment that corresponds to that video frame of the first video segment; determining a second playing area according to the position information of the second target object; and synchronously playing the video picture in the first playing area and the video picture in the corresponding second playing area.
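A playing area derived from a target's position can be sketched as a fixed-size region centred on the target's bounding box and clamped to the frame. The box format (x, y, w, h) and the region size are assumptions for illustration only.

```python
# Hypothetical sketch: derive a play region from a target object's bounding
# box by centring a fixed-size region on the box and clamping to the frame.

def play_region(box, region_w, region_h, frame_w, frame_h):
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2                       # target centre
    left = min(max(cx - region_w / 2, 0), frame_w - region_w)
    top = min(max(cy - region_h / 2, 0), frame_h - region_h)
    return (left, top, region_w, region_h)

print(play_region((100, 100, 50, 50), 200, 200, 1920, 1080))
# (25.0, 25.0, 200, 200)
```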
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals denote similar elements; the figures are not to scale unless otherwise specified.
Fig. 1 is a schematic flow chart of a video playing method according to a first embodiment of the present invention;
fig. 2 is a schematic flow chart of a video playing method according to a second embodiment of the present invention;
FIG. 3a is a diagram of a picture of a first video segment in accordance with a second embodiment of the present invention;
FIG. 3b is a diagram of a picture of a second video segment in accordance with a second embodiment of the present invention;
fig. 3c is a schematic diagram of playing effect during split-screen playing according to the second embodiment of the present invention;
fig. 4 is a schematic structural diagram of a video playback device according to a third embodiment of the present invention;
fig. 5 is a schematic configuration diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention more apparent, the embodiments will be described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in these embodiments to help the reader better understand the present application; the claimed technical solution can nevertheless be implemented without some of these details, and with various changes and modifications based on the following embodiments.
Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, what is meant is "including, but not limited to".
In the description of the present disclosure, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present disclosure, "a plurality" means two or more unless otherwise specified.
The first embodiment of the present invention relates to a video playing method applied to an electronic device such as a server or a terminal. The method is applicable to live event scenarios as well as other scenarios; for clarity, this embodiment takes a live sports event as an example. As shown in fig. 1, the video playing method includes the following steps:
step 101: a first video segment of a video is acquired.
Specifically, the first video segment may be a replay picture in the video, or a pre-designated video segment.
In one embodiment, the electronic device may determine a replay picture in the video as follows: the electronic device monitors the picture signal of the live broadcast in real time, extracts features from it, and continuously matches those features against the short video material in the material library. Since the short video material in the library consists of highlights, most successfully matched shots are likely to appear in replays. After a highlight shot is identified, subsequent pictures are monitored; if a picture reappears within a preset time, it is determined to be a replay picture.
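The "a picture reappears within a preset time" idea above can be sketched with per-frame fingerprints and a sliding window of recent frames. Fingerprinting itself (e.g. a perceptual hash) is out of scope here, and the window length is an assumption.

```python
# Hypothetical sketch of replay detection: flag a frame as a replay when
# its fingerprint has already been seen within a preset window of frames.
from collections import deque

def make_replay_detector(window=500):
    recent = deque(maxlen=window)          # fingerprints of recent frames
    def see(fingerprint):
        is_replay = fingerprint in recent  # recurs within the window -> replay
        recent.append(fingerprint)
        return is_replay
    return see

detect = make_replay_detector(window=3)
print([detect(f) for f in ["a", "b", "a", "c"]])  # [False, False, True, False]
```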
It should be noted that, as will be understood by those skilled in the art, in practical applications the replay picture of the video may also be identified in other ways; this embodiment does not limit the method of determining the replay picture.
Step 102: search the candidate video segments of the material library for a second video segment similar in motion information to the first video segment.
Specifically, the motion information of the first video segment represents the motion trajectory or pose of an object in the first video segment. The material library may store highlight shots of the current event. The electronic device matches the first video segment against the video segments in the material library to obtain a second video segment whose motion information is similar to that of the first video segment.
In one embodiment, the motion information includes any one of, or any combination of, the action trajectory information, ball trajectory information, and pose information of the first video segment.
It should be noted that, as can be understood by those skilled in the art, the motion information may also include other information, and this embodiment is merely an example.
In one embodiment, the motion information includes M pieces of information among the action trajectory information, ball trajectory information, and pose information of the first video segment, where M is a positive integer. The electronic device may find a second video segment similar in motion information to the first video segment among the candidate video segments of the material library by, but not limited to, the following two methods:
the method comprises the following steps: for each candidate video segment, calculating the similarity of each information of the candidate video segment and the information corresponding to the first video segment, and determining the motion information similarity of the candidate video segment and the first video segment according to the similarity of each information; and taking the candidate video segment with the maximum motion information similarity as a second video segment.
For example, the motion information includes any one of the action trajectory information, ball trajectory information, and pose information; for clarity, call it the first information. For each candidate video segment, the electronic device calculates the similarity between the first information of the candidate video segment and the first information of the first video segment, and takes that similarity as the motion-information similarity between the candidate video segment and the first video segment. The electronic device then takes the candidate video segment with the greatest motion-information similarity to the first video segment as the second video segment.
For another example, the motion information includes any two of the action trajectory information, ball trajectory information, and pose information; for clarity, call them the first information and the second information. For each candidate video segment, the electronic device calculates the similarity between the first information of the candidate video segment and that of the first video segment, and the similarity between the second information of the candidate video segment and that of the first video segment, and computes the motion-information similarity between the candidate video segment and the first video segment from these two similarities. The electronic device then takes the candidate video segment with the greatest motion-information similarity to the first video segment as the second video segment.
It should be noted that the motion-information similarity between a candidate video segment and the first video segment may be computed as the sum of the similarity of the first information and the similarity of the second information, or as a weighted sum of the two; other methods are also possible and are not listed here.
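The weighted-sum variant can be sketched as follows. The per-feature similarity function (cosine similarity) and the feature vectors are illustrative assumptions; the patent does not specify how each per-feature similarity is computed.

```python
# Hypothetical sketch of Method 1, weighted-sum form: combine per-feature
# similarities into one motion-information similarity and pick the best
# candidate. Feature names, vectors, and weights are assumptions.
import math

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def motion_similarity(cand, first, weights):
    """Weighted sum of per-feature similarities over the weighted keys."""
    return sum(w * cosine_sim(cand[k], first[k]) for k, w in weights.items())

def best_match(candidates, first, weights):
    return max(candidates, key=lambda c: motion_similarity(c, first, weights))

first = {"action": [1, 0], "ball": [0, 1]}
cands = [{"id": "A", "action": [1, 0], "ball": [0, 1]},
         {"id": "B", "action": [0, 1], "ball": [1, 0]}]
print(best_match(cands, first, {"action": 0.5, "ball": 0.5})["id"])  # A
```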
As another example, the motion information includes the action trajectory information, ball trajectory information, and pose information of the first video segment. The electronic device searches the candidate video segments of the material library for the second video segment by: for each candidate video segment, determining a first similarity between its action trajectory information and that of the first video segment, a second similarity between its pose information and that of the first video segment, and a third similarity between its ball trajectory information and that of the first video segment; determining the motion-information similarity between the candidate video segment and the first video segment from the first, second, and third similarities; and taking the candidate with the greatest motion-information similarity as the second video segment. The motion-information similarity may be determined from the three similarities in the same ways described above for two similarities, which is not repeated here.
Method 2: for each candidate video segment, calculate the similarity between each piece of information of the candidate video segment and the corresponding information of the first video segment; if the similarity of any piece of information is smaller than that information's similarity threshold, discard the candidate video segment; then select the second video segment from the remaining candidate video segments.
For example, the motion information includes any one of the action trajectory information, ball trajectory information, and pose information; for clarity, call it the first information. For each candidate video segment, the electronic device calculates the similarity between the first information of the candidate video segment and that of the first video segment; if this similarity is greater than the threshold corresponding to the first information, the candidate video segment is retained, otherwise it is discarded. After all candidate video segments have been processed, the electronic device selects the second video segment from the remaining candidates, for example the one with the greatest similarity of the first information, or according to other selection criteria.
For another example, the motion information includes any two of the action trajectory information, ball trajectory information, and pose information; for clarity, call them the first information and the second information. For each candidate video segment, the electronic device calculates the similarity between the first information of the candidate video segment and that of the first video segment, and the similarity between the second information of the candidate video segment and that of the first video segment. If the similarity of the first information is greater than its threshold, the electronic device checks whether the similarity of the second information is greater than its threshold, retaining the candidate video segment if so and discarding it otherwise; if the similarity of the first information is not greater than its threshold, the candidate video segment is discarded. After all candidate video segments have been processed, the electronic device selects the second video segment from the remaining candidates, for example the candidate with the greatest similarity of the first information or of the second information, or according to other selection criteria.
As another example, the motion information includes the action trajectory information, ball trajectory information, and pose information of the first video segment. The electronic device searches the candidate video segments of the material library for the second video segment as follows. For each candidate video segment, it calculates the similarity between the action trajectory information of the candidate video segment and that of the first video segment; if this similarity is not greater than the threshold corresponding to the action trajectory information, the candidate video segment is discarded. Otherwise, it calculates the similarity between the ball trajectory information of the candidate video segment and that of the first video segment; if this similarity is not greater than the threshold corresponding to the ball trajectory information, the candidate video segment is discarded. Otherwise, it calculates the similarity between the pose information of the candidate video segment and that of the first video segment; if this similarity is not greater than the threshold corresponding to the pose information, the candidate video segment is discarded, and if it is greater, the candidate video segment is retained. After all candidate video segments have been processed, the electronic device selects the second video segment from the remaining candidates.
When selecting the second video segment, the electronic device may choose the remaining candidate with the greatest similarity of the action trajectory information, the ball trajectory information, or the pose information, or select it based on other criteria.
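The threshold-cascade variant of Method 2 can be sketched as follows. The similarity functions, feature names, thresholds, and the tie-breaking key are all assumptions; the patent only specifies "discard on any sub-threshold similarity, then pick among survivors".

```python
# Hypothetical sketch of Method 2: drop a candidate as soon as any feature's
# similarity is not greater than that feature's threshold, then pick the
# survivor with the greatest similarity on one chosen feature.

def filter_and_pick(cands, first, sims, thresholds, pick_key="action"):
    survivors = []
    for c in cands:
        # all() short-circuits, so a candidate is dropped at its first failure.
        if all(sims[k](c[k], first[k]) > thresholds[k] for k in thresholds):
            survivors.append(c)
    if not survivors:
        return None   # search failed; caller falls back (e.g. play first clip only)
    return max(survivors,
               key=lambda c: sims[pick_key](c[pick_key], first[pick_key]))

# Toy per-feature similarity on scalar "features": 1 - |a - b|.
sim = lambda a, b: 1 - abs(a - b)
sims = {"action": sim, "ball": sim, "pose": sim}
first = {"action": 0.9, "ball": 0.8, "pose": 0.7}
cands = [{"id": "A", "action": 0.85, "ball": 0.75, "pose": 0.65},
         {"id": "B", "action": 0.2, "ball": 0.8, "pose": 0.7}]
thresholds = {"action": 0.5, "ball": 0.5, "pose": 0.5}
print(filter_and_pick(cands, first, sims, thresholds)["id"])  # A
```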
Optionally, if no candidate video segments remain, the electronic device may treat the search for the second video segment as failed and play only the first video segment; alternatively, it may select any one of all the candidate video segments, or the candidate with the greatest similarity of the first information, as the second video segment. This embodiment does not limit the behaviour of the electronic device when no candidates remain.
It is worth mentioning that because the electronic device matches accurately on multiple kinds of motion information, the second video segment obtained by matching is more similar to the first video segment.
The following illustrates how the electronic device determines each piece of information.
In one embodiment, the electronic device determines the motion trajectory information of the first video segment as follows: it performs human action trajectory recognition on the video frames of the first video segment whose timestamps are earlier than the segment's key frame, thereby determining the motion trajectory information of the first video segment. The key frame of the first video segment is the video frame at which the person and the ball separate. Taking this person-ball-separation key frame as a watershed, the electronic device analyzes, looking backward in time, the movement of the human target subject and the posture at the moment of release, and so determines the motion trajectory information of the first video segment. The motion trajectory information may be, for example: a quick turn and step-back with the ball followed by a fadeaway three-point shot from the left baseline corner (basketball); a defensive counterattack in which the player dribbles past defenders at speed and shoots with the right foot three meters outside the right penalty area (football); or a run-up and jump behind the right attack line ending in an overhead spike at mid-net (volleyball).
It should be noted that the above motion trajectory information is only an example; in practical applications other types of motion trajectory information may also be defined, and this embodiment is not limited in this respect.
In one embodiment, the electronic device determines the pose information of the first video segment as follows: it performs pose recognition on the key frame of the first video segment to determine the pose information. Specifically, the electronic device recognizes the posture of the human target subject in the person-ball-separation key frame and classifies the video into categories such as a shot attempt (basketball), a shot on goal (football), or a spike (volleyball). Based on the posture recognition result and the spatial relationship between the human target subject and the goal, the net, and the field boundary lines, the electronic device further refines the classification, for example into a three-point shot from the left baseline corner (basketball), a shot from outside the right penalty area (football), or a spike from the middle front of the net (volleyball). That is, the pose information of the first video segment indicates the posture and position of the human target subject at the moment of person-ball separation in the first video segment.
In one embodiment, the electronic device determines the sphere trajectory information of the first video segment as follows: it performs ball trajectory recognition on the video frames of the first video segment whose timestamps are later than the segment's key frame, thereby determining the sphere trajectory information of the first video segment. Specifically, taking the person-ball-separation key frame as the starting point, a target tracking algorithm based on contrast analysis, matching analysis, or moving-target detection is used to capture and plot the ball's trajectory.
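As a toy stand-in for the moving-target-detection trackers mentioned above, the sketch below recovers a ball trajectory from grayscale frames by per-pixel median background subtraction. Real systems would use a proper tracker; the function name, frame representation (nested lists), and threshold are all assumptions made for this illustration.

```python
def ball_trajectory(frames, thresh=0.5):
    """Per-frame ball centroid via background subtraction.

    frames: list of equally sized 2-D lists of grayscale values.
    The static background is estimated as the per-pixel median over
    all frames; pixels far from it are treated as the moving ball.
    Returns a list of (row, col) centroids, or None for still frames.
    """
    h, w = len(frames[0]), len(frames[0][0])
    n = len(frames)

    def median(vals):
        s = sorted(vals)
        return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

    # Estimate the static scene pixel by pixel.
    bg = [[median([f[r][c] for f in frames]) for c in range(w)] for r in range(h)]
    path = []
    for f in frames:
        pts = [(r, c) for r in range(h) for c in range(w)
               if abs(f[r][c] - bg[r][c]) > thresh]
        if not pts:
            path.append(None)
        else:
            path.append((sum(p[0] for p in pts) / len(pts),
                         sum(p[1] for p in pts) / len(pts)))
    return path
```

The centroid sequence is exactly the kind of per-frame position list that the similarity comparison between sphere trajectories would consume.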
It should be noted that, as will be understood by those skilled in the art, in practical applications, each piece of information in the motion information may be determined in other manners, and this embodiment is merely an example.
The above-mentioned key frame identification method is exemplified below.
In one embodiment, the electronic device determines the key frame as follows: the electronic device identifies the human target subject; it then tracks the motion trajectories of both the human target subject and the ball, using methods such as optical-flow-field detection or Kalman filtering, recognizes the state in which the ball leaves the target subject, captures that moment, and marks it as the person-ball-separation key frame.
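Given two tracked position sequences (which in practice would come from the optical-flow or Kalman tracking just described; here they are simply passed in as lists), the separation moment can be detected as the first frame where the person-ball distance exceeds a threshold and is still growing. This is a hypothetical sketch; the name and the growth heuristic are illustrative assumptions.

```python
def separation_keyframe(person_path, ball_path, dist_thresh=2.0):
    """Index of the first frame where the ball has left the person.

    person_path, ball_path: per-frame (x, y) positions of equal length.
    Returns the frame index, or None if the ball never separates.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    d = [dist(p, b) for p, b in zip(person_path, ball_path)]
    for i in range(len(d) - 1):
        # Separated and still moving away: treat this as the key frame.
        if d[i] > dist_thresh and d[i + 1] > d[i]:
            return i
    return None
```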
In one example, the electronic device identifies the human target subject as follows. The electronic device recognizes the camera motion trajectory, the focal position, and the ball trajectory. Based on these, it can lock onto the person currently controlling, carrying, or striking the ball. For example, the electronic device groups the people in the video whose motion trajectory matches the camera motion trajectory into a first set; the people located within the focal region into a second set; and the people whose motion trajectory matches the ball trajectory into a third set. A person present in the first set, the second set, and the third set (i.e., in the intersection of the three sets) is taken as the human target subject. If the intersection contains more than one person, one may be chosen at random as the target subject, or a final target subject may be screened out by other rules. If the intersection is empty, a person may be chosen at random from the third set; if the third set is also empty, the target subject may be chosen from the first or the second set. If all three sets are empty, the first video segment may be fed back to operations staff, who designate the human target subject manually. This embodiment does not limit the specific selection rule.
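The three-set rule with its fallbacks can be captured compactly. In this sketch the "pick one at random" step is replaced by a deterministic chooser so the behavior is testable; everything else follows the preference order in the text (intersection, then the ball-trajectory set, then the remaining sets), and `None` stands for the escalate-to-operator case.

```python
def pick_target_subject(lens_set, focus_set, ball_set, choose=min):
    """Lock onto the person controlling the ball via the three-set rule.

    lens_set:  people matching the camera motion trajectory (first set)
    focus_set: people within the focal region (second set)
    ball_set:  people matching the ball trajectory (third set)
    choose:    tie-breaker standing in for random selection.
    Returns a person id, or None when every set is empty.
    """
    common = lens_set & focus_set & ball_set
    for pool in (common, ball_set, focus_set, lens_set):
        if pool:
            return choose(pool)
    return None
```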
The electronic device may also identify the human target subject by other methods; this embodiment is merely an example and does not limit how the human target subject is determined in practical applications.
In one embodiment, before searching the candidate video segments of the material library for a second video segment similar to the motion information of the first video segment, the electronic device determines the scene information of the first video segment and screens out of the material library, as candidate video segments, the video segments whose scene information matches that of the first video segment. The scene information of the first video segment may be obtained by performing scene recognition on the first video segment: for example, the electronic device stores a pre-trained scene recognition model, trained on image sample data in which each sample carries a label indicating its scene information. The scene information of the first video segment may instead be obtained from the video's configuration file, i.e., the configuration file contains the scene information. It may also be determined from the motion trajectory information of the first video segment, where that information refers to the motion trajectory of the person moving with the ball, or of a preset target person, in the first video segment.
For example, the electronic device may perform object recognition and trajectory recognition on the people in the first video segment, lock onto the person moving with the ball, and determine that person's motion trajectory. From this trajectory it determines the scene information of the first video segment, for example a jump-shot scene, a three-point-shot scene, a free-throw scene, a shot-on-goal scene, or a penalty-kick scene, and then screens out of the material library, as candidate video segments, the video segments whose scene information matches that of the first video segment.
It is worth mentioning that pre-screening the video segments in the material library by the scene information of the first video segment reduces the amount of data processed in subsequent operations and thus the computational cost.
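The pre-screening step is a simple filter over the library. The dict shape of a clip descriptor and the `scene` key are assumptions made for this sketch, not a format defined by the embodiment.

```python
def prescreen_by_scene(library, first_scene):
    """Keep only the library clips whose scene label matches the first
    clip's, shrinking the pool before the costly motion-similarity match.

    library: iterable of clip descriptors, each a dict with a 'scene' key.
    """
    return [clip for clip in library if clip["scene"] == first_scene]
```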
Step 103: play the first video segment and the second video segment on the same screen.
Specifically, in the video edited by the electronic device, the first video segment and the second video segment are played synchronously, so that while watching the first video segment the user can simultaneously watch other video related to it, obtain more information, and enjoy an enhanced viewing experience.
The above description is only for illustrative purposes and does not limit the technical aspects of the present invention.
Compared with the prior art, according to the video playing method provided by the embodiment, the electronic device can search the second video segment similar to the motion information of the first video segment from the material library, and play the first video segment and the second video segment on the same screen, so that a user can synchronously watch other videos related to the first video segment while watching the first video segment, more information is obtained, and the viewing experience is enhanced.
A second embodiment of the present invention relates to a video playing method. This embodiment elaborates on step 103 of the first embodiment, illustrating the process of playing the first video segment and the second video segment on the same screen.
Specifically, as shown in fig. 2, this embodiment includes steps 201 to 205, where steps 201 and 202 are substantially the same as steps 101 and 102 of the first embodiment and are not repeated here. The differences are mainly introduced below:
step 201: a first video segment of a video is acquired.
Step 202: and searching a second video segment similar to the motion information of the first video segment from the candidate video segments of the material library.
Step 203: key frames of the first video segment and key frames of the second video segment are determined.
Specifically, the first key frame is the video frame at which the person and the ball separate in the first video segment; the second key frame is the video frame at which the person and the ball separate in the second video segment. For the method of determining the key frames, refer to the related description in the first embodiment, which is not repeated here.
Step 204: time code synchronization is performed on the first video segment and the second video segment based on the first key frame and the second key frame.
Specifically, the electronic device synchronizes the time codes of the two video segments by aligning them on the person-ball-separation key frames extracted from the first video segment and the second video segment.
Step 205: and synchronously playing the first video segment and the second video segment.
Specifically, because the time code of the first video segment and the time code of the second video segment are synchronized, the electronic device can derive the starting playback time of the second video segment from that of the first, so that the two segments play synchronously.
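Deriving the second clip's start time reduces to a one-line shift: offset the second clip so both key frames reach the screen at the same instant. Names and the seconds-based timeline are assumptions for this sketch.

```python
def second_segment_start(first_start, first_key_ts, second_key_ts):
    """Wall-clock start time for the second clip so key frames coincide.

    first_start:   wall-clock time at which the first clip begins playing.
    first_key_ts:  key frame timestamp on the first clip's own timeline (s).
    second_key_ts: key frame timestamp on the second clip's own timeline (s).
    """
    # Clip 1's key frame appears at first_start + first_key_ts; the second
    # clip's key frame must land on that same instant.
    return first_start + first_key_ts - second_key_ts
```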
In one embodiment, the process by which the electronic device synchronously plays the first video segment and the second video segment includes: determining position information of a first target object in a video frame of the first video segment; determining a first playing area according to the position information of the first target object; determining position information of a second target object in the video frame of the second video segment corresponding to that video frame of the first video segment; determining a second playing area according to the position information of the second target object; and synchronously playing the video picture in the first playing area and the video picture in the corresponding second playing area.
It should be noted that, as can be understood by those skilled in the art, the first target object of each video frame of the first video segment may be the same or different, and the second target object of each video frame of the second video segment may be the same or different, which is not limited herein.
In one embodiment, the electronic device determines the first playing area of the first video segment and the second playing area of the second video segment as follows. For each first video frame from the starting frame of the first video segment to the key frame of the first video segment, the electronic device identifies the coordinate information of a first target person in that frame and determines the frame's playing area from those coordinates, so that the first target person is located in the middle of the playing area. For each second video frame from the key frame of the first video segment to the target frame of the first video segment, it identifies the ball coordinate information of that frame and determines the frame's playing area from those coordinates, so that the ball is located in the middle of the playing area; here the target frame is the frame in which the ball stops moving. For each third video frame from the target frame of the first video segment to the end frame of the first video segment, it identifies the coordinate information of a second target person and determines the frame's playing area from those coordinates, so that the second target person is located in the middle of the playing area. Likewise, for each fourth video frame from the starting frame of the second video segment to the key frame of the second video segment, it identifies the coordinate information of a third target person and determines the frame's playing area so that the third target person is centered. For each fifth video frame from the key frame of the second video segment to the target frame of the second video segment, it identifies the ball coordinate information and determines the frame's playing area so that the ball is centered, the target frame being the frame in which the ball stops moving. For each sixth video frame from the target frame of the second video segment to the end frame of the second video segment, it identifies the coordinate information of a fourth target person and determines the frame's playing area from those coordinates, so that the fourth target person is centered. The first playing area of the first video segment is then determined from the playing areas of the first, second, and third video frames, and the second playing area of the second video segment from the playing areas of the fourth, fifth, and sixth video frames.
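Each clip's per-frame framing therefore falls into three phases keyed on two indices: person-centered up to the separation key frame, ball-centered until the ball stops, then centered on the (possibly celebrating) person to the end. A minimal sketch, with illustrative names and per-frame position lists standing in for the recognition steps:

```python
def region_center(idx, key_idx, stop_idx, person_pos, ball_pos, celeb_pos):
    """What the playing area should center on for frame idx.

    key_idx:  index of the person-ball-separation key frame.
    stop_idx: index of the target frame, where the ball stops moving.
    person_pos, ball_pos, celeb_pos: per-frame (x, y) position lists.
    """
    if idx < key_idx:
        return person_pos[idx]   # phase 1: follow the target person
    if idx < stop_idx:
        return ball_pos[idx]     # phase 2: follow the ball
    return celeb_pos[idx]        # phase 3: follow the celebrating subject
```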
It should be noted that the first target person may be the human target subject of the first video segment; the second target person may be the human target subject of the first video segment or a celebrating person in the first video segment; the third target person may be the human target subject of the second video segment; and the fourth target person may be the human target subject of the second video segment or a celebrating person in the second video segment. A celebrating person is a person in the video showing excited emotion and can be identified by expression recognition or similar techniques.
The first target person and the second target person may be the same person or different persons, and the third target person and the fourth target person may be the same person or different persons, which is not limited in this embodiment.
Alternatively, while the playing area is being determined from each target person or from the ball, the region it indicates may change as the target person or ball moves. To avoid large black borders, the playing area's coordinates may be limited so that the region it indicates never extends beyond the edge of the picture.
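The clamping just described is a pair of min/max operations. This sketch assumes an integer pixel coordinate system with the origin at the top left; all names are illustrative.

```python
def clamp_play_area(center, crop_w, crop_h, frame_w, frame_h):
    """Center a crop_w x crop_h playing area on the target, then clamp it
    inside the frame so following a subject near the edge never exposes
    area outside the picture (the black-border case in the text).

    center: (x, y) of the tracked target in frame coordinates.
    Returns the (left, top) corner of the playing area.
    """
    left = min(max(center[0] - crop_w // 2, 0), frame_w - crop_w)
    top = min(max(center[1] - crop_h // 2, 0), frame_h - crop_h)
    return left, top
```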
In one embodiment, the operation of synchronously playing the video picture in the first playing area and the video picture in the corresponding second playing area is performed by the video playing terminal, while all steps other than this one are performed by the background server. In this case, so that the video playing terminal can play the first and second video segments synchronously, the path, time code, and starting playback time of the second video segment may be written into the video.
Specifically, this information about the second video segment is written into the video's live stream, so that during the live broadcast the video playing terminal can fetch the second video segment from the given path and begin playing it at the specified starting playback time. Because the starting playback time of the second video segment equals that of the first video segment, the terminal plays the two segments synchronously.
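The payload the background server writes into the stream could be serialized as a small JSON record. The field names and the example path below are purely illustrative assumptions; the text only states that the path, time code, and starting playback time are written into the video.

```python
import json

def second_segment_metadata(path, timecode_offset, start_time):
    """Serialize what the server writes into the live stream so the
    player can fetch the second clip and start it in sync."""
    return json.dumps({
        "second_segment_path": path,
        "timecode_offset": timecode_offset,  # seconds; aligns the key frames
        "start_time": start_time,            # shared starting playback time
    }, sort_keys=True)
```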
In one example, the video playing terminal plays the first and second video segments in a dual-frame mode: it divides the playing area of the display into equal left and right halves, so that the playing area of each video frame in the first video segment is half the width of the original video. Specifically, taking the starting frame of the first video segment as the starting point and the key frame as the end point, the electronic device tracks the coordinates of the target person in the first and second video segments and, keeping the human target subject centered, dynamically determines the playing area of each first video frame of the first video segment and of each fourth video frame of the second video segment. Taking the key frame as the starting point and the appearance of a celebrating person or a celebration shot as the end point, it tracks the ball's coordinates, follows the ball's trajectory, and dynamically determines the playing area of each second video frame and each fifth video frame; in this process, detection of goals, nets, boundary lines, and the like assists the cropping and prevents over-cropping. Taking the appearance of the celebrating person or celebration shot as the starting point, the end of the material segment or of the first video segment as the end point, and the centering of the original human target subject or of the celebrating people as the criterion, the electronic device determines the playing areas of the third and sixth video frames. The playing area of the first video segment and the playing area of the second video segment are then written into the video.
It should be noted that, in practical applications, besides the dual-frame playing mode, the first video segment and the second video segment can be simultaneously played in multiple modes such as picture-in-picture, and the embodiment does not limit the specific way of synchronously playing the videos.
In one embodiment, the background server transmits the video to the video playing terminal; when the terminal detects a split-screen playing instruction while playing the video, it plays the first video segment in a first playing area of the terminal and the second video segment in a second playing area. The split-screen playing instruction is generated when the user triggers the split-screen playing control on the terminal. Specifically, the player front end runs with a delay long enough for operations such as material matching and processing to complete, and the user can independently choose whether to enable the feature, so the dual-frame playing function can be placed on the player. When the player recognizes the split-screen playing instruction, the original single player is changed into a dual-frame player: the first video segment plays in the left frame and the second video segment in the right frame. The key actions of the two pictures coincide, enhancing the viewing experience. For example, if a picture from the first video segment is as shown in fig. 3a and the corresponding picture from the second video segment is as shown in fig. 3b, then after split-screen playing is enabled the combined picture appears as shown in fig. 3c.
The above description is only for illustrative purposes and does not limit the technical aspects of the present invention.
Compared with the prior art, with the video playing method provided by this embodiment the electronic device can search the material library for a second video segment similar to the motion information of the first video segment and play the two segments on the same screen, so that while watching the first video segment the user can simultaneously watch other related video, obtain more information, and enjoy an enhanced viewing experience. In addition, synchronizing the time codes of the first and second video segments on their key frames keeps the key actions of the two segments aligned, which makes it convenient for the user to compare the two videos and further enhances the viewing experience.
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into one step, or a step may be split into several, and all such variants fall within the protection scope of this patent as long as the same logical relationship is preserved. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes, without altering the core design also falls within the protection scope of the patent.
A third embodiment of the present invention relates to a video playback device, as shown in fig. 4, including: an acquisition module 401, a search module 402 and a play module 403. The obtaining module 401 is configured to obtain a first video segment of a video; the searching module 402 is configured to search, from candidate video segments of the material library, a second video segment similar to the motion information of the first video segment; the motion information of the first video segment is used for representing the motion track or the pose of an object in the first video segment; the playing module 403 is used for playing the first video clip and the second video clip in the same screen.
It should be understood that this embodiment is an apparatus embodiment corresponding to the first embodiment and may be implemented in cooperation with it. The technical details mentioned in the first embodiment remain valid in this embodiment and, to avoid repetition, are not restated here; correspondingly, the technical details mentioned in this embodiment also apply to the first embodiment.
It should be noted that each module in this embodiment is a logical module; in practical applications a logical unit may be one physical unit, part of a physical unit, or a combination of several physical units. Moreover, to highlight the innovative part of the present invention, units not closely related to solving the technical problem addressed by the invention are not introduced in this embodiment, which does not mean that no other units exist.
A fourth embodiment of the present invention relates to an electronic apparatus, as shown in fig. 5, including: at least one processor 501; and a memory 502 communicatively coupled to the at least one processor 501; the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501, so that the at least one processor 501 can execute the video playing method according to the above embodiments.
The electronic device includes one or more processors 501 and a memory 502; one processor 501 is taken as the example in fig. 5. The processor 501 and the memory 502 may be connected by a bus or by other means; connection by a bus is taken as the example in fig. 5. The memory 502, as a non-volatile computer-readable storage medium, stores non-volatile software programs, non-volatile computer-executable programs, and modules, such as the video segments of the material library stored in the memory 502 in this embodiment of the application. By running the non-volatile software programs, instructions, and modules stored in the memory 502, the processor 501 executes the functional applications and data processing of the device, i.e., implements the video playing method described above.
The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store a list of options, etc. Further, the memory 502 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 502 may optionally include memory located remotely from processor 501, which may be connected to an external device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 502 and, when executed by the one or more processors 501, perform the video playback method of any of the method embodiments described above.
The product can execute the method provided by the embodiment of the application, has corresponding functional modules and beneficial effects of the execution method, and can refer to the method provided by the embodiment of the application without detailed technical details in the embodiment.
A fifth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, as those skilled in the art will understand, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the relevant hardware. The program is stored in a storage medium and includes several instructions causing a device (which may be a microcontroller, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (10)

1. A video playback method, comprising:
acquiring a first video segment of a video;
searching a second video segment similar to the motion information of the first video segment from candidate video segments of a material library; the motion information of the first video segment is used for representing the motion track or the pose of an object in the first video segment;
playing the first video segment and the second video segment on the same screen.
2. The video playing method according to claim 1, wherein the motion information comprises any one or any combination of the motion trajectory information, the sphere trajectory information, and the pose information of the first video segment.
3. The video playing method according to claim 2, wherein the motion information includes M pieces of information among motion trajectory information, sphere trajectory information, and pose information of the first video segment, where M is a positive integer;
the searching for the second video segment similar to the motion information of the first video segment from the candidate video segments of the material library comprises:
for each candidate video segment, calculating the similarity between each piece of information of the candidate video segment and the corresponding information of the first video segment, and determining the motion information similarity between the candidate video segment and the first video segment according to those information similarities; and taking the candidate video segment with the greatest motion information similarity as the second video segment;
or,
for each candidate video segment, calculating the similarity of each piece of information of the candidate video segment and the information corresponding to the first video segment; if the similarity of any one piece of information is smaller than the similarity threshold of the information, deleting the candidate video clip; selecting the second video segment from the remaining candidate video segments.
4. The video playing method according to claim 1, wherein the motion information comprises: motion track information, sphere track information and pose information of the first video clip;
the searching for the second video segment similar to the motion information of the first video segment from the candidate video segments of the material library comprises:
for each candidate video segment, determining a first similarity between the motion track information of the candidate video segment and the motion track information of the first video segment; determining a second similarity of the pose information of the candidate video segment and the pose information of the first video segment; determining a third similarity between the sphere track information of the candidate video segment and the sphere track information of the first video segment; determining the motion information similarity of the candidate video segment and the first video segment according to the first similarity, the second similarity and the third similarity;
and taking the candidate video segment with the highest motion-information similarity as the second video segment.
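For illustration only, the fusion step of claim 4 could be sketched as a weighted combination of the three similarities. The weighted average and the example weights are assumptions; the patent does not fix the combination rule.

```python
def fuse_motion_similarity(track_sim, pose_sim, ball_sim,
                           w_track=0.4, w_pose=0.3, w_ball=0.3):
    """Combine the first, second and third similarities into one score."""
    return w_track * track_sim + w_pose * pose_sim + w_ball * ball_sim

def pick_second_segment(candidates):
    """candidates: list of (segment_id, track_sim, pose_sim, ball_sim) tuples;
    returns the id of the candidate with the highest fused similarity."""
    return max(candidates,
               key=lambda c: fuse_motion_similarity(c[1], c[2], c[3]))[0]
```

With weights summing to one, the fused score stays in the same [0, 1] range as the individual similarities, which keeps any downstream thresholds comparable.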
5. The video playing method according to any one of claims 1 to 4, further comprising, before the searching for a second video segment similar in motion information to the first video segment from the candidate video segments of the material library:
determining scene information of the first video segment;
and selecting, from the material library, the video segments whose scene information is the same as that of the first video segment, to serve as the candidate video segments.
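The pre-filtering step of claim 5 amounts to a simple scene-label match before any similarity computation; a minimal sketch (the scene labels are hypothetical examples):

```python
def filter_by_scene(library, first_segment_scene):
    """library: list of (segment_id, scene_label) pairs; returns the ids of
    segments whose scene label matches the first video segment's scene."""
    return [seg_id for seg_id, scene in library if scene == first_segment_scene]
```

Filtering by scene first shrinks the candidate set, so the costlier motion-information comparisons of claims 3 and 4 run on fewer segments.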
6. The video playing method according to any one of claims 1 to 4, wherein said playing the first video segment and the second video segment on the same screen comprises:
determining a first key frame of the first video segment and a second key frame of the second video segment; wherein the first key frame is a video frame in the first video segment at which the person and the ball separate, and the second key frame is a video frame in the second video segment at which the person and the ball separate;
synchronizing time codes of the first video segment and the second video segment based on the first key frame and the second key frame;
and synchronously playing the first video segment and the second video segment.
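As an illustration of the synchronization in claim 6, the two segments can be aligned so that their key frames play at the same instant. Frame indices stand in for time codes here, and the trim-the-head alignment scheme is an assumption.

```python
def sync_start_offsets(key_frame_a, key_frame_b):
    """Given the key-frame index of each segment, return (offset_a, offset_b):
    how many frames to skip at the start of each segment so that both key
    frames (person-ball separation) are displayed simultaneously."""
    shift = key_frame_a - key_frame_b
    # Positive shift: segment A's key frame comes later, so trim A's head.
    return (shift, 0) if shift >= 0 else (0, -shift)
```

After applying the offsets, frame i of one segment corresponds to frame i of the other, which is what makes same-screen playback meaningful for comparing the two motions.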
7. The video playing method according to claim 6, wherein said synchronously playing the first video segment and the second video segment comprises:
determining position information of a first target object in a video frame of the first video segment;
determining a first playing area according to the position information of the first target object;
determining position information of a second target object in a video frame of the second video segment corresponding to the video frame of the first video segment;
determining a second playing area according to the position information of the second target object;
and synchronously playing the video picture in the first playing area and the video picture in the corresponding second playing area.
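For illustration, a playing area in claim 7 can be derived as a fixed-size window centred on the tracked target and clamped to the frame. The window size and the (x, y, w, h) bounding-box convention are assumptions.

```python
def playing_area(box, frame_w, frame_h, win_w=640, win_h=360):
    """box: (x, y, w, h) of the target object in the frame; returns the
    playing area (left, top, win_w, win_h), clamped to the frame bounds."""
    cx = box[0] + box[2] / 2  # target centre
    cy = box[1] + box[3] / 2
    left = min(max(cx - win_w / 2, 0), frame_w - win_w)
    top = min(max(cy - win_h / 2, 0), frame_h - win_h)
    return (int(left), int(top), win_w, win_h)
```

Computing one such window per segment keeps each target centred in its half of the shared screen, even when the two source videos frame their subjects differently.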
8. A video playing apparatus, comprising: an acquisition module, a searching module and a playing module;
the acquisition module is used for acquiring a first video segment of a video;
the searching module is used for searching, from candidate video segments of a material library, for a second video segment whose motion information is similar to that of the first video segment; wherein the motion information of the first video segment is used for representing a motion track or a pose of an object in the first video segment;
the playing module is used for playing the first video segment and the second video segment on the same screen.
9. An electronic device, comprising: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video playing method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the video playing method of any one of claims 1 to 7.
CN202010507770.0A 2020-06-05 2020-06-05 Video playing method and device, electronic equipment and computer readable storage medium Pending CN111741325A (en)

Publications (1)

Publication Number Publication Date
CN111741325A true CN111741325A (en) 2020-10-02

Family

ID=72648350



Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6741655B1 (en) * 1997-05-05 2004-05-25 The Trustees Of Columbia University In The City Of New York Algorithms and system for object-oriented content-based video search
CN101840422A (en) * 2010-04-09 2010-09-22 江苏东大金智建筑智能化系统工程有限公司 Intelligent video retrieval system and method based on target characteristic and alarm behavior
CN102117313A (en) * 2010-12-29 2011-07-06 天脉聚源(北京)传媒科技有限公司 Video retrieval method and system
CN104166685A (en) * 2014-07-24 2014-11-26 北京捷成世纪科技股份有限公司 Video clip detecting method and device
CN107748750A (en) * 2017-08-30 2018-03-02 百度在线网络技术(北京)有限公司 Similar video lookup method, device, equipment and storage medium
CN108970091A (en) * 2018-09-14 2018-12-11 郑强 A kind of shuttlecock action-analysing method and system
CN110996157A (en) * 2019-12-20 2020-04-10 上海众源网络有限公司 Video playing method and device, electronic equipment and machine-readable storage medium
CN111159476A (en) * 2019-12-11 2020-05-15 智慧眼科技股份有限公司 Target object searching method and device, computer equipment and storage medium


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112770167A (en) * 2020-12-21 2021-05-07 深圳Tcl新技术有限公司 Video display method and device, intelligent display terminal and storage medium
CN112887792A (en) * 2021-01-22 2021-06-01 维沃移动通信有限公司 Video processing method and device, electronic equipment and storage medium
CN112929699A (en) * 2021-01-27 2021-06-08 广州虎牙科技有限公司 Video processing method and device, electronic equipment and readable storage medium
CN112929699B (en) * 2021-01-27 2023-06-23 广州虎牙科技有限公司 Video processing method, device, electronic equipment and readable storage medium
CN113435328A (en) * 2021-06-25 2021-09-24 上海众源网络有限公司 Video clip processing method and device, electronic equipment and readable storage medium
CN113435328B (en) * 2021-06-25 2024-05-31 上海众源网络有限公司 Video clip processing method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN111741325A (en) Video playing method and device, electronic equipment and computer readable storage medium
CN109326310B (en) Automatic editing method and device and electronic equipment
CN109194978A (en) Live video clipping method, device and electronic equipment
Zhu et al. Player action recognition in broadcast tennis video with applications to semantic analysis of sports game
US8538153B2 (en) System and method for enabling meaningful interaction with video based characters and objects
Ariki et al. Automatic production system of soccer sports video by digital camera work based on situation recognition
Beetz et al. Aspogamo: Automated sports game analysis models
CN111726518A (en) System for capturing images and camera device
JP2004046647A (en) Method and device for tracking moving object based on dynamic image data
JP4839226B2 (en) Scene segmentation device
JP6649231B2 (en) Search device, search method and program
WO2021017496A1 (en) Directing method and apparatus and computer-readable storage medium
Zhu et al. Automatic multi-player detection and tracking in broadcast sports video using support vector machine and particle filter
CN110771175A (en) Video playing speed control method and device and motion camera
CN107454437A (en) A kind of video labeling method and its device, server
CN112287771A (en) Method, apparatus, server and medium for detecting video event
Valand et al. Automated clipping of soccer events using machine learning
CN114302234B (en) Quick packaging method for air skills
CN114339423B (en) Short video generation method, device, computing equipment and computer readable storage medium
CN110798692A (en) Video live broadcast method, server and storage medium
US11810352B2 (en) Operating method of server for providing sports video-based platform service
Markoski et al. Application of adaboost algorithm in basketball player detection
KR20180089977A (en) System and method for video segmentation based on events
Lazarescu et al. Using camera motion to identify types of American football plays
KR102652647B1 (en) Server, method and computer program for generating time slice video by detecting highlight scene event

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201002