CN110933460B - Video splicing method and device and computer storage medium - Google Patents
Video splicing method and device and computer storage medium Download PDFInfo
- Publication number
- CN110933460B (granted from application CN201911237610.2A)
- Authority
- CN
- China
- Prior art keywords
- video
- segment
- target
- fragment
- tag
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/239—Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
- H04N21/2393—Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/262—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
- H04N21/26258—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for generating a list of items to be played back in a given order, e.g. playlist, or scheduling item distribution according to such list
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/835—Generation of protective data, e.g. certificates
- H04N21/8352—Generation of protective data, e.g. certificates involving content or source identification data, e.g. Unique Material Identifier [UMID]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
Abstract
The application provides a video splicing method and apparatus and a computer storage medium. The method comprises the following steps: receiving a play request for a target video, where the play request carries at least a video identifier of the target video; obtaining the transport stream address of the video segment corresponding to each segment tag corresponding to the video identifier, where each segment tag belongs to the segment tag set corresponding to the video identifier; sequentially splicing the transport stream addresses to obtain a playlist of the target video; and returning the playlist of the target video. Based on the video identifiers generated in advance for the video segments and the correspondence between the video identifiers and the segment tag sets, the transport stream addresses of the video segments are spliced into a single playlist. The method and apparatus thus splice a plurality of videos into one video, solving the problem of an obvious switching and loading process between two consecutive videos during playback.
Description
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a method and an apparatus for splicing videos, and a computer storage medium.
Background
In many video-playing clients, a user can obtain an album for a video theme by searching for the theme or selecting one of the video themes provided by the client. When the videos in a video theme album are played, they may be played automatically one after another.
However, the videos in a video theme album are all independent videos. After all transport stream addresses corresponding to one video have been requested into the playlist and played, all transport stream addresses corresponding to the next video must be requested into the playlist again before it can play, so there is an obvious switching and loading process between two videos. Because video theme albums consist mostly of clipped short videos, this frequent switching and loading greatly degrades the viewing experience. The only existing way to alleviate this problem to some extent is to manually edit multiple clipped short videos into a single video.
However, manual editing is obviously too cumbersome, and because of the large workload the number of videos that can be provided to users is very limited. It is therefore important to effectively solve the problem that an obvious switching and loading process exists between two videos when the videos in a video theme album are played.
Disclosure of Invention
Based on the above defects of the prior art, the present application provides a video splicing method and apparatus and a computer storage medium, so as to solve the problem that an obvious switching and loading process exists between two videos when the videos in a video theme album are played.
In order to achieve the above object, the present application provides the following technical solutions:
the first aspect of the present application provides a video stitching method, including:
receiving a playing request of a target video; the playing request at least carries a video identifier of the target video;
acquiring the transmission stream address of the video segment corresponding to each segment label corresponding to the video identifier; each segment label is a segment label in a segment label set corresponding to the video identifier;
sequentially splicing the transport stream addresses to obtain a play list of the target video;
and returning the playlist of the target video.
Optionally, in the foregoing method, the method for generating a segment tag includes:
for each uploaded original video, dividing consecutive image frames in the original video that belong to the same video topic into one video segment;
generating a segment tag corresponding to each video segment, wherein the segment tag includes at least the time period of the video segment in the original video, the identification of the original video, and the video topic.
Optionally, in the above method, after generating the segment tag corresponding to each of the video segments, the method further includes:
respectively combining the fragment tags belonging to the same video theme to obtain a plurality of fragment tag sets;
respectively generating a video identifier corresponding to each fragment label set; wherein one of the segment tag sets corresponds to one of the target videos.
Optionally, in the foregoing method, before receiving the play request for the target video, the method further includes:
receiving a query request of a video theme;
respectively combining a plurality of fragment tags belonging to the video theme to obtain a plurality of fragment tag sets;
respectively generating and returning a video identifier and a video cover corresponding to each fragment tag set; wherein one of the segment tag sets corresponds to one of the target videos.
Optionally, in the foregoing method, the obtaining a transport stream address of a video segment denoted by each segment tag corresponding to the video identifier includes:
when the video identification is determined to have the corresponding segment label set, determining the address information of the original video corresponding to the identification of the original video aiming at the segment label in each segment label set, and acquiring the transmission stream address of the video segment corresponding to the time period of the original video according to the address information.
A second aspect of the present application provides a video splicing apparatus, including:
the first receiving unit is used for receiving a playing request of a target video; the playing request at least carries a video identifier of the target video;
an obtaining unit, configured to obtain a transport stream address of a video segment corresponding to each segment tag corresponding to the video identifier; each segment label is a segment label in a segment label set corresponding to the video identifier;
the splicing unit is used for sequentially splicing the transport stream addresses to obtain a play list of the target video;
and the sending unit is used for returning the playlist of the target video.
Optionally, in the above apparatus, a preprocessing unit is further included, where the preprocessing unit includes:
the dividing unit is used for dividing continuous image frames which belong to the same video subject in the original video into a video segment aiming at each uploaded original video;
the first generating unit is used for generating a fragment label corresponding to each video fragment; wherein the segment tags include at least a time period of the video segment in the original video, an identification of the original video, and a video topic.
Optionally, in the above apparatus, the preprocessing unit further includes:
the first combination unit is used for respectively combining the fragment tags belonging to the same video theme to obtain a plurality of fragment tag sets;
the second generating unit is used for respectively generating a video identifier corresponding to each fragment label set; wherein one of the segment tag sets corresponds to one of the target videos.
Optionally, in the above apparatus, further comprising:
the second receiving unit is used for receiving a query request of a video theme;
the second combination unit is used for respectively combining the fragment tags belonging to the video theme to obtain a plurality of fragment tag sets;
the third generation unit is used for respectively generating and returning the video identification and the video cover corresponding to each fragment tag set; wherein one of the segment tag sets corresponds to one of the target videos.
Optionally, in the above apparatus, the obtaining unit includes:
and the obtaining subunit is configured to, when it is determined that the video identifier has the corresponding segment tag set, determine, for a segment tag in each segment tag set, address information of the original video corresponding to the identifier of the original video, and obtain, according to the address information, a transport stream address of a video segment corresponding to the time period of the original video.
A third aspect of the present application provides a computer storage medium storing a program for implementing a method of splicing videos as described in any one of the above when the program is executed.
According to the video splicing method provided by the present application, before a play request for a target video is received, the segment tag corresponding to each video segment is generated in advance, and a corresponding video identifier is generated for each segment tag set. Thus, when a play request for a target video is received, the transport stream address of the video segment corresponding to each segment tag in the segment tag set corresponding to the video identifier carried in the request is obtained, and the transport stream addresses are then sequentially spliced to obtain and return the playlist of the target video. By splicing the transport stream addresses of the video segments corresponding to the segment tags, a plurality of videos are effectively merged into one target video, the switching and loading process between videos is eliminated, and user experience is effectively improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for generating a fragment tag according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an example of a method for generating a fragment tag according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another fragment tag generation method according to another embodiment of the present application;
fig. 4 is a schematic diagram of an example of another fragment tag generation method according to another embodiment of the present application;
fig. 5 is a schematic flowchart of a video stitching method according to another embodiment of the present application;
fig. 6 is a schematic flowchart of a video stitching method according to another embodiment of the present application;
fig. 7 is a flowchart illustrating an example of a video stitching method according to another embodiment of the present application;
fig. 8 is a schematic structural diagram of a video stitching apparatus according to another embodiment of the present application;
fig. 9 is a schematic structural diagram of a preprocessing unit according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
In this application, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," or any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to it. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
The embodiments of the present application provide a video splicing method, aiming to solve the problem that, when the videos in a video theme album are played, there is an obvious switching and loading process when the next video starts automatically after one video finishes, which affects the user's viewing experience.
First, it should be noted that, to implement the video splicing method provided by the present application, segment tags corresponding to a plurality of video segments need to be generated in advance. Therefore, an embodiment of the present application provides a method for generating segment tags, as shown in fig. 1, which specifically includes:
s101, dividing continuous image frames which belong to the same video subject in the original video into a video segment aiming at each uploaded original video.
The original video refers to an uploaded complete video, which can be played directly without being spliced by the present method. The image frames of the original video are the individual frames that constitute it. The video topic refers to the name of the subject corresponding to the video content, such as the name of a game, cars, comedy, or a star's name, and is a type of attribute information of the video.
It should be noted that, in the present application, dividing the original video into multiple segments is only a logical definition; the original video is not actually cut into multiple video clips. In the embodiments of the present application, a video segment therefore refers to a portion of the original video, not a separate video file.
Optionally, in this embodiment of the present application, every uploaded original video is divided into video segments according to its video content. Of course, the division may also be performed on only part of the original videos, for example only on original videos whose duration exceeds a preset duration. Note that an original video may be divided into a plurality of video segments or into only one, and the entire original video may even constitute a single video segment; that is, one original video is one video segment.
Specifically, for each uploaded original video, the video topic to which each image frame belongs may be determined from the pictures and text contained in the frame. For example, image features can be extracted from the image frames by image recognition, and the corresponding video topic then obtained from a constructed neural network model. Finally, consecutive image frames belonging to the same video topic are divided into one video segment.
Optionally, when dividing segments, consecutive image frames belonging to the same video topic are divided into one video segment only if their number reaches a preset count. Alternatively, after a video segment has been divided, whether the number of its image frames reaches the preset count is checked, and the division is cancelled if not. This avoids video segments with too few frames: such a segment plays as little more than a single picture rather than a video clip, so splicing it is meaningless, and skipping it improves efficiency to some extent.
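The frame-count check described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the names `filter_short_segments`, `MIN_FRAMES`, and the `(topic, frame_indices)` representation are assumptions made for the example.

```python
MIN_FRAMES = 25  # assumed preset count, e.g. about one second at 25 fps

def filter_short_segments(segments, min_frames=MIN_FRAMES):
    """Keep only candidate segments with enough consecutive frames.

    `segments` is a list of (topic, frame_indices) pairs, where
    frame_indices lists the consecutive frame numbers in the segment.
    Segments below the threshold have their division cancelled (dropped).
    """
    return [(topic, frames) for topic, frames in segments
            if len(frames) >= min_frames]

# Example: a 3-frame "landscape" run is discarded, a 60-frame "game" run kept.
candidates = [("landscape", list(range(3))), ("game", list(range(60)))]
kept = filter_short_segments(candidates)
```

A segment shorter than the threshold would render as a near-still picture, so discarding it before tagging keeps later splicing meaningful.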
It should be noted that one image frame may correspond to multiple video topics, so one frame may be assigned to several different video segments; that is, video segments may share image frames. Similarly, one video segment may correspond to multiple video topics, which may be distinct topics or different levels of the same topic. For example, a video segment may correspond to several distinct topics such as games and comedy, or to several levels of one topic such as comedy, movie comedy, and European movie comedy. Refining video topics in this way gives users finer granularity when watching and searching videos, so target videos matching a user's preference can be found more accurately.
S102, generating a segment tag corresponding to each video segment, wherein the segment tag includes at least the time period of the video segment in the original video, the identification of the original video, and the video topic.
That is, after the original video is divided into a plurality of video segments, one segment tag is generated for the video topic and time period of each video segment. This can also be viewed as marking a plurality of segment tags on one original video, and the tags can be stored in the original video's attribute information.
Each segment tag records the time period of the video segment in the original video, the identification of the original video, and the corresponding video topic. The identification of the original video may be its video name. Thus, from the identification and the time period recorded in a segment tag, the video segment corresponding to that tag can be determined, namely the portion of the original video from one time point to another.
For example, as shown in fig. 2, the original video A is a video with a duration of 24 minutes and is divided into 5 video segments according to its content, and each video segment is tagged, yielding 5 tags for the original video A. The video topics of tag 1 are landscape, sea, and sea waves, corresponding to minutes 9 to 10 of the original video A; the video topics of tag 2 are star, star name, and gaze, corresponding to 13:15 to 14:21; the video topics of tag 3 are game, game name, and achievement name, corresponding to 15:21 to 16:30; the video topics of tag 4 are racing, F1 racing, and drifting, corresponding to 17:30 to 18:30; the video topics of tag 5 are shopping, luxury goods, and brand name, corresponding to 19:30 to 24:00 of the original video A.
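A segment tag as described above could be represented as a small record. This is only an illustrative sketch; the class and field names (`SegmentTag`, `video_id`, `start`, `end`, `topics`) are assumptions, and the values are taken from the fig. 2 example.

```python
from dataclasses import dataclass

@dataclass
class SegmentTag:
    """A segment tag: the time period of the segment in the original
    video, the original video's identification, and its video topics."""
    video_id: str   # identification of the original video (e.g. its name)
    start: str      # start time within the original video
    end: str        # end time within the original video
    topics: tuple   # one or more video topics

# Tags 1 and 4 of original video A from the example above:
tag1 = SegmentTag("video_A", "00:09:00", "00:10:00",
                  ("landscape", "sea", "sea waves"))
tag4 = SegmentTag("video_A", "00:17:30", "00:18:30",
                  ("racing", "F1 racing", "drifting"))
```

Because a tag holds only time period, identification, and topics, a set of such records fully describes a segment without cutting the original video file.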
Optionally, based on the segment tags generated in the foregoing embodiment, in another embodiment of the present application, as shown in fig. 3, after sequentially performing step S301 (for each uploaded original video, dividing consecutive image frames that belong to the same video topic into one video segment) and step S302 (generating a segment tag corresponding to each video segment), the method may further include:
s303, combining the fragment labels belonging to the same video theme respectively to obtain a plurality of fragment label sets.
Specifically, all segment tags belonging to the same video theme can be combined into one segment tag set, so that one video theme corresponds to one segment tag set. Alternatively, among all segment tags belonging to the same video theme, every N segment tags may be combined into one set, with the final fewer-than-N remaining tags forming their own set, or the final more-than-N-but-fewer-than-2N tags forming one set, where N is a positive integer. In this way one video theme corresponds to several segment tag sets, which prevents the finally spliced target video from being too long.
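The two grouping variants above (remainder as its own set, or remainder merged into the last full set) can be sketched as a chunking helper. This is an illustrative sketch; the function name `chunk_tags` and the `merge_remainder` flag are assumptions made for the example.

```python
def chunk_tags(tags, n, merge_remainder=False):
    """Split the tags of one video theme into sets of roughly n tags.

    With merge_remainder=False the final fewer-than-n tags form their
    own set; with merge_remainder=True they are merged into the last
    full set, which then holds more than n but fewer than 2*n tags.
    """
    sets = [tags[i:i + n] for i in range(0, len(tags), n)]
    if merge_remainder and len(sets) > 1 and len(sets[-1]) < n:
        sets[-2].extend(sets.pop())  # fold remainder into last full set
    return sets

# 7 tags with n = 3 yield sets of sizes [3, 3, 1], or [3, 4] when the
# remainder is merged into the last full set.
tags = [f"tag{i}" for i in range(7)]
```

Either variant keeps each target video bounded to roughly N segments, which is what limits the spliced video's length.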
It should be noted that the segment tags in a segment tag set are kept in an ordered state, so that the splicing order can be determined later. The ordering may be random, or follow a preset rule, for example by the length of the corresponding video segment or by the upload time of the corresponding original video.
And S304, respectively generating the video identifier corresponding to each segment tag set.
One segment tag set corresponds to one target video; that is, the video segments corresponding to the segment tags in one set together form all the video data of one target video. It follows that the target video spliced in the present application is not produced by cutting each video segment out of its original video and joining them into a new file. The target video merely corresponds to the video segments, for instance through a video name, so the spliced target video can be regarded as a virtual video.
For example, as shown in fig. 4, 5 segment tags correspond to a target video F with a duration of 16 minutes. The 5 tags are segment tags of five original videos A, B, C, D, and E, and the video topics of all five tags are: game, game a, and game achievement b. The video segments corresponding to the five tags are the 5 video segments of target video F, so the 16-minute target video F can be played simply by playing these 5 video segments in sequence. Minutes 0 to 1 of target video F are the segment of minutes 9 to 10 of video A corresponding to tag 1; minutes 1 to 3 are the segment of minutes 13 to 15 of video B corresponding to tag 2; minutes 3 to 8 are the segment of minutes 20 to 25 of video C corresponding to tag 3; minutes 8 to 12 are the segment of minutes 16 to 20 of video D corresponding to tag 4; and minutes 12 to 16 are the segment of minutes 12 to 16 of video E corresponding to tag 5.
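The mapping from an ordered tag set to the virtual target video's timeline is a running sum of segment durations. The sketch below reproduces the fig. 4 example; the function name `build_timeline` and the minute-based tuples are assumptions made for illustration.

```python
def build_timeline(tag_set):
    """Compute where each source segment falls in the spliced 'virtual'
    target video. Each tag is (video_id, start_min, end_min); times are
    whole minutes for simplicity. Returns tuples of
    (target_start, target_end, video_id, start_min, end_min)."""
    timeline, offset = [], 0
    for video_id, start, end in tag_set:
        duration = end - start
        timeline.append((offset, offset + duration, video_id, start, end))
        offset += duration  # next segment begins where this one ends
    return timeline

# The five tags of target video F from the example above:
tags_f = [("A", 9, 10), ("B", 13, 15), ("C", 20, 25),
          ("D", 16, 20), ("E", 12, 16)]
timeline_f = build_timeline(tags_f)  # total duration: 16 minutes
```

No video data is touched here; the target video exists only as this correspondence between its own timeline and portions of the original videos.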
It should be noted that the generated video identifier may be a name corresponding to the video topic of the segment tags in the set; this name serves as the name of the target video and is returned to the user terminal for selection or query. Other information corresponding to the segment tag set, such as a video cover, may be generated at the same time.
Based on the segment tags generated in the foregoing embodiments, the combined segment tag sets, the generated video identifiers, and the correspondences among them, an embodiment of the present application provides a video splicing method, as shown in fig. 5, including:
s501, receiving a playing request of the target video, wherein the playing request at least carries a video identifier of the target video.
Optionally, the play request for the target video is received when the user terminal responds to the user's selection to play it. In addition to the video identifier, the play request may also carry other attribute information of the target video, such as its video theme and the corresponding segment tag set, and may further carry user information, so that the request can be authenticated and it can be determined whether the requester has permission to play the target video.
S502, acquiring the transmission stream address of the video segment corresponding to each segment label corresponding to the video identifier, wherein each segment label is a segment label in the segment label set corresponding to the video identifier.
It should be noted that video is generally transmitted and stored as a Transport Stream (TS): the video data of a video is stored in TS packets. Each TS packet contains a relatively small amount of video data, so an original video or a video segment consists of many TS packets, each with its own transport stream address.
Specifically, each video identifier corresponds to one segment tag set, and the data of the video segment corresponding to each segment tag in that set exists in its TS packets. Therefore, the transport stream address of every TS packet of every video segment corresponding to the segment tags of the video identifier is obtained.
S503, sequentially splicing the transport stream addresses to obtain the playlist of the target video.
Specifically, the transport stream addresses of the video segments corresponding to the segment tags may be sequentially arranged according to a segment tag ordering rule in the segment tag set, so as to obtain a playlist of the target video.
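Concatenating the per-segment TS addresses in tag order can be sketched as follows. The HLS-style `#EXTM3U` framing is an assumption for illustration; the patent itself only requires an ordered list of transport stream addresses, and the URLs are hypothetical.

```python
def build_playlist(per_tag_ts_addresses):
    """Splice the transport stream (TS) addresses of all video segments,
    in segment tag order, into one playlist for the target video.

    `per_tag_ts_addresses` is an ordered list (one entry per segment tag)
    of that segment's TS packet addresses.
    """
    lines = ["#EXTM3U"]                    # assumed HLS-like header
    for ts_addresses in per_tag_ts_addresses:
        lines.extend(ts_addresses)         # keep tag-set ordering
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

# TS addresses of two video segments (hypothetical URLs):
playlist = build_playlist([
    ["http://cdn.example/a/seg9.ts", "http://cdn.example/a/seg10.ts"],
    ["http://cdn.example/b/seg13.ts"],
])
```

Because the client simply fetches each address in order, segments drawn from different original videos play back as one continuous target video.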
And S504, returning the playlist of the target video.
Since a video consists of TS packets, when a user terminal requests a video it obtains a playlist containing the transport stream addresses of all the video's TS packets, and then fetches and plays the video data according to those addresses. In the video splicing method provided by this embodiment, before any play request for a target video is received, videos are divided into a plurality of video segments, a segment tag is generated for each segment, and a corresponding video identifier is generated for each segment tag set. When a play request for a target video is received, the transport stream addresses of the video segments corresponding to the segment tags of the carried video identifier are obtained and sequentially spliced into the target video's playlist, which is returned to the user terminal so that it plays the video segments in order. A plurality of videos are thus spliced into one target video, the switching and loading process between videos is eliminated, and user experience is effectively improved.
Based on the embodiment corresponding to fig. 1 or the embodiment corresponding to fig. 3, another embodiment of the present application provides a further video splicing method. That is, the method provided in this embodiment may be executed after only the embodiment corresponding to fig. 1 has been executed, or after the embodiment corresponding to fig. 3 has been executed. As shown in fig. 6, the method specifically includes:
S601, receiving a query request of a video theme.
Specifically, the user terminal sends the query request in response to a user's query for the video theme.
It should be noted that, when the method provided in this embodiment is executed after only the embodiment corresponding to fig. 1 has been executed, step S602 is executed directly after step S601. At this time no segment tag set exists yet, so no target video corresponding to a segment tag set exists to be returned to the user terminal; the segment tags therefore need to be combined according to the query request of the video theme. If the method provided in this embodiment is executed after the embodiment corresponding to fig. 3, the segment tag sets and their corresponding video identifiers have already been generated in advance and may be fed back to the user terminal. However, a pre-generated segment tag set whose video theme matches the queried video theme may not exist, so it is first necessary to judge whether a segment tag set consistent with the queried video theme exists. If it exists, the video identifier, video cover and the like corresponding to that segment tag set are returned directly; if not, go to step S602.
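The decision described above can be sketched as a small dispatch function. The shape of each pre-built set (`{"video_id", "topic", ...}`) and the function name are assumptions for illustration; only the branch logic (reuse an existing matching set, otherwise fall through to S602) comes from the description.

```python
def handle_topic_query(topic, prebuilt_sets):
    """Return the video identifiers of tag sets matching the queried theme,
    or None to signal that S602 (fresh combination) should run.

    prebuilt_sets: assumed list of {"video_id": str, "topic": str} dicts."""
    matches = [s for s in prebuilt_sets if s["topic"] == topic]
    if matches:  # tag sets consistent with the queried theme already exist
        return [m["video_id"] for m in matches]
    return None  # no match: proceed to S602 and combine tags afresh
```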
S602, combining the plurality of segment labels belonging to the queried video theme respectively to obtain a plurality of segment label sets.
It should be noted that, the specific implementation process of step S602 may refer to step S303 in the foregoing embodiment accordingly, and details are not described here again.
And S603, respectively generating and returning the video identification and the video cover corresponding to each fragment tag set.
Wherein one clip tag set corresponds to one target video.
Optionally, the generated video identifier may be the name of the target video. The name may be generated from the queried video theme together with the video themes in the segment tags of the segment tag set. For example, if the queried video theme is F1 racing videos, the generated video identifiers may be "F1 racing accident video", "F1 racing overtaking video", and so on, according to the different video themes in the segment tags of each set. As for the video cover, one image frame may be selected from the video segment corresponding to any segment tag in the set to serve as the cover.
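A minimal sketch of the naming and cover-selection rules just described. The naming template, the `frames_of` callback, and the choice of the first frame are assumptions; the patent only requires that the name combine the queried theme with the tags' theme and that the cover be one frame from any tagged segment.

```python
import random

def make_identifier(query_topic: str, tag_set: list) -> str:
    """Assumed naming rule: queried theme + the sub-theme shared by the tags
    in the set (e.g. 'F1 racing' + 'accident' -> 'F1 racing accident video')."""
    sub_topic = tag_set[0]["topic"]
    return f"{query_topic} {sub_topic} video"

def pick_cover(tag_set: list, frames_of) -> str:
    """S603 cover rule: one image frame from the video segment of any tag.
    frames_of is an assumed callback returning a segment's frames."""
    tag = random.choice(tag_set)     # any tag in the set is acceptable
    return frames_of(tag)[0]         # assumed: take that segment's first frame
```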
S604, receiving a playing request of the target video, wherein the playing request at least carries the video identification of the target video.
It should be noted that, the specific implementation process of this step may refer to step S501 in the foregoing method embodiment accordingly, and details are not described here again.
S605, when it is determined that the video identifier has a corresponding segment tag set, determining, for the segment tags in each segment tag set, the address information of the original video corresponding to the identifier of the original video, and acquiring, according to the address information, the transport stream address of the video segment corresponding to the time period in the original video.
It should be noted that the user terminal is provided with not only spliced virtual videos but also original videos, so the target video selected by the user terminal for playback is not necessarily a spliced virtual video obtained through a query; it may also be an original video. Therefore, after a play request of a target video is received, whether the target video is a spliced video or an original video is determined according to whether the video identifier has a corresponding segment tag set.
Specifically, when it is determined that the video identifier has a corresponding segment tag set, that is, when the target video is determined to be a spliced video, the address information of the original video corresponding to each original-video identifier is determined for the segment tags in each segment tag set. The address information of an original video refers to the address of the Content Delivery Network (CDN) server on which the original video is located. It should be noted that, to reduce network congestion and to improve access response speed and hit rate, data is distributed across different CDN servers, so different original videos may be stored on different CDN servers. The address of the CDN server on which an original video is located is queried through the identifier of the original video in the segment tag, and then the transport stream address of the video segment corresponding to the video time period of the original video is obtained from that CDN server according to its address. The video time period is the time period in the segment tag, which indicates which portion of the original video the video segment corresponding to the segment tag occupies.
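Step S605 can be sketched as a two-stage lookup: original-video identifier to CDN server address, then time period to TS addresses on that server. The `cdn_directory` mapping, host names, URL layout, and the 5-second packet duration are all illustrative assumptions.

```python
# Assumed directory: original-video identifier -> CDN server address info.
cdn_directory = {"videoA": "cdn-1.example", "videoB": "cdn-2.example"}

def segment_ts_addresses(tag: dict) -> list:
    """Resolve the CDN server from the tag's original-video identifier, then
    derive the TS addresses covering the tag's time period (S605)."""
    host = cdn_directory[tag["origin_id"]]  # address info of the CDN server
    start, end = tag["period"]              # time period within the original
    return [f"http://{host}/{tag['origin_id']}/{t}.ts"
            for t in range(start, end, 5)]  # assumed 5-second TS packets
```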
In addition to the plurality of CDN servers storing video, the system generally includes a plurality of servers for processing play requests from user terminals. Therefore, after receiving a play request and performing authentication, the server selected by load balancing needs to determine whether it can process the play request, that is, whether it can schedule the requested target video. If it can, it schedules the target video. If it cannot, it determines a server capable of processing the play request, that is, the scheduling server of the target video, and returns the address of that scheduling server so that the user terminal can request the target video from it; in other words, the play request is forwarded. The server capable of scheduling the target video then acquires the transport stream addresses from each corresponding CDN server, splices them into a playlist, and returns the playlist to the user terminal.
For example, as shown in fig. 7, the user terminal sends a play request for the video F, requesting authentication and scheduling of the video F. The authentication/scheduling server receives the play request and, after authentication, if the video F is determined to be a spliced virtual video, determines whether it is itself the scheduling server of the video F. If not, the play request is forwarded, and the address of the scheduling server where the video F is located is returned to the user terminal. The user terminal then requests the playlist of the video F from that scheduling server. According to the identifier of the video F, the scheduling server of the video F determines, in the corresponding segment tag set, the address of the CDN server where the original video of the video segment corresponding to each segment tag is located. The video segments forming the video F come from five original videos A, B, C, D, and E, which are stored on different CDN servers. Therefore, the scheduling server of the video F needs to obtain the transport stream addresses of the video segments forming the video F from the five corresponding CDN servers, splice them into a complete playlist, and finally return the playlist of the video F to the user terminal.
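The dispatch decision of the fig. 7 example can be sketched as follows. The request/response dictionaries, parameter names, and the sets passed in are assumptions chosen for illustration; only the serve-or-redirect logic comes from the description.

```python
def dispatch(play_request: dict, my_schedulable: set, scheduler_of: dict) -> dict:
    """Authentication/scheduling server's decision after authentication:
    serve the request if this server can schedule the target video,
    otherwise redirect the terminal to the video's scheduling server."""
    video_id = play_request["video_id"]
    if video_id in my_schedulable:            # this server can schedule it
        return {"action": "serve", "video_id": video_id}
    # Forward the play request: return the scheduler address to the terminal.
    return {"action": "redirect", "scheduler": scheduler_of[video_id]}
```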
And S606, sequentially splicing the transport stream addresses to obtain and return a playlist of the target video.
The specific implementation of step S606 may refer to step S503 and step S504 in the above method embodiments, which are not described herein again.
Another embodiment of the present application provides a video splicing apparatus, as shown in fig. 8, including:
a first receiving unit 801 is configured to receive a play request of a target video. The playing request at least carries the video identifier of the target video.
It should be noted that, the specific working process of the first receiving unit 801 may refer to step S501 in the foregoing method embodiment accordingly, and details are not repeated here.
An obtaining unit 802, configured to obtain a transport stream address of a video segment corresponding to each segment tag corresponding to the video identifier.
Each segment label is a segment label in a segment label set corresponding to the video identifier.
It should be noted that, the specific working process of the obtaining unit 802 may refer to step S502 in the foregoing method embodiment accordingly, which is not described herein again.
And a splicing unit 803, configured to splice the transport stream addresses in sequence to obtain a playlist of the target video.
It should be noted that, the specific working process of the splicing unit 803 may refer to step S503 in the foregoing method embodiment accordingly, which is not described herein again.
A sending unit 804, configured to return a playlist of the target video.
It should be noted that, the specific working process of the sending unit 804 may refer to step S504 in the foregoing method embodiment accordingly, which is not described herein again.
Optionally, in another embodiment of the present application, a preprocessing unit is further included. As shown in fig. 9, the preprocessing unit includes:
the dividing unit 901 is configured to divide, for each uploaded original video, image frames that are continuous in the original video and belong to the same video topic into one video segment.
It should be noted that, the specific working process of the dividing unit 901 may refer to step S101 in the foregoing method embodiment accordingly, which is not described herein again.
A first generating unit 902, configured to generate a clip tag corresponding to each video clip; wherein the segment label at least comprises a time period of the video segment in the original video, an identification of the original video and a video subject.
It should be noted that, the specific working process of the first generating unit 902 may refer to step S102 in the foregoing method embodiment accordingly, and details are not described here again.
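The minimum contents of a clip tag stated above (time period in the original video, original-video identifier, video subject) could be made concrete as a small record type. The field names and types are assumptions; only the three required items come from the description.

```python
from dataclasses import dataclass

@dataclass
class SegmentTag:
    """Assumed concrete shape of a clip tag: at minimum the original-video
    identifier, the segment's time period within it, and its video subject."""
    origin_id: str   # identifier of the original video
    start_s: float   # start of the time period, seconds into the original
    end_s: float     # end of the time period
    topic: str       # video subject of this segment
```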
Optionally, in another embodiment of the present application, referring also to fig. 9, the preprocessing unit further includes:
a first combining unit 903, configured to combine the segment tags belonging to the same video topic, respectively, to obtain a plurality of segment tag sets.
It should be noted that, the specific working process of the first combining unit 903 may refer to step S303 in the foregoing method embodiment accordingly, which is not described herein again.
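The first combining unit's grouping of tags by shared video subject amounts to a bucketing pass, sketched here under the assumption that each tag is a dict with a `"topic"` key (the tag shape is not fixed by the patent).

```python
from collections import defaultdict

def combine_by_topic(tags: list) -> dict:
    """Group segment tags sharing a video subject into tag sets,
    keyed by subject: one resulting set per subject."""
    sets = defaultdict(list)
    for tag in tags:
        sets[tag["topic"]].append(tag)
    return dict(sets)
```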
A second generating unit 904, configured to generate video identifiers corresponding to each segment label set respectively; wherein one clip tag set corresponds to one target video.
It should be noted that, the specific working process of the second generating unit 904 may refer to step S304 in the foregoing method embodiment accordingly, and is not described herein again.
Optionally, in another embodiment of the present application, the video splicing apparatus further includes:
and the second receiving unit is used for receiving the query request of the video theme.
And the second combination unit is used for respectively combining the plurality of fragment tags belonging to the video theme to obtain a plurality of fragment tag sets.
And the third generating unit is used for respectively generating and returning the video identification and the video cover corresponding to each fragment tag set. Wherein one clip tag set corresponds to one target video.
It should be noted that, for the specific working process of the unit in the embodiment of the present application, reference may be made to step S601 to step S603 in the method embodiment, which is not described herein again.
Optionally, in another embodiment of the present application, the obtaining unit 802 includes:
and the obtaining subunit is configured to, when it is determined that the video identifier has the corresponding segment tag set, determine, for the segment tag in each segment tag set, address information of the original video corresponding to the identifier of the original video, and obtain, according to the address information, a transport stream address of the video segment corresponding to the time period of the original video.
It should be noted that, the specific working process of the obtaining subunit may refer to step S605 in the foregoing method embodiment accordingly, and details are not described here again.
In the video splicing apparatus provided by this embodiment, a segment tag corresponding to each video segment is generated in advance, before any play request of a target video is received, and a corresponding video identifier is generated for each segment tag set. Thus, when the first receiving unit receives a play request of a target video, the obtaining unit obtains, according to the video identifier carried in the play request, the transport stream address of the video segment corresponding to each segment tag in the segment tag set of that identifier, and the splicing unit sequentially splices the transport stream addresses to obtain and return the playlist of the target video. By splicing the transport stream addresses of the video segments corresponding to the segment tags, a plurality of videos are spliced into one target video without any switching or loading process between the videos, which effectively improves user experience.
Another embodiment of the present application provides a computer storage medium storing a program that, when executed, implements a method for splicing videos as provided in any one of the above method embodiments.
It should be noted that when the program is executed to implement the video splicing method provided by any one of the above method embodiments, the specific implementation process may refer to the above method embodiments accordingly.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (6)
1. A video splicing method is characterized by comprising the following steps:
for each uploaded original video, dividing the original video into a plurality of video segments by way of definition, without clipping the original video into the plurality of video segments; dividing image frames that are continuous in the original video and belong to the same video subject into one video segment;
generating a fragment tag corresponding to each video fragment, and storing the fragment tag in the attribute information of the original video; the segment tag comprises at least a time period of the video segment in the original video, an identification of the original video, and a video subject; wherein, one video clip corresponds to a plurality of different video themes; one frame of image frame corresponds to a plurality of video subjects, so that one frame of image frame can be divided into a plurality of different video segments, and overlapped image frames can exist among the video segments;
respectively combining the fragment tags belonging to the same video theme to obtain a plurality of fragment tag sets;
respectively generating a video identifier corresponding to each fragment tag set, wherein the video identifier corresponding to the fragment tag set is a name corresponding to a video theme of a fragment tag in the fragment tag set; one of the segment tag sets corresponds to one target video so as to establish a corresponding relation between the target video and the video segments, and the video segments do not need to be clipped from the original video and spliced into the target video;
receiving a playing request of a target video through an authentication/scheduling server, wherein the playing request of the target video is used for requesting authentication and scheduling the target video, after authentication, if the target video is determined to be a spliced virtual video, determining whether a current server is a scheduling server of the target video, if not, forwarding the playing request, and returning the address of the scheduling server where the target video is located to a user terminal so that the user terminal can request a playing list of the target video from the scheduling server where the target video is located; the playing request at least carries a video identifier of the target video, and the video identifier of the target video is the name of the target video;
acquiring, by a scheduling server where the target video is located, a transport stream address of a video segment corresponding to each segment tag corresponding to the video identifier, wherein an address of a CDN server where an original video of the video segment corresponding to each segment tag is located in a corresponding segment tag set is determined according to the video identifier of the target video; storing a plurality of original videos to which each video clip forming the target video belongs on a plurality of different CDN servers; acquiring the transmission stream addresses of the video clips forming the target video from the different CDN servers; each segment label is a segment label in a segment label set corresponding to the video identifier;
sequentially splicing all transport stream addresses corresponding to all the segment labels according to a segment label sequencing rule in the segment label set to obtain a play list of the target video; the fragment tag ordering rule is random ordering, ordering according to the length of a video fragment or ordering according to the uploading time of a corresponding original video;
and returning the play list of the target video to the user terminal.
2. The method of claim 1, wherein, prior to receiving the play request of the target video, the method further comprises:
receiving a query request of a video theme;
respectively combining a plurality of fragment tags belonging to the video theme to obtain a plurality of fragment tag sets;
respectively generating and returning a video identifier and a video cover corresponding to each fragment tag set; wherein one of the segment tag sets corresponds to one of the target videos.
3. The method according to claim 1, wherein the obtaining the transport stream address of the video segment corresponding to each segment tag corresponding to the video identifier comprises:
when the video identification is determined to have the corresponding segment label set, determining the address information of the original video corresponding to the identification of the original video aiming at the segment label in each segment label set, and acquiring the transmission stream address of the video segment corresponding to the time period of the original video according to the address information.
4. A video stitching device, comprising:
a first receiving unit, configured to receive a play request of a target video through an authentication/scheduling server, where the play request of the target video is used to request authentication and schedule the target video, and after authentication, if it is determined that the target video is a spliced virtual video, determine whether a current server is a scheduling server of the target video, and if not, forward the play request, and return an address of the scheduling server where the target video is located to a user terminal, so that the user terminal requests a play list of the target video from the scheduling server where the target video is located; the playing request at least carries a video identifier of the target video, and the video identifier of the target video is the name of the target video;
an obtaining unit, configured to obtain, by a scheduling server where the target video is located, a transport stream address of a video segment corresponding to each segment tag corresponding to the video identifier, where, according to the video identifier of the target video, an address of a CDN server where an original video of the video segment corresponding to each segment tag is located in a corresponding segment tag set is determined; storing a plurality of original videos to which each video clip forming the target video belongs on a plurality of different CDN servers; acquiring the transmission stream addresses of the video clips forming the target video from the different CDN servers; each segment label is a segment label in a segment label set corresponding to the video identifier;
the splicing unit is used for sequentially splicing the transport stream addresses corresponding to the fragment labels according to a fragment label sequencing rule in the fragment label set to obtain a play list of the target video; the fragment tag ordering rule is random ordering, ordering according to the length of a video fragment or ordering according to the uploading time of a corresponding original video;
the sending unit is used for returning the play list of the target video to the user terminal;
the dividing unit is used for dividing each uploaded original video into a plurality of video segments by way of definition, without clipping the original video into the plurality of video segments; image frames that are continuous in the original video and belong to the same video subject are divided into one video segment;
the first generation unit is used for generating a fragment label corresponding to each video fragment and storing the fragment label in the attribute information of the original video; wherein the segment tags include at least a time period of the video segment in the original video, an identification of the original video, and a video topic; wherein, one video clip corresponds to a plurality of different video themes; one frame of image frame corresponds to a plurality of video subjects, so that one frame of image frame can be divided into a plurality of different video segments, and overlapped image frames can exist among the video segments;
the first combination unit is used for respectively combining the fragment tags belonging to the same video theme to obtain a plurality of fragment tag sets;
a second generating unit, configured to generate a video identifier corresponding to each segment tag set, where the video identifier corresponding to the segment tag set is a name corresponding to a video topic of a segment tag in the segment tag set; and one fragment tag set corresponds to one target video so as to establish the corresponding relation between the target video and the video fragments, and the video fragments are not required to be clipped from the original video and spliced into the target video.
5. The apparatus of claim 4, further comprising:
the second receiving unit is used for receiving a query request of a video theme;
the second combination unit is used for respectively combining the fragment tags belonging to the video theme to obtain a plurality of fragment tag sets;
the third generation unit is used for respectively generating and returning the video identification and the video cover corresponding to each fragment tag set; wherein one of the segment tag sets corresponds to one of the target videos.
6. A computer storage medium storing a program for implementing the video splicing method according to any one of claims 1 to 3 when the program is executed.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111217461.0A CN113992942B (en) | 2019-12-05 | Video stitching method and device and computer storage medium | |
CN201911237610.2A CN110933460B (en) | 2019-12-05 | 2019-12-05 | Video splicing method and device and computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911237610.2A CN110933460B (en) | 2019-12-05 | 2019-12-05 | Video splicing method and device and computer storage medium |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111217461.0A Division CN113992942B (en) | 2019-12-05 | Video stitching method and device and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110933460A CN110933460A (en) | 2020-03-27 |
CN110933460B true CN110933460B (en) | 2021-09-07 |
Family
ID=69857229
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911237610.2A Active CN110933460B (en) | 2019-12-05 | 2019-12-05 | Video splicing method and device and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110933460B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111711861B (en) * | 2020-05-15 | 2022-04-12 | 北京奇艺世纪科技有限公司 | Video processing method and device, electronic equipment and readable storage medium |
CN111787341B (en) * | 2020-05-29 | 2023-12-05 | 北京京东尚科信息技术有限公司 | Guide broadcasting method, device and system |
CN113259708A (en) * | 2021-04-06 | 2021-08-13 | 阿里健康科技(中国)有限公司 | Method, computer device and medium for introducing commodities based on short video |
CN113905274B (en) * | 2021-09-30 | 2024-05-17 | 安徽尚趣玩网络科技有限公司 | Video material splicing method and device based on EC (electronic control) identification |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104093067A (en) * | 2014-06-23 | 2014-10-08 | 广州三星通信技术研究有限公司 | Device and method for sharing and playing audio and visual fragments in terminal |
CN107517411A (en) * | 2017-09-04 | 2017-12-26 | 青岛海信电器股份有限公司 | A kind of video broadcasting method based on GStreamer frameworks |
WO2019042341A1 (en) * | 2017-09-04 | 2019-03-07 | 优酷网络技术(北京)有限公司 | Video editing method and device |
CN110198432A (en) * | 2018-10-30 | 2019-09-03 | 腾讯科技(深圳)有限公司 | Processing method, device, computer-readable medium and the electronic equipment of video data |
CN110381371A (en) * | 2019-07-30 | 2019-10-25 | 维沃移动通信有限公司 | A kind of video clipping method and electronic equipment |
CN110475121A (en) * | 2018-05-10 | 2019-11-19 | 腾讯科技(深圳)有限公司 | A kind of video data handling procedure, device and relevant device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
HUE042122T2 (en) * | 2011-06-08 | 2019-06-28 | Koninklijke Kpn Nv | Locating and retrieving segmented content |
US10275430B2 (en) * | 2015-06-29 | 2019-04-30 | Microsoft Technology Licensing, Llc | Multimodal sharing of content between documents |
US9426543B1 (en) * | 2015-12-18 | 2016-08-23 | Vuclip (Singapore) Pte. Ltd. | Server-based video stitching |
-
2019
- 2019-12-05 CN CN201911237610.2A patent/CN110933460B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104093067A (en) * | 2014-06-23 | 2014-10-08 | 广州三星通信技术研究有限公司 | Device and method for sharing and playing audio and visual fragments in terminal |
CN107517411A (en) * | 2017-09-04 | 2017-12-26 | 青岛海信电器股份有限公司 | A kind of video broadcasting method based on GStreamer frameworks |
WO2019042341A1 (en) * | 2017-09-04 | 2019-03-07 | 优酷网络技术(北京)有限公司 | Video editing method and device |
CN110475121A (en) * | 2018-05-10 | 2019-11-19 | 腾讯科技(深圳)有限公司 | A kind of video data handling procedure, device and relevant device |
CN110198432A (en) * | 2018-10-30 | 2019-09-03 | 腾讯科技(深圳)有限公司 | Processing method, device, computer-readable medium and the electronic equipment of video data |
CN110381371A (en) * | 2019-07-30 | 2019-10-25 | 维沃移动通信有限公司 | A kind of video clipping method and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN113992942A (en) | 2022-01-28 |
CN110933460A (en) | 2020-03-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110933460B (en) | Video splicing method and device and computer storage medium | |
US11670271B2 (en) | System and method for providing a video with lyrics overlay for use in a social messaging environment | |
US9111285B2 (en) | System and method for representing content, user presence and interaction within virtual world advertising environments | |
US12047615B2 (en) | Methods and systems for dynamic routing of content using a static playlist manifest | |
US20130343722A1 (en) | System and method for distributed and parallel video editing, tagging and indexing | |
CN110769270A (en) | Live broadcast interaction method and device, electronic equipment and storage medium | |
US20130013699A1 (en) | Online Photosession | |
CN102170584A (en) | Method, device and system for playing media between synchronic HS (HTTP (HyperText Transfer Protocol) Streaming) terminal equipment | |
WO2008110087A1 (en) | Mehtod for playing multimedia, system, client-side and server | |
CN102196008A (en) | Peer-to-peer downloading method, video equipment and content transmission method | |
CN113518247A (en) | Video playing method, related equipment and computer readable storage medium | |
CN109218765B (en) | Live video room recommendation method and device | |
CN106331089A (en) | Video play control method and system | |
CN110996145A (en) | Multimedia resource playing method, system, terminal equipment and server | |
CN111083504B (en) | Interaction method, device and equipment | |
JP6426258B1 (en) | Server and program | |
CN113992942B (en) | Video stitching method and device and computer storage medium | |
CN111149366A (en) | Server and program | |
US20220417619A1 (en) | Processing and playing control over interactive video | |
CN113536036A (en) | Video data display method and device, electronic equipment and storage medium | |
CN109999490B (en) | Method and system for reducing networking cloud application delay | |
JP6426257B1 (en) | Server and program | |
CN107172451B (en) | Video playing control method and device | |
JP2020523686A5 (en) | ||
JP7237927B2 (en) | Information processing device, information processing device and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 40022646 Country of ref document: HK |
|
GR01 | Patent grant | ||
GR01 | Patent grant |