CN110213672B - Video generation method, video playing method, video generation system, video playing device, video storage medium and video equipment - Google Patents

Info

Publication number: CN110213672B
Application number: CN201910598337.XA
Authority: CN (China)
Prior art keywords: video, target, fragment, original, playing
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN110213672A (en)
Inventor: 郜光耀
Current Assignee: Tencent Technology Shenzhen Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Tencent Technology Shenzhen Co Ltd
Events:
  • Application filed by Tencent Technology Shenzhen Co Ltd
  • Priority to CN201910598337.XA
  • Publication of CN110213672A
  • Application granted
  • Publication of CN110213672B
  • Status: Active
  • Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8455 Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • H04N 21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application relates to a video generation and playing method, system, apparatus, storage medium and device. The video generation method comprises the following steps: receiving a target video request; analyzing the target video request to obtain the starting and stopping time points of the target video to be generated in the original video; determining a target fragment matched with the starting and stopping time points from the video fragments of the original video; generating an index file corresponding to the target video according to the index address of the target fragment; and allocating a video identifier to the target video, and associating the index file with the video identifier. The scheme provided by the application can save storage space.

Description

Video generation method, video playing method, video generation system, video playing device, video storage medium and video equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video generating method, a video playing method, a video generating device, a video playing device, a computer-readable storage medium, and a computer device.
Background
With the rapid development of internet technology, video has become more and more popular with users. Compared with text and pictures, video carries richer and more expressive information, and users can watch videos through a terminal anytime and anywhere, for example through a video client or a content-browsing client. The video to be watched may be a complete original video or a segment cut from the original video, for example one showing a particularly exciting or eye-catching moment.
However, the current way of obtaining such a segment is to manually cut a video segment out of the original video and store the cut-out segment separately, which occupies a large amount of additional storage space.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a video generation method, an apparatus, a computer-readable storage medium and a computer device that address the technical problem that the existing method of producing a segment of video from an original video occupies a large amount of storage space.
A video generation method, comprising:
receiving a target video request;
analyzing the target video request to obtain the starting and stopping time points of the target video to be generated in the original video;
determining a target fragment matched with the starting and stopping time point from the video fragments of the original video;
generating an index file corresponding to the target video according to the index address of the target fragment;
and allocating a video identifier to the target video, and associating the index file with the video identifier.
A video playback method, comprising:
entering a video playing entry page;
displaying a video cover in the video playing entry page;
triggering a video playing event in the video cover;
responding to the video playing event, acquiring an index file corresponding to the target video linked by the video cover, and obtaining, according to the index file, the index address corresponding to a target fragment of the target video; the target fragment is determined from the video fragments of an original video as a fragment matching start-stop time points, the start-stop time points being the time points at which the target video starts and stops in the original video;
and requesting a target fragment according to the index address, and playing the target video according to the target fragment.
In one embodiment, the index file is an index file corresponding to the target video, which is generated by a video storage server according to the index address of the target segment, and the method further includes: and requesting the target fragment from the video storage server according to the index address.
In one embodiment, the target fragment is determined by: obtaining, by a video storage server, an original index file corresponding to the original video; analyzing the original index file to obtain the playing duration corresponding to each video fragment of the original video; splicing the playing durations in sequence according to the playing order of the video fragments of the original video to obtain the playing time period of each video fragment in the original video; and determining the video fragments corresponding to the playing time periods that contain the start-stop time points.
In one embodiment, the start-stop time points include a start time point and an end time point; the method further comprises the following steps: taking the video fragment corresponding to the playing time period containing the starting time point as a starting target fragment through a video storage server; and taking the video segment corresponding to the playing time period containing the ending time point as an ending target segment.
In one embodiment, the method further comprises: analyzing the original index file through a video storage server to obtain an index address corresponding to the video fragment of the original video; when the starting time point is in the playing time period corresponding to the starting target fragment, acquiring the starting target fragment according to the index address corresponding to the starting target fragment; segmenting a video from the starting target segment according to the starting time point and the end point of the playing time period corresponding to the starting target segment to obtain a starting segment corresponding to the target video; and storing the initial fragment, and generating an index file corresponding to the target video according to the storage address of the initial fragment.
In one embodiment, the method further comprises: when the starting time point is the starting point of the playing time period corresponding to the starting target fragment, the video storage server takes the starting target fragment as the starting fragment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the starting fragment.
In one embodiment, the method further comprises: analyzing the original index file through a video storage server to obtain an index address corresponding to the video fragment of the original video; when the ending time point is in the playing time period corresponding to the ending target fragment, acquiring the ending target fragment according to the index address corresponding to the ending target fragment; segmenting a video from the ending target segment according to the starting point and the ending time point of the playing time period corresponding to the ending target segment to obtain an ending segment corresponding to the target video; and storing the ending fragments, and generating an index file corresponding to the target video according to the storage addresses of the ending fragments.
In one embodiment, the method further comprises: when the ending time point is the ending point of the playing time period corresponding to the ending target fragment, the video storage server takes the ending target fragment as the ending fragment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the ending fragment.
In one embodiment, the method further comprises: taking the video fragment between the starting target fragment and the ending target fragment as an intermediate target fragment through a video storage server; analyzing the original index file to obtain an index address corresponding to the video fragment of the original video; directly taking the intermediate target fragment as an intermediate fragment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the middle fragment.
In one embodiment, the method further comprises: acquiring a starting fragment of the target video through a video storage server; capturing a screenshot corresponding to a starting time point in the starting and stopping time points from a playing picture corresponding to the starting fragment; and taking the screenshot as a video cover of the target video, and associating the video cover with the video identifier of the target video.
A video processing system comprises a video editing server, a video storage server, a video application server and a terminal, wherein:
the video editing server is used for sending a target video request to the video storage server, wherein the target video request carries starting and stopping time points of a target video to be generated in an original video;
the video storage server is used for receiving and analyzing the target video request to obtain the starting and stopping time points of the target video to be generated in the original video; determining a target fragment matched with the starting and stopping time points from the video fragments of the original video, and generating an index file corresponding to the target video according to an index address of the target fragment; and allocating a video identifier to the target video, and associating the index file with the video identifier;
the video storage server is further used for returning the video identifier to the video editing server and sending the video identifier to the video application server through the video editing server;
the video application server is further used for receiving a video query request sent by the terminal, responding to the video query request, and returning a video identifier corresponding to the target video to the terminal;
the terminal is used for acquiring a corresponding index file according to the video identifier and analyzing the index file to obtain an index address corresponding to a target fragment of the target video; and requesting a target fragment according to the index address, and playing the target video according to the target fragment.
A video generation apparatus, the apparatus comprising:
the receiving module is used for receiving a target video request;
the analysis module is used for analyzing the target video request to obtain the starting and stopping time points of the target video to be generated in the original video;
a target fragment determining module, configured to determine a target fragment matching the start-stop time point from video fragments of the original video;
the index file generation module is used for generating an index file corresponding to the target video according to the index address of the target fragment;
and the storage module is used for allocating a video identifier to the target video and associating the index file with the video identifier.
A video playback device, the device comprising:
the display module is used for entering a video playing entry page; displaying a video cover in the video playing entry page;
the acquisition module is used for triggering a video playing event in the video cover;
the index address acquisition module is used for responding to the video playing event, acquiring an index file corresponding to a target video linked by the video cover, and acquiring an index address corresponding to a target fragment of the target video according to the index file; the target fragment is determined according to a video fragment matched with a start-stop time point in video fragments of an original video, wherein the start-stop time point is a time point of the target video at the start-stop position in the original video;
and the playing module is used for requesting a target fragment according to the index address and playing the target video according to the target fragment.
A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above-described video generating method or video playing method.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the video generation method described above.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the video playback method described above.
The video generating and playing methods, the video generating and playing devices, the computer-readable storage medium and the computer equipment directly determine the target fragments of the target video to be generated from the video fragments of the original video, generate the index file corresponding to the target video according to the index addresses of the target fragments, and associate the index file with the video identifier allocated to the target video. When the video needs to be played, the target video can be played directly according to the index addresses of the target fragments, without separately storing a complete copy of the target video, which saves storage space.
Drawings
FIG. 1 is a diagram of an application environment of a video generation method in one embodiment;
FIG. 2 is a schematic flow chart diagram illustrating a video generation method in one embodiment;
FIG. 3 is a timing diagram for playing a video via a video player in one embodiment;
FIG. 4 is a flowchart illustrating the steps of determining a target segment matching a start-stop time point from video segments of an original video according to an embodiment;
FIG. 5 is a diagram illustrating a playing time period corresponding to a video segment of an original video in an embodiment;
FIG. 6 is a diagram illustrating video slices of a target video generated from an original video, in one embodiment;
FIG. 7 is an interface diagram illustrating a video cover corresponding to a target video in one embodiment;
FIG. 8 is a schematic flow chart diagram illustrating a method for video generation in one embodiment;
FIG. 9 is a diagram showing an application environment of a video generation method in another embodiment;
FIG. 10 is a flowchart illustrating a video playback method according to an embodiment;
FIG. 11 is a block diagram showing the structure of a video generating apparatus according to an embodiment;
FIG. 12 is a block diagram showing the structure of a video playback device according to an embodiment;
FIG. 13 is a block diagram showing the structure of a computer device in one embodiment;
fig. 14 is a block diagram showing a configuration of a computer device in another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is a diagram of an application environment of a video generation method in one embodiment. Referring to fig. 1, the video generation method is applied to a video generation system. The video generation system includes a video editing server 110 and a video storage server 120. The video editing server 110 and the video storage server 120 are connected via a network. The video storage server 120 may receive a target video request sent by the video editing server 110, analyze the target video request, obtain start and stop time points of a target video to be generated in an original video, determine a target segment matched with the start and stop time points from video segments of the original video, generate an index file corresponding to the target video according to an index address of the target segment, and allocate a video identifier to the target video and associate the index file with the video identifier. The video editing server 110 and the video storage server 120 may be integrated together or implemented as a server cluster composed of a plurality of servers.
In some embodiments, the video generation method provided by the present application may also be executed by a terminal, where a client supporting video generation function is installed and run on the terminal, and the client may implement the video generation method provided by the present application when running. In other embodiments, the video generation method provided by the present application may also be executed by a terminal and a server together.
In one embodiment, as shown in FIG. 2, a video generation method is provided. The embodiment is mainly illustrated by applying the method to the video storage server 120 in fig. 1. Referring to fig. 2, the video generation method specifically includes the following steps:
s202, receiving a target video request.
Here, the target video request is a request for generating a target video. The target video to be generated is a segment of video within the original video, and the original video may be a piece of original video data, such as a movie, a TV episode or a recorded video. In one embodiment, according to playing duration, the original video may be referred to as a long video, and the target video generated from the original video may be referred to as a short video.
Specifically, the video storage server may receive a target video request sent by the video editing server. The video storage server stores a large number of media resources, such as copyrighted video data, which include original videos and may also include target videos generated from original videos. The video editing server can query the video storage server for an original video and then submit a target video request for generating a new target video from the queried original video.
In a specific application scenario, an editor may pull a video asset from a video storage server through a video editing server. For example, an editor may input an original video name through the video editing server, and query and display an original video stored on the video storage server and related to the video name through the video editing server. Optionally, the video editing server may further query a target video generated according to the original video, and display a corresponding target video list according to the queried target video, where video information related to the target video may be presented in the target video list, and the video information includes a start-stop time point of the target video based on the original video, a video identifier of the target video, a title of the target video, and the like. After inquiring the original video related to the input original video name, an editor can designate a starting and stopping time point of a target video to be generated based on the original video through the video editing server, and submit a corresponding target video request to the video storage server according to the starting and stopping time point.
In one embodiment, besides specifying the start-stop time point of the target video to be generated in the original video, an editor may also specify the video identifier of the original video through the video editing server, so that the video storage server may locally find the original video corresponding to the video identifier. Certainly, in another application scenario, after querying an original video, an editor may enter an editing page corresponding to the original video, specify a start-stop time point in the editing page corresponding to the original video, and automatically submit a target video request to a video storage server after confirmation, so that the video storage server may detect that the target video request is submitted in the editing page corresponding to the original video, thereby automatically analyzing a video identifier corresponding to the original video on which the target video to be generated is based.
And S204, analyzing the target video request to obtain the starting and stopping time points of the target video to be generated in the original video.
Specifically, the target video request may carry the start-stop time point of the target video to be generated in the original video, and therefore, after receiving the target video request sent by the video editing server, the video storage server analyzes the target video request to obtain the start-stop time point of the target video to be generated in the original video.
The start-stop time points comprise a start time point and an end time point: the start time point is the time point at which the target video starts playing in the original video, and the end time point is the time point at which it finishes playing. The start and stop time points therefore determine which portion of the original video the target video corresponds to.
For example, an editor pulls an original video from the video storage server through the video editing server, and the playing duration of the original video is 02:05:20. If a target video is to be generated from the video data in the interval from 01:01:25 to 01:05:30, the editor can specify, through the video editing server, 01:01:25 as the start time point and 01:05:30 as the end time point of the target video to be generated in the original video, and a target video request is generated according to the specified start and stop time points. After receiving the target video request, the video storage server can analyze the target video request to obtain the start and stop time points of the target video in the original video.
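For illustration only, a minimal Python sketch of how such HH:MM:SS start and stop time points can be converted into offsets in seconds; the function name is invented here, and the numbers simply restate the example above:

    def hms_to_seconds(hms: str) -> int:
        """Convert an 'HH:MM:SS' time string into a number of seconds."""
        hours, minutes, seconds = (int(part) for part in hms.split(":"))
        return hours * 3600 + minutes * 60 + seconds

    # The start and stop time points from the example above.
    start = hms_to_seconds("01:01:25")   # 3685 s into the original video
    stop = hms_to_seconds("01:05:30")    # 3930 s into the original video
    assert stop - start == 245           # the requested target video spans 245 s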
S206, determining the target fragment matched with the starting and stopping time points from the video fragments of the original video.
Here, a video fragment is a unit of storage for video data. In order to reduce the time it takes for a video to start playing, a video storage server generally transcodes a complete piece of video data into small video fragments and stores all of the resulting fragments. The video storage server also writes the playing duration and index address of each fragment into an index file; when the video is played, the fragments can be requested in order according to the index file, so that playback can begin immediately and the whole video can still be played. The video fragments may be video stream files in the TS (Transport Stream) format, and the index file may be, for example, an M3U8 file.
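As a concrete illustration, the sketch below shows roughly what an M3U8-style index file for a fragmented video looks like and how its playing durations and index addresses might be read; the fragment names, host and durations are invented for the example and are not taken from the patent:

    # A made-up HLS/M3U8-style index file for an original video split into TS fragments.
    SAMPLE_M3U8 = """#EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:10
    #EXTINF:10.0,
    http://video.example.com/original/seg_000.ts
    #EXTINF:10.0,
    http://video.example.com/original/seg_001.ts
    #EXTINF:7.5,
    http://video.example.com/original/seg_002.ts
    #EXT-X-ENDLIST
    """

    def parse_index(m3u8_text):
        """Return a list of (playing_duration_seconds, index_address) entries."""
        entries, duration = [], None
        for line in m3u8_text.splitlines():
            line = line.strip()
            if line.startswith("#EXTINF:"):
                duration = float(line[len("#EXTINF:"):].rstrip(","))
            elif line and not line.startswith("#"):
                entries.append((duration, line))
                duration = None
        return entries

    # parse_index(SAMPLE_M3U8) yields three (duration, address) pairs,
    # one per TS fragment listed in the index file.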
Fig. 3 is a timing diagram illustrating the playing of video by a video player in one embodiment. Referring to fig. 3, a player on the terminal may obtain a video playing request, request an index file of a video from the scheduling server based on the video playing request, and after the scheduling server returns the index file, the video player may request video fragments from the file server according to an index address corresponding to each video fragment recorded in the index file, and sequentially play the video according to the video fragments returned by the file server, thereby playing the entire video.
In the embodiment provided by the application, a player on a terminal can acquire a video playing event triggered in a video cover, respond to the video playing event, acquire an index file corresponding to a target video linked by the video cover from a video storage server, acquire an index address corresponding to a target fragment of the target video according to the index file, request the target fragment from the video storage server according to the index address, and play the target video according to the target fragment.
The video storage server stores video fragments of an original video, the video storage server can acquire the video fragments forming the original video after determining the original video used for generating a target video, and the target fragments used for generating the target video are determined from the video fragments of the original video according to the playing time corresponding to each video fragment and the starting and stopping time points obtained by analyzing from the target video request. As can be seen, the target segment is a video segment determined from the video segments of the original video according to the start-stop time points.
The number of target fragments may be one or more, and it depends on the start and stop time points of the target video in the original video. Generally, the playing duration of one video fragment is 5 to 10 seconds. If the duration between the start and stop time points is far longer than the playing duration of one video fragment, a plurality of target fragments for generating the target video can be determined; if it is shorter than the playing duration of one video fragment, the number of target fragments may be only one or two.
As shown in fig. 4, in one embodiment, the step of determining a target segment matching the start-stop time point from the video segments of the original video includes:
s402, obtaining an original index file corresponding to the original video.
Specifically, the video storage server may obtain an original index file corresponding to the original video according to the video identifier of the original video, where the original index file is a directory file of the original video, and the original index file includes an index address and a play duration of a video fragment of the original video.
S404, analyzing the original index file to obtain the playing time length corresponding to the video fragment of the original video.
The playing duration is the duration of the video fragment during playing, and the playing duration corresponding to the video fragment of the original video is also recorded in the original index file. And the video storage server analyzes the original index file of the original video to obtain the playing time length corresponding to the video fragment forming the original video.
And S406, sequentially splicing the playing time lengths corresponding to the video fragments according to the playing sequence of the video fragments of the original video to obtain the playing time period of each video fragment in the original video.
The playing time period is the range of time during which each video fragment is played in the original video. For example, if the playing duration of the first video fragment of the original video is 5 s and the playing duration of the second video fragment is 3 s, then the playing time periods of the first and second video fragments in the original video are 0 s to 5 s and 5 s to 8 s, respectively. If the playing duration of the first video fragment is 10 s, its playing time period in the original video is 0 s to 10 s.
In the original index file, in order to make it easy to play the whole video by requesting the video fragments in playing order, the playing duration and index address of each video fragment of the original video are recorded in playing order. The video storage server can therefore splice the playing durations of the fragments in that order, thereby obtaining the playing time period of each video fragment in the original video.
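A minimal sketch of this splicing step, assuming the playing durations have already been read from the original index file (for instance with the parse_index helper sketched earlier):

    def playing_periods(durations):
        """Splice per-fragment playing durations into (start, end) playing time
        periods within the original video, in playing order."""
        periods, elapsed = [], 0.0
        for duration in durations:
            periods.append((elapsed, elapsed + duration))
            elapsed += duration
        return periods

    # With the durations from the example above (5 s and 3 s):
    # playing_periods([5.0, 3.0]) -> [(0.0, 5.0), (5.0, 8.0)]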
And S408, determining a target segment according to the video segment corresponding to the playing time period containing the start-stop time point.
Specifically, after determining the playing time period of each video segment of the original video in the original video, the video storage server may determine a target segment for generating the target video according to the video segment corresponding to the playing time period including the start-stop time point.
In this embodiment, the target segment for generating the target video is directly found out from the video segments of the original video at the starting and ending time points, and the target video can be generated based on the target segment.
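Continuing the sketch, the target fragments can then be picked out as the fragments whose playing time periods overlap the requested start-stop interval; the overlap test below is an illustrative reading of the description, not the patent's exact wording:

    def target_fragments(periods, start, stop):
        """Return the indices of the video fragments whose playing time periods
        overlap the [start, stop] interval of the target video."""
        selected = []
        for index, (begin, end) in enumerate(periods):
            if end > start and begin < stop:
                selected.append(index)
        return selected

    periods = [(0, 10), (10, 20), (20, 30), (30, 40)]
    print(target_fragments(periods, start=12, stop=27))   # prints [1, 2]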
And S208, generating an index file corresponding to the target video according to the index address of the target fragment.
Specifically, the video storage server may obtain an original index file corresponding to an original video, and analyze the original index file to obtain index addresses corresponding to video fragments of the original video, so that after a target fragment for generating a target video is determined from the video fragments of the original video, the video storage server may obtain the index address corresponding to each target fragment, and then may directly generate the index file corresponding to the target video according to the index address of the target fragment; or, in some cases, the target segment may be downloaded according to the index address of the target segment, and then the cut segment is stored after the target segment is partially cut, and the index file corresponding to the target video is generated according to the storage address.
S210, distributing video identification for the target video, and associating the index file with the video identification.
The index file corresponding to the target video is used, when the target video is played, to request the video fragments in sequence according to the download addresses and playing durations of the fragments of the target video recorded in it, so that the whole target video is played. Therefore, after obtaining the index file corresponding to the target video, the video storage server can allocate a unique video identifier to the target video and associate the generated index file with that video identifier, so that a video player can download the video fragments in sequence according to the index addresses recorded in the index file corresponding to the video identifier and play the video.
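A minimal sketch of allocating a video identifier and associating it with the generated index file; the in-memory dictionary merely stands in for whatever storage the video storage server actually uses:

    import uuid

    index_files_by_video_id = {}

    def register_target_video(index_file_text: str) -> str:
        """Allocate a unique video identifier and associate the index file with it."""
        video_id = uuid.uuid4().hex
        index_files_by_video_id[video_id] = index_file_text
        return video_id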
In one embodiment, the video storage server may further associate the video identifier of the target video with the video identifier of the original video, so that when the video query request for the original video sent by the video editing server is obtained, video information of the target video, which is associated with the original video and generated based on the original video, may be obtained, and the video information of the original video and the video information of the target video are returned to the video editing server together. The video information includes a video name, a video duration, a video cover, a start-stop time point of the target video in the original video, and the like.
The video generation method directly determines the target fragments of the target video to be generated from the video fragments of the original video, generates the index file corresponding to the target video according to the index addresses of the target fragments, and associates the index file with the video identifier allocated to the target video. When the video needs to be played, the target video can be played directly according to the index addresses of the target fragments, without separately storing a complete copy of the target video, which saves storage space.
In one embodiment, the start-stop time points include a start time point and an end time point; determining a target fragment according to a video fragment corresponding to a playing time period containing a start-stop time point, comprising: taking a video fragment corresponding to a playing time period containing an initial time point as an initial target fragment; and taking the video segment corresponding to the playing time period containing the ending time point as an ending target segment.
Wherein the start-stop time point includes a start time point and an end time point. After determining the playing time period of the video segment of the original video in the original video, the video storage server may use the video segment corresponding to the playing time period including the starting time point as a starting target segment for generating the target video, and use the video segment corresponding to the playing time period including the ending time point as an ending target segment for generating the target video.
Fig. 5 is a schematic diagram illustrating the playing time periods corresponding to the video fragments of an original video in an embodiment. Referring to fig. 5, the original video includes n video fragments, where the playing time period of the 0th video fragment is 0 to t1, that of the 1st video fragment is t1 to t2, that of the 2nd video fragment is t2 to t3, and so on; the playing time period of the (n-1)th video fragment is t(n-1) to t(n), and that of the nth video fragment is t(n) to t(n+1). The start and stop time points parsed from the target video request are m and n, where m ∈ (0, t1) and n ∈ (t2, t3); the 0th video fragment of the original video is therefore the starting target fragment for generating the target video, and the 2nd video fragment is the ending target fragment for generating the target video.
In one embodiment, generating an index file corresponding to a target video according to an index address of a target fragment includes: analyzing the original index file to obtain an index address corresponding to the video fragment of the original video; when the starting time point is in the playing time period corresponding to the starting target fragment, acquiring the starting target fragment according to the index address corresponding to the starting target fragment; segmenting the video from the starting target segment according to the starting time point and the end point of the playing time period corresponding to the starting target segment to obtain a starting segment corresponding to the target video; storing the initial fragment, and generating an index file corresponding to the target video according to the storage address of the initial fragment.
Specifically, the video storage server also needs to analyze the original index file corresponding to the original video to obtain the index address corresponding to each video fragment of the original video. In this embodiment, when the start time point falls within the playing time period corresponding to the starting target fragment (that is, it is not the start point of that period), the whole starting target fragment cannot be used as part of the target video; instead, a portion of video needs to be cut out of the starting target fragment from the start time point onwards, and the cut-out video is used as the starting fragment of the target video.
For example, referring to fig. 5, if the start time point m is between (0, t1), the video storage server needs to cut out video data corresponding to the time interval of (m, t1) from the start target segment, use the cut-out video data as the start segment corresponding to the target video, store the start segment separately, and generate an index file corresponding to the target video according to the storage address of the start segment.
In fact, in order to facilitate other devices or the video storage server itself to obtain the generated target video, the video storage server needs to generate a corresponding index file for the target video. And generating an index file corresponding to the target video according to the storage address of the starting fragment, namely recording the storage address of the starting fragment in the index file corresponding to the target video so as to obtain the starting fragment according to the storage address recorded in the index file.
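One way to perform such a cut is sketched below using the ffmpeg command line; this is an assumption about tooling rather than anything stated in the patent, and with stream copying the cut lands on the nearest keyframe, so a real implementation might re-encode instead:

    import subprocess

    def cut_start_fragment(start_target_path, offset_seconds, output_path):
        """Cut the starting target fragment from the start time point (expressed as
        an offset into the fragment) to its end, and store the result separately.
        Returns the storage address (here, the output path) of the new start fragment."""
        subprocess.run(
            ["ffmpeg", "-y", "-ss", str(offset_seconds), "-i", start_target_path,
             "-c", "copy", output_path],
            check=True,
        )
        return output_path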
In one embodiment, the method further comprises: when the starting time point is the starting point of the playing time period corresponding to the starting target fragment, taking the starting target fragment as the starting fragment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the initial fragment.
In this embodiment, when the start time point is exactly the start point of the playing time period corresponding to the starting target fragment, the whole starting target fragment can be used as part of the target video; that is, the starting target fragment is directly used as the starting fragment corresponding to the target video, so that the video fragment of the original video is reused and storage space is saved.
For example, referring to fig. 5, if the starting time point m is just t1 and the ending time point n ∈ (t2, t3), the video storage server needs to directly use the 1 st video fragment of the original video as the starting fragment corresponding to the target video, and generate the index file corresponding to the target video according to the index address of the starting fragment.
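The two cases (reuse the whole fragment when the start time point is the start of its playing time period, cut otherwise) can be summarised in a small decision sketch; every name here is illustrative:

    def start_entry_for_index(period, start_point, original_entry, cut_and_store):
        """Return the (duration, address) entry that represents the start of the
        target video in its index file. original_entry carries the fragment's
        original index address; cut_and_store is a callable (e.g. wrapping the
        ffmpeg sketch above) that stores a partial fragment and returns its entry."""
        begin, end = period
        if start_point == begin:
            return original_entry          # reuse the whole starting target fragment
        return cut_and_store(offset=start_point - begin)   # cut from the start time point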
In one embodiment, generating an index file corresponding to a target video according to an index address of a target fragment includes: analyzing the original index file to obtain an index address corresponding to the video fragment of the original video; when the ending time point is in the playing time period corresponding to the ending target fragment, acquiring the ending target fragment according to the index address corresponding to the ending target fragment; segmenting the video from the ending target segment according to the starting point and the ending time point of the playing time period corresponding to the ending target segment to obtain an ending segment corresponding to the target video; and storing the finished fragments, and generating an index file corresponding to the target video according to the storage address of the finished fragments.
In this embodiment, when the end time point falls within the playing time period corresponding to the ending target fragment (that is, it is not the end point of that period), the whole ending target fragment cannot be used as part of the target video; instead, a portion of video needs to be cut out of the ending target fragment according to the start point of its playing time period and the end time point, and the cut-out video is used as the ending fragment of the target video.
For example, referring to fig. 5, if the end time point n is between (t2, t3), the video storage server needs to cut the video data corresponding to the interval (t2, n) out of the ending target fragment, use the cut-out video data as the ending fragment corresponding to the target video, store the ending fragment separately, and generate the index file corresponding to the target video according to the storage address of the ending fragment.
Similarly, in order to facilitate other devices or the video storage server itself to obtain the generated target video, the video storage server needs to generate a corresponding index file for the target video. And generating an index file corresponding to the target video according to the storage address of the ending fragment, namely recording the storage address of the ending fragment in the index file corresponding to the target video so as to obtain the ending fragment according to the storage address recorded in the index file.
In one embodiment, the method further comprises: when the ending time point is the ending point of the playing time period corresponding to the ending target fragment, taking the ending target fragment as the ending fragment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the ending fragment.
In this embodiment, when the end time point is exactly the end point of the playing time period corresponding to the ending target fragment, the whole ending target fragment can be used as part of the target video; that is, the ending target fragment is directly used as the ending fragment corresponding to the target video, so that the video fragment of the original video is reused and storage space is saved.
For example, referring to fig. 5, if the ending time point n is exactly t3, the video storage server needs to directly use the 2 nd video segment of the original video as the ending segment corresponding to the target video, and generate the index file corresponding to the target video according to the index address of the ending segment.
In one embodiment, the method further comprises: taking a video fragment between the starting target fragment and the ending target fragment as an intermediate target fragment; generating an index file corresponding to the target video according to the index address of the target fragment, wherein the index file comprises: analyzing the original index file to obtain an index address corresponding to the video fragment of the original video; directly taking the intermediate target fragment as an intermediate fragment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the middle fragment.
Specifically, after determining the starting target segment and the ending target segment from the video segments of the original video, the video storage server may further use the video segment between the starting target segment and the ending target segment as an intermediate target segment, and the whole intermediate target segment may be used as a part constituting the target video, that is, the intermediate target segment is directly used as an intermediate segment corresponding to the target video, so that multiplexing of the video segments of the original video is realized, and the storage space can be saved. Therefore, in this case, the video storage server may directly record the index address of the middle fragment obtained by parsing the original index file in the index file corresponding to the target video, so that when the target video is played, the video storage server directly requests the middle fragment according to the index address of the middle fragment recorded in the corresponding index file and plays the middle fragment.
Of course, when the determined starting target segment and ending target segment are adjacent video segments of the original video, no other video segment exists between the starting target segment and the ending target segment, and in this case, the target video is generated according to the starting target segment and the ending target segment.
Fig. 6 is a schematic diagram of the video fragments of a target video generated according to an original video in an embodiment. Referring to fig. 6, if the start time point m ∈ (0, t1) and the end time point n ∈ (t2, t3), the starting fragment of the target video needs to be regenerated from the 0th video fragment of the original video, the 1st video fragment is directly reused as the intermediate fragment, and the ending fragment of the target video needs to be regenerated from the 2nd video fragment. The index file corresponding to the target video therefore needs to record the storage addresses for finding the starting fragment and the ending fragment, as well as the index address for finding the 1st video fragment.
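Putting the pieces together, the target video's index file mixes the storage addresses of the newly cut start and end fragments with the reused index addresses of the intermediate fragments. A rough sketch, with invented addresses and durations:

    def build_target_index(start_entry, middle_entries, end_entry):
        """Assemble an M3U8-style index file for the target video from
        (duration_seconds, address) entries."""
        lines = ["#EXTM3U", "#EXT-X-VERSION:3", "#EXT-X-TARGETDURATION:10"]
        for duration, address in [start_entry, *middle_entries, end_entry]:
            lines.append(f"#EXTINF:{duration:.1f},")
            lines.append(address)
        lines.append("#EXT-X-ENDLIST")
        return "\n".join(lines)

    target_index = build_target_index(
        (6.0, "http://video.example.com/cuts/target_start.ts"),    # cut and stored separately
        [(10.0, "http://video.example.com/original/seg_001.ts")],  # reused original fragment
        (4.5, "http://video.example.com/cuts/target_end.ts"),      # cut and stored separately
    )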
In one embodiment, the method further comprises: acquiring a starting fragment of a target video; capturing a screenshot corresponding to a starting time point in starting and stopping time points from a playing picture corresponding to the starting fragment; and taking the screenshot as a video cover of the target video, and associating the video cover with the video identification of the target video.
In this embodiment, the video storage server may further store a video cover for the generated target video, so that the target video can be displayed after being linked to the video cover. Specifically, the video storage server may obtain the starting fragment of the target video and capture a frame from the playback picture corresponding to the starting fragment as the video cover; the captured frame may be the screenshot corresponding to the start time point. Further, the video storage server may use the screenshot as the video cover of the target video and associate the video cover with the video identifier of the target video.
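Grabbing the cover frame can likewise be sketched with the ffmpeg command line (again an assumption about tooling, with illustrative paths):

    import subprocess

    def capture_cover(start_fragment_path, cover_path, offset_seconds=0.0):
        """Capture a single frame from the start fragment of the target video and
        store it as the video cover image."""
        subprocess.run(
            ["ffmpeg", "-y", "-ss", str(offset_seconds), "-i", start_fragment_path,
             "-frames:v", "1", cover_path],
            check=True,
        )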
In one embodiment, the method further comprises: acquiring a video query request sent by a terminal; and responding to the video query request, returning the video cover of the target video to the terminal so that the terminal displays the video cover of the target video in the video playing entry page.
The terminal may be a user terminal equipped with a video player, and the video storage server may receive a video query request sent by the terminal, extract a video keyword from the video query request, and return an original video related to the video keyword to the terminal. Optionally, the video storage server may also return a video cover page corresponding to the target video related to the original video. Therefore, the terminal can display the original video and the video cover corresponding to the target video related to the original video through the video player.
Of course, the video storage server may also independently push the video covers of the generated target videos to the terminal, so that the terminal can display them in the video playing entry page. When the user clicks the video cover of a target video, the terminal obtains the index file corresponding to the target video linked by that cover, obtains the index addresses of the target fragments of the target video according to the index file, requests the target fragments according to the index addresses, and plays the target video according to the target fragments.
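On the terminal side, playback reduces to fetching the index file linked by the cover and then requesting each target fragment by its index address. A client-side sketch (the feed_to_decoder hook is hypothetical, and parse_index refers to the earlier sketch):

    from urllib.request import urlopen

    def play_target_video(index_url, feed_to_decoder):
        """Fetch the target video's index file, then request each target fragment
        by its index address and hand the bytes to the player's decoder."""
        index_text = urlopen(index_url).read().decode("utf-8")
        for _duration, fragment_address in parse_index(index_text):
            fragment_bytes = urlopen(fragment_address).read()
            feed_to_decoder(fragment_bytes)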
Fig. 7 is a schematic interface diagram illustrating a video cover corresponding to a target video in one embodiment. Referring to fig. 7, in a video playback entry page 700, a plurality of video covers 702 are displayed, each video cover linked to a corresponding target video, and a video title 704 is also displayed in the video cover. When a user clicks a video cover corresponding to a certain target video, the target video can be played.
In one embodiment, the video storage server may obtain the video query request sent by the video editing server, and return a video cover corresponding to the original video and a video cover corresponding to the target video related to the original video according to the video query request. Optionally, information such as start-stop time points, video titles and the like related to the target video may also be returned, so that the video editing server may present video list information so as to generate target videos corresponding to other start-stop time points according to the current original video.
As shown in fig. 8, a schematic flowchart of a video generation method in a specific embodiment is shown, where the video generation method is executed by a video storage server, and specifically includes the following steps:
s802, receiving a target video request.
S804, the target video request is analyzed, and the starting time point and the ending time point of the target video to be generated in the original video are obtained.
S806, an original index file corresponding to the original video is obtained.
And S808, analyzing the original index file to obtain the playing time length and the index address corresponding to the video fragment of the original video.
And S810, splicing the playing time lengths corresponding to the video fragments according to the playing sequence of the video fragments of the original video to obtain the playing time period of each video fragment in the original video.
S812, the video segment corresponding to the playing time period including the starting time point is used as the starting target segment.
S814, judging whether the starting time point is the starting point of the playing time period corresponding to the starting target fragment; if not, executing step S816; if yes, executing step S818.
S816, acquiring the starting target fragment according to the index address corresponding to the starting target fragment; segmenting the video from the starting target segment according to the starting time point and the end point of the playing time period corresponding to the starting target segment to obtain a starting segment corresponding to the target video; storing the initial fragment, and generating an index file corresponding to the target video according to the storage address of the initial fragment.
S818, taking the starting target fragment as a starting fragment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the initial fragment.
And S820, taking the video segment corresponding to the playing time period containing the ending time point as an ending target segment.
S822, determining whether the ending time point is the ending point of the playing time period corresponding to the ending target segment, if yes, executing step S824; if not, go to step S826.
S826, acquiring the ending target fragment according to the index address corresponding to the ending target fragment; segmenting the video from the ending target segment according to the starting point and the ending time point of the playing time period corresponding to the ending target segment to obtain an ending segment corresponding to the target video; and storing the finished fragments, and generating an index file corresponding to the target video according to the storage address of the finished fragments.
S824, taking the ending target fragment as the ending fragment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the ending fragment.
S828, taking the video fragment between the starting target fragment and the ending target fragment as an intermediate target fragment;
s830, directly taking the intermediate target fragment as an intermediate fragment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the middle fragment.
And S832, allocating a video identifier to the target video, and associating the index file with the video identifier.
And S834, capturing a screenshot corresponding to the starting time point in the starting and stopping time points from the playing picture corresponding to the starting fragment.
And S836, taking the screenshot as a video cover of the target video, and associating the video cover with the video identifier of the target video.
And S838, acquiring the video query request sent by the terminal.
S840, responding to the video query request, returning the video cover of the target video to the terminal, so that the terminal displays the video cover of the target video in the video playing entry page.
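As a concrete illustration of steps S806 to S830, the following sketch (in Python, which the patent does not prescribe) accumulates the playing time periods from the fragment durations of an original index file and assembles the index entries of the target video. It assumes an HLS-style playlist in which each fragment carries a duration and a download address; the names Segment, playing_periods, and build_target_index are illustrative only.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Segment:
    duration: float  # playing time length of the video fragment, in seconds
    url: str         # index address (download address) of the fragment

def playing_periods(segments: List[Segment]) -> List[Tuple[float, float]]:
    """Splice durations in playback order into (start, end) playing time periods (S810)."""
    periods, t = [], 0.0
    for seg in segments:
        periods.append((t, t + seg.duration))
        t += seg.duration
    return periods

def build_target_index(segments: List[Segment], start: float, end: float):
    """Collect the index entries of the target video for the range [start, end] (S812-S830)."""
    entries = []
    for seg, (p_start, p_end) in zip(segments, playing_periods(segments)):
        if p_end <= start or p_start >= end:
            continue  # fragment lies entirely outside the requested range
        if start > p_start or end < p_end:
            # Starting or ending target fragment whose boundary falls inside it
            # (S816 / S826): the covered part would be cut out and stored, and the
            # storage address of the new fragment recorded in the target index file.
            entries.append(("cut", seg.url, max(start, p_start), min(end, p_end)))
        else:
            # Fragment fully covered by the range (S818 / S824 / S830): its original
            # index address is reused and no new video data is stored.
            entries.append(("reuse", seg.url, p_start, p_end))
    return entries

# Example: six 10-second fragments, clipping from 15 s to 35 s of the original video.
# segs = [Segment(10.0, f"https://cdn.example.com/orig/{i}.ts") for i in range(6)]
# build_target_index(segs, 15.0, 35.0)
# -> [('cut', '.../1.ts', 15.0, 20.0), ('reuse', '.../2.ts', 20.0, 30.0), ('cut', '.../3.ts', 30.0, 35.0)]
```

In an actual deployment, the returned entries would be serialized back into an index file (for example an .m3u8 playlist) in which the intermediate fragments keep their original download addresses and only a cut starting or ending fragment points to newly stored data.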
FIG. 8 is a flowchart illustrating a video generation method according to an embodiment. It should be understood that, although the steps in the flowchart of fig. 8 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 8 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and the order of performing these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Fig. 9 is a diagram of an application environment of a video generation method in another embodiment. Referring to fig. 9, the video generating method is applied to a video processing system. The video processing system includes a video editing server 910, a video storage server 920, a video application server 930, and a terminal 940.
The video editing server 910 and the video storage server 920 are connected via a network. The video editing server 910 may be configured to send a target video request to the video storage server 920, where the target video request carries the start-stop time points of the target video to be generated in the original video. The video storage server 920 may be configured to receive and analyze the target video request to obtain the start-stop time points of the target video to be generated in the original video; determine a target fragment matching the start-stop time points from the video fragments of the original video, and generate an index file corresponding to the target video according to the index address of the target fragment; and allocate a video identifier to the target video and associate the index file with the video identifier. The video storage server 920 may also be configured to return the video identifier to the video editing server 910.
The video editing server 910 is connected to the video application server 930 via a network, and the video editing server 910 can be configured to send the video identifier to the video application server 930.
The video application server 930 is connected to the terminal 940 via a network. The video application server 930 may be configured to receive a video query request sent by the terminal 940 and, in response to the video query request, return the video identifier corresponding to the target video to the terminal 940. The terminal 940 may be configured to obtain the corresponding index file according to the video identifier, analyze the index file to obtain the index addresses corresponding to the target fragments of the target video, request the target fragments according to the index addresses, and play the target video according to the target fragments.
The video editing server 910, the video storage server 920 and the video application server 930 may be integrated together or implemented as a server cluster composed of a plurality of servers. The terminal 940 may be a desktop terminal or a mobile terminal, and the mobile terminal may be at least one of a mobile phone, a tablet computer, a notebook computer, and the like.
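For concreteness, the following sketch shows one hypothetical shape for the messages exchanged in this architecture; the patent only requires that the target video request carry the start-stop time points of the target video within the original video, so the field names below are assumptions.

```python
# Hypothetical payloads between the video editing server (910) and the video
# storage server (920); every field name here is an assumption for illustration.
target_video_request = {
    "original_video_id": "orig_0001",  # which original video to clip from
    "start_time": 95.0,                # start-stop time points of the target video,
    "end_time": 250.0,                 # in seconds, within the original video
}

# After determining the target fragments, generating the index file, and
# allocating a video identifier, the storage server returns that identifier.
target_video_response = {
    "video_id": "clip_20190704_0001",
}
```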
In one embodiment, as shown in fig. 10, a video playback method is provided. This embodiment is mainly described by taking the application of the method to the terminal 940 in fig. 9 as an example. Referring to fig. 10, the video playing method specifically includes the following steps:
S1002, a video playing entry page is entered.
The video playing entry page can accommodate a plurality of video covers, and the terminal can display a large number of video covers in the video playing entry page. Each video cover is linked to corresponding video data. In a specific application scenario, a user may start a video player installed on the terminal and enter the video playing entry page provided by the video player.
S1004, a video cover is displayed in the video playback entry page.
S1006, a video play event is triggered in the video cover.
The video playing event is an event, triggered on the video cover, for playing the corresponding video. It may be a pressing operation, a clicking operation, or a sliding operation performed by the user on the video cover. The video cover serves as an entry for playing the video linked to it: by triggering a video playing event, the user can enter the playing page of the video linked to the video cover and play the video on that page.
S1008, in response to the video playing event, the index file corresponding to the target video linked to the video cover is acquired, and the index address corresponding to the target fragment of the target video is obtained according to the index file; the target fragment is a video fragment that is determined from the video fragments of the original video and matches the start-stop time points, the start-stop time points being the time points of the start and stop positions of the target video in the original video.
The target video is generated based on the target fragments determined from the video fragments of the original video according to the start-stop time points. Specifically, the target video can be generated according to the video generation method provided in the foregoing embodiments, which is not repeated here. The index file of the target video records the index addresses of all target fragments of the target video; after detecting the video playing event, the terminal can analyze the index file corresponding to the target video linked to the video cover to obtain the index addresses corresponding to all target fragments of the target video.
S1010, the target fragment is requested according to the index address, and the target video is played according to the target fragment.
Specifically, the terminal may sequentially obtain the target segments according to the index addresses of the target segments recorded in the index file and then decode the target segments, thereby realizing complete playing of the entire target video.
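A minimal sketch of the terminal side of steps S1008 and S1010 is given below. It assumes the index file obtained for the video identifier is a plain-text playlist listing one fragment download address per line; decode_and_render stands in for the player's decoding and rendering layer, and none of these names come from the patent itself.

```python
import urllib.request
from typing import Callable, List

def fetch_index_file(index_url: str) -> List[str]:
    """Download the target video's index file and return the fragment index addresses."""
    with urllib.request.urlopen(index_url) as resp:
        text = resp.read().decode("utf-8")
    # Keep only the lines that are fragment addresses (skip playlist tags / blank lines).
    return [line for line in text.splitlines() if line and not line.startswith("#")]

def play_target_video(index_url: str, decode_and_render: Callable[[bytes], None]) -> None:
    """Request each target fragment by its index address and play it in order (S1010)."""
    for fragment_url in fetch_index_file(index_url):
        with urllib.request.urlopen(fragment_url) as resp:
            decode_and_render(resp.read())  # hand the fragment data to the decoder
```

Because the fragments referenced by the index file are the same fragments that make up the original video (except, possibly, a cut starting or ending fragment), the player needs no special handling for clipped videos.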
In the video playing method, the target fragment of the target video to be generated is determined directly from the video fragments of the original video, and the index file corresponding to the target video is generated according to the index address of the target fragment, so that when the video needs to be played, the target video can be played directly according to the index address of the target fragment. The original video and the target video actually multiplex the same video fragments. Compared with the approach of manually cutting a section of video out of the original video to obtain the target clip and then storing it separately, this method not only greatly reduces the required storage space but also eliminates the need to manually cut and split the video.
In one embodiment, as shown in fig. 11, a video generating apparatus 1100 is provided, comprising a receiving module 1102, a parsing module 1104, a target segment determining module 1106, an index file generating module 1108, and a storage module 1110, wherein:
a receiving module 1102, configured to receive a target video request;
the parsing module 1104 is configured to analyze the target video request to obtain the start-stop time points of the target video to be generated in the original video;
a target segment determining module 1106, configured to determine a target segment matching the start-stop time point from video segments of the original video;
an index file generating module 1108, configured to generate an index file corresponding to the target video according to the index address of the target segment;
the storage module 1110 is configured to allocate a video identifier to a target video, and associate the index file with the video identifier.
In one embodiment, the target segment determining module 1106 is further configured to obtain an original index file corresponding to an original video; analyzing the original index file to obtain the playing time length corresponding to the video fragment of the original video; splicing the playing time lengths corresponding to the video fragments in sequence according to the playing sequence of the video fragments of the original video to obtain the playing time period of each video fragment in the original video; and determining a target fragment according to the video fragment corresponding to the playing time period containing the start-stop time point.
In one embodiment, the start-stop time points include a start time point and an end time point; the target segment determining module 1106 is further configured to use a video segment corresponding to the playing time period including the starting time point as a starting target segment; and taking the video segment corresponding to the playing time period containing the ending time point as an ending target segment.
In an embodiment, the index file generating module 1108 is further configured to parse the original index file to obtain the index address corresponding to each video fragment of the original video; when the starting time point is within the playing time period corresponding to the starting target fragment, acquire the starting target fragment according to the index address corresponding to the starting target fragment; cut a video segment from the starting target fragment, from the starting time point to the end point of the playing time period corresponding to the starting target fragment, to obtain the starting fragment corresponding to the target video; and store the starting fragment and generate the index file corresponding to the target video according to the storage address of the starting fragment.
In an embodiment, the index file generating module 1108 is further configured to, when the starting time point is the starting point of the playing time period corresponding to the starting target fragment, take the starting target fragment as the starting fragment corresponding to the target video, and generate the index file corresponding to the target video according to the index address of the starting fragment.
In an embodiment, the index file generating module 1108 is further configured to parse the original index file to obtain the index address corresponding to each video fragment of the original video; when the ending time point is within the playing time period corresponding to the ending target fragment, acquire the ending target fragment according to the index address corresponding to the ending target fragment; cut a video segment from the ending target fragment, from the starting point of the playing time period corresponding to the ending target fragment to the ending time point, to obtain the ending fragment corresponding to the target video; and store the ending fragment and generate the index file corresponding to the target video according to the storage address of the ending fragment.
In an embodiment, the index file generating module 1108 is further configured to, when the ending time point is the ending point of the playing time period corresponding to the ending target fragment, take the ending target fragment as the ending fragment corresponding to the target video, and generate the index file corresponding to the target video according to the index address of the ending fragment.
In one embodiment, the target segment determining module 1106 is further configured to take a video segment between the starting target segment and the ending target segment as an intermediate target segment; the index file generating module 1108 is further configured to parse the original index file to obtain an index address corresponding to the video fragment of the original video; directly taking the intermediate target fragment as an intermediate fragment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the middle fragment.
In one embodiment, the video generating apparatus 1100 further comprises a video cover generation module, configured to obtain the starting fragment of the target video; capture, from the playing picture corresponding to the starting fragment, a screenshot corresponding to the starting time point of the start-stop time points; and take the screenshot as the video cover of the target video and associate the video cover with the video identifier of the target video.
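As an illustration of the video cover generation module, the sketch below grabs a single frame from the starting fragment at the starting time point. It assumes the starting fragment has been downloaded to a local file and that the ffmpeg command-line tool is available; ffmpeg is not mandated by the patent, and offset is the starting time point expressed relative to the beginning of the starting fragment (zero if the starting fragment was already cut at that point).

```python
import subprocess

def capture_video_cover(fragment_path: str, offset: float, cover_path: str) -> str:
    """Extract one frame from the starting fragment to use as the video cover."""
    subprocess.run(
        ["ffmpeg", "-y",          # overwrite an existing cover file
         "-ss", str(offset),      # seek to the starting time point within the fragment
         "-i", fragment_path,     # the starting fragment of the target video
         "-frames:v", "1",        # grab a single frame
         cover_path],
        check=True,
    )
    return cover_path             # stored and then associated with the video identifier
```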
In one embodiment, the video generating apparatus 1100 further comprises a video query request obtaining module, configured to obtain a video query request; and responding to the video query request, returning the video cover of the target video so that the terminal displays the video cover of the target video in the video playing entry page.
The video generating device 1100 directly determines the target segment of the target video to be generated from the video segments of the original video, generates the index file corresponding to the target video according to the index address of the target segment, and associates the index file with the video identifier allocated to the target video, so that the target video can be directly played according to the index address of the target segment when the video needs to be played.
In one embodiment, as shown in fig. 12, a video playback apparatus 1200 is provided, which includes a presentation module 1202, an acquisition module 1204, an index address acquisition module 1206, and a playback module 1208, wherein:
a display module 1202, configured to enter a video play entry page; displaying a video cover in a video playing entry page;
an obtaining module 1204, configured to trigger a video playing event in a video cover;
the index address obtaining module 1206 is configured to obtain, in response to a video playing event, an index file corresponding to a target video linked to a video cover, and obtain, according to the index file, an index address corresponding to a target segment of the target video; the target fragment is determined according to a video fragment matched with a start-stop time point in video fragments of an original video, and the start-stop time point is a time point of the start-stop position of the target video in the original video;
the playing module 1208 is configured to request the target segment according to the index address and play the target video according to the target segment.
The video playing apparatus 1200 directly determines the target segment of the target video to be generated from the video segments of the original video, and generates the index file corresponding to the target video according to the index address of the target segment, so that the target video can be directly played according to the index address of the target segment when the video needs to be played.
FIG. 13 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may specifically be the video storage server 120 in fig. 1 or the video storage server 920 in fig. 9. As shown in fig. 13, the computer device includes a processor, a memory, and a network interface that are connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the video generation method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the video generation method.
Fig. 14 is a diagram showing an internal structure of a computer device in another embodiment. The computer device may specifically be the terminal 940 in fig. 9. As shown in fig. 14, the computer apparatus includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the video playback method. The internal memory may also have a computer program stored therein, which when executed by the processor, causes the processor to perform a video playback method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structures shown in fig. 13 and fig. 14 are merely block diagrams of partial structures relevant to the solution of the present application and do not constitute a limitation on the computer devices to which the solution of the present application is applied; a specific computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, the video generation apparatus 1100 provided herein may be implemented in the form of a computer program that is executable on a computer device such as that shown in fig. 13. The memory of the computer device may store the program modules constituting the video generating apparatus, such as the receiving module 1102, the parsing module 1104, the target segment determining module 1106, the index file generating module 1108, and the storage module 1110 shown in fig. 11. The computer program constituted by these program modules causes the processor to execute the steps of the video generation method of the embodiments of the present application described in this specification.
For example, the computer device shown in fig. 13 may perform step S202 by the receiving module 1102 in the video generating apparatus shown in fig. 11. The computer device may perform step S204 through the parsing module 1104. The computer device may perform step S206 by the target shard determination module 1106. The computer device may perform step S208 by the index file generation module 1108. The computer device may perform step S210 through the storage module 1110.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the video generation method described above. Here, the steps of the video generation method may be steps in the video generation methods of the above-described respective embodiments.
In one embodiment, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, causes the processor to perform the steps of the video generation method described above. Here, the steps of the video generation method may be steps in the video generation methods of the above-described respective embodiments.
In one embodiment, the video playback apparatus 1200 provided in the present application may be implemented in the form of a computer program, which is executable on a computer device as shown in fig. 14. The memory of the computer device may store various program modules constituting the video playback apparatus, such as a presentation module 1202, an acquisition module 1204, an index address acquisition module 1206, and a playback module 1208 shown in fig. 12. The computer program constituted by the respective program modules causes the processor to execute the steps in the video playback method of the embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 14 may perform steps S1002 and S1004 by the presentation module 1202 in the video playback apparatus shown in fig. 12. The computer device may perform step S1006 through the acquisition module 1204. The computer device may perform step S1008 by the index address acquisition module 1206. The computer device may perform step S1010 through the play module 1208.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the video playback method described above. Here, the steps of the video playing method may be steps in the video playing methods of the above embodiments.
In one embodiment, a computer-readable storage medium is provided, in which a computer program is stored, which, when executed by a processor, causes the processor to perform the steps of the above-described video playback method. Here, the steps of the video playing method may be steps in the video playing methods of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The above embodiments merely express several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. A video generation method, comprising:
receiving a target video request, wherein the target video request is used for requesting to generate a target video by using a section of video in an original video;
analyzing the target video request to obtain the starting and stopping time points of the target video to be generated in the original video;
acquiring an original index file corresponding to the original video, analyzing the original index file to obtain index addresses corresponding to a plurality of video fragments of the original video, wherein the original video is divided into the plurality of video fragments in advance, and the plurality of video fragments are stored as storage units of video data; the original index file comprises index addresses of the plurality of video fragments, and the index addresses are download addresses of the corresponding video fragments;
determining a target fragment matched with the starting and stopping time point from the video fragments of the original video;
generating an index file corresponding to the target video according to the index address of the target fragment, wherein the index file corresponding to the target video records the download address of the video fragment forming the target video;
and distributing a video identifier for the target video, and associating the index file with the video identifier.
2. The method according to claim 1, wherein the determining, from the video fragments of the original video, a target fragment that matches the start-stop time point comprises:
acquiring an original index file corresponding to the original video;
analyzing the original index file to obtain the playing time length corresponding to the video fragment of the original video;
splicing the playing time lengths corresponding to the video fragments in sequence according to the playing sequence of the video fragments of the original video to obtain the playing time period of each video fragment in the original video;
and determining a target fragment according to the video fragment corresponding to the playing time period containing the start-stop time point.
3. The method of claim 2, wherein the start-stop time points comprise a start time point and an end time point; the determining a target segment according to the video segment corresponding to the playing time period including the start-stop time point includes:
taking the video segment corresponding to the playing time period containing the starting time point as a starting target segment;
and taking the video segment corresponding to the playing time period containing the ending time point as an ending target segment.
4. The method according to claim 3, wherein the generating an index file corresponding to the target video according to the index address of the target segment includes:
analyzing the original index file to obtain an index address corresponding to the video fragment of the original video;
when the starting time point is in the playing time period corresponding to the starting target segment, then
acquiring the starting target fragment according to the index address corresponding to the starting target fragment;
segmenting a video from the starting target segment according to the starting time point and the end point of the playing time period corresponding to the starting target segment to obtain a starting segment corresponding to the target video;
and storing the initial fragment, and generating an index file corresponding to the target video according to the storage address of the initial fragment.
5. The method of claim 4, further comprising:
when the starting time point is the starting point of the playing time period corresponding to the starting target segment, then
taking the starting target fragment as a starting fragment corresponding to the target video;
and generating an index file corresponding to the target video according to the index address of the starting fragment.
6. The method according to claim 3, wherein the generating an index file corresponding to the target video according to the index address of the target segment includes:
analyzing the original index file to obtain an index address corresponding to the video fragment of the original video;
when the ending time point is in the playing time period corresponding to the ending target segment, then
acquiring the ending target fragment according to the index address corresponding to the ending target fragment;
segmenting a video from the ending target segment according to the starting point and the ending time point of the playing time period corresponding to the ending target segment to obtain an ending segment corresponding to the target video;
and storing the ending fragments, and generating an index file corresponding to the target video according to the storage addresses of the ending fragments.
7. The method of claim 6, further comprising:
when the ending time point is the ending point of the playing time period corresponding to the ending target segment, then
taking the ending target segment as an ending segment corresponding to the target video;
and generating an index file corresponding to the target video according to the index address of the ending fragment.
8. The method of claim 3, further comprising:
taking the video fragment between the starting target fragment and the ending target fragment as an intermediate target fragment;
the generating of the index file corresponding to the target video according to the index address of the target fragment includes:
analyzing the original index file to obtain an index address corresponding to the video fragment of the original video;
directly taking the intermediate target fragment as an intermediate fragment corresponding to the target video;
and generating an index file corresponding to the target video according to the index address of the middle fragment.
9. The method according to any one of claims 1 to 8, further comprising:
acquiring a starting fragment of the target video;
capturing a screenshot corresponding to a starting time point in the starting and stopping time points from a playing picture corresponding to the starting fragment;
and taking the screenshot as a video cover of the target video, and associating the video cover with the video identifier of the target video.
10. The method of claim 9, further comprising:
acquiring a video query request sent by a terminal;
and responding to the video query request, returning the video cover of the target video to the terminal so that the terminal displays the video cover of the target video in a video playing entry page.
11. A video playback method, comprising:
entering a video playing entry page;
displaying a video cover in the video playing entry page;
triggering a video playing event in the video cover;
responding to the video playing event, acquiring an index file corresponding to a target video linked by the video cover, and acquiring an index address corresponding to a target fragment of the target video according to the index file; the target fragment is determined according to a video fragment which is matched with a start-stop time point in a plurality of video fragments obtained by dividing an original video, wherein the start-stop time point is a time point of the target video at the start-stop position in the original video, and the video fragment is stored as a storage unit of video data; the index file comprises an index address of the video fragment, and the index address is a download address of the video fragment;
and requesting a target fragment according to the index address, and playing the target video according to the target fragment.
12. A video generation apparatus, characterized in that the apparatus comprises:
the system comprises a receiving module, a processing module and a display module, wherein the receiving module is used for receiving a target video request which is used for requesting to generate a target video by utilizing a section of video in an original video;
the analysis module is used for analyzing the target video request to obtain the starting and stopping time points of the target video to be generated in the original video;
the target fragment determining module is used for acquiring an original index file corresponding to the original video, analyzing the original index file to obtain index addresses corresponding to a plurality of video fragments of the original video, and determining a target fragment matched with the start-stop time point from the video fragments of the original video; the original video is divided into a plurality of video fragments in advance, and the video fragments are stored as storage units of video data; the original index file comprises index addresses of the plurality of video fragments, and the index addresses are download addresses of the corresponding video fragments;
the index file generation module is used for generating an index file corresponding to the target video according to the index address of the target fragment, wherein the index file corresponding to the target video records the download address of the video fragment forming the target video;
and the storage module is used for distributing a video identifier for the target video and associating the index file with the video identifier.
13. A video playback apparatus, comprising:
the display module is used for entering a video playing entry page; displaying a video cover in the video playing entry page;
the acquisition module is used for triggering a video playing event in the video cover;
the index address acquisition module is used for responding to the video playing event, acquiring an index file corresponding to a target video linked by the video cover, and acquiring an index address corresponding to a target fragment of the target video according to the index file; the target fragment is determined according to a video fragment which is matched with a start-stop time point in a plurality of video fragments obtained by dividing an original video, wherein the start-stop time point is a time point of the target video at the start-stop position in the original video, and the video fragment is stored as a storage unit of video data; the index file comprises an index address of the video fragment, and the index address is a download address of the video fragment;
and the playing module is used for requesting a target fragment according to the index address and playing the target video according to the target fragment.
14. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 11.
15. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 11.
CN201910598337.XA 2019-07-04 2019-07-04 Video generation method, video playing method, video generation system, video playing device, video storage medium and video equipment Active CN110213672B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910598337.XA CN110213672B (en) 2019-07-04 2019-07-04 Video generation method, video playing method, video generation system, video playing device, video storage medium and video equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910598337.XA CN110213672B (en) 2019-07-04 2019-07-04 Video generation method, video playing method, video generation system, video playing device, video storage medium and video equipment

Publications (2)

Publication Number Publication Date
CN110213672A CN110213672A (en) 2019-09-06
CN110213672B true CN110213672B (en) 2021-06-18

Family

ID=67796132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910598337.XA Active CN110213672B (en) 2019-07-04 2019-07-04 Video generation method, video playing method, video generation system, video playing device, video storage medium and video equipment

Country Status (1)

Country Link
CN (1) CN110213672B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113992942A (en) * 2019-12-05 2022-01-28 腾讯科技(深圳)有限公司 Video splicing method and device and computer storage medium
CN111050217B (en) * 2019-12-18 2021-03-16 珠海格力电器股份有限公司 Video playing method and device
CN111182328B (en) * 2020-02-12 2022-03-25 北京达佳互联信息技术有限公司 Video editing method, device, server, terminal and storage medium
CN111182327B (en) * 2020-02-12 2022-04-22 北京达佳互联信息技术有限公司 Video editing method and device, video distribution server and terminal
CN111369434B (en) * 2020-02-13 2023-08-25 广州酷狗计算机科技有限公司 Method, device, equipment and storage medium for generating spliced video covers
CN111405380B (en) * 2020-04-14 2022-08-05 聚好看科技股份有限公司 Method and device for playing streaming media data
CN111447490A (en) * 2020-05-06 2020-07-24 山东汇贸电子口岸有限公司 Streaming media file processing method and device
CN111757148B (en) * 2020-06-03 2022-11-04 苏宁云计算有限公司 Method, device and system for processing sports event video
CN113810783B (en) * 2020-06-15 2023-08-25 腾讯科技(深圳)有限公司 Rich media file processing method and device, computer equipment and storage medium
CN112218118A (en) * 2020-10-13 2021-01-12 湖南快乐阳光互动娱乐传媒有限公司 Audio and video clipping method and device
CN112104897B (en) * 2020-11-04 2021-03-12 北京达佳互联信息技术有限公司 Video acquisition method, terminal and storage medium
CN114827753B (en) * 2021-01-22 2023-10-27 腾讯科技(北京)有限公司 Video index information generation method and device and computer equipment
CN113742519A (en) * 2021-08-31 2021-12-03 杭州登虹科技有限公司 Multi-object storage cloud video Timeline storage method and system
CN114007102A (en) * 2021-10-28 2022-02-01 深圳市商汤科技有限公司 Video processing method, video processing device, electronic equipment and storage medium
CN114025201A (en) * 2021-10-29 2022-02-08 恒安嘉新(北京)科技股份公司 Video playing method, device, equipment and storage medium
CN116389758A (en) * 2023-04-07 2023-07-04 北京度友信息技术有限公司 Video transcoding method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4720543B2 (en) * 2006-03-01 2011-07-13 ソニー株式会社 Data processing device, data processing method and data processing program, recording medium, and playback device, playback method and playback program
CN104284216B (en) * 2014-10-23 2018-07-13 Tcl集团股份有限公司 A kind of method and its system generating video essence editing
CN106803992B (en) * 2017-02-14 2020-05-22 北京时间股份有限公司 Video editing method and device
CN106921865A (en) * 2017-05-11 2017-07-04 腾讯科技(深圳)有限公司 Method for processing video frequency and device

Also Published As

Publication number Publication date
CN110213672A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110213672B (en) Video generation method, video playing method, video generation system, video playing device, video storage medium and video equipment
CN110381382B (en) Video note generation method and device, storage medium and computer equipment
KR102313471B1 (en) Methods, systems, and media for presenting supplemental information corresponding to on-demand media content
US11748408B2 (en) Analyzing user searches of verbal media content
CN106415546B (en) For the system and method in local detection institute consumer video content
CN112019920B (en) Video recommendation method, device and system and computer equipment
CN109474843B (en) Method for voice control of terminal, client and server
US20150110471A1 (en) Capturing Media Content in Accordance with a Viewer Expression
CN107426603B (en) Video playing method and device
CN110401858B (en) Video playing method and device, electronic equipment and storage medium
CN109660854B (en) Video recommendation method, device, equipment and storage medium
CN107276842B (en) Interface test method and device and electronic equipment
CN110913272A (en) Video playing method and device, computer readable storage medium and computer equipment
CN110891198B (en) Video playing prompt method, multimedia playing prompt method, bullet screen processing method and device
CN110198493B (en) Media data downloading method, device, computer equipment, storage medium and system
CN110958470A (en) Multimedia content processing method, device, medium and electronic equipment
CN111277898A (en) Content pushing method and device
CN110046263B (en) Multimedia recommendation method, device, server and storage medium
CN114449361B (en) Media data playing method and device, readable storage medium and computer equipment
CN111124121A (en) Voice interaction information processing method and device, storage medium and computer equipment
CN102708215B (en) Method and system for processing video
US9531993B1 (en) Dynamic companion online campaign for television content
US8745650B1 (en) Content segment selection based on time-shifted content viewing
US20150304460A1 (en) Method and apparatus for assembling data, and resource propagation system
CN111881357A (en) Information recommendation method and device, electronic equipment and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant