CN112218118A - Audio and video clipping method and device - Google Patents

Audio and video clipping method and device

Info

Publication number
CN112218118A
Authority
CN
China
Prior art keywords
audio
video
file
point
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011091151.4A
Other languages
Chinese (zh)
Inventor
吴坚强
谭嵩
罗准
张文兵
冯斌
张银
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan MgtvCom Interactive Entertainment Media Co Ltd
Original Assignee
Hunan MgtvCom Interactive Entertainment Media Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan MgtvCom Interactive Entertainment Media Co Ltd
Priority to CN202011091151.4A
Publication of CN112218118A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses an audio and video clipping method and device. An audio and video file to be clipped and its in-out point data are acquired; the file is segmented according to the HLS protocol specification; the segment serial number of the audio and video segment file in which the in-point time is located is determined as a first clipping segment serial number, and the segment serial number of the segment file in which the out-point time is located as a second clipping segment serial number; then, taking the position of the in-point time within the segment file corresponding to the first clipping segment serial number as a starting point and the position of the out-point time within the segment file corresponding to the second clipping segment serial number as an end point, all audio and video segment files between the starting point and the end point are merged to obtain the clipped target audio and video file. Because an audio and video segment file agreed in the HLS protocol is generally about 10 s long, at most the longest single-segment duration of the current HLS index file needs to be re-encoded, so the audio and video file is clipped accurately while the clipping time is reduced and the clipping efficiency is improved.

Description

Audio and video clipping method and device
Technical Field
The invention relates to the technical field of video processing, in particular to an audio and video clipping method and device.
Background
Traditional audio and video clipping is mainly based on a C/S architecture and works as follows: audio and video nonlinear editing system software is installed on the user's client; after the audio and video to be clipped are imported into the system, a corresponding index file (which generally records key frame data) is generated in the background; when the user submits the clipping in-out point data (that is, a starting time point and an ending time point), the clipping function is started, and the nonlinear editing system clips the audio and video to be clipped according to the in-out point data to obtain the audio and video fragment between the in and out points. Although clipping based on the C/S architecture is relatively accurate, the nonlinear editing system must decode and re-encode the audio and video content required for the clip, and this encoding and decoding consumes CPU (Central Processing Unit) and memory resources, so the clipping efficiency is low and the audio and video quality is degraded. In addition, because existing nonlinear editing software is generally a stand-alone program, it places high demands on the user's computer configuration and requires a highly configured workstation to deploy and run, resulting in a high computer resource overhead.
With the adoption of cloud PaaS platforms, a number of cloud clipping systems based on a B/S architecture have appeared on the market; these systems mainly copy, directly from the audio and video to be clipped, the audio and video segments in which the in and out points are located. Compared with clipping based on the C/S architecture, the B/S-based method improves clipping efficiency and has a relatively low computer overhead. However, when the cloud platform obtains the audio and video clip, the segments are only copied; the content around the in and out points is not decoded and re-encoded but is simply discarded in order to achieve fast clipping. Therefore, accurate clipping of the audio and video cannot be realized, and applications requiring high clipping accuracy cannot be met.
Disclosure of Invention
In view of this, the invention discloses an audio and video clipping method and device, so as to realize accurate clipping of audio and video files, greatly reduce the time consumption of audio and video clipping, and improve the clipping efficiency.
An audio and video clipping method comprises the following steps:
acquiring an audio and video file to be cut and the input and output point data of the audio and video file to be cut;
segmenting the audio and video files to be cut according to HLS protocol specifications to obtain segment sequence numbers and segment durations of each audio and video segmented file;
according to the in-out point data of the audio and video file to be cut, determining the segment sequence number of the audio and video segmented file where the in-point time is located, recording it as a first cut segment sequence number, and the segment sequence number of the audio and video segmented file where the out-point time is located, recording it as a second cut segment sequence number;
and combining all complete audio and video segmentation files and incomplete audio and video segmentation files between the starting point and the end point by taking the position of the point-in time in the audio and video segmentation file corresponding to the first cutting segment serial number as a starting point and the position of the point-out time in the audio and video segmentation file corresponding to the second cutting segment serial number as an end point to obtain a cut target audio and video file.
Optionally, the start time D(n) of each audio/video segment file within the audio/video file to be clipped is calculated as:
D(n) = Σ_{i=k}^{n-1} Dur_i
where k is the lower bound of the audio/video segment sequence numbers, n is the upper bound of the audio/video segment sequence numbers, and Dur_i denotes the duration of the i-th audio/video segment; that is, the start time of segment n is the sum of the durations of all preceding segments.
Optionally, the determining, according to the data of the entry and exit points of the audio and video file to be clipped, a segment sequence number of the audio and video segment file where the entry point time is located, and recording as a first clipped segment sequence number, and a segment sequence number of the audio and video segment file where the exit point time is located, and recording as a second clipped segment sequence number specifically include:
comparing the point-in time with each audio/video segmented file D (n), and determining the first cutting segment sequence number corresponding to the audio/video segmented file where the point-in time is located;
and comparing the point-out time with each audio and video segmented file D (n), and determining the second cutting segment sequence number corresponding to the audio and video segmented file where the point-out time is located.
Optionally, the step of merging all complete audio/video segment files and incomplete audio/video segment files between the starting point and the ending point by using the position of the point-in time in the audio/video segment file corresponding to the first clip segment sequence number as the starting point and the position of the point-out time in the audio/video segment file corresponding to the second clip segment sequence number as the ending point to obtain a clipped target audio/video file specifically includes:
recoding the in-point audio and video segmented file corresponding to the in-point time to obtain a first cut audio and video segmented file, and recording a storage path of the first cut audio and video segmented file, wherein the in-point audio and video segmented file is as follows: the audio and video segmented file takes the position of the in-point time in the audio and video segmented file corresponding to the first cutting segment serial number as a starting point and takes the end position of the audio and video segmented file corresponding to the first cutting segment serial number as an end point;
recoding the out-point audio and video segment file corresponding to the out-point time to obtain a second clipped audio and video segment file, and recording a storage path of the second clipped audio and video segment file, wherein the out-point audio and video segment file is: the audio and video segment file that takes the starting position of the audio and video segment file corresponding to the second clipping segment serial number as a starting point and the position of the out-point time in the audio and video segment file corresponding to the second clipping segment serial number as an end point;
judging whether a complete audio and video segmented file exists between the point-in time and the point-out time of the audio and video file to be cut;
if not, combining the first cut audio and video segmented file and the second cut audio and video segmented file to obtain a cut target audio and video file;
if so, recording storage paths of all audio and video segmented files between the point-in time and the point-out time;
and combining the first cut audio and video segmented file, the second cut audio and video segmented file and all the audio and video segmented files between the point-in time and the point-out time to obtain a cut target audio and video file.
Optionally, when the point-in time and the point-out time are on the same segmented file, only the sequence number of the first cutting segment is determined, the sequence number of the second cutting segment is set to be null, and only the audio and video content between the point-in point and the point-out point is recoded to obtain the cut target audio and video file.
An audio and video clipping device comprising:
an acquisition unit, used for acquiring an audio and video file to be cut and the in-out point data of the audio and video file to be cut;
the segmenting unit is used for segmenting the audio and video files to be cut according to the HLS protocol specification to obtain the segment sequence number and the segment duration of each audio and video segmented file;
a sequence number determining unit, configured to determine, according to the data of the entry and exit points of the audio and video file to be clipped, a segment sequence number of an audio and video segment file where the entry point time is located, which is recorded as a first clipping segment sequence number, and a segment sequence number of an audio and video segment file where the exit point time is located, which is recorded as a second clipping segment sequence number;
and the merging unit is used for merging all complete audio and video segmentation files and incomplete audio and video segmentation files between the starting point and the end point by taking the position of the point-in time in the audio and video segmentation file corresponding to the first cutting segment serial number as a starting point and the position of the point-out time in the audio and video segmentation file corresponding to the second cutting segment serial number as an end point to obtain a cut target audio and video file.
Optionally, the start time D(n) of each audio/video segment file within the audio/video file to be clipped is calculated as:
D(n) = Σ_{i=k}^{n-1} Dur_i
where k is the lower bound of the audio/video segment sequence numbers, n is the upper bound of the audio/video segment sequence numbers, and Dur_i denotes the duration of the i-th audio/video segment; that is, the start time of segment n is the sum of the durations of all preceding segments.
Optionally, the sequence number determining unit specifically includes:
the first sequence number determining subunit is configured to compare the point-in time with each of the audio/video segment files d (n), and determine a sequence number of the first clip segment corresponding to the audio/video segment file in which the point-in time is located;
and the second sequence number determining subunit is used for comparing the point-out time with each audio/video segmented file D (n) and determining the sequence number of the second cutting segment corresponding to the audio/video segmented file where the point-out time is located.
Optionally, the merging unit specifically includes:
the first recoding subunit is used for recoding the in-point audio and video segmented file corresponding to the in-point time to obtain a first cutting audio and video segmented file and recording a storage path of the first cutting audio and video segmented file, wherein the in-point audio and video segmented file is as follows: the audio and video segmented file takes the position of the in-point time in the audio and video segmented file corresponding to the first cutting segment serial number as a starting point and takes the end position of the audio and video segmented file corresponding to the first cutting segment serial number as an end point;
the second recoding subunit is configured to recode the audio and video segment file of the outgoing point corresponding to the outgoing point time to obtain a second clipped audio and video segment file, and record a storage path of the second clipped audio and video segment file, where the audio and video segment file of the outgoing point is: the audio and video segmentation file takes the initial position of the audio and video segmentation file corresponding to the second cutting segment serial number as a starting point and takes the position of the out-point time in the audio and video segmentation file corresponding to the second cutting segment serial number as an end point;
the judging subunit is used for judging whether a complete audio and video segmented file exists between the point-in time and the point-out time of the audio and video file to be cut;
the first merging subunit is used for merging the first cut audio and video segmented file and the second cut audio and video segmented file to obtain a cut target audio and video file in the case that the judging subunit judges that no complete audio and video segmented file exists;
the recording subunit is used for recording the storage paths of all audio and video segmented files between the point-in time and the point-out time in the case that the judging subunit judges that a complete audio and video segmented file exists;
and the second merging subunit is used for merging the first clipped audio and video segmented file, the second clipped audio and video segmented file and all the audio and video segmented files between the point-in time and the point-out time to obtain a clipped target audio and video file.
Optionally, the merging unit is further configured to:
and when the point-in time and the point-out time are on the same segmented file, only determining the sequence number of the first cutting segment, setting the sequence number of the second cutting segment to be null, and only recoding the audio and video contents between the point-in and the point-out to obtain the cut target audio and video file.
According to the technical scheme, the audio and video clipping method and device obtain the audio and video file to be clipped and its in-out point data, segment the audio and video file to be clipped according to the HLS protocol specification to obtain the segment serial number and segment duration of each audio and video segment file, determine, according to the in-out point data, the segment serial number of the segment file in which the in-point time is located (recorded as a first clipping segment serial number) and the segment serial number of the segment file in which the out-point time is located (recorded as a second clipping segment serial number), and then, taking the position of the in-point time within the segment file corresponding to the first clipping segment serial number as a starting point and the position of the out-point time within the segment file corresponding to the second clipping segment serial number as an end point, merge all complete and incomplete audio and video segment files between the starting point and the end point to obtain the clipped target audio and video file. Because an audio and video segment file agreed in the HLS protocol is generally about 10 s long, the re-encoding time is at most the longest single-segment duration of the current HLS index file. Therefore, the invention can realize accurate clipping of audio and video files, greatly reduce the time consumed by clipping, and improve the clipping efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the disclosed drawings without creative efforts.
Fig. 1 is a flow chart of an audio and video clipping method disclosed by the embodiment of the invention;
FIG. 2 is a diagram illustrating an example of an HLS protocol specification according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for obtaining a clipped target audio/video file based on an in-point time and an out-point time according to an embodiment of the present invention;
fig. 4 is a schematic diagram of audio and video clipping disclosed in the embodiment of the present invention;
fig. 5 is a schematic structural diagram of an audio and video clipping device disclosed in the embodiment of the present invention;
fig. 6 is a schematic structural diagram of a merging unit according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses an audio and video clipping method and device. The audio and video file to be clipped and its in-out point data are acquired; the file is segmented according to the HLS protocol specification to obtain the segment serial number and segment duration of each audio and video segment file; according to the in-out point data, the segment serial number of the segment file in which the in-point time is located is determined and recorded as a first clipping segment serial number, and the segment serial number of the segment file in which the out-point time is located is recorded as a second clipping segment serial number; then, taking the position of the in-point time within the segment file corresponding to the first clipping segment serial number as a starting point and the position of the out-point time within the segment file corresponding to the second clipping segment serial number as an end point, all complete and incomplete audio and video segment files between the starting point and the end point are merged to obtain the clipped target audio and video file. Because an audio and video segment file agreed in the HLS protocol is generally about 10 s long, the re-encoding time is at most the longest single-segment duration of the current HLS index file. Therefore, the invention can realize accurate clipping of audio and video files, greatly reduce the time consumed by clipping, and improve the clipping efficiency.
Referring to fig. 1, an embodiment of the present invention discloses a flowchart of an audio and video clipping method, where the method is applied to a server, and the method includes:
s101, acquiring an audio and video file to be cut and the access point data of the audio and video file to be cut;
the in-out point data of the audio and video file to be cut is as follows: and the audio and video to be cut in the audio and video to be cut have a starting time point and an ending time point.
Specifically, in practical application, a user can upload an audio-video file to be cut and the entry-exit point data of the audio-video file to be cut through a browser of a client, and the client encodes the audio-video file to be cut and the entry-exit point data of the audio-video file to be cut and uploads the encoded data to a server.
In this embodiment, the audio/video file to be cut is provided in the form of a URL (Uniform Resource Locator).
Step S102, segmenting the audio and video files to be cut according to HLS protocol specifications to obtain segment serial numbers and segment duration of each audio and video segmented file;
it can be understood that the audio and video file to be clipped sent to the server by the client is in a compressed packet form, so that the audio and video file to be clipped needs to be analyzed before the audio and video file to be clipped is segmented, and the analyzed audio and video file to be clipped is obtained.
HLS (HTTP Live Streaming) is an HTTP-based streaming media network transmission protocol proposed by Apple Inc., and is currently a mainstream playback protocol for audio and video websites on the Internet. The HLS protocol is characterized in that continuous large media files such as mp4 and ts are segmented into a large number of ts (transport stream) fragments for transmission, and the client requests and downloads the ts fragments to realize smooth video playback. In the HLS protocol specification, each ts fragment has an #EXTINF field identifying the duration of the fragment, and the ts fragments are ordered. Each ts fragment therefore carries two pieces of information: a fragment sequence number and a fragment duration.
Referring to the HLS protocol specification example shown in fig. 2, the format is as follows:
the first line is the fixed string "#EXTM3U";
the second line, #EXT-X-VERSION, indicates the version number and is followed by a number representing the version; for example, the number "3" in fig. 2 represents version 3;
the third line, #EXT-X-TARGETDURATION, indicates the maximum ts fragment duration (as an integer) under the HLS protocol; for example, this value is 10 in fig. 2;
the fourth line, #EXTINF, indicates the duration of a ts fragment, such as 9.009 for the ts fragment in fig. 2;
the fifth line is the storage URI (Uniform Resource Identifier) of the ts fragment; the fourth and fifth lines repeat until the end of the audio and video;
the last line, #EXT-X-ENDLIST, marks the end of the audio/video; the absence of this tag indicates a live stream.
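As a concrete illustration of the segment information described above (a minimal sketch, not part of the original disclosure; the function name and sample playlist are assumptions for demonstration), the following Python code parses an m3u8 index into (segment sequence number, segment duration, URI) records, which is the per-segment data used in step S102:

```python
# Illustrative sketch only (not part of the original disclosure): parse an
# HLS (m3u8) index into (segment sequence number, duration, URI) records,
# i.e. the per-segment information described for step S102.
from typing import List, Tuple

def parse_m3u8(playlist_text: str) -> List[Tuple[int, float, str]]:
    segments = []
    duration = None
    seq = 0
    for line in playlist_text.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF:"):
            # "#EXTINF:9.009," -> 9.009 (segment duration in seconds)
            duration = float(line[len("#EXTINF:"):].split(",")[0])
        elif line and not line.startswith("#") and duration is not None:
            # The non-tag line following #EXTINF is the ts segment URI
            segments.append((seq, duration, line))
            seq += 1
            duration = None
    return segments

if __name__ == "__main__":
    sample = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXTINF:9.009,
0.ts
#EXTINF:9.009,
1.ts
#EXT-X-ENDLIST"""
    for seq, dur, uri in parse_m3u8(sample):
        print(seq, dur, uri)   # e.g. "0 9.009 0.ts"
```

On a fig. 2 style playlist this prints one record per ts fragment, giving exactly the sequence numbers and durations used in the later steps.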
Step S103, determining the segment serial number of the audio and video segmented file where the point-in time is located according to the point-in and point-out data of the audio and video file to be cut, and recording the segment serial number as a first cut segment serial number, and recording the segment serial number of the audio and video segmented file where the point-out time is located as a second cut segment serial number;
Specifically, the in-point time T(start) is compared with each audio/video segmented file, and the segment serial number corresponding to the audio/video segmented file in which the in-point time is located is determined and recorded as a first clipping segment serial number;
and the out-point time T(end) is compared with each audio/video segmented file, and the segment serial number corresponding to the audio/video segmented file in which the out-point time is located is determined and recorded as a second clipping segment serial number.
The starting time D(n) of each audio and video segmented file within the audio and video file to be cut is calculated according to formula (1):
D(n) = Σ_{i=k}^{n-1} Dur_i        (1)
where k is the lower bound of the audio/video segment sequence numbers, n is the upper bound of the audio/video segment sequence numbers, and Dur_i denotes the duration of the i-th audio/video segment; that is, the start time of segment n is the sum of the durations of all segments preceding it.
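The formula and the comparisons of step S103 can be sketched as follows. This is a minimal illustration assuming, as in the reconstruction above, that D(n) is the cumulative sum of the durations of the segments preceding segment n, with segments numbered from 0; all names and values are illustrative rather than taken from the patent:

```python
# Illustrative sketch only, assuming D(n) is the cumulative sum of the
# durations of the segments preceding segment n (one reading of formula (1)),
# with segments numbered from 0.
from bisect import bisect_right
from typing import List

def start_times(durations: List[float]) -> List[float]:
    d, t = [], 0.0
    for dur in durations:
        d.append(t)          # D(n): start time of segment n
        t += dur
    return d

def clip_segment_number(d: List[float], point_time: float) -> int:
    # Largest n such that D(n) <= point_time: the segment containing the point.
    return bisect_right(d, point_time) - 1

durations = [9.009, 9.009, 9.009, 9.009, 9.009]   # from the #EXTINF tags
d = start_times(durations)                        # [0.0, 9.009, 18.018, ...]
t_start, t_end = 12.0, 30.5                       # example in/out point times
first_clip_seg = clip_segment_number(d, t_start)  # -> 1
second_clip_seg = clip_segment_number(d, t_end)   # -> 3
print(first_clip_seg, second_clip_seg)
```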
And step S104, taking the position of the point-in time in the audio and video segmented file corresponding to the first cutting segment serial number as a starting point, taking the position of the point-out time in the audio and video segmented file corresponding to the second cutting segment serial number as an end point, and combining all complete audio and video segmented files and incomplete audio and video segmented files between the starting point and the end point to obtain a cut target audio and video file.
In summary, the audio/video clipping method disclosed by the invention obtains the audio and video file to be clipped and its in-out point data, segments the file according to the HLS protocol specification to obtain the segment serial number and segment duration of each audio and video segment file, determines, according to the in-out point data, the segment serial number of the segment file in which the in-point time is located as a first clipping segment serial number and the segment serial number of the segment file in which the out-point time is located as a second clipping segment serial number, and, taking the position of the in-point time within the segment file corresponding to the first clipping segment serial number as a starting point and the position of the out-point time within the segment file corresponding to the second clipping segment serial number as an end point, merges all complete and incomplete audio and video segment files between the starting point and the end point to obtain the clipped target audio and video file. Because an audio and video segment file agreed in the HLS protocol is generally about 10 s long, the re-encoding time is at most the longest single-segment duration of the current HLS index file. Therefore, the invention can realize accurate clipping of audio and video files, greatly reduce the time consumed by clipping, and improve the clipping efficiency.
It should be particularly noted that, in the embodiment shown in fig. 1, when the in-point time and the out-point time fall within the same segment file, only the first clipping segment sequence number is determined and the second clipping segment sequence number is set to null, and only the audio and video content between the in-point and the out-point is re-encoded to obtain the clipped target audio and video file.
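A minimal sketch of this same-segment branch, under the same cumulative start-time assumption as the earlier sketch (all names are illustrative and not the patent's own implementation):

```python
# Illustrative sketch of the same-segment special case: if the in-point and
# out-point land in the same segment, keep only the first clipping segment
# number and set the second to None (null); only the content between the two
# points would then be re-encoded.
from bisect import bisect_right
from typing import List, Optional, Tuple

def locate_clip_segments(d: List[float], t_start: float,
                         t_end: float) -> Tuple[int, Optional[int]]:
    first = bisect_right(d, t_start) - 1
    second = bisect_right(d, t_end) - 1
    if first == second:
        return first, None   # same segment: second clip segment number is null
    return first, second

d = [0.0, 9.009, 18.018, 27.027]             # segment start times D(n)
print(locate_clip_segments(d, 10.0, 15.0))   # both in segment 1 -> (1, None)
```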
In order to further optimize the above embodiment, referring to fig. 3, a flowchart of a method for obtaining a clipped target audio/video file based on an in-point time and an out-point time disclosed in the embodiment of the present invention, that is, step S104 in the embodiment shown in fig. 1 may specifically include the following steps:
step S201, recoding an in-point audio and video segmented file corresponding to an in-point time to obtain a first cut audio and video segmented file, and recording a storage path of the first cut audio and video segmented file;
Wherein the in-point audio and video segment file corresponding to the in-point time is: the audio and video segment file that takes the position of the in-point time within the segment file corresponding to the first clipping segment serial number as its starting point and the end of the segment file corresponding to the first clipping segment serial number as its end point.
Step S202, recoding the out-point audio and video segment file corresponding to the out-point time to obtain a second clipped audio and video segment file, and recording a storage path of the second clipped audio and video segment file;
the out-point audio and video segment file corresponding to the out-point time is: the audio and video segment file that takes the starting position of the segment file corresponding to the second clipping segment serial number as its starting point and the position of the out-point time within the segment file corresponding to the second clipping segment serial number as its end point.
Step S203, judging whether a complete audio and video segmented file exists between the point-in time and the point-out time of the audio and video file to be cut, if not, executing step S204, and if so, executing step S205;
and step S204, merging the first cut audio and video segmented file and the second cut audio and video segmented file to obtain a cut target audio and video file.
Step S205, recording the storage paths of all audio and video segmented files between the point-in time and the point-out time of the audio and video file to be cut;
and step S206, merging the first cut audio and video segmented file, the second cut audio and video segmented file and all audio and video segmented files between the point-in time and the point-out time of the audio and video file to be cut to obtain a cut target audio and video file.
For example, referring to the audio and video clipping schematic diagram shown in fig. 4, the audio and video segment file that takes the position of the in-point time within the segment file corresponding to 1.ts as its starting point and the end of the segment file corresponding to 1.ts as its end point is taken as the in-point audio and video segment file, and the in-point audio and video segment file is re-encoded to obtain the first clipped audio and video segment file.
The audio and video segment file that takes the start of the segment file corresponding to 4.ts as its starting point and the position of the out-point time within the segment file corresponding to 4.ts as its end point is taken as the out-point audio and video segment file, and the out-point audio and video segment file is re-encoded to obtain the second clipped audio and video segment file.
The complete audio and video segment files between 1.ts and 4.ts are the segment files corresponding to 2.ts and 3.ts.
The first clipped audio and video segment file, the second clipped audio and video segment file, and the segment files corresponding to 2.ts and 3.ts are merged to obtain the clipped target audio and video file.
The ts segment numbers in the 5th, 7th and 9th lines of the example in fig. 4 start from 0 with a common difference of 1, forming the arithmetic progression S(n).
It should be particularly noted that, in the actual execution process, the step of obtaining the first audio/video segment file in step S201, the step of obtaining the second audio/video segment file in step S202, and the step of recording all storage paths of the audio/video segment files between the point-in time and the point-out time of the audio/video file to be clipped in step S205 are not limited to the execution sequence shown in fig. 3, and these three steps may be adjusted according to actual needs.
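Before turning to the device embodiment, the fig. 4 walkthrough can be sketched end to end as follows. This is a hedged illustration only, not the patent's implementation: it assumes an ffmpeg binary is available on the PATH, uses the illustrative segment names 1.ts–4.ts from fig. 4, and takes as inputs the in-point/out-point offsets within their respective boundary segments.

```python
# Illustrative sketch only (assumes an ffmpeg binary on PATH; segment names
# 1.ts-4.ts follow the fig. 4 example and are not the patent's implementation).
# Only the two boundary segments are re-encoded; 2.ts and 3.ts are copied.
import os
import subprocess
import tempfile

def run(cmd):
    subprocess.run(cmd, check=True)

def clip_and_merge(in_offset: float, out_offset: float, output: str = "target.ts"):
    # First clipped segment: from the in-point offset to the end of 1.ts (re-encoded).
    run(["ffmpeg", "-y", "-ss", str(in_offset), "-i", "1.ts",
         "-c:v", "libx264", "-c:a", "aac", "first_cut.ts"])
    # Second clipped segment: from the start of 4.ts up to the out-point offset (re-encoded).
    run(["ffmpeg", "-y", "-i", "4.ts", "-t", str(out_offset),
         "-c:v", "libx264", "-c:a", "aac", "second_cut.ts"])
    # Merge: boundary cuts plus the untouched complete middle segments.
    parts = ["first_cut.ts", "2.ts", "3.ts", "second_cut.ts"]
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.writelines(f"file '{os.path.abspath(p)}'\n" for p in parts)
        list_path = f.name
    run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", list_path, "-c", "copy", output])

clip_and_merge(in_offset=3.2, out_offset=5.0)
```

Only the two boundary segments are re-encoded while the complete middle segments are copied as-is, which is what keeps the worst-case re-encoding time bounded by a single segment duration; in practice the re-encoded boundary pieces would need codec parameters compatible with the copied segments for the stream-copy concatenation to be valid, a detail this sketch ignores.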
Corresponding to the embodiment of the method, the invention also discloses an audio and video cutting device.
Referring to fig. 5, a schematic structural diagram of an audio and video clipping device disclosed in the embodiment of the present invention includes:
an obtaining unit 301, configured to obtain an audio/video file to be cut and entry/exit point data of the audio/video file to be cut;
the in-out point data of the audio and video file to be cut is as follows: and the audio and video to be cut in the audio and video to be cut have a starting time point and an ending time point.
Specifically, in practical application, a user can upload an audio-video file to be cut and the entry-exit point data of the audio-video file to be cut through a browser of a client, and the client encodes the audio-video file to be cut and the entry-exit point data of the audio-video file to be cut and uploads the encoded data to a server.
In this embodiment, the audio/video file to be cut is provided in the form of a URL (Uniform Resource Locator).
A segmenting unit 302, configured to segment the audio and video file to be cut according to the HLS protocol specification to obtain a segment sequence number and a segment duration of each audio and video segmented file;
it can be understood that the audio and video file to be clipped sent to the server by the client is in a compressed packet form, so that the audio and video file to be clipped needs to be analyzed before the audio and video file to be clipped is segmented, and the analyzed audio and video file to be clipped is obtained.
A sequence number determining unit 303, configured to determine, according to the data of the entry and exit points of the audio and video file to be clipped, a segment sequence number of the audio and video segment file where the entry point time is located, which is recorded as a first clipped segment sequence number, and a segment sequence number of the audio and video segment file where the exit point time is located, which is recorded as a second clipped segment sequence number;
specifically, the sequence number determination unit 303 includes: a first sequence number determining subunit and a second sequence number determining subunit;
the first sequence number determining subunit is used for comparing the in-point time T(start) with each audio/video segmented file, determining the segment sequence number corresponding to the audio/video segmented file in which the in-point time is located, and recording it as a first cutting segment sequence number;
and the second sequence number determining subunit is used for comparing the out-point time T(end) with each audio/video segmented file, determining the segment sequence number corresponding to the audio/video segmented file in which the out-point time is located, and recording it as a second cutting segment sequence number.
The starting time D(n) of each audio and video segmented file within the audio and video file to be cut is calculated according to formula (1):
D(n) = Σ_{i=k}^{n-1} Dur_i        (1)
where k is the lower bound of the audio/video segment sequence numbers, n is the upper bound of the audio/video segment sequence numbers, and Dur_i denotes the duration of the i-th audio/video segment; that is, the start time of segment n is the sum of the durations of all segments preceding it.
And the merging unit 304 is configured to merge all complete audio/video segment files and incomplete audio/video segment files between the starting point and the ending point by taking the position of the point-in time in the audio/video segment file corresponding to the first clip segment sequence number as a starting point and the position of the point-out time in the audio/video segment file corresponding to the second clip segment sequence number as an ending point, so as to obtain a clipped target audio/video file.
In summary, the audio/video clipping device disclosed by the invention obtains the audio and video file to be clipped and its in-out point data, segments the file according to the HLS protocol specification to obtain the segment serial number and segment duration of each audio and video segment file, determines, according to the in-out point data, the segment serial number of the segment file in which the in-point time is located as a first clipping segment serial number and the segment serial number of the segment file in which the out-point time is located as a second clipping segment serial number, and, taking the position of the in-point time within the segment file corresponding to the first clipping segment serial number as a starting point and the position of the out-point time within the segment file corresponding to the second clipping segment serial number as an end point, merges all complete and incomplete audio and video segment files between the starting point and the end point to obtain the clipped target audio and video file. Because an audio and video segment file agreed in the HLS protocol is generally about 10 s long, the re-encoding time is at most the longest single-segment duration of the current HLS index file. Therefore, the invention can realize accurate clipping of audio and video files, greatly reduce the time consumed by clipping, and improve the clipping efficiency.
It should be particularly noted that, when the point-in time and the point-out time are on the same segment file, only the sequence number of the first cutting segment is determined, the sequence number of the second cutting segment is set to be null, and only the audio and video content between the point-in point and the point-out point is recoded to obtain the cut target audio and video file.
Thus, the merging unit 304 may also be configured to:
and when the point-in time and the point-out time are on the same segmented file, only determining the sequence number of the first cutting segment, setting the sequence number of the second cutting segment to be null, and only recoding the audio and video contents between the point-in and the point-out to obtain the cut target audio and video file.
For further optimizing the above embodiment, referring to fig. 6, a schematic structural diagram of a merging unit disclosed in the embodiment of the present invention, the merging unit may specifically include:
a first recoding subunit 401, configured to recode the in-point audio/video segment file corresponding to the in-point time to obtain a first clipped audio/video segment file, and record a storage path of the first clipped audio/video segment file, where the in-point audio/video segment file is: the audio and video segmented file takes the position of the in-point time in the audio and video segmented file corresponding to the first cutting segment serial number as a starting point and takes the end position of the audio and video segmented file corresponding to the first cutting segment serial number as an end point;
a second recoding subunit 402, configured to recode the audio and video segment file of the departure point corresponding to the departure point time to obtain a second clipped audio and video segment file, and record a storage path of the second clipped audio and video segment file, where the departure point audio and video segment file is: the audio and video segmentation file takes the initial position of the audio and video segmentation file corresponding to the second cutting segment serial number as a starting point and takes the position of the out-point time in the audio and video segmentation file corresponding to the second cutting segment serial number as an end point;
a judging subunit 403, configured to judge whether a complete audio/video segment file exists between the point-in time and the point-out time of the audio/video file to be cut;
a first merging subunit 404, configured to merge the first clipped audio/video segment file and the second clipped audio/video segment file to obtain a clipped target audio/video file when the determining subunit 403 determines that no complete audio/video segment file exists;
a recording subunit 405, configured to record the storage paths of all audio/video segment files between the point-in time and the point-out time when the determining subunit 403 determines that a complete audio/video segment file exists;
and a second merging subunit 406, configured to merge the first clipped audio/video segment file, the second clipped audio/video segment file, and all the audio/video segment files between the point-in time and the point-out time to obtain a clipped target audio/video file.
For example, referring to the audio and video clipping schematic diagram shown in fig. 4, the audio and video segment file that takes the position of the in-point time within the segment file corresponding to 1.ts as its starting point and the end of the segment file corresponding to 1.ts as its end point is taken as the in-point audio and video segment file, and the in-point audio and video segment file is re-encoded to obtain the first clipped audio and video segment file.
The audio and video segment file that takes the start of the segment file corresponding to 4.ts as its starting point and the position of the out-point time within the segment file corresponding to 4.ts as its end point is taken as the out-point audio and video segment file, and the out-point audio and video segment file is re-encoded to obtain the second clipped audio and video segment file.
The complete audio and video segment files between 1.ts and 4.ts are the segment files corresponding to 2.ts and 3.ts.
The first clipped audio and video segment file, the second clipped audio and video segment file, and the segment files corresponding to 2.ts and 3.ts are merged to obtain the clipped target audio and video file.
The ts segment numbers in the 5th, 7th and 9th lines of the example in fig. 4 start from 0 with a common difference of 1, forming the arithmetic progression S(n).
It should be noted that, in the device embodiment, please refer to the corresponding part of the method embodiment for the working principle of each component, which is not described herein again.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An audio and video clipping method is characterized by comprising the following steps:
acquiring an audio and video file to be cut and the input and output point data of the audio and video file to be cut;
segmenting the audio and video files to be cut according to HLS protocol specifications to obtain segment sequence numbers and segment durations of each audio and video segmented file;
according to the in-out point data of the audio and video file to be cut, determining the segment sequence number of the audio and video segmented file where the in-point time is located, recording it as a first cut segment sequence number, and the segment sequence number of the audio and video segmented file where the out-point time is located, recording it as a second cut segment sequence number;
and combining all complete audio and video segmentation files and incomplete audio and video segmentation files between the starting point and the end point by taking the position of the point-in time in the audio and video segmentation file corresponding to the first cutting segment serial number as a starting point and the position of the point-out time in the audio and video segmentation file corresponding to the second cutting segment serial number as an end point to obtain a cut target audio and video file.
2. The audio/video clipping method according to claim 1, wherein the start time D(n) of each audio/video segment file within the audio/video file to be clipped is calculated as:
D(n) = Σ_{i=k}^{n-1} Dur_i
where k is the lower bound of the audio/video segment sequence numbers, n is the upper bound of the audio/video segment sequence numbers, and Dur_i denotes the duration of the i-th audio/video segment.
3. The audio and video clipping method according to claim 1, wherein the determining, according to the entry and exit point data of the audio and video file to be clipped, a segment sequence number of the audio and video segment file where the entry point time is located is recorded as a first clipping segment sequence number, and a segment sequence number of the audio and video segment file where the exit point time is located is recorded as a second clipping segment sequence number, specifically includes:
comparing the point-in time with each audio/video segmented file D (n), and determining the first cutting segment sequence number corresponding to the audio/video segmented file where the point-in time is located;
and comparing the point-out time with each audio and video segmented file D (n), and determining the second cutting segment sequence number corresponding to the audio and video segmented file where the point-out time is located.
4. The audio and video clipping method according to claim 1, wherein the step of merging all complete audio and video segment files and incomplete audio and video segment files between the starting point and the ending point by using the position of the point-in time in the audio and video segment file corresponding to the first clipping segment sequence number as a starting point and the position of the point-out time in the audio and video segment file corresponding to the second clipping segment sequence number as an ending point to obtain a clipped target audio and video file specifically comprises:
recoding the in-point audio and video segmented file corresponding to the in-point time to obtain a first cut audio and video segmented file, and recording a storage path of the first cut audio and video segmented file, wherein the in-point audio and video segmented file is as follows: the audio and video segmented file takes the position of the in-point time in the audio and video segmented file corresponding to the first cutting segment serial number as a starting point and takes the end position of the audio and video segmented file corresponding to the first cutting segment serial number as an end point;
recoding the out-point audio and video segment file corresponding to the out-point time to obtain a second clipped audio and video segment file, and recording a storage path of the second clipped audio and video segment file, wherein the out-point audio and video segment file is: the audio and video segment file that takes the starting position of the audio and video segment file corresponding to the second clipping segment serial number as a starting point and the position of the out-point time in the audio and video segment file corresponding to the second clipping segment serial number as an end point;
judging whether a complete audio and video segmented file exists between the point-in time and the point-out time of the audio and video file to be cut;
if not, combining the first cut audio and video segmented file and the second cut audio and video segmented file to obtain a cut target audio and video file;
if so, recording storage paths of all audio and video segmented files between the point-in time and the point-out time;
and combining the first cut audio and video segmented file, the second cut audio and video segmented file and all the audio and video segmented files between the point-in time and the point-out time to obtain a cut target audio and video file.
5. The audio and video clipping method according to claim 1, wherein when the point-in time and the point-out time are on the same segment file, only the sequence number of the first clipping segment is determined, and the sequence number of the second clipping segment is set to be empty, and only the audio and video content between the point-in point and the point-out point is re-encoded, so as to obtain the clipped target audio and video file.
6. An audio and video clipping device, comprising:
an acquisition unit, used for acquiring an audio and video file to be cut and the in-out point data of the audio and video file to be cut;
the segmenting unit is used for segmenting the audio and video files to be cut according to the HLS protocol specification to obtain the segment sequence number and the segment duration of each audio and video segmented file;
a sequence number determining unit, configured to determine, according to the data of the entry and exit points of the audio and video file to be clipped, a segment sequence number of an audio and video segment file where the entry point time is located, which is recorded as a first clipping segment sequence number, and a segment sequence number of an audio and video segment file where the exit point time is located, which is recorded as a second clipping segment sequence number;
and the merging unit is used for merging all complete audio and video segmentation files and incomplete audio and video segmentation files between the starting point and the end point by taking the position of the point-in time in the audio and video segmentation file corresponding to the first cutting segment serial number as a starting point and the position of the point-out time in the audio and video segmentation file corresponding to the second cutting segment serial number as an end point to obtain a cut target audio and video file.
7. The audio-video clipping device according to claim 6, wherein the start time D(n) of each audio/video segment file within the audio/video file to be clipped is calculated as:
D(n) = Σ_{i=k}^{n-1} Dur_i
where k is the lower bound of the audio/video segment sequence numbers, n is the upper bound of the audio/video segment sequence numbers, and Dur_i denotes the duration of the i-th audio/video segment.
8. The audio/video clipping device according to claim 6, wherein the sequence number determining unit specifically includes:
the first sequence number determining subunit is configured to compare the point-in time with each of the audio/video segment files d (n), and determine a sequence number of the first clip segment corresponding to the audio/video segment file in which the point-in time is located;
and the second sequence number determining subunit is used for comparing the point-out time with each audio/video segmented file D (n) and determining the sequence number of the second cutting segment corresponding to the audio/video segmented file where the point-out time is located.
9. The audio and video clipping device according to claim 6, wherein the merging unit specifically comprises:
a first re-encoding subunit, configured to re-encode the in-point audio and video segment file corresponding to the in-point time to obtain a first clipped audio and video segment file and record its storage path, wherein the in-point audio and video segment file is the portion of the audio and video segment file corresponding to the first clipping segment sequence number that starts at the position of the in-point time within that segment file and ends at the end of that segment file;
a second re-encoding subunit, configured to re-encode the out-point audio and video segment file corresponding to the out-point time to obtain a second clipped audio and video segment file and record its storage path, wherein the out-point audio and video segment file is the portion of the audio and video segment file corresponding to the second clipping segment sequence number that starts at the beginning of that segment file and ends at the position of the out-point time within that segment file;
a judging subunit, configured to judge whether any complete audio and video segment file exists between the in-point time and the out-point time of the audio and video file to be clipped;
a first merging subunit, configured to, when the judging subunit determines that no complete segment file exists, merge the first clipped audio and video segment file and the second clipped audio and video segment file to obtain the clipped target audio and video file;
a recording subunit, configured to, when the judging subunit determines that complete segment files do exist, record the storage paths of all audio and video segment files between the in-point time and the out-point time;
and a second merging subunit, configured to merge the first clipped audio and video segment file, the second clipped audio and video segment file, and all the audio and video segment files between the in-point time and the out-point time to obtain the clipped target audio and video file.
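A sketch of this merging flow, assuming ffmpeg handles the partial re-encodes of the in-point and out-point segments and the final concatenation; the helper names, codec choices and file names are illustrative assumptions, not the patent's implementation:

# Illustrative only: re-encode the partial in-point and out-point segments,
# leave the complete middle segments untouched, then concatenate everything.
import subprocess
from typing import List, Optional

def reencode_part(seg: str, out: str, start: Optional[float], end: Optional[float]) -> str:
    cmd = ["ffmpeg", "-i", seg]
    if start is not None:
        cmd += ["-ss", str(start)]       # keep from the in-point offset onwards
    if end is not None:
        cmd += ["-to", str(end)]         # stop at the out-point offset
    cmd += ["-c:v", "libx264", "-c:a", "aac", out]
    subprocess.run(cmd, check=True)
    return out

def merge(first_part: str, middle_segs: List[str], second_part: str, target: str) -> None:
    # Complete middle segments are copied as-is via ffmpeg's concat demuxer.
    with open("concat.txt", "w") as f:
        for p in [first_part, *middle_segs, second_part]:
            f.write(f"file '{p}'\n")
    subprocess.run(["ffmpeg", "-f", "concat", "-safe", "0", "-i", "concat.txt",
                    "-c", "copy", target], check=True)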
10. The audio and video clipping device according to claim 6, wherein the merging unit is further configured to:
when the in-point time and the out-point time fall within the same segment file, determine only the first clipping segment sequence number, set the second clipping segment sequence number to empty, and re-encode only the audio and video content between the in-point and the out-point to obtain the clipped target audio and video file.
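For this same-segment case only one segment needs re-encoding; a minimal sketch under the same ffmpeg assumption as above, with hypothetical names:

# Illustrative only: both points inside the same ~10 s segment, so a single
# short re-encode of that segment yields the target file directly.
import subprocess

def clip_single_segment(seg: str, in_off: float, out_off: float, target: str) -> None:
    # in_off / out_off are the offsets of the in-point and out-point within this segment
    subprocess.run(
        ["ffmpeg", "-i", seg, "-ss", str(in_off), "-to", str(out_off),
         "-c:v", "libx264", "-c:a", "aac", target],
        check=True,
    )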
CN202011091151.4A 2020-10-13 2020-10-13 Audio and video clipping method and device Pending CN112218118A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011091151.4A CN112218118A (en) 2020-10-13 2020-10-13 Audio and video clipping method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011091151.4A CN112218118A (en) 2020-10-13 2020-10-13 Audio and video clipping method and device

Publications (1)

Publication Number Publication Date
CN112218118A true CN112218118A (en) 2021-01-12

Family

ID=74053909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011091151.4A Pending CN112218118A (en) 2020-10-13 2020-10-13 Audio and video clipping method and device

Country Status (1)

Country Link
CN (1) CN112218118A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170278543A1 (en) * 2016-03-22 2017-09-28 Verizon Patent And Licensing Inc. Speedy clipping
CN110213672A (en) * 2019-07-04 2019-09-06 腾讯科技(深圳)有限公司 Video generation, playback method, system, device, storage medium and equipment
CN110381382A (en) * 2019-07-23 2019-10-25 腾讯科技(深圳)有限公司 Video takes down notes generation method, device, storage medium and computer equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022173676A1 (en) * 2021-02-11 2022-08-18 Loom, Inc. Instant video trimming and stitching and associated methods and systems

Similar Documents

Publication Publication Date Title
WO2017092336A1 (en) Streaming media processing method and apparatus
US8972374B2 (en) Content acquisition system and method of implementation
US9491225B2 (en) Offline download method and system
US10237583B1 (en) Execution of cases based on barcodes in video feeds
WO2017080428A1 (en) Streaming media channel recording, reviewing method, device, server and storage medium
CN110149529B (en) Media information processing method, server and storage medium
CN110113626B (en) Method and device for playing back live video
CN108924630B (en) Method for displaying cache progress and playing device
CN108769830B (en) Method for caching video and related equipment
CN108924606B (en) Streaming media processing method and device, storage medium and electronic device
CN106899879B (en) Multimedia data processing method and device
CN114629929B (en) Log recording method, device and system
CN112218118A (en) Audio and video clipping method and device
CN107241618B (en) Recording method and recording apparatus
JP5387860B2 (en) Content topicality determination system, method and program thereof
CN109587517B (en) Multimedia file playing method and device, server and storage medium
RU2530671C1 (en) Checking method of web pages for content in them of target audio and/or video (av) content of real time
CN116415032A (en) Video file reading and storing method and device
CN114630143B (en) Video stream storage method, device, electronic equipment and storage medium
CN114417055A (en) Video playing method and device, computer equipment and storage medium
CN108228829B (en) Method and apparatus for generating information
CN104023278B (en) Streaming medium data processing method and electronic equipment
CN114845076B (en) Video data processing method, system, equipment and medium
US20150026147A1 (en) Method and system for searches of digital content
CN106411975B (en) Data output method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210112)