CN113810766B - Video clip combination processing method and system - Google Patents


Info

Publication number
CN113810766B
Authority
CN
China
Prior art keywords
video
processing
segment
segments
video material
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111357911.6A
Other languages
Chinese (zh)
Other versions
CN113810766A (en)
Inventor
郝玉倩
徐晓忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sudian Network Technology Co ltd
Original Assignee
Shenzhen Sudian Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sudian Network Technology Co ltd
Priority to CN202111357911.6A
Publication of CN113810766A
Application granted
Publication of CN113810766B
Legal status: Active
Anticipated expiration legal status noted


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • G10L25/78 Detection of presence or absence of voice signals (G PHYSICS; G10L speech analysis techniques or speech synthesis, speech recognition, speech or voice processing techniques, speech or audio coding or decoding; G10L25/00 speech or voice analysis techniques not restricted to a single one of groups G10L15/00–G10L21/00)
    • H04N21/4394 Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams (under H04N21/439 Processing of audio elementary streams)
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream (under H04N21/44 Processing of video elementary streams)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention relates to the field of video processing and provides a video clip combination processing method and system. Abnormal segments in acquired video material to be processed are identified by analysing the material; the abnormal segments are cut from the video material and stored in an abnormal collection; and the pre-processed videos obtained by the cutting are numbered and stored in a pre-processing collection. The method and system automate the processing of video material and the disposal of its abnormal segments, freeing video editors during that stage to spend more time on creative and artistic work. This addresses the problems that arise when low-quality segments in video material are processed manually: too much time is wasted selecting and deleting meaningless segments, too little is spent on combining video segments and presenting the video's effects, and the final video is consequently of poor quality and updated slowly.

Description

Video clip combination processing method and system
Technical Field
The invention belongs to the field of video processing, and particularly relates to a video clip combination processing method and system.
Background
Self-media (we-media) refers to personal, popularized, generalized and autonomous media: a general term for new media through which individuals deliver normative or non-normative information to an unspecified majority, or to specific individuals, by modern electronic means.
Most self-media accounts are run by individuals who shoot videos themselves and then clip and combine them. Video clipping is the process of remixing added material such as pictures, background music, special effects and scenes with the video, cutting and recombining the video sources, and producing, through this secondary editing, a new video with a different expressive force.
Most self-media authors are not professionals, and a great deal of time is needed from shooting to finished film, especially for individuals who run a self-media account part-time or as a hobby. Many short clips are typically collected during shooting; forgotten lines and frozen expressions occur frequently, requiring repeated retakes (NG takes), and during clipping a large amount of material must be filtered and screened one piece at a time, wasting considerable time. Even for accounts run by a team, updates are frequent and fast turnaround is required; as the number of updates grows, so does the amount of required material, more time is wasted selecting and deleting meaningless segments, less is spent on combining segments and presenting the video, and the final video is consequently of poor quality and updated slowly.
Disclosure of Invention
The embodiments of the invention provide a video clip combination processing method and system, aiming to solve the following problems of manually processing low-quality segments in video material: when there is a large amount of material, too much time is wasted selecting and deleting meaningless segments and too little is spent on combining video segments and presenting the video's effects, so the final video is of poor quality and is updated slowly.
An embodiment of the invention provides a video clip combination processing method comprising the following steps:
acquiring video material to be processed;
analysing the video material to be processed, and identifying abnormal segments in the video material;
cutting the abnormal segments from the video material, and storing the cut abnormal segments in an abnormal collection;
and numbering the pre-processed videos obtained by the cutting, and storing them in a pre-processing collection.
As a modified scheme of the invention, analysing the video material to be processed and identifying abnormal segments in the video material specifically comprises:
analysing the video material to be processed, locating segments whose volume is below a detection decibel value, and marking them as silent segments;
identifying a specific recorded voice and a specific ending voice in the video material, marking the segment before the specific recorded voice as a preparation segment, and marking the segment after the specific ending voice as a closing segment;
and locating interrupted voice in the video material, and marking it as a position to be processed.
As a further improvement of the invention, numbering the pre-processed videos obtained after cutting and storing them in a pre-processing collection specifically comprises:
cutting the silent segments from the video material to obtain a primary processed video;
cutting the preparation segments and closing segments from the primary processed video to obtain a secondary processed video;
marking the positions of interrupted voice in the secondary processed video and numbering the video to obtain a pre-processed video;
and storing the pre-processed videos in the pre-processing collection in numbered order.
As another improvement of the invention, analysing the video material to be processed, locating segments whose volume is below the detection decibel value, and marking them as silent segments specifically comprises:
extracting the audio information from the video material to be processed;
detecting the decibel value of the audio information;
and locating the segments of the audio information whose decibel value is below the detection decibel value, marking them as silent segments, and numbering the silent segments, where the number of each silent segment is associated with the number of the video material in which it is located.
As a further scheme of the invention, identifying the specific recorded voice and the specific ending voice in the video material, marking the segment before the specific recorded voice as a preparation segment, and marking the segment after the specific ending voice as a closing segment specifically comprises:
identifying the specific recorded voice and the specific ending voice in the video material;
locating the position where the specific recorded voice ends and the position where the specific ending voice starts, and marking them as a start mark and an end mark;
and grouping the start marks and end marks in the video material pairwise in sequence from the first start mark to form pairs of cutting marks, associating the start mark and end mark of each group.
As a further scheme of the invention, cutting the abnormal segments from the video material and storing the cut abnormal segments in the abnormal collection specifically comprises:
cutting the silent segments from the video material, inserting each silent segment's number at its cutting position, and marking the cut silent segment with the same number;
and storing the cut silent segments in a silent-segment subset of the abnormal collection.
As an optimisation scheme of the invention, cutting the abnormal segments from the video material and storing the cut abnormal segments in the abnormal collection further comprises:
taking each pair of cutting marks as cutting points, cutting and retaining the video segment between the pair of cutting marks, and numbering the retained video segment;
numbering the preparation segment and closing segment before and after the retained video segment, their numbers being associated with the retained video segment's number;
and storing the preparation segments and closing segments in a preparation subset and a closing subset, respectively, of the abnormal collection.
As another scheme of the invention, after numbering the pre-processed videos obtained after cutting and storing them in the pre-processing collection, the method further comprises:
combining the pre-processed videos in the pre-processing collection in numbered order;
monitoring the memory size of the combined pre-processed videos;
when the memory size of the combined videos would exceed a preset memory value, combining no further pre-processed videos after the current ones, and renumbering the combined videos;
sending the combined videos to a manual end for further processing to obtain deeply processed videos;
and receiving the deeply processed videos from the manual end, arranging them by number, and sending them to an auditing end.
In another aspect, a video clip combination processing system comprises:
a video material acquisition module, used for acquiring video material to be processed;
an abnormal segment identification module, used for analysing the video material to be processed and identifying abnormal segments in the video material;
a cutting and storing module, used for cutting the abnormal segments from the video material and storing the cut abnormal segments in an abnormal collection;
and a numbering and storage module, used for numbering the pre-processed videos obtained after cutting and storing them in a pre-processing collection.
The invention has the following beneficial effects: abnormal segments in the acquired video material to be processed are identified by analysing the material; the abnormal segments are cut from the video material and stored in an abnormal collection; and the pre-processed videos obtained by the cutting are numbered and stored in a pre-processing collection. This automates the processing of video material and the disposal of its abnormal segments, freeing video editors during that stage to spend more time on creative and artistic work, and solving the problems that, when low-quality segments are processed manually, too much time is wasted selecting and deleting meaningless segments, too little is spent on combining segments and presenting the video, and the final video is consequently of poor quality and updated slowly.
Drawings
FIG. 1 is a main flow diagram of a video clip combination processing method;
FIG. 2 is a flow chart of silent segment location in a video clip combination processing method;
FIG. 3 is a flow chart of abnormal segment cutting in a video clip combination processing method;
FIG. 4 is a flow chart of further video processing and review in a video clip combination processing method;
FIG. 5 is a schematic diagram of the internal structure of a video clip combination processing system.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Fig. 1 shows a main flow diagram of a video clip combination processing method according to an embodiment of the present invention, the method comprising:
Step S10: acquiring video material to be processed.
Step S11: analysing the video material to be processed, and identifying abnormal segments in the video material.
Step S12: cutting the abnormal segments from the video material, and storing the cut abnormal segments in an abnormal collection.
Step S13: numbering the pre-processed videos obtained by the cutting, and storing them in a pre-processing collection. This automates the processing of video material and the disposal of its abnormal (i.e. low-quality) segments, freeing video editors during that stage for more creative and artistic work, and solving the problems that, when low-quality segments are processed manually, too much time is wasted selecting and deleting meaningless segments, too little is spent on combining segments and presenting the video, and the final video is consequently of poor quality and updated slowly.
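The main flow above (steps S10–S13) can be sketched as follows. This is a minimal illustration only: the `Segment` data model, the label names, and the idea that analysis has already tagged each span are assumptions for the sketch, not details given by the patent.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds into the source material
    end: float
    label: str    # "normal", or an abnormal kind: "silent", "prep", "closing"

def process_material(segments):
    """Split analysed segments into an abnormal collection (step S12)
    and a numbered pre-processing collection (step S13)."""
    abnormal = [s for s in segments if s.label != "normal"]
    kept = [s for s in segments if s.label == "normal"]
    # Number the retained clips consecutively in order of appearance.
    preprocessed = {i: s for i, s in enumerate(kept, start=1)}
    return abnormal, preprocessed

material = [Segment(0, 3, "prep"), Segment(3, 40, "normal"),
            Segment(40, 42, "silent"), Segment(42, 80, "normal")]
abnormal, preprocessed = process_material(material)
```

Here two abnormal spans are filed away while the two usable takes become pre-processed clips numbered 1 and 2.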
In one case of this embodiment, analysing the video material to be processed and identifying abnormal segments in it specifically comprises:
Step S110: analysing the video material to be processed, locating segments whose volume is below the detection decibel value, and marking them as silent segments. On a recording set, the camera may be on while nothing substantial is being recorded, or the microphone may have been left off during recording, so that the video captures no real environmental sound or speech; this step handles those situations.
Step S111: identifying a specific recorded voice and a specific ending voice in the video material, marking the segment before the specific recorded voice as a preparation segment, and marking the segment after the specific ending voice as a closing segment. The specific recorded voice may be a slate call such as "episode one, scene one, shot one, take one", "20210820, scene one, shot one, take one" or "Action"; the specific ending voice may be "pause", "cut", and so on.
Step S112: locating interrupted voice in the video material and marking it as a position to be processed. Interrupted voice may occur when an actor pauses briefly, for example to compose the required emotion, saying something like "hold on" or "one moment". To prevent such phrases from being confused with an actor's actual lines, these positions are only marked, not processed, and are handled manually later; because the positions are already marked, a human can locate them quickly, saving processing time.
In one case of this embodiment, numbering the pre-processed videos obtained by cutting and saving them to the pre-processing collection specifically comprises:
Step S130: cutting the silent segments from the video material to obtain a primary processed video.
Step S131: cutting the preparation segments and closing segments from the primary processed video to obtain a secondary processed video.
Step S132: marking the positions of interrupted voice in the secondary processed video and numbering the video to obtain a pre-processed video.
Step S133: storing the pre-processed videos in the pre-processing collection in numbered order.
Fig. 2 shows a flowchart of silent segment location in a video clip combination processing method according to an embodiment of the present invention, in which analysing the video material to be processed, locating segments whose volume is below the detection decibel value, and marking them as silent segments specifically comprises:
Step S20: extracting the audio information from the video material to be processed.
Step S21: detecting the decibel value of the audio information.
Step S22: locating the segments of the audio information whose decibel value is below the detection decibel value, marking them as silent segments, and numbering them; the number of each silent segment is associated with the number of the video material in which it is located. Segments with little or no sound are thereby disposed of, and so that a silent clip deleted by mistake can be retrieved quickly, its number is associated with the number of its source video material.
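A minimal sketch of the decibel check in steps S20–S22, assuming the audio has been decoded to floating-point samples normalised to [-1, 1]; the 1024-sample window and the −40 dBFS threshold are illustrative choices, not values fixed by the patent.

```python
import math

def rms_dbfs(samples):
    """RMS level of a window of normalised samples, in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def locate_silent_segments(samples, threshold_db=-40.0, win=1024):
    """Return (start_sample, end_sample) spans whose level stays below threshold_db."""
    silent, run_start = [], None
    for i in range(0, len(samples) - win + 1, win):
        if rms_dbfs(samples[i:i + win]) < threshold_db:
            if run_start is None:
                run_start = i          # a silent run begins here
        elif run_start is not None:
            silent.append((run_start, i))  # run ended at this window boundary
            run_start = None
    if run_start is not None:
        silent.append((run_start, len(samples)))
    return silent
```

For example, two silent windows followed by two loud ones yield a single span covering the first 2048 samples; dividing sample indices by the sample rate converts each span to seconds.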
In one case of this embodiment, identifying the specific recorded voice and the specific ending voice in the video material, marking the segment before the specific recorded voice as a preparation segment, and marking the segment after the specific ending voice as a closing segment specifically comprises:
Step S30: identifying the specific recorded voices and specific ending voices in the video material.
Step S31: locating the position where the specific recorded voice ends and the position where the specific ending voice starts, and marking them as a start mark and an end mark. The content between the start and the end is the effective video content; the remainder is extra footage such as bloopers or on-set activity.
Step S32: grouping the start marks and end marks in the video material pairwise in sequence from the first start mark to form pairs of cutting marks, associating the start mark and end mark of each group.
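The pairwise grouping of step S32 can be sketched as a single pass over time-sorted marks. The `(timestamp, kind)` tuple representation is an assumption made for the sketch.

```python
def pair_cut_marks(marks):
    """Pair start/end marks in order from the first start mark (step S32).

    `marks` is a time-sorted list of (timestamp, kind) with kind "start" or
    "end". Returns a list of (start_time, end_time) cutting-mark pairs;
    unmatched marks (an end before any start, or a trailing start) are ignored.
    """
    pairs, pending_start = [], None
    for t, kind in marks:
        if kind == "start" and pending_start is None:
            pending_start = t
        elif kind == "end" and pending_start is not None:
            pairs.append((pending_start, t))  # associate this group's marks
            pending_start = None
    return pairs
```

A stray end mark at the head of the material (a "cut" heard before any "Action") is simply skipped, which matches the rule of starting from the first start mark.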
In one case of this embodiment, cutting the abnormal segments from the video material and storing them in the abnormal collection specifically comprises:
Step S40: cutting the silent segments from the video material, inserting each silent segment's number at its cutting position, and marking the cut silent segment with the same number.
Step S41: storing the cut silent segments in the silent-segment subset of the abnormal collection.
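Steps S40–S41 can be sketched on a timeline of (start, end) times in seconds. The numbering format (`"V1-S1"`) and the way the shared number is recorded on the kept span are illustrative assumptions; what matters is that the cut position and the cut clip carry the same number so a mistaken deletion can be undone.

```python
def cut_silent_sections(timeline, silent_spans, material_no):
    """Cut silent spans from a clip timeline, leaving a numbered placeholder
    at each cut position (step S40) and filing the cut spans, under the same
    numbers, into the silent-segment subset (step S41)."""
    kept, silent_subset = [], {}
    cursor = timeline[0]
    for i, (s, e) in enumerate(sorted(silent_spans), start=1):
        number = f"{material_no}-S{i}"     # shared number: marker and cut clip
        if s > cursor:
            # The kept span records the number of the cut that follows it.
            kept.append((cursor, s, number))
        silent_subset[number] = (s, e)
        cursor = e
    if cursor < timeline[1]:
        kept.append((cursor, timeline[1], None))
    return kept, silent_subset
```

A 10-second clip with silence at 2–3 s and 6–7 s is reduced to three kept spans, the first two tagged with the numbers of the cuts made after them.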
Fig. 3 shows a flowchart of abnormal segment cutting in a video clip combination processing method according to an embodiment of the present invention, in which cutting the abnormal segments from the video material and storing them in the abnormal collection specifically further comprises:
Step S50: taking each pair of cutting marks as cutting points, cutting and retaining the video segment between the pair of cutting marks, and numbering the retained video segment.
Step S51: numbering the preparation segment and closing segment before and after the retained video segment, their numbers being associated with the retained video segment's number.
Step S52: storing the preparation segments and closing segments in the preparation subset and closing subset, respectively, of the abnormal collection.
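Steps S50–S52 can be sketched as below. One simplifying assumption is made that the patent does not specify: footage lying between two takes is attributed to the earlier take's closing segment, so only the first take gets an explicit preparation entry.

```python
def split_on_cut_marks(clip_end, cut_pairs):
    """Keep the video between each pair of cutting marks (step S50) and file
    the surrounding footage as preparation/closing fragments numbered after
    the kept segment (step S51). Gaps between takes go to the earlier take's
    closing fragment (an assumption of this sketch)."""
    kept, prep, closing = {}, {}, {}
    pairs = sorted(cut_pairs)
    for n, (start, end) in enumerate(pairs, start=1):
        kept[n] = (start, end)
        if n == 1:
            prep[n] = (0.0, start)       # footage before the first start mark
        next_start = pairs[n][0] if n < len(pairs) else clip_end
        closing[n] = (end, next_start)   # footage after this take's end mark
    return kept, prep, closing
```

For a 100-second clip with takes at 10–40 s and 55–90 s, the two takes are numbered 1 and 2, and the surrounding fragments carry the matching numbers, ready to be stored in the preparation and closing subsets.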
Fig. 4 shows a flowchart of further video processing and review in a video clip combination processing method according to an embodiment of the present invention; after the pre-processed videos obtained by cutting are numbered and stored in the pre-processing collection, the method further comprises:
Step S60: combining the pre-processed videos in the pre-processing collection in numbered order.
Step S61: monitoring the memory size of the combined pre-processed videos.
Step S62: when the memory size of the combined videos would exceed the preset memory value, combining no further pre-processed videos after the current ones, and renumbering the combined videos.
Step S63: sending the combined videos to the manual end for further processing, obtaining deeply processed videos.
Step S64: receiving the deeply processed videos from the manual end, arranging them by number, and sending them to the auditing end.
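The batching logic of steps S60–S62 amounts to a greedy size-bounded grouping; a sketch follows, with clip sizes given as a number-to-bytes mapping (an assumed representation). A clip larger than the limit still forms its own batch in this sketch.

```python
def combine_by_size(clips, limit_bytes):
    """Combine numbered pre-processed clips in numbered order until adding the
    next clip would push the batch past the memory limit, then start a new
    batch (steps S60-S62). `clips` maps clip number -> size in bytes."""
    batches, current, current_size = [], [], 0
    for number in sorted(clips):
        size = clips[number]
        if current and current_size + size > limit_bytes:
            batches.append(current)          # close the batch at the limit
            current, current_size = [], 0
        current.append(number)
        current_size += size
    if current:
        batches.append(current)
    # Renumber the combined batches consecutively (the "numbered again" step).
    return {i: batch for i, batch in enumerate(batches, start=1)}
```

With clips of 40, 50, 30 and 60 units and a limit of 100, clips 1–2 form batch 1 (90 units) and clips 3–4 form batch 2, each batch being what is sent on to the manual end.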
In one case of this embodiment, switching time points between different shots in the pre-processed video are identified, and a transition marker is inserted at each switching time point, so that the joins between sub-shots can be found quickly during later manual editing and a suitable transition effect inserted.
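One common way to flag such switching time points, offered here only as an illustrative sketch since the patent names no particular technique, is to compare successive frame colour histograms and mark frames where the distance jumps.

```python
def find_shot_changes(frame_histograms, threshold=0.5):
    """Flag switching points between sub-shots by comparing successive frame
    histograms; a transition marker would be inserted at each flagged index.
    Each histogram is a list of equal length with entries summing to 1."""
    changes = []
    for i in range(1, len(frame_histograms)):
        prev, cur = frame_histograms[i - 1], frame_histograms[i]
        # L1 distance between normalised histograms lies in [0, 2].
        dist = sum(abs(a - b) for a, b in zip(prev, cur))
        if dist > threshold:
            changes.append(i)
    return changes
```

The threshold of 0.5 is an assumed tuning value; a production system would also smooth over gradual transitions such as fades.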
In one case of this embodiment, shots in the pre-processed video that move from a distant view to a close one, or from a close view to a distant one, are identified, and their positions are marked as shots to be scored. Alternation between close and distant shots signals emotion or fluctuation in the plot; such positions deserve special attention, for example by adding background music or emphasising character detail.
In addition, the emotionally weighty parts of the storyline can be entered in advance according to the performance scenes, and the videos of those scenes stored separately for focused manual processing. In scenes of delicate emotion, the characters can be highlighted with a virtual background, details enlarged, and the shot slowed down.
The method may additionally judge whether any of the video materials does not belong to the storyline, by determining whether the people appearing in the videos are related and whether the picture style is uniform. It may further judge whether the picture and dialogue are continuous and whether the picture shakes or is disordered, and so on, allowing further automatic deep processing of the video material.
Fig. 5 is a schematic diagram showing the internal structure of a video clip combination processing system according to an embodiment of the present invention, the system comprising:
a video material acquisition module 100, configured to acquire video material to be processed;
an abnormal segment identification module 200, configured to analyse the video material to be processed and identify abnormal segments in the video material;
a cutting and storing module 300, configured to cut the abnormal segments from the video material and store the cut abnormal segments in the abnormal collection;
and a numbering and storage module 400, configured to number the pre-processed videos obtained after cutting and store them in the pre-processing collection.
For the above method and system to run successfully, the system may, in addition to the modules described above, include more or fewer components than those described, combine certain components, or use different components, for example input/output devices, network access devices, buses, processors, memories and the like.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in sequence as indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the various embodiments may comprise multiple sub-steps or stages, which need not be performed at the same time but may be performed at different times, and whose order of performance is not necessarily sequential: they may be performed in turn, or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The technical features of the embodiments described above may be combined arbitrarily; for brevity, not all possible combinations are described, but any combination of these technical features should be considered within the scope of this specification so long as it contains no contradiction.
The above embodiments represent only some preferred embodiments of the present invention, and while their description is specific and detailed, it is not to be construed as limiting the scope of the invention. It should be noted that a person skilled in the art may make several variations and modifications without departing from the inventive concept, all of which fall within the scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (4)

1. A video clip combination processing method, the method comprising:
acquiring video material to be processed;
analyzing the video material to be processed and identifying abnormal segments in the video material;
cutting the abnormal segments from the video material and storing the cut abnormal segments into an abnormal collection;
numbering the pre-processed videos obtained after cutting and storing them into a pre-processing collection;
wherein analyzing the video material to be processed and identifying abnormal segments in the video material specifically comprises:
analyzing the video material to be processed, locating segments whose volume is below a detection decibel value, and marking them as silent segments;
identifying a specific recording-start voice and a specific ending voice in the video material, marking the segment before the recording-start voice as a preparation segment, and marking the segment after the ending voice as a closing segment;
locating interrupted voice in the video material and marking it as a position to be processed;
wherein analyzing the video material to be processed, locating segments whose volume is below the detection decibel value, and marking them as silent segments specifically comprises:
extracting audio information from the video material to be processed;
detecting decibel values of the audio information;
locating segments of the audio information whose decibel value is below the detection decibel value, marking them as silent segments, and numbering the silent segments, wherein the number of each silent segment is associated with the number of the video material in which it is located;
wherein cutting the abnormal segments from the video material and storing the cut abnormal segments into the abnormal collection specifically comprises:
cutting the silent segments from the video material, inserting each silent-segment number at its cutting position, and marking the cut silent segment with the same number;
storing the cut silent segments into a silent-segment sub-collection of the abnormal collection;
wherein numbering the pre-processed videos obtained after cutting and storing them into the pre-processing collection specifically comprises:
cutting the silent segments from the video material to obtain a primary processed video;
cutting the preparation segment and the closing segment from the primary processed video to obtain a secondary processed video;
marking the positions of interrupted voice in the secondary processed video and numbering the video to obtain a pre-processed video;
and storing the pre-processed videos into the pre-processing collection in number order.
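The silence-detection step of claim 1 can be illustrated with a minimal sketch. It assumes the audio track has already been extracted as an array of full-scale PCM samples in [-1, 1]; the names `detect_silent_segments`, `DETECTION_DB`, and `WINDOW`, and the specific threshold, are illustrative and not taken from the patent.

```python
import numpy as np

DETECTION_DB = -40.0   # assumed detection decibel value
WINDOW = 1024          # samples per analysis window

def detect_silent_segments(samples: np.ndarray, material_no: int):
    """Return numbered sample spans whose level falls below DETECTION_DB,
    each associated with the number of the video material it belongs to."""
    segments, start = [], None
    n_windows = len(samples) // WINDOW
    for i in range(n_windows):
        frame = samples[i * WINDOW:(i + 1) * WINDOW].astype(np.float64)
        rms = np.sqrt(np.mean(frame ** 2)) + 1e-12   # avoid log of zero
        db = 20.0 * np.log10(rms)                    # dBFS for samples in [-1, 1]
        if db < DETECTION_DB:
            if start is None:          # a silent run begins
                start = i * WINDOW
        elif start is not None:        # a silent run ends
            segments.append((start, i * WINDOW))
            start = None
    if start is not None:              # material ends while still silent
        segments.append((start, n_windows * WINDOW))
    # number each silent segment; the id embeds the material number
    return [{"id": f"{material_no}-{k}", "span": span}
            for k, span in enumerate(segments, 1)]
```

For example, a signal that is loud, then silent, then loud again yields a single numbered segment covering the silent span.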
2. The video clip combination processing method of claim 1, wherein identifying the specific recording-start voice and the specific ending voice in the video material, marking the segment before the recording-start voice as a preparation segment, and marking the segment after the ending voice as a closing segment specifically comprises:
identifying the specific recording-start voice and the specific ending voice in the video material;
locating the position where the recording-start voice ends and the position where the ending voice begins, and marking them as a start mark and an end mark, respectively;
and grouping the start marks and end marks in the video material pairwise in sequence, beginning from the first start mark, to form pairs of cutting marks, and associating each start mark with the end mark of its group.
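The pairwise grouping step of claim 2 can be sketched as follows. The function name and the use of plain timestamps are illustrative assumptions; the patent does not prescribe a data representation.

```python
def pair_cut_marks(start_marks, end_marks):
    """Group start marks and end marks in sequence, beginning from the first
    start mark, into associated pairs of cutting marks."""
    pairs = []
    for group_no, (s, e) in enumerate(zip(sorted(start_marks),
                                          sorted(end_marks)), 1):
        if e <= s:
            # a well-formed recording has its end mark after its start mark
            raise ValueError(f"end mark {e} precedes start mark {s}")
        pairs.append({"group": group_no, "start": s, "end": e})
    return pairs
```

Each resulting pair delimits one retained video segment, as used by claim 3.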
3. The video clip combination processing method of claim 2, wherein cutting the abnormal segments from the video material and storing the cut abnormal segments into the abnormal collection specifically comprises:
using each pair of cutting marks as cutting points, cutting and retaining the video segment between the pair of cutting marks, and numbering the retained video segment;
numbering the preparation segment and the closing segment before and after the retained video segment, wherein the preparation segment and the closing segment are associated with the number of the retained video segment;
and storing the preparation segment and the closing segment into a preparation sub-collection and a closing sub-collection of the abnormal collection, respectively.
4. The video clip combination processing method of any of claims 1 to 3, wherein after numbering the pre-processed videos obtained after cutting and storing them into the pre-processing collection, the method further comprises:
combining the pre-processed videos in the pre-processing collection in number order;
monitoring the memory size of the combined pre-processed videos;
when the memory size of the combined pre-processed videos exceeds a preset memory value, combining no further pre-processed videos after the current ones, and renumbering the combined pre-processed video;
sending the combined pre-processed video to a manual terminal for further processing to obtain a deep-processed video;
and receiving the deep-processed videos processed by the manual terminal, arranging them by number, and sending them to an auditing terminal.
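The memory-limited combination step of claim 4 can be sketched as a simple batching routine. The size unit (MB), the preset limit, and the function name are assumptions for illustration; the patent only specifies that combining stops once the preset memory value would be exceeded and that the combined videos are renumbered.

```python
def combine_by_memory(videos, limit_mb=500):
    """videos: iterable of (number, size_mb). Combine pre-processed videos in
    number order; close a batch before it would exceed limit_mb, then
    renumber the combined batches sequentially."""
    batches, current, current_size = [], [], 0
    for number, size in sorted(videos):        # combine in number order
        if current and current_size + size > limit_mb:
            batches.append(current)            # stop combining into this batch
            current, current_size = [], 0
        current.append(number)
        current_size += size
    if current:
        batches.append(current)
    # renumber the combined videos for the manual-processing stage
    return {new_no: batch for new_no, batch in enumerate(batches, 1)}
```

For instance, with a 500 MB limit, four videos of 200, 200, 200, and 100 MB are combined into two renumbered batches.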
CN202111357911.6A 2021-11-17 2021-11-17 Video clip combination processing method and system Active CN113810766B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111357911.6A CN113810766B (en) 2021-11-17 2021-11-17 Video clip combination processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111357911.6A CN113810766B (en) 2021-11-17 2021-11-17 Video clip combination processing method and system

Publications (2)

Publication Number Publication Date
CN113810766A CN113810766A (en) 2021-12-17
CN113810766B true CN113810766B (en) 2022-02-08

Family

ID=78898656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111357911.6A Active CN113810766B (en) 2021-11-17 2021-11-17 Video clip combination processing method and system

Country Status (1)

Country Link
CN (1) CN113810766B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115767174A (en) * 2022-10-31 2023-03-07 上海卓越睿新数码科技股份有限公司 Online video editing method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537523A (en) * 2014-12-27 2015-04-22 宁波江东远通计算机有限公司 Method, device and system for finding mail deleted by mistake
WO2020119508A1 (en) * 2018-12-14 2020-06-18 深圳壹账通智能科技有限公司 Video cutting method and apparatus, computer device and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6697564B1 (en) * 2000-03-03 2004-02-24 Siemens Corporate Research, Inc. Method and system for video browsing and editing by employing audio
GB2404299A (en) * 2003-07-24 2005-01-26 Hewlett Packard Development Co Method and apparatus for reviewing video
US20050050578A1 (en) * 2003-08-29 2005-03-03 Sony Corporation And Sony Electronics Inc. Preference based program deletion in a PVR
CN109429093B (en) * 2017-08-31 2022-08-19 中兴通讯股份有限公司 Video editing method and terminal
CN107888988A (en) * 2017-11-17 2018-04-06 广东小天才科技有限公司 Video editing method and electronic equipment
CN108848411B (en) * 2018-08-01 2020-09-25 夏颖 System and method for defining program boundaries and advertisement boundaries based on audio signal waveforms
CN110611846A (en) * 2019-09-18 2019-12-24 安徽石轩文化科技有限公司 Automatic short video editing method
CN113052085B (en) * 2021-03-26 2024-08-13 新东方教育科技集团有限公司 Video editing method, device, electronic equipment and storage medium
CN113613068A (en) * 2021-08-03 2021-11-05 北京字跳网络技术有限公司 Video processing method and device, electronic equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537523A (en) * 2014-12-27 2015-04-22 宁波江东远通计算机有限公司 Method, device and system for finding mail deleted by mistake
WO2020119508A1 (en) * 2018-12-14 2020-06-18 深圳壹账通智能科技有限公司 Video cutting method and apparatus, computer device and storage medium

Also Published As

Publication number Publication date
CN113810766A (en) 2021-12-17

Similar Documents

Publication Publication Date Title
CN111866585B (en) Video processing method and device
US11605229B2 (en) Inmate tracking system in a controlled environment
CN110611841B (en) Integration method, terminal and readable storage medium
EP1742471A1 (en) Imaging device and imaging system
CN113810766B (en) Video clip combination processing method and system
CN104994404A (en) Method and device for obtaining keywords for video
CN105159959A (en) Image file processing method and system
CN110795597A (en) Video keyword determination method, video retrieval method, video keyword determination device, video retrieval device, storage medium and terminal
Hanjalic et al. Semiautomatic news analysis, indexing, and classification system based on topic preselection
JP2003256432A5 (en)
US8134592B2 (en) Associating device
EP2259581A3 (en) Method and apparatus for recording and searching audio/video signal
JP2006311462A (en) Apparatus and method for retrieval contents
US20100169248A1 (en) Content division position determination device, content viewing control device, and program
CN114782879B (en) Video identification method and device, computer equipment and storage medium
US8896708B2 (en) Systems and methods for determining, storing, and using metadata for video media content
KR101783872B1 (en) Video Search System and Method thereof
KR102437857B1 (en) Apparatus and method for providing video retrieval service based on speech to text
US10915715B2 (en) System and method for identifying and tagging assets within an AV file
CN105472407A (en) Automatic video index and alignment method based on continuous image features
CN114900713B (en) Video clip processing method and system
CN112749133A (en) Case information management method, system, device and readable storage medium
KR101722831B1 (en) Device and method for contents production of the device
KR20010004400A (en) Segmentation of acoustic scences in audio/video materials
CN116781990A (en) Text-to-speech conversion method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant