CN111432138B - Video splicing method and device, computer readable medium and electronic equipment - Google Patents


Info

Publication number
CN111432138B
CN111432138B (application CN202010184000.7A)
Authority
CN
China
Prior art keywords
video
splicing
image
target
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010184000.7A
Other languages
Chinese (zh)
Other versions
CN111432138A (en)
Inventor
陈标
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010184000.7A
Publication of CN111432138A
Application granted
Publication of CN111432138B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present disclosure relates to the field of video processing technologies and, in particular, to a video splicing method, a video splicing apparatus, a computer-readable medium, and an electronic device. The method comprises the following steps: in response to receiving a path to be processed, classifying the video files included in the path to be processed according to a preset classification strategy to obtain at least one splicing material group; in response to a selection operation on at least one splicing material group, selecting target splicing materials from the at least one splicing material group; and splicing the target splicing materials according to a preset splicing strategy to obtain a final video file. The method and the device can automatically classify the video files in the path to be processed, so that a user can conveniently select videos of the same type for splicing, which reduces the sense of incongruity of the spliced video; at the same time, the efficiency of video splicing is improved, making video splicing convenient even for non-professional users.

Description

Video splicing method and device, computer readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of video processing technologies and, in particular, to a video splicing method, a video splicing apparatus, a computer-readable medium, and an electronic device.
Background
Video splicing refers to the process of splicing multiple video files, or multiple segments of one video, into a single video. In the related art, a plurality of videos are directly selected and spliced in a user-defined or random order to obtain the final spliced video.
When multiple videos are spliced without any preprocessing, videos of different types may be merged together, and videos with strong mutual contrast may end up adjacent to one another, so that the spliced video feels mismatched. To reduce this sense of incongruity, the user has to repeatedly play, cut, and trim the videos to be spliced, and also add music, filters, transition effects, and the like to each part of the spliced video. Such a splicing method requires a large number of manual editing operations, makes video splicing inefficient, and is very unfriendly to non-professional users.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to a video splicing method, a video splicing apparatus, a computer-readable medium, and an electronic device, so as to improve, at least to some extent, the degree of automation of video splicing, reduce the editing operations required of the user, and improve splicing efficiency and quality.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a video splicing method, including: in response to receiving a path to be processed, classifying the video files included in the path to be processed according to a preset classification strategy to obtain at least one splicing material group;
in response to a selection operation on at least one splicing material group, selecting target splicing materials from the at least one splicing material group;
and splicing the target splicing materials according to a preset splicing strategy to obtain a final video file.
According to a second aspect of the present disclosure, there is provided a video splicing device, comprising:
the material processing module is used for responding to the received path to be processed, classifying the video files in the path to be processed according to a preset classification strategy so as to obtain at least one spliced material group;
the material determining module is used for responding to the selection operation of at least one splicing material group and selecting target splicing materials in the at least one splicing material group;
and the material splicing module is used for splicing the target spliced material according to a preset splicing strategy so as to obtain a final video file.
According to a third aspect of the present disclosure, there is provided a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the video splicing method described above.
According to a fourth aspect of the present disclosure, there is provided an electronic apparatus, comprising:
a processor; and
a memory for storing one or more programs which, when executed by the processor, cause the processor to implement the video splicing method described above.
In the video splicing method provided by the embodiments of the present disclosure, splicing material groups of different types can be obtained by classifying the video files in the path to be processed; target splicing materials are then selected from a splicing material group according to a selection operation on that group, and the target splicing materials are spliced according to a preset splicing strategy to obtain the final video file. The technical solution of the embodiments of the present disclosure can automatically classify the video files included in the path to be processed, so that a user can conveniently select videos of the same type for splicing, which reduces the sense of incongruity of the spliced video; at the same time, because the videos to be spliced are classified automatically, the user is spared from editing them manually one by one, the efficiency of video splicing is improved, and video splicing becomes convenient for non-professional users.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 schematically illustrates a flow chart of a video splicing method in an exemplary embodiment of the present disclosure;
fig. 2 schematically illustrates a flowchart of a method for classifying video files included in a path to be processed according to a preset classification policy in an exemplary embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart of a method of classifying video files according to a segment classification policy based on image tags in an exemplary embodiment of the present disclosure;
fig. 4 is a flowchart schematically illustrating a method for dividing a video file into at least one video segment according to a segment division policy based on an image tag and determining a segment tag corresponding to the video segment in an exemplary embodiment of the present disclosure;
FIG. 5 is a flow diagram schematically illustrating a method for segmenting a video file according to a segment segmentation policy based on a tag sequence to obtain at least one video segment and a corresponding segment tag in an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow chart of a method of classifying the video file based on the image tag in an exemplary embodiment of the disclosure;
fig. 7 schematically illustrates a flowchart of a method of adding a preset effect to each spliced material group in an exemplary embodiment of the present disclosure;
fig. 8 schematically illustrates a composition diagram of a video splicing device in an exemplary embodiment of the present disclosure;
fig. 9 schematically shows a structural diagram of a computer system of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In view of the above-mentioned drawbacks and deficiencies of the prior art, this exemplary embodiment provides a video splicing method, which can be applied to terminal devices such as mobile phones, tablet computers, and digital cameras.
Fig. 1 shows a flow of the present exemplary embodiment, which may include the following steps S110 to S130:
step S110, in response to receiving the path to be processed, classifying the video files included in the path to be processed according to a preset classification strategy so as to obtain at least one spliced material group.
In an example embodiment of the present disclosure, after the path to be processed is received, all video files in the path to be processed may be classified according to a preset classification policy to obtain at least one splicing material group, where the materials in each splicing material group are of the same type.
In an example embodiment of the present disclosure, the preset classification policy may include a duration threshold and a segment classification policy. At this time, classifying the video files included in the path to be processed according to a preset classification policy, as shown in fig. 2, may include the following steps S210 to S230:
step S210, scanning and identifying the video file to obtain an image tag corresponding to each frame of image in the video file.
In an example embodiment of the present disclosure, before a video file is classified, the image tag corresponding to each frame of image in the video file needs to be acquired, so that the video file can be classified according to these image tags. Therefore, the video file may first be scanned and identified to obtain the image tag corresponding to each frame of image. The scanning and identification may be performed by a machine learning model or a neural network model, which outputs the image tag corresponding to each frame.
It should be noted that the image tags may be set according to different splicing environments. For example, in the splicing environment of an ordinary user, the image tags may include person, animal, landscape, place, and the like; in the splicing environment of a professional user, the image tags may be set in more detail, for example, landscape may be further refined into urban landscape, natural landscape, and so on. The present disclosure does not particularly limit how the image tags are set.
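The disclosure does not fix a concrete implementation for this scanning step. The following is a minimal sketch in Python, assuming OpenCV (cv2) is available for decoding and that classify_frame stands in for whatever machine learning or neural network model produces a per-frame tag; both names are illustrative assumptions, not part of the patent:

```python
import cv2  # OpenCV, assumed available for video decoding


def extract_frame_tags(video_path, classify_frame):
    """Step S210: scan a video file and return one image tag per frame,
    in playback order. classify_frame is a hypothetical model callable
    that maps a decoded frame to a tag such as "person" or "landscape"."""
    tags = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream
            break
        tags.append(classify_frame(frame))
    cap.release()
    return tags
```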
Step S220, when the duration of the video file is greater than the duration threshold, classifying the video file according to the segment classification strategy based on the image tags.
In an example embodiment of the present disclosure, a longer video file may contain multiple types of content at once (for example, people, pets, and scenery may all be captured in the same video), so different classification methods can be employed for videos of different lengths.
In an example embodiment of the present disclosure, a duration threshold may be set to determine the classification policy of a video file according to its duration. Specifically, when the duration of the video file is greater than a duration threshold, a segment classification strategy is selected, and the video file is classified based on the image tags corresponding to the frames of images in the video file.
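Expressed in code, the threshold dispatch of steps S220/S230 might read as follows. This is a sketch under the same assumptions as above; the 60-second threshold is purely illustrative, and segment_by_tags and file_tag_of are helpers sketched later in this description:

```python
def classify_video_file(video_path, classify_frame, fps,
                        duration_threshold=60.0):
    """Steps S220/S230: choose a classification policy by duration."""
    tags = extract_frame_tags(video_path, classify_frame)
    duration = len(tags) / fps  # duration reconstructed from frame count
    if duration > duration_threshold:
        # Long video: split it into typed segments (segment classification).
        return segment_by_tags(tags)
    # Short video: the whole file is classified under a single file tag.
    return file_tag_of(tags)
```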
In an example embodiment of the present disclosure, the segment classification policy may include a segment segmentation policy. At this time, referring to fig. 3, classifying the video file according to the segment classification policy based on the image tag may include the following steps S310 to S320:
step S310, based on the image label, dividing the video file into at least one video segment according to the segment division strategy, and determining a segment label corresponding to the video segment.
In an example embodiment of the present disclosure, a video file may be divided into video segments according to a segment division policy based on image tags corresponding to frames of images in the video file, and then a corresponding segment tag may be determined for each video segment. For example, referring to fig. 4, dividing the video file into at least one video segment according to the segment division policy based on the image tag, and determining a segment tag corresponding to the video segment may include the following steps S410 to S420:
and S410, sequencing the image labels according to the sequence of each frame of image in the video file to obtain a label sequence.
In an example embodiment of the present disclosure, the image tags may be sorted according to the order in which the frames appear in the video file, yielding a group of sequentially arranged image tags, that is, a tag sequence. For example, if the image tags of the first 3 frames of a video file are person, animal, and landscape respectively, the tag sequence obtained from those 3 frames is (person, animal, landscape).
Step S420, based on the tag sequence, segmenting the video file according to a segment segmentation policy to obtain at least one video segment and a corresponding segment tag.
In an example embodiment of the present disclosure, the segment segmentation policy may include a preset number and a preset proportion. For example, referring to fig. 5, segmenting the video file according to the segment segmentation policy based on the tag sequence to obtain at least one video segment and a corresponding segment tag may include the following steps S510 to S520:
step S510, intercepting a target tag sequence from the tag sequence, and determining a corresponding portion in the video file as a video clip according to the target tag sequence.
In an example embodiment of the present disclosure, the target tag sequence includes a preset number of image tags, and the proportion of the target image tag among that preset number of image tags is greater than or equal to a preset proportion. The preset number and the preset proportion can be set differently according to different requirements; in general, the preset number can be set to a relatively small value so that the video file can be segmented, while the preset proportion should be set to a relatively large value so as to ensure that the video segment corresponding to the target tag sequence does not include multiple types of video. For example, if the preset number is 1000 and the preset proportion is 70%, a target tag sequence is a run of 1000 consecutive image tags in the tag sequence of which 700 or more are the same.
In an example embodiment of the present disclosure, the corresponding portion of the video file may be determined as one video clip according to the target tag sequence. Specifically, the frames corresponding to the image tags in the target tag sequence may be intercepted as one video clip. By locating target tag sequences within the tag sequence, a longer video file can be divided into video clips of different types, so that the user can splice videos of the same type from these clips.
Step S520, determining the target image tag corresponding to the target tag sequence as a segment tag corresponding to the video segment determined by the target tag sequence.
In an example embodiment of the present disclosure, since the proportion of the target image tag in the target tag sequence is greater than or equal to the preset proportion, the target image tag may be used as the segment tag of the video segment determined by that target tag sequence. For example, when 700 or more of the 1000 image tags in the target tag sequence are person, person may be used as the segment tag of the corresponding video segment.
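As a concrete reading of steps S510 to S520, the sketch below slides a window of preset_number tags over the tag sequence and, wherever one tag reaches the preset proportion, emits the covered frame range as a video segment labelled with the dominant tag. The search procedure (advance one frame on a miss, jump past a hit) is an assumption of this sketch; the patent only fixes the window size and the proportion test:

```python
from collections import Counter


def segment_by_tags(tag_sequence, preset_number=1000, preset_ratio=0.7):
    """Steps S510-S520: locate target tag sequences and turn each one into
    a ((start_frame, end_frame), segment_tag) pair. The defaults mirror the
    1000-frame / 70% example in the description."""
    segments = []
    i = 0
    while i + preset_number <= len(tag_sequence):
        window = tag_sequence[i:i + preset_number]
        tag, count = Counter(window).most_common(1)[0]
        if count >= preset_ratio * preset_number:
            # The dominant tag becomes the segment tag (step S520).
            segments.append(((i, i + preset_number), tag))
            i += preset_number  # continue after the emitted segment
        else:
            i += 1  # no dominant tag here; slide the window forward
    return segments
```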
Step S320, classifying the video segments based on their corresponding segment tags.
In an example embodiment of the present disclosure, since a segment tag is the image tag that accounts for the dominant proportion of its video segment, the video segments may be classified based on their segment tags: video segments with the same segment tag are grouped into the same type, yielding the corresponding splicing material group.
Step S230, when the duration of the video file is less than or equal to the duration threshold, classifying the video file based on the image tag.
In an example embodiment of the present disclosure, when the duration of a video file is less than or equal to a duration threshold, the video file may be directly classified. Referring to fig. 6, classifying the video file based on the image tag may include the following steps S610 to S620:
step S610, determining the most image tag with the largest occurrence frequency among the image tags corresponding to the video file as the file tag corresponding to the video file.
In an example embodiment of the present disclosure, among image tags corresponding to a video file, an image tag having the largest number of occurrences may represent attribute information of the video file, and thus the image tag having the largest number of occurrences may be determined as a file tag corresponding to the video file.
Step S620, classifying the corresponding video files according to the file tags.
In an example embodiment of the present disclosure, video files may be classified according to file tags, and then the video files with the same file tag are divided into the same type, so as to obtain a corresponding splicing material group.
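A sketch of steps S610 and S620 under the same assumptions: file_tag_of picks the most frequent per-frame tag, and group_short_videos buckets files that share a file tag into one splicing material group (both helper names are illustrative):

```python
from collections import Counter, defaultdict


def file_tag_of(tags):
    """Step S610: the most frequent image tag becomes the file tag."""
    return Counter(tags).most_common(1)[0][0]


def group_short_videos(tags_by_file):
    """Step S620: files sharing a file tag form one splicing material
    group. tags_by_file maps each file path to its per-frame tag list."""
    groups = defaultdict(list)
    for path, tags in tags_by_file.items():
        groups[file_tag_of(tags)].append(path)
    return dict(groups)
```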
It should be noted that when the path to be processed includes both video files whose duration is greater than the duration threshold and video files whose duration is less than or equal to it, the video clips and the video files of the same type may be merged into the same splicing material group, so as to obtain the classification result for all video files in the path to be processed.
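This merging step can be expressed as a simple union of the two partial classifications; again a sketch, with each group represented as a tag-keyed dictionary of materials:

```python
from collections import defaultdict


def merge_material_groups(segment_groups, file_groups):
    """Merge clips cut from long videos with whole short files so that
    materials sharing a tag land in the same splicing material group."""
    merged = defaultdict(list)
    for partial in (segment_groups, file_groups):
        for tag, materials in partial.items():
            merged[tag].extend(materials)
    return dict(merged)
```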
Step S120, in response to a selection operation on at least one splicing material group, selecting target splicing materials from the at least one splicing material group.
In an example embodiment of the present disclosure, after classifying video files in a path to be processed, at least one splicing material group may be obtained, and at this time, a user may select a target splicing material to be spliced in the splicing material group through a selection operation on the at least one splicing material group. Because the splicing materials in each splicing material group are the same type of materials, a user can quickly select the same type of materials to splice, and further the sense of incongruity of the finally spliced video is reduced.
In an example embodiment of the present disclosure, referring to fig. 7, before splicing the target splicing material according to a preset splicing policy, the method further includes the following steps S710 to S720:
step S710, setting a corresponding preset effect for at least one of the splicing material groups.
Step S720, adding the preset effect to the target splicing materials in each splicing material group.
In an exemplary embodiment of the present disclosure, since the materials in each splicing material group obtained by classification are of the same type, a corresponding preset effect can be set for each group of similar materials before splicing and added to the target splicing materials in that group, so that the effects of the target splicing materials within each group are uniform; this further reduces the sense of incongruity between the effects of the individual target splicing materials in the final spliced video.
Step S130, splicing the target splicing materials according to a preset splicing strategy to obtain a final video file.
In an example embodiment of the present disclosure, the preset splicing strategy may include a sequence policy, a manner policy, and a switching policy. The sequence policy determines the order of the target splicing materials; for example, they may be spliced in the order in which they were selected. The manner policy determines the layout in which the target splicing materials are combined; for example, 4 target splicing materials may be stitched together in a quad manner. The switching policy determines how adjacent target splicing materials are switched; for example, target splicing material 1 and target splicing material 2 may be switched by a fade-in/fade-out effect. In addition, the preset splicing strategy may include other policies defining the splicing process, which the present disclosure does not particularly limit.
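One way to picture the preset splicing strategy is as a small configuration object. The field names and default values below are illustrative assumptions for this sketch, not terms fixed by the patent:

```python
from dataclasses import dataclass


@dataclass
class PresetSplicingStrategy:
    order: str = "selection"   # sequence policy: splice in selection order
    layout: str = "serial"     # manner policy: e.g. "quad" for a 2x2 grid
    transition: str = "fade"   # switching policy between adjacent materials


# Example: four clips laid out as a quad, switched by fade-in/fade-out.
quad_fade = PresetSplicingStrategy(layout="quad", transition="fade")
```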
It is to be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the method according to an exemplary embodiment of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Further, referring to fig. 8, this example embodiment provides a video splicing apparatus 800, including: a material processing module 810, a material determining module 820, and a material splicing module 830. Wherein:
the material processing module 810 may be configured to, in response to receiving the to-be-processed path, classify the video files included in the to-be-processed path according to a preset classification policy to obtain at least one spliced material group.
The material determination module 820 may be configured to select a target splicing material among at least one of the splicing material groups in response to a selection operation of at least one of the splicing material groups.
The material splicing module 830 may be configured to splice the target spliced material according to a preset splicing policy to obtain a final video file.
In an example of the present disclosure, the material processing module 810 may be configured to scan and identify the video file to obtain the image tag corresponding to each frame of image in the video file; to classify the video file according to the segment classification strategy based on the image tags when the duration of the video file is greater than the duration threshold; and to classify the video file based on the image tags when the duration of the video file is less than or equal to the duration threshold.
In an example of the present disclosure, the material processing module 810 may be configured to divide the video file into at least one video segment according to the segment division policy based on the image tag, and determine a segment tag corresponding to the video segment; and classifying the video clips corresponding to the clip labels based on the clip labels.
In an example of the present disclosure, the material processing module 810 may be configured to sort the image tags according to an order of each frame image in the video file, so as to obtain a tag sequence; and segmenting the video file according to a segment segmentation strategy based on the label sequence to obtain at least one video segment and a corresponding segment label.
In an example of the present disclosure, the material processing module 810 may be configured to intercept a target tag sequence from the tag sequence, and determine a corresponding portion of a video file as a video clip according to the target tag sequence; the target label sequence comprises a preset number of image labels, and the proportion of the target image labels in the preset number of image labels is greater than or equal to the preset proportion; and determining the target image label corresponding to the target label sequence as a fragment label corresponding to the video fragment determined by the target label sequence.
In an example of the present disclosure, the material processing module 810 may be configured to determine, as a file tag corresponding to the video file, a most frequently occurring image tag among image tags corresponding to the video file; and classifying the corresponding video files according to the file labels.
In an example of the present disclosure, the video splicing apparatus 800 further includes a material editing module, which can be configured to set a corresponding preset effect for at least one of the splicing material groups; and adding a preset effect to the target splicing materials in each splicing material group.
In an example of the present disclosure, the preset splicing strategy includes at least one of, or a combination of, a sequence policy, a manner policy, and a switching policy.
The specific details of each module in the video splicing apparatus have been described in detail in the corresponding video splicing method, and therefore, the details are not described herein again.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
FIG. 9 illustrates a schematic structural diagram of a computer system suitable for implementing the electronic device of an embodiment of the invention.
It should be noted that the computer system 900 of the electronic device shown in fig. 9 is only an example, and should not bring any limitation to the function and the scope of the application of the embodiment of the present invention.
As shown in fig. 9, the computer system 900 includes a Central Processing Unit (CPU)901 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)902 or a program loaded from a storage section 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for system operation are also stored. The CPU 901, ROM 902, and RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
The following components are connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, and the like; an output section 907 including components such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 908 including a hard disk and the like; and a communication section 909 including a network interface card such as a LAN card, a modem, or the like. The communication section 909 performs communication processing via a network such as the internet. The drive 910 is also connected to the I/O interface 905 as necessary. A removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 910 as necessary, so that a computer program read out therefrom is mounted into the storage section 908 as necessary.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the methods illustrated in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 909, and/or installed from the removable medium 911. The computer program, when executed by the Central Processing Unit (CPU) 901, performs the various functions defined in the system of the present application.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the method described in the above embodiments. For example, the electronic device may implement the steps shown in fig. 1 to fig. 7.
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (6)

1. A method for video stitching, the method comprising:
in response to receiving a path to be processed, classifying video files included in the path to be processed according to a preset classification strategy to obtain at least one spliced material group;
in response to a selection operation on at least one splicing material group, selecting target splicing materials from the at least one splicing material group;
splicing the target spliced material according to a preset splicing strategy to obtain a final video file;
the preset classification strategy comprises a duration threshold and a segment classification strategy;
the classifying the video files included in the path to be processed according to a preset classification strategy comprises the following steps:
scanning and identifying the video file to acquire image labels corresponding to all frames of images in the video file;
when the duration of the video file is less than or equal to the duration threshold, classifying the video file based on the image tag;
when the duration of the video file is greater than the duration threshold, classifying the video file according to the segment classification strategy based on the image label;
the segment classification strategy comprises a preset number and a preset proportion;
the classifying the video file according to the segment classification policy based on the image tag includes:
sequencing the image labels according to the sequence of each frame of image in the video file to obtain a label sequence;
intercepting a target tag sequence from the tag sequence, and determining a corresponding part in a video file as a video clip according to the target tag sequence; the target label sequence comprises a preset number of image labels, and the proportion of the target image labels in the preset number of image labels is greater than or equal to the preset proportion;
determining a target image tag corresponding to the target tag sequence as a segment tag corresponding to a video segment determined by the target tag sequence;
and classifying the video clips corresponding to the clip labels based on the clip labels.
2. The method of claim 1, wherein the classifying the video file based on the image tag comprises:
determining the image tag that occurs most frequently among the image tags corresponding to the video file as the file tag corresponding to the video file;
and classifying the corresponding video files according to the file labels.
3. The method according to claim 1, wherein before splicing the target spliced material according to a preset splicing strategy, the method comprises:
setting a corresponding preset effect for at least one splicing material group;
and adding a preset effect to the target splicing materials in each splicing material group.
4. A video stitching device, comprising:
the material processing module is used for responding to the received path to be processed, classifying the video files in the path to be processed according to a preset classification strategy so as to obtain at least one spliced material group;
the material determining module is used for selecting target splicing materials from at least one splicing material group in response to a selection operation on the at least one splicing material group;
the material splicing module is used for splicing the target spliced material according to a preset splicing strategy so as to obtain a final video file;
the preset classification strategy comprises a duration threshold and a segment classification strategy;
the classifying the video files included in the path to be processed according to a preset classification strategy comprises the following steps:
scanning and identifying the video file to acquire image labels corresponding to all frames of images in the video file;
when the duration of the video file is less than or equal to the duration threshold, classifying the video file based on the image tag;
when the duration of the video file is greater than the duration threshold, classifying the video file according to the segment classification strategy based on the image label;
the segment classification strategy comprises a preset number and a preset proportion;
the classifying the video file according to the segment classification policy based on the image tag includes:
sequencing the image labels according to the sequence of each frame of image in the video file to obtain a label sequence;
intercepting a target tag sequence from the tag sequence, and determining a corresponding part in a video file as a video clip according to the target tag sequence; the target label sequence comprises a preset number of image labels, and the proportion of the target image labels in the preset number of image labels is greater than or equal to the preset proportion;
determining a target image tag corresponding to the target tag sequence as a segment tag corresponding to a video segment determined by the target tag sequence;
and classifying the video clips corresponding to the clip labels based on the clip labels.
5. A computer-readable medium, on which a computer program is stored which, when executed by a processor, carries out the video splicing method according to any one of claims 1 to 3.
6. An electronic device, comprising:
a processor; and
a memory for storing one or more programs which, when executed by the processor, cause the processor to implement the video splicing method of any one of claims 1 to 3.
CN202010184000.7A 2020-03-16 2020-03-16 Video splicing method and device, computer readable medium and electronic equipment Active CN111432138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010184000.7A CN111432138B (en) 2020-03-16 2020-03-16 Video splicing method and device, computer readable medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010184000.7A CN111432138B (en) 2020-03-16 2020-03-16 Video splicing method and device, computer readable medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111432138A (en) 2020-07-17
CN111432138B (en) 2022-04-26

Family

ID=71549523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010184000.7A Active CN111432138B (en) 2020-03-16 2020-03-16 Video splicing method and device, computer readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111432138B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113242464A (en) * 2021-01-28 2021-08-10 维沃移动通信有限公司 Video editing method and device
CN112801861A (en) * 2021-01-29 2021-05-14 恒安嘉新(北京)科技股份公司 Method, device and equipment for manufacturing film and television works and storage medium
CN113301386B (en) * 2021-05-21 2023-04-07 北京达佳互联信息技术有限公司 Video processing method, device, server and storage medium
CN113905189B (en) * 2021-09-28 2024-06-14 安徽尚趣玩网络科技有限公司 Video content dynamic splicing method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007124636A (en) * 2005-10-25 2007-05-17 Mitsubishi Electric Research Laboratories Inc Method and system for generating summary of video including plurality of frame
CN105518783A (en) * 2013-08-19 2016-04-20 谷歌公司 Content-based video segmentation
CN105677735A (en) * 2015-12-30 2016-06-15 腾讯科技(深圳)有限公司 Video search method and apparatus
CN107180074A (en) * 2017-03-31 2017-09-19 北京奇艺世纪科技有限公司 A kind of video classification methods and device
CN108090497A (en) * 2017-12-28 2018-05-29 广东欧珀移动通信有限公司 Video classification methods, device, storage medium and electronic equipment
CN108830208A (en) * 2018-06-08 2018-11-16 Oppo广东移动通信有限公司 Method for processing video frequency and device, electronic equipment, computer readable storage medium
CN108875619A (en) * 2018-06-08 2018-11-23 Oppo广东移动通信有限公司 Method for processing video frequency and device, electronic equipment, computer readable storage medium
CN110602546A (en) * 2019-09-06 2019-12-20 Oppo广东移动通信有限公司 Video generation method, terminal and computer-readable storage medium
CN110751224A (en) * 2019-10-25 2020-02-04 Oppo广东移动通信有限公司 Training method of video classification model, video classification method, device and equipment
CN110855904A (en) * 2019-11-26 2020-02-28 Oppo广东移动通信有限公司 Video processing method, electronic device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106327469B (en) * 2015-06-29 2019-06-18 北京航空航天大学 A kind of video picture segmentation method of semantic label guidance

Also Published As

Publication number Publication date
CN111432138A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
CN111432138B (en) Video splicing method and device, computer readable medium and electronic equipment
CN109688463B (en) Clip video generation method and device, terminal equipment and storage medium
CN110070896B (en) Image processing method, device and hardware device
CN111327945A (en) Method and apparatus for segmenting video
CN113613065A (en) Video editing method and device, electronic equipment and storage medium
CN111246289A (en) Video generation method and device, electronic equipment and storage medium
CN111553362A (en) Video processing method, electronic equipment and computer readable storage medium
CN107133909B (en) Method and device for recombining shaders
US11996124B2 (en) Video processing method, apparatus, readable medium and electronic device
CN107801093A (en) Video Rendering method, apparatus, computer equipment and readable storage medium storing program for executing
CN108665769B (en) Network teaching method and device based on convolutional neural network
CN110134300A (en) Picture editing method and device
CN108334626B (en) News column generation method and device and computer equipment
CN111901536A (en) Video editing method, system, device and storage medium based on scene recognition
CN114286181B (en) Video optimization method and device, electronic equipment and storage medium
CN112784159B (en) Content recommendation method and device, terminal equipment and computer readable storage medium
CN114125498A (en) Video data processing method, device, equipment and storage medium
CN113242464A (en) Video editing method and device
CN114666656B (en) Video editing method, device, electronic equipment and computer readable medium
CN111010606B (en) Video processing method and device
CN111339367A (en) Video processing method and device, electronic equipment and computer readable storage medium
CN111797845B (en) Picture processing method and device, storage medium and electronic equipment
CN115858854B (en) Video data sorting method and device, electronic equipment and storage medium
CN111489769B (en) Image processing method, device and hardware device
CN110930292B (en) Image processing method, device, computer storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant