CN111031394B - Video production method, device, equipment and storage medium - Google Patents

Video production method, device, equipment and storage medium

Info

Publication number
CN111031394B
CN111031394B (application CN201911388446.5A)
Authority
CN
China
Prior art keywords
video
segment
video segment
clip
repeated
Prior art date
Legal status
Active
Application number
CN201911388446.5A
Other languages
Chinese (zh)
Other versions
CN111031394A (en)
Inventor
吴晗
李文涛
Current Assignee
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201911388446.5A
Publication of CN111031394A
Application granted
Publication of CN111031394B
Legal status: Active

Classifications

    All classifications fall under H04N 21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD]):
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N 21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N 21/8456: Structuring of content, e.g. decomposing content into time segments, by decomposing the content in the time domain

Abstract

The application discloses a video production method, device, equipment and storage medium, belonging to the field of internet technology. The method comprises: acquiring a plurality of video segments and background audio; selecting, from the plurality of video segments, a target video segment on which segment-repeat processing is to be performed; determining a segment-repeat video segment within the target video segment, and modifying the target video segment into a stuck-point segment-repeat video segment based on the segment-repeat video segment, wherein the duration of the stuck-point segment-repeat video segment is equal to the accent time interval of the background audio; and generating a composite video based on the stuck-point segment-repeat video segment, the video segments other than the target video segment, the background audio and the accent time interval of the background audio. The method and device improve the efficiency with which a user produces a guichu video, i.e., a beat-synced video in which selected segments are played repeatedly.

Description

Video production method, device, equipment and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method, an apparatus, a device, and a storage medium for video production.
Background
With the explosive growth of short videos, more and more people make videos themselves and add various effects to them, for example producing a stuck-point guichu video (a beat-synced video with repeated segments): multiple video segments are played in sync with the beat of background music, and a portion selected from one of the segments is played repeatedly. People can upload the videos they make to a video platform and share them with other users of the platform.
At present, a user has to use professional video editing software to produce a stuck-point guichu video: the user first selects the background music, then cuts multiple video segments according to each accent time point of the background music, determines the video segment on which the segment repetition is performed and the number of repetitions according to those accent time points, and finally splices the cut and processed segments in the editing software to obtain the stuck-point guichu video.
In the process of implementing the present application, the inventors found that the prior art has at least the following problems:
when a user produces a stuck-point guichu video with video editing software, the multiple video segments must be cut according to the accent time points of the background music and the segment repetition must be performed manually; the process is complex and tedious, so the efficiency of producing stuck-point guichu videos is low.
Disclosure of Invention
The embodiments of the present application provide a video production method, apparatus, device and storage medium, which can improve the efficiency with which users produce guichu videos. The technical solution is as follows:
in one aspect, a method of video production is provided, the method comprising:
acquiring a plurality of video segments and background audio;
selecting, from the plurality of video segments, a target video segment on which segment-repeat processing is to be performed;
determining a segment-repeat video segment within the target video segment, and modifying the target video segment into a stuck-point segment-repeat video segment based on the segment-repeat video segment, wherein the duration of the stuck-point segment-repeat video segment is equal to the accent time interval of the background audio;
and generating a composite video based on the stuck-point segment-repeat video segment, the video segments of the plurality of video segments other than the target video segment, the background audio and the accent time interval of the background audio.
Optionally, the determining a segment-repeat video segment within the target video segment includes:
determining a segment-repeat video segment within the target video segment based on the accent time interval of the background audio.
Optionally, the determining a segment-repeat video segment within the target video segment based on the accent time interval of the background audio includes:
selecting, from the target video segment, a portion whose duration equals the accent time interval to obtain a stuck-point video segment;
determining a basic video segment duration based on the accent time interval of the background audio;
determining, as a basic video segment, the portion of the stuck-point video segment whose duration equals the basic video segment duration and which immediately precedes the end of the stuck-point video segment;
and looping the basic video segment a preset number of times to obtain the segment-repeat video segment.
Optionally, the modifying the target video segment into a stuck-point segment-repeat video segment based on the segment-repeat video segment includes:
deleting, from the front end of the stuck-point video segment, a portion whose duration equals the duration of the segment-repeat video segment, and splicing the segment-repeat video segment onto the end of the stuck-point video segment to obtain the stuck-point segment-repeat video segment.
Optionally, the deleting, from the front end of the stuck-point video segment, a portion whose duration equals the duration of the segment-repeat video segment, and splicing the segment-repeat video segment onto the end of the stuck-point video segment to obtain the stuck-point segment-repeat video segment includes:
performing special-effect processing on the segment-repeat video segment;
deleting, from the front end of the stuck-point video segment, the portion whose duration equals the duration of the segment-repeat video segment, and splicing the segment-repeat video segment after the special-effect processing onto the end of the stuck-point video segment to obtain the stuck-point segment-repeat video segment.
Optionally, the determining a basic video segment duration includes:
determining, as the basic video segment duration, the product of the accent time interval of the background audio and a preset proportion value.
In another aspect, an apparatus for video production is provided, the apparatus comprising:
an acquisition module configured to acquire a plurality of video segments and background audio;
a selecting module configured to select, from the plurality of video segments, a target video segment on which segment-repeat processing is to be performed;
a processing module configured to determine a segment-repeat video segment within the target video segment, and modify the target video segment into a stuck-point segment-repeat video segment based on the segment-repeat video segment, wherein the duration of the stuck-point segment-repeat video segment is equal to the accent time interval of the background audio;
a composition module configured to generate a composite video based on the stuck-point segment-repeat video segment, the video segments of the plurality of video segments other than the target video segment, the background audio, and the accent time interval of the background audio.
Optionally, the processing module is configured to:
determine a segment-repeat video segment within the target video segment based on the accent time interval of the background audio.
Optionally, the processing module is configured to:
select, from the target video segment, a portion whose duration equals the accent time interval to obtain a stuck-point video segment;
determine a basic video segment duration based on the accent time interval of the background audio;
determine, as a basic video segment, the portion of the stuck-point video segment whose duration equals the basic video segment duration and which immediately precedes the end of the stuck-point video segment;
and loop the basic video segment a preset number of times to obtain the segment-repeat video segment.
Optionally, the processing module is configured to:
delete, from the front end of the stuck-point video segment, a portion whose duration equals the duration of the segment-repeat video segment, and splice the segment-repeat video segment onto the end of the stuck-point video segment to obtain the stuck-point segment-repeat video segment.
Optionally, the processing module is configured to:
perform special-effect processing on the segment-repeat video segment;
delete, from the front end of the stuck-point video segment, the portion whose duration equals the duration of the segment-repeat video segment, and splice the segment-repeat video segment after the special-effect processing onto the end of the stuck-point video segment to obtain the stuck-point segment-repeat video segment.
Optionally, the processing module is configured to:
determine, as the basic video segment duration, the product of the accent time interval of the background audio and a preset proportion value.
In yet another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction is stored, and the at least one instruction is loaded and executed by the processor to implement the operations performed by the method of video production as described above.
In yet another aspect, a computer-readable storage medium having at least one instruction stored therein is provided, the at least one instruction being loaded and executed by a processor to implement the operations performed by the method of video production as described above.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
The segment to be repeated (i.e., the segment-repeat video segment) is determined from the accent time interval of the background audio, the multiple video segments are cut, the cut video segments and the segment-repeat video segment are spliced, and the spliced video is combined with the background music to generate the guichu video. The user therefore does not need to cut the videos or perform the segment repetition manually, which effectively improves the efficiency with which the user produces guichu videos.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for video production according to an embodiment of the present application;
fig. 2 is a schematic diagram of a video production method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a video production method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a video production method provided by an embodiment of the present application;
fig. 5 is a schematic diagram of a video production method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an apparatus for video production according to an embodiment of the present application;
fig. 7 is a block diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The video production method provided by the embodiments of the present application can be implemented by a terminal. The terminal can run an application program with a video editing function; it may be provided with a camera, an earphone, a loudspeaker and other components; it has a communication function and can connect to the Internet; and it may be a mobile phone, a tablet computer, a smart wearable device, a desktop computer, a notebook computer, or the like.
A stuck-point guichu video (i.e., a beat-synced video with repeated segments) is composed of multiple video segments and background music: the video segments are switched at the drum points (accent points) of the background music so that the changes in the picture follow the rhythm, and within some of the segments one or more portions are played repeatedly, producing an entertaining effect. Producing such a video involves selecting the background music, recording the time point of every drum point in the music, cutting multiple video segments according to those time points, determining the portion to be played repeatedly, and calculating the duration and number of the repetitions. With the video production method provided by the embodiments of the present application, the user only needs to select the background music and the multiple video segments to obtain a stuck-point guichu video. The method can be applied to various applications that support video production, such as live-streaming applications, short-video applications and video editing programs. The embodiments of the present application take a short-video application as an example for the detailed description; other cases are similar and are not repeated.
Fig. 1 is a flowchart of a method of video production according to an embodiment of the present application. Referring to Fig. 1, the embodiment includes:
step 101, acquiring a plurality of video segments and background audio.
The plurality of video segments may be videos stored on the user's terminal, and the background audio may be background music selected from the network. The accent time points contained in the background audio may be recorded in an accent time point file, i.e., the time point corresponding to each accent of the background audio is recorded in that file. The accents may be the drum points of the background music, and in general the intervals between adjacent accent time points of a piece of background music are the same.
In implementation, a user may produce a guichu video in a short-video application. On the background music selection interface of the application, the user selects the background music needed for the stuck-point guichu video; the interface provides a recommendation list and a search box, so the user can either pick music recommended by the application from the list or search for the music he or she wants to use. When the user finishes selecting the background music, the terminal sends a corresponding acquisition request to the server, and the server returns the audio of the selected background music together with the accent time point file corresponding to that music. The user then enters a video selection interface on which a plurality of preview windows are displayed, each showing the cover of a locally stored video, and the user can tick the selection option in a preview window to choose the material needed for the stuck-point video. In addition, the terminal may prompt the user about the required duration of the selected video segments according to the interval between adjacent accent time points in the accent time point file of the selected background music, i.e., the duration of every selected video segment should be greater than the interval between adjacent accent time points. For example, if the interval between adjacent accent points in the file is 6 seconds, the user should select videos stored on the terminal that are longer than 6 seconds.
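As a minimal sketch of the duration check described above (the list representation of the accent time point file and the helper names are assumptions made for illustration, not the patent's format):

```python
# Hypothetical representation: the accent time point file is modeled as a sorted
# list of accent (drum point) times in seconds; intervals are assumed constant.
accent_time_points = [0.0, 6.0, 12.0, 18.0, 24.0]

def accent_interval(points):
    # Interval between adjacent accent time points.
    return points[1] - points[0]

def clip_is_selectable(clip_duration_s, points):
    # A locally stored video can be selected only if it is longer than the interval.
    return clip_duration_s > accent_interval(points)

print(accent_interval(accent_time_points))          # 6.0
print(clip_is_selectable(5.0, accent_time_points))  # False: shorter than 6 s
print(clip_is_selectable(8.0, accent_time_points))  # True
```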
Step 102, selecting, from the plurality of video segments, a target video segment on which segment-repeat processing is to be performed.
Segment-repeat processing means selecting one or more portions from a video segment, playing them repeatedly, and splicing the repeated portions back into the original video segment.
In implementation, after selecting the videos, the user enters a video production page on which the selected video segments are displayed, and selects from them the target video segment on which segment-repeat processing is to be performed; the user may select one of the video segments, or several of them, as the target video segment.
Step 103, determining a segment-repeat video segment within the target video segment, and modifying the target video segment into a stuck-point segment-repeat video segment based on the segment-repeat video segment.
The duration of the stuck-point segment-repeat video segment is equal to the accent time interval of the background audio.
In implementation, the accent time interval of the background audio, i.e., the interval between adjacent accent points, is obtained from the accent time point file corresponding to the background music. A portion of video is then determined within the target video segment and looped to obtain the segment-repeat video segment, after which the target video segment is cut and spliced with the segment-repeat video segment to generate the stuck-point segment-repeat video segment.
Optionally, the segment-repeat video segment may be determined within the target video segment based on the accent time interval of the background audio.
In implementation, the duration of the segment-repeat video segment may be determined from the accent time interval of the background audio. For example, if that duration is set to one half of the accent time interval, a portion one sixth of the accent time interval long may be captured from the target video segment and looped 3 times, yielding a segment-repeat video segment whose duration is one half of the accent time interval of the background audio.
Optionally, when the segment-repeat video segment is determined within the target video segment, a portion whose duration equals the accent time interval may be selected from the target video segment to obtain a stuck-point video segment; the basic video segment duration is determined based on the accent time interval of the background audio; the portion of the stuck-point video segment whose duration equals the basic video segment duration and which immediately precedes its end is taken as the basic video segment; and the basic video segment is looped a preset number of times to obtain the segment-repeat video segment.
The basic video segment duration may be set by a technician and may be determined from the accent time interval of the background audio.
In implementation, before the segment-repeat video segment is determined, the stuck-point video segment may be determined based on the accent time interval of the background audio; the duration of the stuck-point video segment equals that interval. A basic video segment is then determined within the stuck-point video segment. As shown in Fig. 2, the terminal may automatically take the portion of the target video segment that begins at its start and lasts one accent time interval as the stuck-point video segment. Alternatively, the user may select the stuck-point video segment: after selecting the target video segment, the user enters a video production page in which a progress bar of the target video segment displays each of its video frames, and, as shown in Fig. 3, the user drags the progress bar to choose a stuck-point video segment whose duration equals the accent time interval of the background audio. For example, if the accent time interval is 4 seconds and the target video segment is 6 seconds long, the user can drag the progress bar to select a continuous 4-second portion of the target video segment. After the stuck-point video segment has been determined, the terminal intercepts the portion of basic-video-segment duration immediately preceding its end as the basic video segment and loops the basic video segment a preset number of times to obtain the segment-repeat video segment, i.e., the segment-repeat video segment consists of the preset number of copies of the basic video segment.
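The selection of the stuck-point video segment, the basic video segment and the looped segment-repeat video segment can be sketched as follows; segments are modeled as (start, end) intervals in seconds within the target video, and the function name, the 1/6 ratio and the 3 loops are illustrative assumptions taken from the example above rather than fixed values of the method.

```python
def build_repeat_segment(target_duration_s, accent_interval_s,
                         ratio=1.0 / 6.0, loops=3, stuck_start_s=0.0):
    # The target video segment must be at least one accent interval long.
    assert target_duration_s >= accent_interval_s
    # Stuck-point video segment: a portion whose duration equals the accent interval.
    stuck = (stuck_start_s, stuck_start_s + accent_interval_s)
    # Basic video segment: ratio * interval, immediately preceding the stuck segment's end.
    base_len = accent_interval_s * ratio
    base = (stuck[1] - base_len, stuck[1])
    # Segment-repeat video segment: the basic segment looped `loops` times.
    repeat = [base] * loops
    return stuck, base, repeat

stuck, base, repeat = build_repeat_segment(target_duration_s=6.0, accent_interval_s=4.0)
# stuck == (0.0, 4.0); base lasts 4/6 s; the repeat lasts 3 * 4/6 = 2 s, half the interval.
```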
When the basic video segment duration is determined according to the accent time interval of the background audio, the corresponding processing may be as follows: the product of the accent time interval of the background audio and a preset proportion value is determined as the basic video segment duration.
In implementation, the value obtained by multiplying the accent time interval of the background audio by a preset proportion value may be taken as the basic video segment duration. The preset proportion value may be set in advance by a technician. For example, if it is set to 0.2, the basic video segment duration is one fifth of the accent time interval, that is, one fifth of the duration of the stuck-point video segment.
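Expressed as a single computation (the function name and the 0.2 value are only the example above, not prescribed by the method):

```python
def base_segment_duration(accent_interval_s, preset_ratio=0.2):
    # Basic video segment duration = accent time interval * preset proportion value.
    return accent_interval_s * preset_ratio

print(base_segment_duration(4.0))  # 0.8 s, one fifth of a 4 s stuck-point segment
```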
Optionally, after the segment-repeat video segment has been determined, the target video segment may be modified into the stuck-point segment-repeat video segment based on it, with the corresponding processing as follows: a portion whose duration equals the duration of the segment-repeat video segment is deleted from the front end of the stuck-point video segment, and the segment-repeat video segment is spliced onto the end of the stuck-point video segment to obtain the stuck-point segment-repeat video segment.
In implementation, after the segment-repeat video segment has been obtained from the basic video segment, the portion of the same duration as the segment-repeat video segment may be deleted from the front end of the stuck-point video segment, and the segment-repeat video segment spliced onto its end to obtain the stuck-point segment-repeat video segment. In background music, a soft beat (weak drum point) generally occurs at the middle time point between two accents, so the repetition is arranged to start at that soft beat, which strengthens the rhythm of the effect. As shown in Fig. 4, the basic video segment duration may be one sixth of the accent time interval and the preset number of loops may be 3, so that the segment-repeat video segment lasts one half of the accent time interval. A portion lasting one half of the accent time interval is deleted from the front end of the stuck-point video segment and the segment-repeat video segment is spliced onto its end; in the resulting stuck-point segment-repeat video segment, playback of the segment-repeat video segment starts at the middle time point of the segment.
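A sketch of the trim-and-splice step of Fig. 4, using the same interval model as above (the helper name is an assumption for illustration):

```python
def splice_repeat(stuck, repeat, accent_interval_s):
    # Total duration of the segment-repeat video segment.
    repeat_len = sum(end - start for start, end in repeat)
    # Delete a portion of that duration from the front end of the stuck-point segment...
    trimmed = (stuck[0] + repeat_len, stuck[1])
    # ...and splice the segment-repeat video segment onto its end.
    timeline = [trimmed] + repeat
    total = sum(end - start for start, end in timeline)
    assert abs(total - accent_interval_s) < 1e-6  # still exactly one accent interval long
    return timeline

# 4 s interval, repeat of 3 copies of a 4/6 s basic segment (2 s in total): the
# repeated part starts 2 s in, i.e. at the middle time point of the result.
base = (4.0 - 4.0 / 6.0, 4.0)
print(splice_repeat((0.0, 4.0), [base] * 3, accent_interval_s=4.0))
```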
In addition, before the segment-repeat video segment is spliced onto the end of the stuck-point video segment, special-effect processing may be performed on it, with the corresponding processing as follows: special-effect processing is performed on the segment-repeat video segment; the portion at the front end of the stuck-point video segment whose duration equals the duration of the segment-repeat video segment is deleted, and the segment-repeat video segment after the special-effect processing is spliced onto the end of the stuck-point video segment to obtain the stuck-point segment-repeat video segment.
In implementation, special-effect processing may be applied to the obtained segment-repeat video segment, for example adding a filter effect to it or enlarging its video frames. Different special effects may also be applied to each copy of the basic video segment within the segment-repeat video segment, for example a different filter for each copy, or a different enlargement ratio of the video frames in each copy. After the special-effect processing, the portion of the same duration as the segment-repeat video segment is deleted from the front end of the stuck-point video segment, and the processed segment-repeat video segment is spliced onto its end to obtain the stuck-point segment-repeat video segment.
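A minimal sketch of per-copy special effects, assuming each looped copy of the basic segment carries its own effect descriptor (the dataclass and the effect names are illustrative, not the patent's implementation):

```python
from dataclasses import dataclass

@dataclass
class RepeatCopy:
    start_s: float
    end_s: float
    effect: str  # e.g. a filter name or a zoom label

def decorate_repeat(base, loops, effects=("filter_warm", "filter_cool", "zoom_1.2x")):
    # Assign a (possibly different) special effect to every looped copy of the basic segment.
    return [RepeatCopy(base[0], base[1], effects[i % len(effects)]) for i in range(loops)]

print(decorate_repeat((4.0 - 4.0 / 6.0, 4.0), loops=3))
```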
Step 104, generating a composite video based on the stuck-point segment-repeat video segment, the video segments of the plurality of video segments other than the target video segment, the background audio and the accent time points of the background audio.
In implementation, after the stuck-point segment-repeat video segment corresponding to the target video segment has been generated, the user can preview it; during the preview the terminal interface also displays a confirm option and a cancel option. If the user chooses cancel, the process may return to step 103 to reselect the segment-repeat video segment and regenerate the stuck-point segment-repeat video segment, or return to step 102 to reselect the target video segment on which segment-repeat processing is to be performed. If the user confirms, the terminal cuts the video segments other than the target video segment into stuck-point video segments according to the accent time points of the background audio. The durations of the video segments selected by the user generally differ, so the terminal cuts them according to the accent time points recorded in the accent time point file until each segment has the same duration as the interval between adjacent accent points of the selected background music. As shown in Fig. 5, if the interval between adjacent accent points of the background audio is 3 seconds and the selected videos last 5, 4, 6 and 7 seconds, they are all cut into segments of 3 seconds. The cut portion may be the first 3 seconds, the last 3 seconds or the middle 3 seconds of each video; the cutting position may be preset by a technician and is not limited here. After the stuck-point video segments corresponding to the plurality of video segments have been cut, they and the stuck-point segment-repeat video segment are spliced into one video, which is then combined with the background music selected by the user to generate the stuck-point guichu video.
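A sketch of the composition in step 104: every non-target clip is trimmed to one accent interval and laid out back to back with the stuck-point segment-repeat video segment before being combined with the background audio. The cut-position options and all names here are illustrative assumptions.

```python
def compose_timeline(clip_durations_s, repeat_segment_len_s, accent_interval_s,
                     target_index, cut="head"):
    timeline = []
    for i, dur in enumerate(clip_durations_s):
        if i == target_index:
            # The target clip has already been turned into the stuck-point segment-repeat segment.
            timeline.append(("stuck_point_repeat_segment", repeat_segment_len_s))
            continue
        # Trim every other clip to exactly one accent interval (head, tail or middle cut).
        start = {"head": 0.0,
                 "tail": dur - accent_interval_s,
                 "middle": (dur - accent_interval_s) / 2.0}[cut]
        timeline.append((f"clip_{i}[{start:.1f}s, +{accent_interval_s:.1f}s]",
                         accent_interval_s))
    return timeline  # to be muxed with the background audio by the video encoder

# Fig. 5 example: 3 s accent interval, clips of 5, 4, 6 and 7 s, clip 1 as the target.
print(compose_timeline([5.0, 4.0, 6.0, 7.0], 3.0, 3.0, target_index=1))
```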
In the embodiments of the present application, the segment to be repeated (i.e., the segment-repeat video segment) is determined from the accent time interval of the background audio, the multiple video segments are cut, and finally the cut video segments and the segment-repeat video segment are spliced and combined with the background music to generate the guichu video. The user therefore does not need to cut the videos or perform the segment repetition manually, which effectively improves the efficiency with which the user produces guichu videos.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 6 provides a schematic structural diagram of an apparatus for video production, which may be a terminal in the above embodiment, and the apparatus includes:
an acquisition module 610 configured to acquire a plurality of video segments and background audio;
a selecting module 620 configured to select, from the plurality of video segments, a target video segment on which segment-repeat processing is to be performed;
a processing module 630 configured to determine a segment-repeat video segment within the target video segment, and modify the target video segment into a stuck-point segment-repeat video segment based on the segment-repeat video segment, wherein the duration of the stuck-point segment-repeat video segment is equal to the accent time interval of the background audio;
a composition module 640 configured to generate a composite video based on the stuck-point segment-repeat video segment, the video segments of the plurality of video segments other than the target video segment, the background audio, and the accent time interval of the background audio.
Optionally, the processing module 630 is configured to:
determine a segment-repeat video segment within the target video segment based on the accent time interval of the background audio.
Optionally, the processing module 630 is configured to:
select, from the target video segment, a portion whose duration equals the accent time interval to obtain a stuck-point video segment;
determine a basic video segment duration based on the accent time interval of the background audio;
determine, as a basic video segment, the portion of the stuck-point video segment whose duration equals the basic video segment duration and which immediately precedes the end of the stuck-point video segment;
and loop the basic video segment a preset number of times to obtain the segment-repeat video segment.
Optionally, the processing module 630 is configured to:
delete, from the front end of the stuck-point video segment, a portion whose duration equals the duration of the segment-repeat video segment, and splice the segment-repeat video segment onto the end of the stuck-point video segment to obtain the stuck-point segment-repeat video segment.
Optionally, the processing module 630 is configured to:
perform special-effect processing on the segment-repeat video segment;
delete, from the front end of the stuck-point video segment, the portion whose duration equals the duration of the segment-repeat video segment, and splice the segment-repeat video segment after the special-effect processing onto the end of the stuck-point video segment to obtain the stuck-point segment-repeat video segment.
Optionally, the processing module 630 is configured to:
determine, as the basic video segment duration, the product of the accent time interval of the background audio and a preset proportion value.
It should be noted that the division into the functional modules described above is merely illustrative for the video production apparatus provided in the above embodiment; in practical applications, the functions may be assigned to different functional modules as required, i.e., the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the video production apparatus and the video production method provided by the above embodiments belong to the same concept; their specific implementation processes are described in the method embodiments and are not repeated here.
Fig. 7 shows a block diagram of a computer device provided by an exemplary embodiment of the present application. The computer device may be a terminal 700, for example: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so on.
In general, terminal 700 includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement a method of video production as provided by method embodiments herein.
In some embodiments, the terminal 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, touch screen display 705, camera 706, audio circuitry 707, positioning components 708, and power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 704 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 704 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to capture touch signals on or over the surface of the display screen 705. The touch signal may be input to the processor 701 as a control signal for processing. At this point, the display 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 705 may be one, providing the front panel of the terminal 700; in other embodiments, the display 705 can be at least two, respectively disposed on different surfaces of the terminal 700 or in a folded design; in still other embodiments, the display 705 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 700. Even more, the display 705 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The Display 705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 706 is used to capture images or video. Optionally, camera assembly 706 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing or inputting the electric signals to the radio frequency circuit 704 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic location of the terminal 700 for navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 709 is provided to supply power to various components of terminal 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When power source 709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the terminal 700. For example, the acceleration sensor 711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 701 may control the touch screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the terminal 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the terminal 700 by the user. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 713 may be disposed on a side bezel of terminal 700 and/or an underlying layer of touch display 705. When the pressure sensor 713 is disposed on a side frame of the terminal 700, a user's grip signal on the terminal 700 may be detected, and the processor 701 performs right-left hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at a lower layer of the touch display 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 714 is used for collecting a fingerprint of a user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the terminal 700. When a physical button or a vendor Logo is provided on the terminal 700, the fingerprint sensor 714 may be integrated with the physical button or the vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch display 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 705 is increased; when the ambient light intensity is low, the display brightness of the touch display 705 is turned down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
A proximity sensor 716, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 700. The proximity sensor 716 is used to collect the distance between the user and the front surface of the terminal 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually decreases, the processor 701 controls the touch display 705 to switch from the screen-on state to the screen-off state; when the proximity sensor 716 detects that the distance gradually increases, the processor 701 controls the touch display 705 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 7 is not intended to be limiting of terminal 700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, including instructions executable by a processor in a terminal to perform the method of video production in the above-described embodiments is also provided. The computer readable storage medium may be non-transitory. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is intended to be exemplary only, and not to limit the present application, and any modifications, equivalents, improvements, etc. made within the spirit and scope of the present application are intended to be included therein.

Claims (7)

1. A method of video production, the method comprising:
acquiring a plurality of video segments and background audio;
selecting, from the plurality of video segments, a target video segment on which segment-repeat processing is to be performed;
selecting, from the target video segment, a portion whose duration equals the accent time interval of the background audio to obtain a stuck-point video segment;
determining a basic video segment duration based on the accent time interval of the background audio;
determining, as a basic video segment, the portion of the stuck-point video segment whose duration equals the basic video segment duration and which immediately precedes the end of the stuck-point video segment;
looping the basic video segment a preset number of times to obtain a segment-repeat video segment;
deleting, from the front end of the stuck-point video segment, a portion whose duration equals the duration of the segment-repeat video segment, and splicing the segment-repeat video segment onto the end of the stuck-point video segment to obtain a stuck-point segment-repeat video segment, wherein the time point at which the stuck-point segment-repeat video segment starts playing the segment-repeat video segment is the middle time point of the stuck-point segment-repeat video segment, and the duration of the stuck-point segment-repeat video segment is equal to the accent time interval of the background audio;
and generating a composite video based on the stuck-point segment-repeat video segment, the video segments of the plurality of video segments other than the target video segment, the background audio and the accent time interval of the background audio.
2. The method according to claim 1, wherein the deleting, from the front end of the stuck-point video segment, a portion whose duration equals the duration of the segment-repeat video segment, and splicing the segment-repeat video segment onto the end of the stuck-point video segment to obtain the stuck-point segment-repeat video segment comprises:
performing special-effect processing on the segment-repeat video segment;
deleting, from the front end of the stuck-point video segment, the portion whose duration equals the duration of the segment-repeat video segment, and splicing the segment-repeat video segment after the special-effect processing onto the end of the stuck-point video segment to obtain the stuck-point segment-repeat video segment.
3. The method of claim 1, wherein the determining a basic video segment duration comprises:
determining, as the basic video segment duration, the product of the accent time interval of the background audio and a preset proportion value.
4. An apparatus for video production, the apparatus comprising:
an acquisition module configured to acquire a plurality of video segments and background audio;
a selecting module configured to select, from the plurality of video segments, a target video segment on which segment-repeat processing is to be performed;
a processing module configured to: select, from the target video segment, a portion whose duration equals the accent time interval of the background audio to obtain a stuck-point video segment; determine a basic video segment duration based on the accent time interval of the background audio; determine, as a basic video segment, the portion of the stuck-point video segment whose duration equals the basic video segment duration and which immediately precedes the end of the stuck-point video segment; loop the basic video segment a preset number of times to obtain a segment-repeat video segment; and delete, from the front end of the stuck-point video segment, a portion whose duration equals the duration of the segment-repeat video segment, and splice the segment-repeat video segment onto the end of the stuck-point video segment to obtain a stuck-point segment-repeat video segment, wherein the time point at which the stuck-point segment-repeat video segment starts playing the segment-repeat video segment is the middle time point of the stuck-point segment-repeat video segment, and the duration of the stuck-point segment-repeat video segment is equal to the accent time interval of the background audio;
a composition module configured to generate a composite video based on the stuck-point segment-repeat video segment, the video segments of the plurality of video segments other than the target video segment, the background audio, and the accent time interval of the background audio.
5. The apparatus of claim 4, wherein the processing module is configured to:
determine the segment-repeat video segment within the target video segment based on the accent time interval of the background audio.
6. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction that is loaded and executed by the processor to perform the operations performed by the method of video production according to any one of claims 1 to 3.
7. A computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to perform the operations performed by the method of video production according to any one of claims 1 to 3.
CN201911388446.5A 2019-12-30 2019-12-30 Video production method, device, equipment and storage medium Active CN111031394B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911388446.5A CN111031394B (en) 2019-12-30 2019-12-30 Video production method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911388446.5A CN111031394B (en) 2019-12-30 2019-12-30 Video production method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111031394A CN111031394A (en) 2020-04-17
CN111031394B true CN111031394B (en) 2022-03-22

Family

ID=70199174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911388446.5A Active CN111031394B (en) 2019-12-30 2019-12-30 Video production method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111031394B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112311961A (en) * 2020-11-13 2021-02-02 深圳市前海手绘科技文化有限公司 Method and device for setting lens in short video
CN113438547B (en) * 2021-05-28 2022-03-25 北京达佳互联信息技术有限公司 Music generation method and device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110602552A (en) * 2019-09-16 2019-12-20 广州酷狗计算机科技有限公司 Video synthesis method, device, terminal and computer readable storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013042215A (en) * 2011-08-11 2013-02-28 Canon Inc Video editing device and control method therefor
US10388321B2 (en) * 2015-08-26 2019-08-20 Twitter, Inc. Looping audio-visual file generation based on audio and video analysis
CN108259984A (en) * 2017-12-29 2018-07-06 广州市百果园信息技术有限公司 Method of video image processing, computer readable storage medium and terminal
CN110392281B (en) * 2018-04-20 2022-03-18 腾讯科技(深圳)有限公司 Video synthesis method and device, computer equipment and storage medium
CN109040615A (en) * 2018-08-10 2018-12-18 北京微播视界科技有限公司 Special video effect adding method, device, terminal device and computer storage medium
CN109657100B (en) * 2019-01-25 2021-10-29 深圳市商汤科技有限公司 Video collection generation method and device, electronic equipment and storage medium
CN110233976B (en) * 2019-06-21 2022-09-09 广州酷狗计算机科技有限公司 Video synthesis method and device
CN110336960B (en) * 2019-07-17 2021-12-10 广州酷狗计算机科技有限公司 Video synthesis method, device, terminal and storage medium
CN110545476B (en) * 2019-09-23 2022-03-25 广州酷狗计算机科技有限公司 Video synthesis method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111031394A (en) 2020-04-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant