CN111065001B - Video production method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111065001B
CN111065001B (application CN201911357836.6A)
Authority
CN
China
Prior art keywords
video segment
video
frame
stuck
freeze
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911357836.6A
Other languages
Chinese (zh)
Other versions
CN111065001A (en)
Inventor
吴晗
李文涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201911357836.6A priority Critical patent/CN111065001B/en
Publication of CN111065001A publication Critical patent/CN111065001A/en
Application granted granted Critical
Publication of CN111065001B publication Critical patent/CN111065001B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a video production method, apparatus, device, and storage medium, belonging to the technical field of the internet. The method comprises the following steps: acquiring a plurality of video segments and background audio; selecting, from the plurality of video segments, a target video segment to be freeze-frame processed; determining a freeze frame in the target video segment, and modifying the target video segment into a stuck-point freeze-frame video segment based on the freeze frame, wherein the duration of the stuck-point freeze-frame video segment is equal to the accent time interval of the background audio; and generating a composite video based on the stuck-point freeze-frame video segment, the video segments of the plurality of video segments other than the target video segment, the background audio, and the accent time interval of the background audio. With the method and the apparatus, the efficiency with which a user produces a stuck-point freeze-frame video can be effectively improved.

Description

Video production method, device, equipment and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method, an apparatus, a device, and a storage medium for video production.
Background
As short videos have surged in popularity, more and more people have begun making videos themselves and adding various effects to them, for example producing a stuck-point freeze-frame video, in which multiple video sections are played in sync with the beat and a freeze-frame effect is added to the corresponding video section. People can upload the videos they make to a video platform and share them with the platform's other users.
At present, a user needs video production software to make a stuck-point freeze-frame video: the user first selects background music; then captures multiple video sections according to the accent time points of the background music; next selects a freeze frame (i.e., a freeze picture) from a selected video section; then determines the freeze duration of the freeze frame according to the accent time points of the background music; and finally splices the multiple video sections through the video production software to produce the stuck-point freeze-frame video.
In the process of implementing the present application, the inventors found that the prior art has at least the following problems:
to produce a stuck-point freeze-frame video with video production software, the user must intercept multiple video sections according to the accent time points of the background music and apply freeze-frame processing to those sections. The process is complex and tedious, so the user's efficiency in producing a stuck-point freeze-frame video is low.
Disclosure of Invention
The embodiments of the present application provide a video production method, apparatus, device, and storage medium, which can effectively improve the efficiency with which a user produces a stuck-point freeze-frame video. The technical solution is as follows:
in one aspect, a method of video production is provided, the method comprising:
acquiring a plurality of video segments and background audio;
selecting a target video segment needing freeze frame processing from the plurality of video segments;
determining a freeze frame in the target video segment, and modifying the target video segment into a stuck-point freeze-frame video segment based on the freeze frame, wherein the duration of the stuck-point freeze-frame video segment is equal to the accent time interval of the background audio;
and generating a composite video based on the stuck-point freeze-frame video segment, the video segments of the plurality of video segments other than the target video segment, the background audio, and the accent time interval of the background audio.
Optionally, the determining a freeze frame in the target video segment includes:
determining the freeze frame in the target video segment based on the accent time interval of the background audio.
Optionally, the determining the freeze frame in the target video segment based on the accent time interval of the background audio includes:
determining, in the target video segment, the frame whose interval from the front end of the target video segment is equal to the accent time interval of the background audio as the freeze frame.
Optionally, the modifying the target video segment into a stuck-point freeze-frame video segment based on the freeze frame includes:
intercepting the portion of the target video segment before the freeze frame to obtain a stuck-point video segment;
determining a freeze duration, and generating, based on the freeze frame, a freeze video segment of the freeze duration;
and deleting, at the front end of the stuck-point video segment, the portion whose duration is the freeze duration, and splicing the freeze video segment to the tail end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment.
Optionally, the deleting, at the front end of the stuck-point video segment, the portion whose duration is the freeze duration, and splicing the freeze video segment to the tail end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment includes:
performing special-effect processing on the freeze frames in the freeze video segment;
and deleting, at the front end of the stuck-point video segment, the portion whose duration is the freeze duration, and splicing the special-effect-processed freeze video segment to the tail end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment.
Optionally, the determining the freeze duration includes:
determining the product of the accent time interval of the background audio and a preset ratio as the freeze duration.
In another aspect, an apparatus for video production is provided, the apparatus comprising:
an acquisition module configured to acquire a plurality of video segments and background audio;
a selection module configured to select, from the plurality of video segments, a target video segment to be freeze-frame processed;
a processing module configured to determine a freeze frame in the target video segment and modify the target video segment into a stuck-point freeze-frame video segment based on the freeze frame, wherein the duration of the stuck-point freeze-frame video segment is equal to the accent time interval of the background audio;
a composition module configured to generate a composite video based on the stuck-point freeze-frame video segment, the video segments of the plurality of video segments other than the target video segment, the background audio, and the accent time interval of the background audio.
Optionally, the processing module is configured to:
determine the freeze frame in the target video segment based on the accent time interval of the background audio.
Optionally, the processing module is configured to:
determine, in the target video segment, the frame whose interval from the front end of the target video segment is equal to the accent time interval of the background audio as the freeze frame.
Optionally, the processing module is configured to:
intercept the portion of the target video segment before the freeze frame to obtain a stuck-point video segment;
determine a freeze duration, and generate, based on the freeze frame, a freeze video segment of the freeze duration;
and delete, at the front end of the stuck-point video segment, the portion whose duration is the freeze duration, and splice the freeze video segment to the tail end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment.
Optionally, the processing module is configured to:
perform special-effect processing on the freeze frames in the freeze video segment;
and delete, at the front end of the stuck-point video segment, the portion whose duration is the freeze duration, and splice the special-effect-processed freeze video segment to the tail end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment.
Optionally, the processing module is configured to:
determine the product of the accent time interval of the background audio and a preset ratio as the freeze duration.
In yet another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the operations performed by the method for video production as described above.
In yet another aspect, a computer-readable storage medium is provided, wherein at least one instruction is stored in the storage medium, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the method for video production as described above.
The technical solutions provided by the embodiments of the present application have the following beneficial effects:
the freeze frame to be freeze-frame processed is determined from the accent time interval of the background audio, the video segments are cut according to the accent time interval, and finally the video segments are spliced and combined with the background music. As a result, the user does not need to manually capture and freeze the video, and the efficiency of producing a stuck-point freeze-frame video can be effectively improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; other drawings can be derived from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a method for video production according to an embodiment of the present application;
fig. 2 is a schematic diagram of a video production method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a video production method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a video production method provided by an embodiment of the present application;
fig. 5 is a block diagram of an apparatus for video production according to an embodiment of the present disclosure;
fig. 6 is a block diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The video production method can be implemented by a terminal. The terminal can run an application program with a video editing function and can be provided with components such as a camera, an earphone, and a loudspeaker. The terminal has a communication function, can access the internet, and may be a mobile phone, a tablet computer, a smart wearable device, a desktop computer, a notebook computer, or the like.
The video production method provided by the embodiments of the present application can be used to produce a stuck-point freeze-frame video. A stuck-point freeze-frame video is composed of multiple video segments and background music: the video segments are switched at the drum points (accent points) of the background music, and one or more frames in a video segment are played as a freeze frame, that is, only a single frame is displayed for a certain period of time. To produce such a video, background music is selected, the time points of the drum points in the background music are recorded, the multiple video segments are cut according to those time points, and the duration of the freeze frame (the picture to be played frozen) is calculated. With the method provided by the embodiments of the present application, the user only needs to select the background music and the multiple video sections. The method can be applied to various applications with video production functions, such as live-streaming applications, short-video applications, and video editing programs. The embodiments of the present application take a short-video application as an example for a detailed description of the solution; other cases are similar and are not repeated.
Fig. 1 is a flowchart of a video production method provided by an embodiment of the present application. Referring to fig. 1, the embodiment includes:
step 101, acquiring a plurality of video segments and background audio.
The plurality of video segments may be videos stored on the user's terminal, and the background audio may be background music selected by the user from the network. The accent time points of the background audio may be recorded in an accent time point file, which records the time point at which each accent point of the background audio appears. The accent points may be drum points in the background audio, and in general the time interval between adjacent accent points in the background audio is the same.
In implementation, a user may produce a stuck-point freeze-frame video in a short-video application. The background music selection interface of the application provides a recommendation list and a search box: the user can pick background music recommended by the application from the recommendation list, or search in the search box for the music they want to use. When the user has selected the background music, the terminal sends a corresponding acquisition request to the server, and the server returns the audio of the selected background music together with its accent time point file. The user then enters a video selection interface, on which multiple preview windows display the covers of locally stored videos; by tapping the selection option in a preview window, the user chooses the video material needed to produce the stuck-point video. In addition, the terminal may prompt the user about the required duration of the selected video segments according to the time interval between adjacent accent points in the accent time point file of the selected background music: the duration of each selected video segment should be greater than that interval. For example, if the interval between adjacent accent points is 6 seconds, the user should select video segments stored on the terminal that are longer than 6 seconds.
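As a rough illustration of this selection step, the interval between adjacent accent points can be derived from the accent time point file and used to filter out clips that are too short. The following sketch uses hypothetical names (`accent_interval`, `selectable_clips`) and made-up timestamps; the actual file format is not specified in the application.

```python
def accent_interval(accent_points):
    """Return the (assumed uniform) interval between adjacent accent points."""
    if len(accent_points) < 2:
        raise ValueError("need at least two accent points")
    return accent_points[1] - accent_points[0]

def selectable_clips(clip_durations, interval):
    """A clip qualifies only if it is longer than one accent interval."""
    return [d for d in clip_durations if d > interval]

points = [0.0, 6.0, 12.0, 18.0]              # drum-point timestamps in seconds
interval = accent_interval(points)            # 6.0 seconds between drum points
print(selectable_clips([4.0, 6.5, 9.0], interval))  # [6.5, 9.0]
```

Here only clips longer than 6 seconds remain selectable, matching the example in the paragraph above.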
And 102, selecting a target video segment needing freeze frame processing from the plurality of video segments.
In implementation, after selecting the videos, the user may enter a video production page on which the selected video segments are displayed, and then choose, from those segments, the target video segment to be freeze-frame processed; the user may select one or more of the video segments as target video segments.
And step 103, determining a freeze frame in the target video segment, and modifying the target video segment into a stuck-point freeze-frame video segment based on the freeze frame.
The duration of the stuck-point freeze-frame video segment is equal to the accent time interval of the background audio.
In implementation, the accent time interval of the background audio, i.e., the time interval between adjacent accent points in the accent time point file, is obtained from the accent time point file corresponding to the background music. A freeze frame is then determined in the target video segment and processed into a video segment, after which the target video segment is cut and spliced according to the accent time interval of the background audio to obtain the stuck-point freeze-frame video segment.
Optionally, the freeze frame may also be determined from the accent time interval of the background audio, with the corresponding processing as follows: based on the accent time interval of the background audio, a freeze frame is determined in the target video segment.
In implementation, after the user selects the target video segment to be freeze-frame processed, the terminal may select the freeze frame in the target video segment according to the accent time interval of the selected background audio. As shown in fig. 2, the frame whose interval from the front end of the target video segment equals the accent time interval of the background audio may be determined as the freeze frame; that is, the video frame located one accent time interval after the start time point of the target video segment is selected as the freeze frame. For example, if the accent time interval of the background audio is 4 seconds and the target video segment lasts 6 seconds, the video frame of the target video segment at the end of the 4th second may be selected as the freeze frame.
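The automatic choice described above can be sketched as follows; the 30 fps frame rate and the rounding convention (taking the frame exactly at the interval boundary) are illustrative assumptions, not details specified by the application.

```python
def freeze_frame_index(accent_interval_s, fps):
    """Index of the frame located one accent interval after the segment start."""
    return int(accent_interval_s * fps)

# A 4-second accent interval at 30 fps: the frame at the end of the 4th second.
idx = freeze_frame_index(4.0, 30)
print(idx)  # 120
```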
Alternatively, the user may select the freeze frame from the target video segment manually. After the target video segment is selected, the user may enter a video production page on which a progress bar of the target video segment displays each of its video frames. As shown in fig. 3, the range of one accent time interval after the start time point of the target video segment is not selectable; the remaining portion is the selectable range, within which the user can drag the video frames in the progress bar to pick the freeze frame. For example, if the accent time interval is 4 seconds and the target video segment lasts 6 seconds, the user may drag the video frames in the progress bar to select a video frame after the 4th second of the target video segment as the freeze frame.
After the freeze frame is determined, the process of modifying the target video segment into a stuck-point freeze-frame video segment based on the freeze frame may be as follows: intercepting the portion of the target video segment before the freeze frame to obtain a stuck-point video segment; determining a freeze duration, and generating, based on the freeze frame, a freeze video segment of the freeze duration; and deleting, at the front end of the stuck-point video segment, the portion whose duration is the freeze duration, and splicing the freeze video segment to the tail end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment.
In implementation, after the freeze frame is determined in the target video segment, a video whose duration equals the accent time interval of the background audio may be intercepted from the portion of the target video segment before the freeze frame as the stuck-point video segment. A freeze duration is then determined; it may be preset by a technician or derived from the accent time interval of the background audio. A freeze video segment is generated from the freeze frame and the freeze duration, in which every video frame is the freeze frame. Finally, the portion whose duration is the freeze duration is deleted at the front end of the stuck-point video segment, and the freeze video segment is spliced to the tail end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment.
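The three sub-steps of step 103 can be sketched on plain frame lists as follows. The helper name, the frame-list representation, and the 30 fps rate are illustrative; a real implementation would operate on decoded video frames.

```python
def make_stuck_freeze_segment(frames, interval_s, freeze_s, fps):
    """Build a stuck-point freeze-frame segment lasting one accent interval."""
    n_interval = int(interval_s * fps)     # frames in one accent interval
    n_freeze = int(freeze_s * fps)         # frames the freeze should last
    freeze_frame = frames[n_interval - 1]  # last frame of the intercepted portion
    stuck = frames[:n_interval]            # portion up to the freeze frame
    # Delete freeze-duration frames from the front and splice the repeated
    # freeze frame at the tail, so the result still lasts one accent interval.
    return stuck[n_freeze:] + [freeze_frame] * n_freeze

frames = list(range(180))                  # 6 s of "video" at 30 fps
out = make_stuck_freeze_segment(frames, interval_s=4.0, freeze_s=2.0, fps=30)
print(len(out))    # 120 frames == one 4-second accent interval
print(out[60:63])  # [119, 119, 119] -- the tail repeats the freeze frame
```

Note that deleting from the front and appending the freeze keep the segment's total duration equal to the accent time interval, as the description requires.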
Alternatively, the product of the accent time interval of the background audio and a preset ratio may be determined as the freeze duration.
In implementation, the accent time interval of the background audio is multiplied by a preset ratio, and the result is the freeze duration. In general, background music also has a soft point (a weak drum point) at the midpoint between two accent points, so starting the freeze at the time point where the soft point appears enhances the rhythm of the freeze-frame effect. The preset ratio can therefore be set to 0.5, i.e., the freeze duration is half the accent time interval of the background audio. It should be noted that the preset ratio may be a value other than 0.5 and may be set by a technician according to the background audio; this is not limited herein.
Optionally, before the freeze video segment is spliced to the tail end of the stuck-point video segment, special-effect processing may be performed on it. The special-effect processing can be applied to the freeze frames in the freeze video segment, for example by adding a filter, such as a black-and-white filter, to each video frame to distinguish it from the picture before the freeze. In addition, each video frame may be enlarged by a certain ratio. After the special-effect-processed freeze video segment is obtained, the portion whose duration is the freeze duration is deleted at the front end of the stuck-point video segment, and the special-effect-processed freeze video segment is spliced to the tail end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment.
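The black-and-white filter mentioned here can be illustrated with a simple per-pixel grayscale conversion. The `(r, g, b)` tuple representation and the BT.601 luma weights are assumptions made for the sketch, not details from the application.

```python
def black_and_white(frame):
    """frame: list of (r, g, b) tuples; returns grayscale (g, g, g) pixels."""
    out = []
    for r, g, b in frame:
        luma = int(0.299 * r + 0.587 * g + 0.114 * b)  # ITU-R BT.601 weights
        out.append((luma, luma, luma))
    return out

frame = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
print(black_and_white(frame))  # [(76, 76, 76), (149, 149, 149), (29, 29, 29)]
```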
And step 104, generating a composite video based on the stuck-point freeze-frame video segment, the video segments of the plurality of video segments other than the target video segment, the background audio, and the accent time interval of the background audio.
In implementation, after the stuck-point freeze-frame video segment corresponding to the target video segment is generated, the user can preview it. During the preview, confirm and cancel options are displayed on the terminal interface. If the user selects cancel, the method may return to step 103 to reselect the freeze frame and remake the stuck-point freeze-frame video segment, or return to step 102 to reselect the target video segment to be freeze-frame processed. If the user selects confirm, the terminal cuts the video segments other than the target video segment into corresponding stuck-point video segments according to the accent time points of the background audio. In general, the video segments selected by the user have different durations, and the terminal cuts them according to the accent time interval of the background audio so that each video segment has the same duration as the accent time interval of the selected background audio. For example, if the accent time interval of the selected background audio is 4 seconds and the selected videos last 5, 4, 6, and 7 seconds respectively, they can be uniformly cut into stuck-point video segments of 4 seconds. When cutting a video, the 4 seconds at its start, at its end, or in its middle may be taken; the cutting position may be preset by a technician or chosen by the user, which is not limited herein. As shown in fig. 4, after the stuck-point video segments corresponding to the multiple video segments are cut, the stuck-point freeze-frame video segment and the stuck-point video segments are spliced into one video, which is then combined with the background music selected by the user; the combined video is the stuck-point freeze-frame video.
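The cutting-and-splicing step can be sketched on clip durations alone as follows; the function name and the choice of trimming each clip from its start are illustrative assumptions.

```python
def compose(segments_s, interval_s, freeze_segment_s):
    """segments_s: durations of the non-target clips; returns the spliced timeline."""
    cut = [min(d, interval_s) for d in segments_s]  # trim each to one interval
    timeline = cut + [freeze_segment_s]             # freeze segment already fits
    return timeline, sum(timeline)

# Three non-target clips plus the 4-second stuck-point freeze-frame segment.
timeline, total = compose([5.0, 6.0, 7.0], interval_s=4.0, freeze_segment_s=4.0)
print(timeline)  # [4.0, 4.0, 4.0, 4.0]
print(total)     # 16.0 -> four accent intervals of background music
```

Because every spliced piece lasts exactly one accent interval, segment boundaries land on the drum points when the timeline is combined with the background audio.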
According to the embodiments of the present application, the freeze frame to be freeze-frame processed is determined from the accent time interval of the background audio, the video segments are cut according to the accent time interval, and finally the video segments are spliced and combined with the background music, so the user does not need to manually capture and freeze the videos, which effectively improves the efficiency of producing a stuck-point freeze-frame video.
Fig. 5 is a block diagram of a video production apparatus provided by an embodiment of the present application. The apparatus may be the terminal in the above embodiments and includes:
an acquisition module 510 configured to acquire a plurality of video segments and background audio;
a selection module 520 configured to select, from the plurality of video segments, a target video segment to be freeze-frame processed;
a processing module 530 configured to determine a freeze frame in the target video segment and modify the target video segment into a stuck-point freeze-frame video segment based on the freeze frame, wherein the duration of the stuck-point freeze-frame video segment is equal to the accent time interval of the background audio;
a composition module 540 configured to generate a composite video based on the stuck-point freeze-frame video segment, the video segments of the plurality of video segments other than the target video segment, the background audio, and the accent time interval of the background audio.
Optionally, the processing module 530 is configured to:
determine the freeze frame in the target video segment based on the accent time interval of the background audio.
Optionally, the processing module 530 is configured to:
determine, in the target video segment, the frame whose interval from the front end of the target video segment is equal to the accent time interval of the background audio as the freeze frame.
Optionally, the processing module 530 is configured to:
intercepting the part before the freeze frame in the target video segment to obtain a stuck-point video segment;
determining a freeze-frame duration, and generating a freeze-frame video segment of the freeze-frame duration based on the freeze frame;
deleting the part of the freeze-frame duration at the front end of the stuck-point video segment, and splicing the freeze-frame video segment at the end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment.
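Modeling a video segment as a simple list of frames, the three steps above can be sketched as follows. Names are illustrative, not from the patent; note that the result keeps exactly the stuck-point segment's length, which is what makes its duration equal the accent time interval:

```python
def to_stuck_point_freeze_segment(target_segment, freeze_idx, freeze_len):
    """Intercept up to the freeze frame, drop freeze_len frames at the
    front, and splice freeze_len copies of the freeze frame at the end."""
    stuck = target_segment[:freeze_idx + 1]   # part up to (incl.) the freeze frame
    freeze_part = [target_segment[freeze_idx]] * freeze_len  # freeze video segment
    return stuck[freeze_len:] + freeze_part   # same total length as `stuck`
```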
Optionally, the processing module 530 is configured to:
performing special effect processing on the freeze frame in the freeze-frame video segment;
deleting the part of the freeze-frame duration at the front end of the stuck-point video segment, and splicing the special-effect-processed freeze-frame video segment at the end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment.
Optionally, the processing module 530 is configured to:
determining the product of the accent time interval of the background audio and a preset ratio value as the freeze-frame duration.
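For instance (the 0.5 ratio below is an assumed example value, not taken from the patent):

```python
def freeze_duration(accent_interval_s, ratio):
    """Freeze-frame duration = accent time interval x preset ratio value."""
    return accent_interval_s * ratio

# with an accent every 0.8 s and a ratio of 0.5, the freeze lasts 0.4 s
```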
It should be noted that the video production apparatus provided in the above embodiment is described with the division of the above functional modules only as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the video production apparatus and the video production method provided by the above embodiments belong to the same concept; their specific implementation processes are described in the method embodiments and are not repeated here.
Fig. 6 shows a block diagram of a computer device provided in an exemplary embodiment of the present application. The computer device may be a terminal 600, for example: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 600 includes: a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 601 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 602 is used to store at least one instruction for execution by processor 601 to implement the video production method provided by the method embodiments of the present application.
In some embodiments, the terminal 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 604, a touch screen display 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 604 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 604 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 605 is a touch display screen, the display screen 605 also has the ability to capture touch signals on or over the surface of the display screen 605. The touch signal may be input to the processor 601 as a control signal for processing. At this point, the display 605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 605, disposed on the front panel of the terminal 600; in other embodiments, there may be at least two displays 605, disposed on different surfaces of the terminal 600 or in a folded design; in still other embodiments, the display 605 may be a flexible display disposed on a curved or folded surface of the terminal 600. The display 605 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly-shaped screen. The display 605 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 606 is used to capture images or video. Optionally, camera assembly 606 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 606 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 607 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 601 for processing or inputting the electric signals to the radio frequency circuit 604 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 600. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 607 may also include a headphone jack.
The positioning component 608 is used to determine the current geographic location of the terminal 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the United States' GPS (Global Positioning System), China's BeiDou system, Russia's GLONASS system, or the European Union's Galileo system.
Power supply 609 is used to provide power to the various components in terminal 600. The power supply 609 may be ac, dc, disposable or rechargeable. When the power supply 609 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 600. For example, the acceleration sensor 611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 601 may control the touch screen display 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 612 may detect a body direction and a rotation angle of the terminal 600, and the gyro sensor 612 and the acceleration sensor 611 may cooperate to acquire a 3D motion of the user on the terminal 600. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 613 may be disposed on a side frame of the terminal 600 and/or on a lower layer of the touch display screen 605. When the pressure sensor 613 is disposed on the side frame of the terminal 600, a user's holding signal of the terminal 600 can be detected, and the processor 601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed at the lower layer of the touch display screen 605, the processor 601 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 605. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 614 is used for collecting a fingerprint of a user, and the processor 601 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 614 may be disposed on the front, back, or side of the terminal 600. When a physical button or vendor Logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or vendor Logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, processor 601 may control the display brightness of touch display 605 based on the ambient light intensity collected by optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 605 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 605 is turned down. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
A proximity sensor 616, also known as a distance sensor, is typically disposed on the front panel of the terminal 600. The proximity sensor 616 is used to collect the distance between the user and the front surface of the terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front surface of the terminal 600 gradually decreases, the processor 601 controls the touch display 605 to switch from the screen-on state to the screen-off state; when the proximity sensor 616 detects that the distance between the user and the front surface of the terminal 600 gradually increases, the processor 601 controls the touch display 605 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 is not intended to be limiting of terminal 600 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, including instructions executable by a processor in a terminal to perform the method of video production in the above-described embodiments is also provided. The computer readable storage medium may be non-transitory. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is intended to be exemplary only, and not to limit the present application, and any modifications, equivalents, improvements, etc. made within the spirit and scope of the present application are intended to be included therein.

Claims (9)

1. A method of video production, the method comprising:
acquiring a plurality of video segments and background audio;
selecting a target video segment needing freeze frame processing from the plurality of video segments;
determining a freeze frame in the target video segment, and modifying the target video segment into a stuck-point freeze-frame video segment based on the freeze frame, wherein the duration of the stuck-point freeze-frame video segment is equal to the accent time interval of the background audio;
generating a composite video based on the stuck-point freeze-frame video segment, the video segments of the plurality of video segments other than the target video segment, the background audio, and the accent time interval of the background audio;
wherein the modifying the target video segment into a stuck-point freeze-frame video segment based on the freeze frame comprises:
intercepting the part before the freeze frame in the target video segment to obtain a stuck-point video segment;
determining a freeze-frame duration, and generating a freeze-frame video segment of the freeze-frame duration based on the freeze frame;
deleting the part of the freeze-frame duration at the front end of the stuck-point video segment, and splicing the freeze-frame video segment at the end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment.
2. The method according to claim 1, wherein the determining a freeze frame in the target video segment comprises:
determining a freeze frame in the target video segment based on the accent time interval of the background audio.
3. The method according to claim 2, wherein the determining a freeze frame in the target video segment based on the accent time interval of the background audio comprises:
in the target video segment, determining the frame whose interval duration from the front end of the target video segment equals the accent time interval of the background audio as the freeze frame.
4. The method according to claim 1, wherein the deleting the part of the freeze-frame duration at the front end of the stuck-point video segment and splicing the freeze-frame video segment at the end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment comprises:
performing special effect processing on the freeze frame in the freeze-frame video segment;
deleting the part of the freeze-frame duration at the front end of the stuck-point video segment, and splicing the special-effect-processed freeze-frame video segment at the end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment.
5. The method of claim 1, wherein the determining a freeze-frame duration comprises:
determining the product of the accent time interval of the background audio and a preset ratio value as the freeze-frame duration.
6. An apparatus for video production, the apparatus comprising:
an acquisition module configured to acquire a plurality of video segments and background audio;
the selecting module is configured to select a target video segment needing freeze frame processing from the plurality of video segments;
a processing module configured to determine a freeze frame in the target video segment, and modify the target video segment into a stuck-point freeze-frame video segment based on the freeze frame, wherein the duration of the stuck-point freeze-frame video segment is equal to the accent time interval of the background audio;
a composition module configured to generate a composite video based on the stuck-point freeze-frame video segment, the video segments of the plurality of video segments other than the target video segment, the background audio, and the accent time interval of the background audio;
the processing module being configured to:
intercepting the part before the freeze frame in the target video segment to obtain a stuck-point video segment;
determining a freeze-frame duration, and generating a freeze-frame video segment of the freeze-frame duration based on the freeze frame;
deleting the part of the freeze-frame duration at the front end of the stuck-point video segment, and splicing the freeze-frame video segment at the end of the stuck-point video segment to obtain the stuck-point freeze-frame video segment.
7. The apparatus of claim 6, wherein the processing module is configured to:
determining a freeze frame in the target video segment based on the accent time interval of the background audio.
8. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction that is loaded and executed by the processor to perform operations performed by a method of video production according to any one of claims 1 to 5.
9. A computer-readable storage medium having stored therein at least one instruction which is loaded and executed by a processor to perform operations performed by a method of video production as claimed in any one of claims 1 to 5.
CN201911357836.6A 2019-12-25 2019-12-25 Video production method, device, equipment and storage medium Active CN111065001B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911357836.6A CN111065001B (en) 2019-12-25 2019-12-25 Video production method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911357836.6A CN111065001B (en) 2019-12-25 2019-12-25 Video production method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111065001A CN111065001A (en) 2020-04-24
CN111065001B true CN111065001B (en) 2022-03-22

Family

ID=70303486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911357836.6A Active CN111065001B (en) 2019-12-25 2019-12-25 Video production method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111065001B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112235631B (en) * 2019-07-15 2022-05-03 北京字节跳动网络技术有限公司 Video processing method and device, electronic equipment and storage medium
CN111835986B (en) * 2020-07-09 2021-08-24 腾讯科技(深圳)有限公司 Video editing processing method and device and electronic equipment
CN112866584B (en) * 2020-12-31 2023-01-20 北京达佳互联信息技术有限公司 Video synthesis method, device, terminal and storage medium
CN112837709B (en) * 2021-02-24 2022-07-22 北京达佳互联信息技术有限公司 Method and device for splicing audio files
CN113709559B (en) * 2021-03-05 2023-06-30 腾讯科技(深圳)有限公司 Video dividing method, device, computer equipment and storage medium
CN113099297B (en) * 2021-03-24 2022-09-30 北京达佳互联信息技术有限公司 Method and device for generating click video, electronic equipment and storage medium
CN114286171B (en) * 2021-08-19 2023-04-07 腾讯科技(深圳)有限公司 Video processing method, device, equipment and storage medium
WO2023051245A1 (en) * 2021-09-29 2023-04-06 北京字跳网络技术有限公司 Video processing method and apparatus, and device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108401124A (en) * 2018-03-16 2018-08-14 广州酷狗计算机科技有限公司 The method and apparatus of video record

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013042215A (en) * 2011-08-11 2013-02-28 Canon Inc Video editing device and control method therefor
CN104036536B (en) * 2013-03-07 2018-06-15 腾讯科技(深圳)有限公司 The generation method and device of a kind of stop-motion animation
US10388321B2 (en) * 2015-08-26 2019-08-20 Twitter, Inc. Looping audio-visual file generation based on audio and video analysis
CN108259984A (en) * 2017-12-29 2018-07-06 广州市百果园信息技术有限公司 Method of video image processing, computer readable storage medium and terminal
CN110233976B (en) * 2019-06-21 2022-09-09 广州酷狗计算机科技有限公司 Video synthesis method and device
CN110336960B (en) * 2019-07-17 2021-12-10 广州酷狗计算机科技有限公司 Video synthesis method, device, terminal and storage medium
CN110545476B (en) * 2019-09-23 2022-03-25 广州酷狗计算机科技有限公司 Video synthesis method and device, computer equipment and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108401124A (en) * 2018-03-16 2018-08-14 广州酷狗计算机科技有限公司 The method and apparatus of video record

Also Published As

Publication number Publication date
CN111065001A (en) 2020-04-24

Similar Documents

Publication Publication Date Title
CN110336960B (en) Video synthesis method, device, terminal and storage medium
CN111065001B (en) Video production method, device, equipment and storage medium
CN109167950B (en) Video recording method, video playing method, device, equipment and storage medium
CN110233976B (en) Video synthesis method and device
CN108769562B (en) Method and device for generating special effect video
CN108391171B (en) Video playing control method and device, and terminal
CN108965922B (en) Video cover generation method and device and storage medium
CN110545476B (en) Video synthesis method and device, computer equipment and storage medium
CN109346111B (en) Data processing method, device, terminal and storage medium
CN111464830B (en) Method, device, system, equipment and storage medium for image display
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN111355998B (en) Video processing method and device
CN110225390B (en) Video preview method, device, terminal and computer readable storage medium
CN110769313B (en) Video processing method and device and storage medium
CN109302632B (en) Method, device, terminal and storage medium for acquiring live video picture
CN110868636B (en) Video material intercepting method and device, storage medium and terminal
CN109982129B (en) Short video playing control method and device and storage medium
CN109743461B (en) Audio data processing method, device, terminal and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN109819314B (en) Audio and video processing method and device, terminal and storage medium
CN111031394B (en) Video production method, device, equipment and storage medium
CN112866584B (en) Video synthesis method, device, terminal and storage medium
CN111954058B (en) Image processing method, device, electronic equipment and storage medium
CN112616082A (en) Video preview method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant