CN108966026B - Method and device for making video file - Google Patents

Method and device for making video file

Publication number
CN108966026B
CN108966026B
Authority
CN
China
Prior art keywords
video
video file
file
existing
instruction
Prior art date
Legal status
Active
Application number
CN201810876288.7A
Other languages
Chinese (zh)
Other versions
CN108966026A (en)
Inventor
廖宇辉
陈金源
Current Assignee
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201810876288.7A
Publication of CN108966026A
Application granted
Publication of CN108966026B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H: ELECTRICITY
    • H04N5/00: Details of television systems
    • H04N5/76: Television signal recording

Abstract

The invention discloses a method and a device for making a video file, belonging to the field of video technologies. The method comprises: creating a first video file; each time an existing video adding instruction is received, adding into the first video file at least one video frame, corresponding to the instruction, from a pre-selected second video file; and each time a recorded video adding instruction is received, adding at least one currently recorded video frame into the first video file. The invention improves the flexibility of video production.

Description

Method and device for making video file
Technical Field
The present invention relates to the field of video technologies, and in particular, to a method and an apparatus for creating a video file.
Background
The gradual development of network technology has driven a surge in short-video shooting, and as the influence of short videos has grown, methods of video production have attracted wide public attention.
At present, a commonly used video production method is as follows: a user selects a piece of audio data and records a video while the audio data is played; the audio data recorded in the resulting video file is then replaced with the selected audio data to obtain the final produced video file.
In the process of implementing the invention, the inventors found that the prior art has at least the following problem:
the above video production method is implemented merely by replacing audio data, and is therefore not very flexible.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present invention provide a method and an apparatus for creating a video file. The technical scheme is as follows:
in a first aspect, a method for making a video file is provided, the method comprising:
creating a first video file;
each time an existing video adding instruction is received, adding into the first video file at least one video frame, corresponding to the instruction, from a pre-selected second video file;
and each time a recorded video adding instruction is received, adding at least one currently recorded video frame into the first video file.
Optionally, the adding, each time an existing video adding instruction is received, of at least one video frame corresponding to the instruction from a pre-selected second video file into the first video file includes:
when an existing video adding instruction is received, if the instruction is the first such instruction received after the first video file was created, acquiring at least one video frame from the pre-selected second video file, starting from the video start position of the second video file and based on the duration of the current instruction, and adding it into the first video file; if the instruction is not the first such instruction received after the first video file was created, acquiring at least one video frame from the second video file, starting from the end position of the video frames last acquired from the second video file and based on the duration of the current instruction, and adding it into the first video file.
Optionally, the adding, each time an existing video adding instruction is received, of at least one video frame corresponding to the instruction from a pre-selected second video file into the first video file includes:
when an existing video adding instruction is received, determining the play duration of all video frames currently added in the first video file;
and, starting from the position in the pre-selected second video file corresponding to that play duration, acquiring at least one video frame from the second video file based on the duration of the current instruction and adding it into the first video file.
Optionally, the adding into the first video file includes:
adding at the end of the video of the first video file.
Optionally, the method further includes:
and in the process of adding the video frame to the first video file, playing the added video frame.
In a second aspect, there is provided an apparatus for producing a video file, the apparatus comprising:
a creation module for creating a first video file;
the adding module is used for adding at least one video frame corresponding to the existing video adding instruction in a pre-selected second video file into the first video file when the existing video adding instruction is received; and adding at least one currently recorded video frame into the first video file when a recorded video adding instruction is received.
Optionally, the adding module is configured to:
when an existing video adding instruction is received, if the instruction is the first such instruction received after the first video file was created, acquire at least one video frame from the pre-selected second video file, starting from the video start position of the second video file and based on the duration of the current instruction, and add it into the first video file; if the instruction is not the first such instruction received after the first video file was created, acquire at least one video frame from the second video file, starting from the end position of the video frames last acquired from the second video file and based on the duration of the current instruction, and add it into the first video file.
Optionally, the adding module is configured to:
when an existing video adding instruction is received, determine the play duration of all video frames currently added in the first video file;
and, starting from the position in the pre-selected second video file corresponding to that play duration, acquire at least one video frame from the second video file based on the duration of the current instruction and add it into the first video file.
Optionally, the adding module is configured to:
add at the end of the video of the first video file.
Optionally, the apparatus further includes a playing module, configured to:
and in the process of adding the video frame to the first video file, playing the added video frame.
In a third aspect, there is provided a computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the method of producing a video file according to the first aspect.
In a fourth aspect, there is provided a computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method of producing a video file according to the first aspect.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
in the embodiment of the invention, a user provides various video making modes by adding the pre-selected second video file or the currently recorded video frame to the first video file in real time, so that the flexibility of the video making method is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for creating a video file according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a video production interface provided by an embodiment of the invention;
FIG. 3 is a diagram illustrating a first way of creating a video file according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a second way of creating a video file according to an embodiment of the present invention;
FIG. 5 is a block diagram of an apparatus for creating a video file according to an embodiment of the present invention;
fig. 6 is a block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The embodiment of the invention provides a method for making a video file, which can be implemented by a terminal. The terminal may be a mobile terminal such as a mobile phone, tablet computer, or notebook computer, or a fixed terminal such as a desktop computer.
The terminal may include components such as a processor, a memory, an input component, and a screen. The processor may be a CPU (Central Processing Unit) and may be used to create a video file, receive instructions, add video frames to the video file, control the display, and so on. The memory may be RAM (Random Access Memory), Flash memory, or the like, and may be configured to store received data, data required by processing, and data generated during processing, such as video files. The input component may be a mouse, touch screen, touch pad, or keyboard, and can generate corresponding instructions based on the user's operations. The screen may be a touch screen or a non-touch screen and may be used to display the operation interface of an application program. The terminal may further include a transceiver, an image capture component, an audio output component, an audio input component, and the like. The transceiver may be used for data transmission with other devices and may include an antenna, matching circuitry, a modem, and so on. The image capture component may be a camera; the audio output component may be a speaker or headphones; the audio input component may be a microphone.
In one aspect, a method of making a video file is provided. As shown in Fig. 1, the processing flow of the method may include the following steps:
in step 101, a first video file is created.
In implementation, a user can install a video production application on the terminal. When the user wants to produce a video, the user clicks the shortcut icon to run the application and selects the video-splicing function option in the application's main interface. A video file selection window is then displayed, through which the user can select a locally stored video file or a video file on the network. After the selection is completed, the application enters the video production interface, and the terminal creates a new video file for the production; this video file is the first video file, to which the user can add various video data by operation.
In step 102, whenever an existing video adding instruction is received, at least one video frame corresponding to the existing video adding instruction in the pre-selected second video file is added to the first video file.
The pre-selected second video file refers to the locally stored video file, or video file on the network, that the user selected through the video file selection window.
In implementation, as shown in fig. 2, the video production interface may display a production-video playing window and various operation keys, such as a selected-video adding key (hereinafter the first key), a recorded-video adding key (hereinafter the second key), and an ending key (hereinafter the third key). Based on the user's own needs, the user can freely add selected video clips from the second video file or recorded video clips to the created first video file; for example, the user may first add one clip from the second video file, then add two recorded clips, then add two more clips from the second video file, and so on. When the user wants to add a clip from the second video file, the user operates the first key and the terminal receives an existing video adding instruction, whereupon the terminal adds video frames from the second video file to the first video file. The video frames can be selected in various ways: for example, by the user's operation, or by determining the start position and duration of the selection in the second video file according to a pre-stored selection mechanism. The selected video frames are the at least one video frame corresponding to the existing video adding instruction. The user's operations and the selection of video frames are described in detail below.
In a first mode, as shown in fig. 3, based on the play order of the video frames in the second video file, each time the user holds down the first key, video frames covering a duration equal to the press duration are acquired from the second video file, and the video content acquired by successive presses is contiguous. Accordingly, the processing procedure in step 102 may be as follows:
when an existing video adding instruction is received, if the instruction is the first such instruction received after the first video file was created, at least one video frame is acquired from the pre-selected second video file, starting from the video start position of the second video file and based on the duration of the current instruction, and is added into the first video file; if the instruction is not the first such instruction received after the first video file was created, at least one video frame is acquired from the second video file, starting from the end position of the video frames last acquired from the second video file and based on the duration of the current instruction, and is added into the first video file.
In implementation, when a user wants to add video content of the second video file to the first video file, the user presses the first key in the video production interface; while the key is held, the terminal continuously receives an existing video adding instruction (hereinafter the first adding instruction). When the terminal starts to receive the first adding instruction, it judges whether this is the first first-adding instruction received since the first video file was created. If it is, video frames are acquired in play order from the video start position of the second video file; if it is not, video frames are acquired in play order starting from the end position of the video frames last acquired from the second video file. The total play duration of the acquired frames equals the duration of the first adding instruction, that is, the time the user holds the first key, and the terminal continuously adds the acquired frames into the first video file. When the user releases the first key, the first adding instruction ends and the terminal stops acquiring video frames from the second video file.
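As a rough illustration of this first mode, the sketch below converts each press duration into a frame count and resumes reading from the previous end position. All names are illustrative assumptions; the 25-millisecond frame interval is the example value used later in this description.

```python
# Illustrative sketch of the first selection mode (Fig. 3). A press of the
# first key covers press_duration_ms of video; successive presses read
# contiguous stretches of the second video file.

FRAME_INTERVAL_MS = 25  # example frame interval from this description

def frames_for_press(second_video, last_end, press_duration_ms):
    """Return the frames covered by one press and the new end position."""
    n = press_duration_ms // FRAME_INTERVAL_MS
    end = min(last_end + n, len(second_video))
    return second_video[last_end:end], end

second = list(range(100))        # 100 frames of the second video (2.5 s)
first_file, end = [], 0          # end == 0: nothing acquired yet (first instruction)
for press_ms in (250, 500):      # two consecutive presses of the first key
    taken, end = frames_for_press(second, end, press_ms)
    first_file.extend(taken)     # added at the end of the first video file
print(len(first_file), end)      # 30 frames acquired; the next read resumes at frame 30
```

The two presses (250 ms and 500 ms) yield 10 and then 20 contiguous frames, exactly as if the user had held the key for 750 ms in one go.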
In a second mode, as shown in fig. 4, the principle for selecting the frames added to the first video file may be to keep each added frame's play time point in the second video file the same as its play time point in the first video file; for example, the frame added at the 8-second play point of the first video file should be taken from the 8-second play point of the second video file. Accordingly, the processing procedure in step 102 may be as follows:
when an existing video adding instruction is received, the play duration of all video frames currently added in the first video file is determined; then, starting from the position in the pre-selected second video file corresponding to that play duration, at least one video frame is acquired from the second video file based on the duration of the current instruction and added into the first video file.
In implementation, when the user wants to add video content of the second video file to the first video file, the user presses the first key in the video production interface; while the key is held, the terminal continuously receives the first adding instruction. After receiving the first adding instruction, the terminal determines the play duration of all video frames in the first video file.
Two methods of determining the play duration of all video frames in the first video file are given below. In the first method, after the first video file is created, the terminal separately accumulates the total time the user has held the first key and the total time the user has held the second key of the video production interface; the sum of these two totals is the play duration of all video frames in the first video file. In the second method, the terminal presets a calculation rule: it obtains the current number of video frames in the first video file and determines the product of that number and the preset frame interval of the first video file as the play duration of all video frames in the first video file.
After determining the play duration of all video frames in the first video file, the terminal acquires video frames starting from the position in the second video file corresponding to that play duration. The total play duration of the acquired frames equals the duration of the first adding instruction, that is, the time the user holds the first key, and the terminal continuously adds the acquired frames into the first video file. When the user releases the first key, the first adding instruction ends and the terminal stops acquiring video frames from the second video file.
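A sketch of this second mode, under the assumption (the second determination method above) that play duration equals frame count multiplied by a fixed frame interval; function and variable names are illustrative:

```python
# Illustrative sketch of the second selection mode (Fig. 4): the start
# position in the second video file matches the play duration already
# accumulated in the first video file (frame count x frame interval).

FRAME_INTERVAL_MS = 25  # example frame interval from this description

def add_existing_mode_two(first_file, second_video, press_duration_ms):
    """Append frames from the second video, aligned to the first file's play position."""
    play_ms = len(first_file) * FRAME_INTERVAL_MS  # play duration of added frames
    start = play_ms // FRAME_INTERVAL_MS           # frame at that play time point
    n = press_duration_ms // FRAME_INTERVAL_MS     # frames covered by this press
    first_file.extend(second_video[start:start + n])

second = [f"s{i}" for i in range(40)]  # 1 s of second-video frames
first = ["r0", "r1", "r2", "r3"]       # 100 ms of recorded frames already added
add_existing_mode_two(first, second, 100)
print(first)  # recorded frames followed by s4..s7, aligned at the 100 ms point
```

Because 100 ms of content is already in the first file, the added clip starts at frame 4 of the second video, keeping the two files' play time points aligned.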
Each video frame added to the first video file as described above may be added at the end of the video of the first video file, that is, after the current last video frame of the first video file.
In step 103, whenever a recorded video adding instruction is received, at least one currently recorded video frame is added to the first video file.
In implementation, whenever the user wants to add a clip of recorded video, the user operates the second key; while the key is held, the terminal receives a recorded video adding instruction (hereinafter the second adding instruction) and adds video frames of the recorded video to the first video file. When the user releases the second key, the second adding instruction ends and the terminal stops acquiring video frames of the recorded video.
When each video frame is added to the first video file, it may be added at the end of the video of the first video file, i.e. after the current last video frame of the first video file.
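Taken together, steps 102 and 103 amount to appending frames from two sources at the end of the first video file. A minimal sketch with hypothetical names, modeling frames as plain strings:

```python
# Minimal combined sketch of steps 102 and 103: the first video file is a
# growing frame list; existing-video instructions copy consecutive frames
# from the second video file, recorded-video instructions append captured
# frames. Class and method names are assumptions, not from the patent.

class VideoMaker:
    def __init__(self, second_video_frames):
        self.first_video = []                    # the created first video file
        self.second_video = second_video_frames  # the pre-selected second video file
        self.cursor = 0                          # next unread frame (first mode)

    def on_existing_video_add(self, n_frames):
        """Step 102: copy the next n frames of the second video (first mode)."""
        end = min(self.cursor + n_frames, len(self.second_video))
        self.first_video.extend(self.second_video[self.cursor:end])
        self.cursor = end

    def on_recorded_video_add(self, recorded_frames):
        """Step 103: append the currently recorded frames at the video end."""
        self.first_video.extend(recorded_frames)

maker = VideoMaker(["s0", "s1", "s2", "s3"])
maker.on_existing_video_add(2)       # clip from the second video
maker.on_recorded_video_add(["r0"])  # recorded clip
maker.on_existing_video_add(1)       # resumes at s2
print(maker.first_video)             # ['s0', 's1', 'r0', 's2']
```

The two instruction handlers can be invoked in any order and any number of times, which is what gives the method its flexibility.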
Optionally, the terminal may play the video frames added to the first video file in real time according to the user's operation of the first and second keys. The processing may be as follows:
and in the process of adding the video frame to the first video file, playing the added video frame.
In implementation, while the user holds the first key, the terminal adds video frames from the second video file into the first video file one by one in play order: for each frame-interval duration (for example, 25 milliseconds) that the first key is held, the terminal adds one video frame, so frames are added at the video's playback speed. Meanwhile, after each frame of the second video file is added to the first video file, the terminal displays that frame in the production-video playing window, so the video content of the first video file is displayed there in real time. When the user releases the first key, the terminal stops adding frames of the second video file into the first video file; playback in the window also pauses, keeping the last added frame displayed.
When the user holds the second key and the terminal receives the second adding instruction, the terminal synchronously plays the content of the recorded video in the production-video playing window. While the second key is held, the terminal adds the frames of the recorded video into the first video file one by one in recording order; the preset frame interval of the recorded video is the same as that of the second video file. Meanwhile, after each frame of the recorded video is added to the first video file, the terminal displays that frame in the production-video playing window, so the content of the recorded video is displayed there in real time. When the user releases the second key, the terminal stops adding frames of the recorded video into the first video file; playback in the window also pauses, keeping the last added frame displayed.
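The real-time pacing described above (one frame appended and displayed per frame interval while a key is held) can be sketched as follows; `key_pressed` and `display` are stand-ins for the key state and the production-video playing window, and all names are assumptions:

```python
# Sketch of the real-time append-and-display loop: one frame per frame
# interval while the key is held, so adding proceeds at playback speed.

import time

FRAME_INTERVAL_S = 0.025  # 25 ms, the example frame interval above

def append_while_pressed(first_file, source_frames, key_pressed, display):
    """Append one source frame per interval for as long as key_pressed() holds."""
    for frame in source_frames:
        if not key_pressed():
            break                    # key released: pause, keep last frame shown
        first_file.append(frame)
        display(frame)               # show each frame in the playing window
        time.sleep(FRAME_INTERVAL_S)

shown, first = [], []
presses = iter([True, True, True, False])  # key held for three intervals
append_while_pressed(first, ["f0", "f1", "f2", "f3", "f4"],
                     lambda: next(presses), shown.append)
print(first)  # ['f0', 'f1', 'f2']: the last added frame stays displayed
```

The same loop serves both sources: frames of the second video file while the first key is held, and frames of the recorded video while the second key is held.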
In the embodiment of the present invention, there is no required timing relationship between step 102 and step 103: the user may operate the first key and the second key in any manner based on the user's own needs, with no limit on the number, order, or duration of operations. Step 102 is performed when the user operates the first key, and step 103 is performed when the user operates the second key. After the user has added all desired video content to the first video file, the user clicks the end key, and the terminal finishes editing the first video file and stores it. The video production interface may also provide operation keys for video playback, such as play, fast-forward, and pause keys, which the user can click to browse the video.
In the embodiment of the invention, by adding, in real time, video frames from a pre-selected second video file or currently recorded video frames to the first video file according to the user's operations, various video production modes are provided to the user, thereby improving the flexibility of the video production method.
Based on the same technical concept, an embodiment of the present invention further provides an apparatus for making a video file. The apparatus may be the terminal in the foregoing embodiments and, as shown in fig. 5, includes a creation module 510 and an adding module 520.
A creation module 510 for creating a first video file;
an adding module 520, configured to add, to the first video file, at least one video frame corresponding to an existing video adding instruction in a pre-selected second video file whenever the existing video adding instruction is received; and adding at least one currently recorded video frame into the first video file when a recorded video adding instruction is received.
Optionally, the adding module 520 is configured to:
when an existing video adding instruction is received, if the instruction is the first such instruction received after the first video file was created, acquire at least one video frame from the pre-selected second video file, starting from the video start position of the second video file and based on the duration of the current instruction, and add it into the first video file; if the instruction is not the first such instruction received after the first video file was created, acquire at least one video frame from the second video file, starting from the end position of the video frames last acquired from the second video file and based on the duration of the current instruction, and add it into the first video file.
Optionally, the adding module 520 is configured to:
when an existing video adding instruction is received, determine the play duration of all video frames currently added in the first video file;
and, starting from the position in the pre-selected second video file corresponding to that play duration, acquire at least one video frame from the second video file based on the duration of the current instruction and add it into the first video file.
Optionally, the adding module 520 is configured to:
add at the end of the video of the first video file.
Optionally, the apparatus further includes a playing module, configured to:
and in the process of adding the video frame to the first video file, playing the added video frame.
In the embodiment of the invention, by adding, in real time, video frames from a pre-selected second video file or currently recorded video frames to the first video file according to the user's operations, various video production modes are provided to the user, thereby improving the flexibility of the video production method.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
It should be noted that: in the apparatus for creating a video file according to the foregoing embodiment, when creating a video file, only the division of the functional modules is described as an example, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for creating a video file and the method for creating a video file provided in the above embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments and are not described herein again.
Fig. 6 is a block diagram of a terminal according to an embodiment of the present invention. The terminal 600 may be a portable mobile terminal such as a smartphone or a tablet computer. The terminal 600 may also be referred to by other names such as user equipment or portable terminal.
In general, the terminal 600 includes: a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, for example a 4-core or 6-core processor. The processor 601 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 601 may also include an AI (Artificial Intelligence) processor for handling computational operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 602 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 602 is used to store at least one instruction, which is executed by the processor 601 to implement the method of producing a video file provided in the present application.
In some embodiments, the terminal 600 may optionally further include a peripheral interface 603 and at least one peripheral device. Specifically, the peripheral device includes at least one of: a radio frequency circuit 604, a touch display screen 605, a camera assembly 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 604 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 604 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 604 may communicate with other terminals via at least one wireless communication protocol, over networks including, but not limited to: the World Wide Web, metropolitan area networks, intranets, the successive generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 604 may further include NFC (Near Field Communication) related circuits, which is not limited in the present application.
The touch display screen 605 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. The touch display screen 605 also has the ability to collect touch signals on or above its surface; such a touch signal may be input to the processor 601 as a control signal for processing. The touch display screen 605 is also used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one touch display screen 605, disposed on the front panel of the terminal 600; in other embodiments, there may be at least two touch display screens 605, respectively disposed on different surfaces of the terminal 600 or in a folded design; in still other embodiments, the touch display screen 605 may be a flexible display screen disposed on a curved or folded surface of the terminal 600. The touch display screen 605 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly-shaped screen. The touch display screen 605 may be made using an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 606 is used to capture images or video. Optionally, the camera assembly 606 includes a front camera and a rear camera. Generally, the front camera is used for video calls or self-portraits, and the rear camera is used for shooting pictures or videos. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, and a wide-angle camera, so that the main camera can be fused with the depth-of-field camera to realize a background blurring function, and with the wide-angle camera to realize panoramic shooting and VR (Virtual Reality) shooting functions. In some embodiments, the camera assembly 606 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 607 is used to provide an audio interface between the user and the terminal 600. The audio circuit 607 may include a microphone and a speaker. The microphone collects sound waves of the user and the environment, converts them into electrical signals, and inputs the electrical signals to the processor 601 for processing or to the radio frequency circuit 604 to realize voice communication. For stereo collection or noise reduction purposes, there may be a plurality of microphones, disposed at different portions of the terminal 600. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can not only convert an electrical signal into sound waves audible to humans, but also convert an electrical signal into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 607 may also include a headphone jack.
The positioning component 608 is used to determine the current geographic location of the terminal 600 to implement navigation or an LBS (Location Based Service). The positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 609 is used to supply power to the various components in the terminal 600. The power supply 609 may be an alternating-current power supply, a direct-current power supply, a disposable battery, or a rechargeable battery. When the power supply 609 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery, charged through a wired line, or a wireless rechargeable battery, charged through a wireless coil. The rechargeable battery may also support fast-charge technology.
In some embodiments, the terminal 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 600. For example, the acceleration sensor 611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 601 may control the touch screen display 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 612 may detect a body direction and a rotation angle of the terminal 600, and the gyro sensor 612 and the acceleration sensor 611 may cooperate to acquire a 3D motion of the user on the terminal 600. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 613 may be disposed on the side frame of the terminal 600 and/or at the lower layer of the touch display screen 605. When the pressure sensor 613 is disposed on the side frame of the terminal 600, it can detect the user's grip signal on the terminal 600, and left/right-hand recognition or shortcut operations can be performed based on the grip signal. When the pressure sensor 613 is disposed at the lower layer of the touch display screen 605, operability controls on the UI can be controlled according to the user's pressure operations on the touch display screen 605. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 614 is used for collecting a fingerprint of the user to identify the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 614 may be disposed on the front, back, or side of the terminal 600. When a physical button or vendor Logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or vendor Logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, processor 601 may control the display brightness of touch display 605 based on the ambient light intensity collected by optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 605 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 605 is turned down. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
The proximity sensor 616, also known as a distance sensor, is typically disposed on the front face of the terminal 600 and is used to collect the distance between the user and the front face of the terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front face of the terminal 600 gradually decreases, the processor 601 controls the touch display screen 605 to switch from the screen-on state to the screen-off state; when the proximity sensor 616 detects that the distance between the user and the front face of the terminal 600 gradually increases, the processor 601 controls the touch display screen 605 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 is not intended to be limiting of terminal 600 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a computer-readable storage medium is also provided, in which at least one instruction, at least one program, a code set, or an instruction set is stored; the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method of producing a video file in the above embodiments. For example, the computer-readable storage medium may be a ROM, a RAM (Random Access Memory), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A method of making a video file, the method comprising:
creating a first video file;
adding at least one video frame corresponding to an existing video adding instruction in a pre-selected second video file into the first video file when the existing video adding instruction is received;
adding at least one currently recorded video frame into the first video file when a recorded video adding instruction is received;
when an existing video adding instruction is received, adding at least one video frame corresponding to the existing video adding instruction in a pre-selected second video file into the first video file, including:
when an existing video adding instruction is received: if the existing video adding instruction is the first one received after the first video file is created, acquiring, based on the duration of the current existing video adding instruction, at least one video frame from the pre-selected second video file starting from the video start position of the second video file, and adding the at least one video frame into the first video file; if the existing video adding instruction is not the first one received after the first video file is created, acquiring, based on the duration of the current existing video adding instruction, at least one video frame from the second video file starting from the end position of the video frames last acquired from the second video file, and adding the at least one video frame into the first video file.
2. The method according to claim 1, wherein the adding at least one video frame corresponding to the existing video adding instruction in a pre-selected second video file to the first video file each time an existing video adding instruction is received comprises:
when an existing video adding instruction is received, determining the total playing duration of all video frames currently added to the first video file;
and, starting from the position in a pre-selected second video file that corresponds to the playing duration, acquiring at least one video frame from the second video file based on the duration of the current existing video adding instruction, and adding the at least one video frame into the first video file.
3. The method of any of claims 1-2, wherein the adding to the first video file comprises:
added to the end of the video of the first video file.
4. The method according to any one of claims 1-2, further comprising: and in the process of adding the video frame to the first video file, playing the added video frame.
5. An apparatus for producing a video file, the apparatus comprising:
a creation module for creating a first video file;
the adding module is used for adding at least one video frame corresponding to the existing video adding instruction in a pre-selected second video file into the first video file when the existing video adding instruction is received; adding at least one currently recorded video frame into the first video file when a recorded video adding instruction is received;
the adding module is used for:
when an existing video adding instruction is received: if the existing video adding instruction is the first one received after the first video file is created, acquiring, based on the duration of the current existing video adding instruction, at least one video frame from the pre-selected second video file starting from the video start position of the second video file, and adding the at least one video frame into the first video file; if the existing video adding instruction is not the first one received after the first video file is created, acquiring, based on the duration of the current existing video adding instruction, at least one video frame from the second video file starting from the end position of the video frames last acquired from the second video file, and adding the at least one video frame into the first video file.
6. The apparatus of claim 5, wherein the adding module is configured to:
when an existing video adding instruction is received, determining the total playing duration of all video frames currently added to the first video file;
and, starting from the position in a pre-selected second video file that corresponds to the playing duration, acquiring at least one video frame from the second video file based on the duration of the current existing video adding instruction, and adding the at least one video frame into the first video file.
7. The apparatus of any one of claims 5-6, wherein the adding module is to:
added to the end of the video of the first video file.
8. The apparatus according to any one of claims 5-6, wherein the apparatus further comprises a playback module configured to:
and in the process of adding the video frame to the first video file, playing the added video frame.
9. A computer device comprising a processor and a memory, said memory having stored therein at least one instruction, at least one program, set of codes or set of instructions, said at least one instruction, said at least one program, set of codes or set of instructions being loaded and executed by said processor to implement a method of producing a video file according to any one of claims 1 to 4.
10. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to carry out the method of producing a video file according to any one of claims 1 to 4.
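For illustration only, the appending logic of claims 1 and 3 can be sketched in Python. This is a hypothetical model, not the patented implementation: lists of frames stand in for decoded video, a fixed frame rate converts instruction durations into frame counts, and the names `VideoProject`, `add_existing`, and `add_recorded` are assumptions introduced for the sketch.

```python
class VideoProject:
    """Minimal model of the claimed method: a first video file is built
    up by appending frames from a pre-selected second (existing) video
    file and from the camera, in the order instructions are received."""

    def __init__(self, second_video_frames, fps=30):
        self.first_video = []                     # the first video file being created
        self.second_video = second_video_frames   # pre-selected existing video
        self.fps = fps
        self.cursor = 0  # end position of the last acquisition from the second video

    def add_existing(self, duration_seconds):
        """Handle an 'existing video adding' instruction (claim 1).

        The first such instruction reads from the start of the second
        video; each later one continues from where the previous
        acquisition ended. The number of frames acquired is derived
        from the duration of the current instruction."""
        count = int(duration_seconds * self.fps)
        frames = self.second_video[self.cursor:self.cursor + count]
        self.cursor += len(frames)
        self.first_video.extend(frames)           # append at the video end (claim 3)

    def add_recorded(self, recorded_frames):
        """Handle a 'recorded video adding' instruction: append the
        currently recorded frames to the end of the first video file."""
        self.first_video.extend(recorded_frames)


# Usage: two seconds of the existing video, then a recorded clip,
# then one more second continuing from the cursor position.
project = VideoProject(second_video_frames=list(range(300)), fps=30)
project.add_existing(2.0)            # appends frames 0..59
project.add_recorded(["r1", "r2"])   # appends the recorded frames
project.add_existing(1.0)            # appends frames 60..89
```

Claim 2 describes an alternative in which the read position is derived from the total playing duration of the frames already in the first video file rather than from a per-source cursor; that variant would replace `self.cursor` with a value computed from `len(self.first_video) / self.fps`.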
Application CN201810876288.7A, priority and filing date 2018-08-03 — Method and device for making video file — granted as CN108966026B (Active)

Publications (2)

CN108966026A — published 2018-12-07
CN108966026B — granted 2021-03-30






Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant