CN111935395A - Video file generation method and electronic equipment - Google Patents

Video file generation method and electronic equipment

Info

Publication number
CN111935395A
Authority
CN
China
Prior art keywords
file
sound information
image frames
image frame
video file
Prior art date
Legal status
Pending
Application number
CN202010608654.8A
Other languages
Chinese (zh)
Inventor
陈文辉
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202010608654.8A
Publication of CN111935395A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters, for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/76: Television signal recording

Abstract

The present application discloses a video file generation method and an electronic device. The method comprises the following steps: acquiring a file, wherein the file comprises continuous image frames; acquiring sound information in the environment where the current file is located; and synthesizing the file and the sound information into a video file. With the scheme provided by the present application, after a file containing continuous image frames and the sound information in the environment where the current file is located are obtained, the file and the sound information can be synthesized into a video file, so that continuous image frames meeting the user's requirements can be synthesized into a video file, improving the user's viewing experience.

Description

Video file generation method and electronic equipment
Technical Field
The present disclosure relates to the field of image control, and in particular, to a video file generation method and an electronic device.
Background
With the continuous development of chip technology, the shooting functions of mobile phones and cameras have become more and more powerful; current shooting functions include, for example, background blurring, light-painting shutter, time-lapse photography, filters, color grading and continuous (burst) shooting, which provide great convenience to users. Taking the continuous shooting function as an example, it yields a group of images with strong picture-to-picture continuity; when such a sequence is viewed by switching rapidly between the images, the movement of people, the events that occurred and so on during shooting can be reproduced to a certain extent.
Disclosure of Invention
Embodiments of the present application aim to provide a video file generation method and an electronic device.
In order to solve the technical problem, the embodiment of the application adopts the following technical scheme: a video file generation method, comprising:
acquiring a file, wherein the file comprises continuous image frames;
acquiring sound information in the environment of the current file;
and synthesizing the file and the sound information into a video file.
The beneficial effect of this application lies in: after the file containing the continuous image frames and the sound information in the environment where the current file is located are obtained, the file and the sound information can be synthesized into a video file, so that continuous image frames meeting the user's requirements can be synthesized into a video file, improving the user's viewing experience.
In one embodiment, the obtaining the file includes:
acquiring an instruction for entering a photographing preview interface, wherein an image in the photographing preview interface is an image subjected to first processing;
entering the photographing preview interface based on the instruction, and extracting the continuous image frames subjected to the first processing in the photographing preview interface according to a preset frequency;
and generating the file according to the continuous image frames subjected to the first processing.
The beneficial effect of this embodiment lies in: the images subjected to the first processing in the photographing preview interface can be extracted, at a preset frequency, as continuous image frames subjected to the first processing, and the file is generated from those frames. A video is thus generated through the photographing preview interface without starting the video recording function, so that even when the video recording function lacks the function corresponding to the first processing, a video composed of image frames subjected to the first processing can still be generated; the function corresponding to the first processing is thereby reused for video recording.
In one embodiment, the obtaining the file includes:
acquiring an instruction for entering a photographing preview interface;
entering a photographing preview interface based on the instruction, and extracting continuous image frames in the photographing preview interface according to a preset frequency;
generating the file from the successive image frames.
In one embodiment, said synthesizing said file with said sound information into a video file comprises:
acquiring an image frame in the file;
performing second processing on the image frames in the file;
and synthesizing a video file based on the second processed image frame and the sound information.
In one embodiment, after the obtaining of the sound information in the environment where the current file is located, the method further includes:
and respectively storing the file and the sound information in the environment where the current file is located.
In one embodiment, after synthesizing the file with the sound information into a video file, the method further comprises:
and deleting the stored files and the sound information in the environment where the current file is located.
The beneficial effect of this embodiment lies in: after the file and the sound information have been synthesized into the video file, the locally stored file and sound information used for the synthesis are no longer needed, so the stored file and the sound information in the environment where the current file is located are deleted, saving storage space.
In one embodiment, further comprising:
respectively sending the continuous image frames and the sound information in the environment of the current file to an encoder;
and extracting the coded image frame and the coded sound information from the coder.
In one embodiment, said synthesizing said file with said sound information into a video file comprises:
sending an encoded image frame, a timestamp corresponding to the image frame, encoded sound information and a timestamp corresponding to the sound information to a branching mixer, wherein the timestamp corresponding to the image frame is used for representing the acquisition time of the image frame, the timestamp corresponding to the sound information is used for representing the acquisition time of the sound information, and the branching mixer is used for synthesizing the encoded image frame and the encoded sound information into a video file based on the timestamps corresponding to the image frame and the sound information;
extracting the synthesized video file from the branching mixer.
In one embodiment, further comprising:
under the condition that a playing instruction of the video file is received, acquiring image frames and audio data coded in the video file;
decoding the encoded image frames and audio data to obtain original image frames before encoding, time stamps corresponding to the image frames, original sound information before encoding and time stamps corresponding to the sound information;
and synchronously playing the original image frame and the original sound information based on the time stamp corresponding to the image frame and the time stamp corresponding to the sound information.
The application provides an electronic device, including:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a file, and the file comprises continuous image frames;
the second acquisition module is used for acquiring the sound information in the environment where the current file is located;
and the synthesis module is used for synthesizing the file and the sound information into a video file.
In one embodiment, the first obtaining module includes:
the first acquisition sub-module is used for acquiring an instruction for entering a photographing preview interface, wherein an image in the photographing preview interface is an image subjected to first processing;
the first extraction submodule is used for entering the photographing preview interface based on the instruction and extracting the continuous image frames subjected to the first processing in the photographing preview interface according to a preset frequency;
and the first generation sub-module is used for generating the file according to the continuous image frames subjected to the first processing.
In one embodiment, the first obtaining module includes:
the second acquisition submodule is used for acquiring an instruction for entering a photographing preview interface;
the second extraction submodule is used for entering a photographing preview interface based on the instruction and extracting continuous image frames in the photographing preview interface according to a preset frequency;
and the second generation submodule is used for generating the file according to the continuous image frames.
In one embodiment, the synthesis module comprises:
the third obtaining sub-module is used for obtaining the image frames in the file;
the processing submodule is used for carrying out second processing on the image frames in the file;
and the first synthesis submodule is used for synthesizing a video file based on the second processed image frame and the sound information.
In one embodiment, further comprising:
and the storage module is used for respectively storing the file and the sound information in the environment where the current file is located after the sound information in the environment where the current file is located is obtained.
In one embodiment, further comprising:
and the deleting module is used for deleting the stored file and the sound information in the environment where the current file is located after the file and the sound information are synthesized into a video file.
In one embodiment, further comprising:
the sending module is used for respectively sending the continuous image frames and the sound information in the environment where the current file is located to an encoder;
and the extraction module is used for extracting the coded image frame and the coded sound information from the coder.
In one embodiment, the synthesis module comprises:
the second synthesis submodule is used for sending the encoded image frame, the timestamp corresponding to the image frame, the encoded sound information and the timestamp corresponding to the sound information to the branching mixer, wherein the timestamp corresponding to the image frame is used for representing the acquisition time of the image frame, the timestamp corresponding to the sound information is used for representing the acquisition time of the sound information, and the branching mixer is used for synthesizing the encoded image frame and the encoded sound information into a video file based on the timestamps corresponding to the image frame and the sound information;
and the extraction submodule is used for extracting the synthesized video file from the branching mixer.
In one embodiment, further comprising:
the third acquisition module is used for acquiring the image frame and the audio data coded in the video file under the condition of receiving a playing instruction of the video file;
the decoding module is used for decoding the encoded image frames and the encoded audio data to obtain original image frames before encoding, timestamps corresponding to the image frames, original sound information before encoding and timestamps corresponding to the sound information;
and the playing module is used for synchronously playing the original image frame and the original sound information based on the time stamp corresponding to the image frame and the time stamp corresponding to the sound information.
Drawings
Fig. 1 is a flowchart of a video file generation method according to an embodiment of the present application;
Fig. 2A is a flowchart of a video file generation method according to another embodiment of the present application;
Fig. 2B is a schematic flowchart of a method for generating a video file by using a photo preview function according to an embodiment of the present application;
Fig. 2C is a schematic flowchart of a method for generating a video file by using a photo preview function according to another embodiment of the present application;
Fig. 2D is a schematic flowchart of generating a video file by a software package according to an embodiment of the present application;
Fig. 3 is a flowchart of a video file generation method according to another embodiment of the present application;
Fig. 4 is a block diagram of an electronic device according to an embodiment of the present application;
Fig. 5 is a block diagram of an electronic device according to another embodiment of the present application.
Detailed Description
Various aspects and features of the present application are described herein with reference to the drawings.
It will be understood that various modifications may be made to the embodiments of the present application. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifications of embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the application.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the application and, together with a general description of the application given above and the detailed description of the embodiments given below, serve to explain the principles of the application.
These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It is also understood that, although the present application has been described with reference to some specific examples, a person skilled in the art will certainly be able to achieve many other equivalent forms of the application having the characteristics set forth in the claims, all of which therefore fall within the field of protection defined thereby.
The above and other aspects, features and advantages of the present application will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present application are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the application, which can be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail, to avoid obscuring the present application with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present application in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the application.
Fig. 1 is a flowchart of a video file generation method according to an embodiment of the present application, and as shown in fig. 1, the method includes the following steps S11-S13:
in step S11, a file is acquired, the file containing successive image frames;
in step S12, sound information in the environment where the current file is located is acquired;
in step S13, the file is synthesized with the sound information into a video file.
This embodiment can be applied to equipment with a photographing function or an image storage function, such as a computer or a mobile terminal. The equipment acquires a file containing continuous image frames; the continuous image frames in the file may be a plurality of images shot continuously with the photographing function, may be continuous image frames extracted from a photographing preview interface at a preset frequency, or may be locally pre-stored images with continuity.
Of course, the file may be a file including a plurality of images having continuity, or may be a video file generated based on a plurality of images having continuity and not including sound information.
Sound information in the environment where the current file is located is then acquired. Specifically, the recording function may be started at the same time as the photographing function or the photographing preview interface, and the microphone of the device collects the sound information in the current environment. The file is then synthesized with the sound information into a video file.
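To make these three steps concrete, a minimal Java sketch of how they could be wired together is given below. All of the type names (Frame, FrameGrabber, AudioRecorder, VideoSynthesizer) are hypothetical illustrations introduced here for exposition; the patent does not name such components.

    import java.util.List;

    // Hypothetical collaborators standing in for the capture and synthesis stages.
    interface Frame { long timestampUs(); byte[] pixels(); }
    interface FrameGrabber { List<Frame> grabPreviewFrames(); }
    interface AudioRecorder { byte[] recordPcm(); }
    interface VideoSynthesizer { void mux(List<Frame> frames, byte[] pcm, String outputPath); }

    public class VideoFileGenerator {
        public void generate(FrameGrabber grabber, AudioRecorder recorder,
                             VideoSynthesizer synthesizer, String outputPath) {
            List<Frame> frames = grabber.grabPreviewFrames(); // S11: file of continuous image frames
            byte[] pcmAudio = recorder.recordPcm();           // S12: sound in the current environment
            synthesizer.mux(frames, pcmAudio, outputPath);    // S13: synthesize into one video file
        }
    }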
The beneficial effect of this application lies in: after the file containing the continuous image frames and the sound information in the environment where the current file is located are obtained, the file and the sound information can be synthesized into a video file, so that continuous image frames meeting the user's requirements can be synthesized into a video file, improving the user's viewing experience.
In one embodiment, as shown in FIG. 2A, the above step S11 can be implemented as the following steps S21-S23:
in step S21, an instruction to enter a photo preview interface is obtained, where an image in the photo preview interface is an image subjected to first processing;
in step S22, entering a photographing preview interface based on the instruction, and extracting consecutive image frames subjected to the first processing in the photographing preview interface according to a preset frequency;
in step S23, a file is generated from the consecutive image frames subjected to the first processing.
The embodiment introduces a technical scheme for acquiring a file containing continuous image frames through a photo preview interface, which is specifically as follows:
acquiring an instruction for entering a photographing preview interface, wherein an image in the photographing preview interface is an image subjected to first processing;
the photographing preview interface is used for displaying the picture aligned with the camera, and the continuous image frames of the first processing may be image frames in which one or more functions of picture style, filter, color mixing, front camera blurring, rear camera blurring, and the like are added to the image frames according to a preset modification mode meeting the user's requirements.
Entering a photographing preview interface based on the instruction, and extracting continuous image frames subjected to first processing in the photographing preview interface according to a preset frequency;
the preset frequency can be automatically adjusted according to needs, and in general, if image frames with certain continuity are sequentially displayed at the speed of 25-30 frames per second, a picture seen by human eyes is a dynamic picture similar to a video, so that when the continuous image frames are extracted, extraction can be carried out at the frequency of 1/30 seconds per frame to 1/25 seconds per frame in order to ensure the continuity of the generated video. A file is then generated from the successive image frames that have undergone the first process.
For a mobile phone provider, some additional functions within the photographing and video recording features of a phone are designed relatively independently, for example the picture style, filter, color grading, front-camera blurring and rear-camera blurring functions mentioned above. When such an additional function is added to the photographing feature, a round of system modification and algorithm integration must be performed on the processing chip, and the phone provider must pay the chip provider a fee for it. When the same function is then added to the video recording feature, the phone provider must pay the chip provider again; and for the chip provider, adding these functions to video recording requires further software-layer modification and a second round of algorithm integration. Repeated integration of the same algorithm thus consumes extra manpower and engineering overhead. Against this background, the present application develops, on a development platform, a software package as shown in fig. 2B or fig. 2C, whose concrete structure is shown in fig. 2D. The software package comprises a rendering module, an image encoding module, a sound recording module, a sound encoding module and a synthesis module. When the photographing preview interface is opened, the package acquires the image frames in the preview interface and the sound information in the current environment. The images are rendered by the rendering module and, after rendering, encoded by the image encoding module; for example, the original images in JPG format are encoded into a soundless video stream in H.264 format, which is sent to the synthesis module. At the same time, the sound recording module records the sound information in the current environment, the sound encoding module encodes it, for example from PCM format into AAC format, and the encoded sound information is also sent to the synthesis module. The synthesis module then synthesizes the sound information and the video information into a video file with sound, for example an MP4 file.
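The patent does not name a platform, but on Android the image and sound encoding modules could be configured roughly as follows with the standard MediaCodec API. This is a sketch under that assumption; the resolution, bit rates and sample rate are illustrative values only.

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import java.io.IOException;

    public class EncoderFactory {
        // Video encoder producing an H.264 ("video/avc") stream; JPG preview
        // frames would first be decoded to YUV before being queued as input.
        public static MediaCodec createH264Encoder(int width, int height) throws IOException {
            MediaFormat fmt = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
            fmt.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
            fmt.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
            fmt.setInteger(MediaFormat.KEY_FRAME_RATE, 30); // matches the 25-30 fps extraction
            fmt.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
            MediaCodec codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
            codec.configure(fmt, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            return codec;
        }

        // Audio encoder turning PCM input into AAC ("audio/mp4a-latm") output.
        public static MediaCodec createAacEncoder() throws IOException {
            MediaFormat fmt = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 44_100, 1);
            fmt.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
            fmt.setInteger(MediaFormat.KEY_BIT_RATE, 128_000);
            MediaCodec codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
            codec.configure(fmt, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            return codec;
        }
    }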
In the present application, on the premise that the photographing function already has the picture style, filter, color grading, front-camera blurring and rear-camera blurring functions mentioned above, one or more of these functions may be turned on in the photographing preview interface, so that the image in the photographing preview interface becomes an image subjected to the first processing. Using the fact that the picture in the photographing preview interface is synchronized with the scene at which the camera is aimed, the images in the preview interface are extracted to obtain continuous image frames subjected to the first processing, and a file containing these frames is then generated.
In this embodiment, the image in the photographing preview interface may be changed into an image subjected to the first processing in either of the following two ways:
in a first mode
When the photographing preview interface has been entered, a click operation on an additional-function button in the interface is received, and the interface switches to a photographing preview with the additional function corresponding to that button. For example, the photographing preview interface contains buttons for functions such as picture style, filter, color grading and blurring, and clicking a button changes the picture in the preview interface into a picture with the corresponding additional function applied.
Mode two
A start instruction for a photographing program that contains a preset additional function is received, and the photographing program is started based on the instruction; when the program enters the photographing preview interface, the preset additional function is applied to the interface. For example, a user taps a beauty-camera APP that includes a beautification function; the APP is started by the tap, and the beautification function is applied to the photographing preview interface entered through the APP.
The beneficial effect of this embodiment lies in: the images subjected to the first processing in the photographing preview interface can be extracted, at a preset frequency, as continuous image frames subjected to the first processing, and the file is generated from those frames. A video is thus generated through the photographing preview interface without starting the video recording function, so that even when the video recording function lacks the function corresponding to the first processing, a video composed of image frames subjected to the first processing can still be generated; the function corresponding to the first processing is thereby reused for video recording.
In one embodiment, the above step S11 can also be implemented as the following steps A1-A3:
in step a1, acquiring an instruction for entering a photo preview interface;
in step a2, entering a photo preview interface based on the instruction, and extracting consecutive image frames in the photo preview interface according to a preset frequency;
in step a3, a file is generated from successive image frames.
This embodiment provides a method for acquiring continuous image frames that have not undergone the first processing, based on the photographing preview interface. Specifically, an instruction to enter the photographing interface is acquired, the photographing preview interface is entered based on the instruction, and continuous image frames in the preview interface are extracted at a preset frequency. In this embodiment the continuous image frames are not subjected to the first processing; they can be regarded as image frames without any additional function such as picture style, filter, color grading, front-camera blurring or rear-camera blurring. A file is generated from the continuous image frames.
In one embodiment, the above step S13 can be implemented as the following steps B1-B3:
in step B1, image frames in a file are acquired;
in step B2, second processing is performed on the image frames in the file;
in step B3, a video file is synthesized based on the second processed image frame and the sound information.
In this embodiment, the image frames in the file are acquired and subjected to the second processing. The second processing may refer to processing of the image frames such as picture-style adjustment, color adjustment, an added blurring function, or an added filter function. In this embodiment the image frames in the file may be processed after the file has been generated; in that case, even if the photographing function itself has no picture style, filter, color grading, front-camera blurring or rear-camera blurring functions, a video file with special effects such as picture styles, filters, color grading and blurring can still be obtained. A video file is then synthesized based on the second-processed image frames and the sound information.
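As a stand-in for the second processing, the sketch below applies a simple grayscale conversion to one frame. The patent leaves the concrete effect (picture style, filter, color grading, blurring) open, so this is only an illustrative filter written in plain Java.

    import java.awt.image.BufferedImage;

    public class SecondProcessing {
        // Convert a frame to grayscale using Rec. 601 luma weights; any other
        // per-pixel effect (style, tint, blur) would slot in the same way.
        public static BufferedImage toGrayscale(BufferedImage frame) {
            BufferedImage out = new BufferedImage(frame.getWidth(), frame.getHeight(),
                    BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < frame.getHeight(); y++) {
                for (int x = 0; x < frame.getWidth(); x++) {
                    int rgb = frame.getRGB(x, y);
                    int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                    int luma = (int) (0.299 * r + 0.587 * g + 0.114 * b);
                    out.setRGB(x, y, (luma << 16) | (luma << 8) | luma);
                }
            }
            return out;
        }
    }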
Of course, in addition to the above, the image frames may be processed at the time of file generation, that is, at the time of the above step S11, and at this time, the step S11 may be performed by:
acquiring an instruction for entering a photographing preview interface; entering a photographing preview interface based on the instruction, and extracting continuous image frames in the photographing preview interface according to a preset frequency; performing a third process on the continuous image frames; and generating a file according to the continuous image frames after the third processing. The third process may be a step of adding a special effect to the picture, that is, may be the same as the second process.
In one embodiment, after the above step S12, the method may further be implemented as the following steps:
and respectively storing the file and the sound information in the environment where the current file is located.
In one embodiment, after the above step S13, the method may further be implemented as the following steps:
and deleting the stored files and the sound information in the environment where the current file is located.
In this embodiment, when the file and the sound information in the environment where the current file is located are obtained, they may be stored locally, so that the video file can be synthesized from them. After the file and the sound information have been synthesized into the video file, the locally stored copies used for the synthesis are no longer needed, so the stored file and the sound information in the environment where the current file is located are deleted, saving storage space.
In one embodiment, as shown in FIG. 3, the method may also be implemented as steps S31-S32 as follows:
in step S31, the continuous image frames and the sound information in the environment where the current file is located are respectively sent to the encoder;
in step S32, the encoded image frames and the encoded sound information are extracted from the encoder.
As shown in fig. 2D, in this embodiment the continuous image frames and the sound information in the environment where the current file is located are sent to the encoders, that is, to the image encoder and the sound encoder respectively. Encoding denoises and compresses the images and removes redundant information from the sound, reducing the space resources occupied by the sound and the images; after encoding, the encoded image frames and the encoded sound information are extracted from the encoders.
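Under the same Android MediaCodec assumption, pushing a chunk of raw PCM through the sound encoder and collecting the compressed output could look like the following synchronous-mode sketch. MediaMuxerSink is a hypothetical consumer standing in for the synthesis stage; format-change handling is elided.

    import android.media.MediaCodec;
    import java.nio.ByteBuffer;

    // Hypothetical downstream consumer of compressed samples (e.g. the muxer).
    interface MediaMuxerSink {
        void writeAudioSample(ByteBuffer sample, MediaCodec.BufferInfo info);
    }

    public class AudioEncodeLoop {
        public static void encodeChunk(MediaCodec aacEncoder, byte[] pcm,
                                       long presentationTimeUs, MediaMuxerSink sink) {
            // Feed one chunk of raw PCM into the encoder's input queue.
            int inIndex = aacEncoder.dequeueInputBuffer(10_000);
            if (inIndex >= 0) {
                ByteBuffer in = aacEncoder.getInputBuffer(inIndex);
                in.clear();
                in.put(pcm);
                aacEncoder.queueInputBuffer(inIndex, 0, pcm.length, presentationTimeUs, 0);
            }
            // Drain whatever compressed samples are ready on the output side.
            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
            int outIndex = aacEncoder.dequeueOutputBuffer(info, 10_000);
            while (outIndex >= 0) {
                ByteBuffer out = aacEncoder.getOutputBuffer(outIndex);
                sink.writeAudioSample(out, info); // info carries size and presentationTimeUs
                aacEncoder.releaseOutputBuffer(outIndex, false);
                outIndex = aacEncoder.dequeueOutputBuffer(info, 0);
            }
        }
    }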
In one embodiment, the above step S13 can also be implemented as the following steps C1-C2:
in step C1, sending the encoded image frames, timestamps corresponding to the image frames, encoded sound information, and timestamps corresponding to the sound information to a branching mixer, where the timestamps corresponding to the image frames are used to represent the acquiring time of the image frames, the timestamps corresponding to the sound information are used to represent the acquiring time of the sound information, and the branching mixer is used to synthesize the encoded image frames and the encoded sound information into a video file based on the timestamps corresponding to the image frames and the sound information;
in step C2, the synthesized video file is extracted from the branching mixer.
This embodiment synthesizes the encoded image frames extracted from the image encoder with the encoded sound information extracted from the sound encoder. During synthesis, the encoded image frames, the timestamps corresponding to the image frames, the encoded sound information and the timestamps corresponding to the sound information are sent to the branching mixer, where the timestamp corresponding to an image frame represents its acquisition time and the timestamp corresponding to the sound information represents its acquisition time. The branching mixer synthesizes the video file based on the timestamps corresponding to the image frames and the sound information, which ensures that, in the video file synthesized from the encoded image frames and the encoded sound information, the displayed picture and the output sound stay synchronized.
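On Android, the role the text assigns to the branching mixer corresponds to the MediaMuxer class, which interleaves the two encoded streams by the presentation timestamps carried in each sample's BufferInfo. A minimal sketch under that assumption:

    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    public class SynthesisModule {
        private final MediaMuxer muxer;
        private final int videoTrack;
        private final int audioTrack;

        // The MediaFormat arguments are assumed to be the actual output formats
        // reported by the H.264 and AAC encoders once encoding has started.
        public SynthesisModule(String mp4Path, MediaFormat h264Format, MediaFormat aacFormat)
                throws IOException {
            muxer = new MediaMuxer(mp4Path, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
            videoTrack = muxer.addTrack(h264Format);
            audioTrack = muxer.addTrack(aacFormat);
            muxer.start();
        }

        // info.presentationTimeUs is the acquisition-time timestamp that keeps
        // picture and sound aligned in the synthesized file.
        public void writeVideoSample(ByteBuffer sample, MediaCodec.BufferInfo info) {
            muxer.writeSampleData(videoTrack, sample, info);
        }

        public void writeAudioSample(ByteBuffer sample, MediaCodec.BufferInfo info) {
            muxer.writeSampleData(audioTrack, sample, info);
        }

        public void finish() {
            muxer.stop();
            muxer.release();
        }
    }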
In one embodiment, the method may also be implemented as the following steps D1-D3:
in step D1, when a play instruction for the video file is received, acquiring the encoded image frame and audio data in the video file;
in step D2, decoding the encoded image frames and audio data to obtain original image frames before encoding, timestamps corresponding to the image frames, original sound information before encoding, and timestamps corresponding to the sound information;
in step D3, the original image frame and the original sound information are played simultaneously based on the time stamp corresponding to the image frame and the time stamp corresponding to the sound information.
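For the playback side, the encoded tracks and their timestamps can be read back out of the synthesized file before decoding. A sketch under the same Android assumption, using MediaExtractor; decoding with MediaCodec and the actual rendering are elided.

    import android.media.MediaExtractor;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    public class PlaybackReader {
        public static void readSamples(String videoPath) throws IOException {
            MediaExtractor extractor = new MediaExtractor();
            extractor.setDataSource(videoPath);
            for (int i = 0; i < extractor.getTrackCount(); i++) {
                extractor.selectTrack(i); // select both the video and the audio track
            }
            ByteBuffer buffer = ByteBuffer.allocate(1 << 20);
            while (extractor.readSampleData(buffer, 0) >= 0) {
                long timestampUs = extractor.getSampleTime(); // drives A/V synchronization
                int track = extractor.getSampleTrackIndex();  // which decoder gets this sample
                // Feed (buffer, timestampUs) to the decoder for this track, then
                // schedule rendering so frames and sound stay aligned (step D3).
                extractor.advance();
            }
            extractor.release();
        }
    }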
Fig. 4 is a block diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 4, the electronic device includes the following modules:
a first obtaining module 41, configured to obtain a file, where the file includes consecutive image frames;
a second obtaining module 42, configured to obtain sound information in an environment where the current file is located;
and a synthesizing module 43 for synthesizing the file and the sound information into a video file.
In one embodiment, as shown in fig. 5, the first obtaining module 41 includes:
the first obtaining submodule 51 is configured to obtain an instruction to enter a photo preview interface, where an image in the photo preview interface is an image subjected to first processing;
the first extraction submodule 52 is configured to enter the photo preview interface based on the instruction, and extract continuous image frames subjected to the first processing in the photo preview interface according to a preset frequency;
a first generation submodule 53 for generating a file from the consecutive image frames subjected to the first processing.
In one embodiment, the first obtaining module includes:
the second acquisition submodule is used for acquiring an instruction for entering a photographing preview interface;
the second extraction submodule is used for entering the photographing preview interface based on the instruction and extracting continuous image frames in the photographing preview interface according to the preset frequency;
and the second generation submodule is used for generating a file according to the continuous image frames.
In one embodiment, a synthesis module comprises:
the third acquisition submodule is used for acquiring image frames in the file;
the processing submodule is used for carrying out second processing on the image frames in the file;
and the first synthesis submodule is used for synthesizing a video file based on the second-processed image frames and the sound information.
In one embodiment, further comprising:
and the storage module is used for respectively storing the file and the sound information in the environment where the current file is located after the sound information in the environment where the current file is located is obtained.
In one embodiment, further comprising:
and the deleting module is used for deleting the stored files and the sound information in the environment where the current file is located after the files and the sound information are synthesized into the video file.
The beneficial effect of this embodiment lies in: after the file and the sound information have been synthesized into the video file, the locally stored file and sound information used for the synthesis are no longer needed, so the stored file and the sound information in the environment where the current file is located are deleted, saving storage space.
In one embodiment, further comprising:
the sending module is used for respectively sending the continuous image frames and the sound information in the environment where the current file is located to the encoder;
and the extraction module is used for extracting the coded image frame and the coded sound information from the coder.
In one embodiment, a synthesis module comprises:
the second synthesis submodule is used for sending the encoded image frame, the timestamp corresponding to the image frame, the encoded sound information and the timestamp corresponding to the sound information to the branching mixer, wherein the timestamp corresponding to the image frame is used for representing the acquisition time of the image frame, the timestamp corresponding to the sound information is used for representing the acquisition time of the sound information, and the branching mixer is used for synthesizing the encoded image frame and the encoded sound information into a video file based on the timestamps corresponding to the image frame and the sound information;
and the extraction submodule is used for extracting the synthesized video file from the branching mixer.
In one embodiment, further comprising:
the third acquisition module is used for acquiring the image frame and the audio data which are coded in the video file under the condition of receiving a playing instruction of the video file;
the decoding module is used for decoding the encoded image frames and the encoded audio data to obtain original image frames before encoding, timestamps corresponding to the image frames, original sound information before encoding and timestamps corresponding to the sound information;
and the playing module is used for synchronously playing the original image frame and the original sound information based on the time stamp corresponding to the image frame and the time stamp corresponding to the sound information.
The above embodiments are only exemplary embodiments of the present application, and are not intended to limit the present application, and the protection scope of the present application is defined by the claims. Various modifications and equivalents may be made by those skilled in the art within the spirit and scope of the present application and such modifications and equivalents should also be considered to be within the scope of the present application.

Claims (10)

1. A video file generation method, comprising:
acquiring a file, wherein the file comprises continuous image frames;
acquiring sound information in the environment of the current file;
and synthesizing the file and the sound information into a video file.
2. The method of claim 1, the obtaining the file, comprising:
acquiring an instruction for entering a photographing preview interface, wherein an image in the photographing preview interface is an image subjected to first processing;
entering the photographing preview interface based on the instruction, and extracting the continuous image frames subjected to the first processing in the photographing preview interface according to a preset frequency;
and generating the file according to the continuous image frames subjected to the first processing.
3. The method of claim 1, the obtaining the file, comprising:
acquiring an instruction for entering a photographing preview interface;
entering a photographing preview interface based on the instruction, and extracting continuous image frames in the photographing preview interface according to a preset frequency;
generating the file from the successive image frames.
4. The method of claim 3, said synthesizing said file with said sound information into a video file, comprising:
acquiring an image frame in the file;
performing second processing on the image frames in the file;
and synthesizing a video file based on the second processed image frame and the sound information.
5. The method of claim 1, after obtaining the sound information in the environment where the current file is located, further comprising:
and respectively storing the file and the sound information in the environment where the current file is located.
6. The method of claim 5, further comprising, after synthesizing the file with the sound information into a video file:
and deleting the stored files and the sound information in the environment where the current file is located.
7. The method of claim 1, further comprising:
respectively sending the continuous image frames and the sound information in the environment of the current file to an encoder;
and extracting the coded image frame and the coded sound information from the coder.
8. The method of claim 7, said synthesizing said file with said sound information into a video file, comprising:
sending an encoded image frame, a timestamp corresponding to the image frame, encoded sound information and a timestamp corresponding to the sound information to a branching mixer, wherein the timestamp corresponding to the image frame is used for representing the acquisition time of the image frame, the timestamp corresponding to the sound information is used for representing the acquisition time of the sound information, and the branching mixer is used for synthesizing the encoded image frame and the encoded sound information into a video file based on the timestamps corresponding to the image frame and the sound information;
extracting the synthesized video file from the branching mixer.
9. The method of claim 8, further comprising:
under the condition that a playing instruction of the video file is received, acquiring image frames and audio data coded in the video file;
decoding the encoded image frame and audio data to obtain an original image frame before encoding, a time stamp corresponding to the image frame, original sound information before encoding and a time stamp corresponding to the sound information;
and synchronously playing the original image frame and the original sound information based on the time stamp corresponding to the image frame and the time stamp corresponding to the sound information.
10. An electronic device, comprising:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a file, and the file comprises continuous image frames;
the second acquisition module is used for acquiring the sound information in the environment where the current file is located;
and the synthesis module is used for synthesizing the file and the sound information into a video file.
CN202010608654.8A (filed 2020-06-29, priority 2020-06-29): Video file generation method and electronic equipment. Status: Pending. Published as CN111935395A.

Priority Applications (1)

CN202010608654.8A (priority date 2020-06-29, filing date 2020-06-29): Video file generation method and electronic equipment, published as CN111935395A


Publications (1)

Publication number: CN111935395A; publication date: 2020-11-13

Family

ID=73316399

Family Applications (1)

CN202010608654.8A (priority date 2020-06-29, filing date 2020-06-29): Video file generation method and electronic equipment; status: Pending; published as CN111935395A

Country Status (1)

CN: CN111935395A

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103595925A (en) * 2013-11-15 2014-02-19 深圳市中兴移动通信有限公司 Method and device for synthesizing video with photos
CN104023192A (en) * 2014-06-27 2014-09-03 深圳市中兴移动通信有限公司 Method and device for recording video
CN106610982A (en) * 2015-10-22 2017-05-03 中兴通讯股份有限公司 Media file generation method and apparatus
CN107197187A (en) * 2017-05-27 2017-09-22 维沃移动通信有限公司 The image pickup method and mobile terminal of a kind of video
CN107295284A (en) * 2017-08-03 2017-10-24 浙江大学 A kind of generation of video file being made up of audio and picture and index playing method, device
CN108769572A (en) * 2018-04-26 2018-11-06 国政通科技股份有限公司 Monitor video file generated, device and terminal device
CN108965757A (en) * 2018-08-02 2018-12-07 广州酷狗计算机科技有限公司 video recording method, device, terminal and storage medium



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20201113)