CN112887515B - Video generation method and device - Google Patents


Info

Publication number
CN112887515B
Authority
CN
China
Prior art keywords
video
video frame
picture
pictures
shooting object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110104965.5A
Other languages
Chinese (zh)
Other versions
CN112887515A (en)
Inventor
张波
王青鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110104965.5A priority Critical patent/CN112887515B/en
Publication of CN112887515A publication Critical patent/CN112887515A/en
Application granted granted Critical
Publication of CN112887515B publication Critical patent/CN112887515B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/2228 Video assist systems used in motion picture production, e.g. video cameras connected to viewfinders of motion picture cameras or related video signal processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a video generation method and device, and belongs to the technical field of communication. The method comprises the following steps: acquiring N video frame pictures, wherein N is an integer greater than 1, and the N video frame pictures are pictures acquired while a first shooting object is in a motion state; collecting a long exposure picture shot while the first shooting object is in the motion state, wherein the long exposure picture comprises a smear of the first shooting object; and generating a first video based on the long exposure picture and the N video frame pictures, wherein the first video comprises N synthesized pictures, and a synthesized picture is a picture generated by synthesizing the smear of the first shooting object into a video frame picture. The video generation method provided by the application can reduce the amount of operation required of a user to acquire a video with a better motion effect, thereby improving the operation convenience of the electronic equipment.

Description

Video generation method and device
Technical Field
The application belongs to the technical field of communication, and particularly relates to a video generation method and device.
Background
The video recording function is one of the main functions of electronic equipment: it allows a user to dynamically record everything happening nearby, and improves the user experience of the electronic equipment. However, at present, when a user records a video of a sports scene with an electronic device, the motion effect of a moving object (such as a person, an animal, or a vehicle) cannot be displayed with high quality. If the user needs to acquire a video with a better motion effect, the recorded video needs to be imported into a specific application program on the electronic equipment, which adds the motion effect to the video; this operation is complex and time-consuming, and reduces the operation convenience of the electronic equipment.
Therefore, when current electronic equipment is used to acquire a video with a better motion effect, the operation convenience is low.
Disclosure of Invention
The embodiment of the application aims to provide a video generation method and device, which can solve the problem that the current electronic equipment is low in operation convenience when acquiring videos with better motion effects.
In order to solve the technical problems, the application is realized as follows:
in a first aspect, an embodiment of the present application provides a video generating method, including:
acquiring N video frame pictures, wherein N is an integer greater than 1, and the N video frame pictures are pictures acquired while a first shooting object is in a motion state;
collecting a long exposure picture shot while the first shooting object is in the motion state, wherein the long exposure picture comprises a smear of the first shooting object;
generating a first video based on the long exposure picture and the N video frame pictures, wherein the first video comprises N synthesized pictures, and a synthesized picture is a picture generated by synthesizing the smear of the first shooting object into a video frame picture.
In a second aspect, an embodiment of the present application provides a video generating apparatus, including:
The video frame picture acquisition module is used for acquiring N video frame pictures, wherein N is an integer greater than 1, and the N video frame pictures are pictures acquired when a first shooting object is in a motion state;
the long exposure picture acquisition module is used for acquiring a long exposure picture shot under the state that the first shooting object is in motion, wherein the long exposure picture comprises a smear of the first shooting object;
the video generation module is used for generating a first video based on the long exposure picture and the N video frame pictures, wherein the first video comprises N synthesized pictures, and a synthesized picture is a picture generated by synthesizing the smear of the first shooting object into a video frame picture.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instruction stored on the memory and executable on the processor, the program or instruction implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
According to the embodiment of the application, the smear of the first shooting object in the motion state in the long exposure picture is synthesized to N video frame pictures of the first shooting object in the motion state, N synthesized pictures are generated, and a first video comprising the N synthesized pictures is generated, so that the frame images in the first video have the smear, and the motion effect of the first video is increased through the smear.
Drawings
Fig. 1 is a schematic flow chart of a video generating method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 3 is a second schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 4 is a third schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 5 is a fourth schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present application;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
fig. 8 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged where appropriate so that embodiments of the application may be practiced in sequences other than those illustrated and described herein, and that the objects identified by "first", "second", etc. are generally of a type and do not limit the number of objects, e.g., the first shot object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
The video generating method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the present application provides a video generating method applied to an electronic device, as shown in fig. 1, including the following steps:
Step 101, acquiring N video frame pictures, wherein N is an integer greater than 1, and the N video frame pictures are pictures acquired while a first shooting object is in a motion state;
Step 102, collecting a long exposure picture shot while the first shooting object is in the motion state, wherein the long exposure picture comprises a smear of the first shooting object;
Step 103, generating a first video based on the long exposure picture and the N video frame pictures, wherein the first video includes N synthesized pictures, and a synthesized picture is a picture generated by synthesizing the smear of the first shooting object into a video frame picture.
The electronic device can synthesize the smear of the first shooting object in the motion state in the long exposure picture to N video frame pictures of the first shooting object in the motion state, generate N synthesized pictures, and generate the first video comprising the N synthesized pictures, so that the frame images in the first video have the smear to realize the motion effect of the first video increased by the smear.
In the step 101, N video frame pictures of the first shooting object in the motion state are acquired, and the N video frame pictures may be acquired when the electronic device receives a video generation instruction for instructing to generate a video with a motion effect based on the N video frame pictures.
It should be noted that, the video generating instruction may be an instruction that the electronic device receives a video generating input trigger of a user; or, the video generating instruction may also be an instruction that the electronic device automatically triggers according to a preset rule.
The video generating input may be any input for instructing to generate a video with a motion effect based on the N video frame pictures, and may include at least one of a voice input, a space gesture input, a touch input, and the like.
For example, the electronic device may acquire the N video frame pictures of the first shooting object in a moving state when receiving a touch input that the user selects the N video frame pictures in the picture set and clicks a "generate moving video" button.
In addition, the N video frame pictures are pictures acquired under the motion state of the first shooting object, that is, when the first shooting object is in the motion state, the electronic device shoots the N video frame pictures, each video frame picture includes an image area of the first shooting object, and in the N video frame pictures, the image area of the first shooting object in at least two video frame pictures changes.
For example, in the case where the electronic apparatus photographs the running of the person 21 (i.e., the first photographic subject), the display position of the image area of the person 21 in the photographing interface of the electronic apparatus may be moved in the direction indicated by the arrow, and may be moved from the display position 22 shown in fig. 2 to the display position 23 shown in fig. 3 during the running of the person 21.
In addition, the N video frame pictures may be pictures stored in advance by the electronic device, that is, the acquiring N video frame pictures may be that the electronic device reads the N video frame pictures in a memory thereof according to the video generation instruction.
Of course, in order to achieve the effect of real-time improvement of the motion effect of the recorded video by the electronic device, the N video frame pictures may also be N video frame pictures currently collected by the electronic device, and specifically, before the step 101, the method may further include:
if a first input is received or a first shooting object in a motion state is detected to exist in the shooting preview interface under the condition that the shooting preview interface is displayed, starting the motion shooting mode;
the acquiring N video frame pictures comprises:
And collecting N video frame pictures in the motion shooting mode.
Here, the electronic device may start the motion shooting mode according to the received first input or detecting that the first shooting object in a motion state exists in the shooting preview interface when the shooting preview interface is displayed, and collect the N video frame pictures in the motion shooting mode, so that the first video with a motion effect may be generated based on the N video frame pictures collected in real time, thereby improving timeliness of video processing.
The first input may be any input for indicating to start the motion shooting mode, and may be at least one of a voice input, a space gesture input, a touch input, and the like.
For example, as shown in fig. 4, the electronic device displays a shooting preview interface, and when the electronic device receives a click input of the user for the "smear shooting" button 41 in the shooting preview interface, the electronic device turns on the above-described motion shooting mode.
It should be noted that, when the electronic device receives the first input, the N video frame pictures may be acquired in the motion shooting mode, or when the motion shooting mode is turned on, the electronic device detects whether the first shooting object in the motion state exists in the shooting preview interface, and if the first shooting object in the motion state exists, the N video frame pictures are acquired.
Of course, when the electronic device displays the shooting preview interface, the electronic device may detect whether a first shooting object in a motion state exists in the shooting preview interface, and if the first shooting object exists, the electronic device may start the motion shooting mode and collect the N video frame pictures; and under the condition that the first shooting object is not present, the electronic equipment can continuously detect whether the first shooting object in a motion state exists in the shooting preview interface or not until the first shooting object is detected.
In addition, the first subject may be at least one subject whose image is displayed in the photographing preview interface and in a moving state, and the subject may be any subject capable of moving, which may include at least one of a person, an animal, a vehicle, and the like. For example, as shown in fig. 2, the first photographic subject includes a person 21.
In the embodiment of the application, the electronic device may acquire and cache the N video frame pictures in a shooting preview mode at a preset frame rate, and may then read the cached N video frame pictures.
The preset frame rate may be a frame rate preset in the electronic device for recording video. For example, the preset frame rate may be 100 fps, in which case the electronic device acquires video frame pictures at a rate of 100 frames per second, i.e., one video frame picture every one hundredth of a second.
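Purely as an illustrative aid, a minimal sketch of capturing and caching preview frames at a preset frame rate is given below. It assumes an OpenCV-accessible camera at index 0 and a three-second cache; the camera index, frame rate value, and cache length are assumptions and are not taken from the embodiment.

```python
# Illustrative sketch only; the camera index (0), the 100 fps value and the
# cache length are assumptions, not requirements of the embodiment.
import collections
import cv2

PRESET_FPS = 100      # preset frame rate for recording video
CACHE_SECONDS = 3     # assumed length of the preview cache

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FPS, PRESET_FPS)   # request the preset frame rate

# ring buffer holding the most recently acquired video frame pictures
frame_cache = collections.deque(maxlen=PRESET_FPS * CACHE_SECONDS)

while len(frame_cache) < frame_cache.maxlen:
    ok, frame = cap.read()
    if not ok:
        break
    frame_cache.append(frame)

cap.release()
n_video_frame_pictures = list(frame_cache)  # the N video frame pictures to process
```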
In addition, the electronic device may acquire the N video frame pictures as follows: the electronic device encodes video frame pictures captured in a video generation mode to obtain a recorded video, and decodes the recorded video, thereby obtaining the N decoded video frame pictures.
In step 102, under the condition that the first shooting object is in a motion state, the electronic device may control the camera to take a long exposure picture of the first shooting object, so that a smear capable of showing a motion effect is generated in the long exposure picture obtained by taking the long exposure picture, and the electronic device may obtain the long exposure picture.
The step 102 may be performed before the step 101, may be performed after the step 101, or may be performed simultaneously with the step 101, and is not limited thereto.
Specifically, the N video frame pictures are pictures collected by the first camera;
the capturing a long exposure picture captured under a motion state of the first capturing object may include:
and in the process of acquiring N video frame pictures by the first camera, controlling a second camera to shoot long exposure pictures when the first shooting object is in a motion state.
Here, the electronic device can also shoot the long exposure picture through the second camera at the same time in the process of collecting the N video frame pictures by the first camera, so that the processing efficiency of the video can be improved, the scene similarity between the N video frame pictures and the long exposure picture can be ensured, and the quality of the processed video is improved.
For example, when the electronic device turns on the motion shooting mode, the electronic device may control the camera A (i.e., the first camera) and the camera B (i.e., the second camera) to operate simultaneously, namely: controlling the camera A to acquire N video frame pictures and cache them in a shooting preview mode, or controlling the camera A to record a video in a video generation mode, the recorded video being formed by encoding N video frame pictures; and controlling the camera B to take a long exposure shot, thereby obtaining the long exposure picture.
In addition, the long exposure picture may be a picture taken by a camera of the electronic device when an exposure parameter of the camera is higher than a preset value; when shooting is performed with such an exposure parameter, if the first shooting object is in a motion state, a smear of the first shooting object is generated in the long exposure picture obtained by the shooting.
For example, in the process that the person 21 shot by the electronic device moves from the display position 22 shown in fig. 2 to the display position 23 shown in fig. 3, if the electronic device controls the camera B thereof to take a long exposure photograph, a long exposure picture shown in fig. 5 may be shot, where the long exposure picture includes a smear 51 of the person 21, and the smear 51 may show that the person 21 is in motion.
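As an aside, where only a single camera is available, a long exposure picture containing a smear can be approximated in software by averaging a burst of frames; this is merely an illustrative substitute for the dual-camera, high-exposure capture described by the embodiment, not the embodiment itself.

```python
# Software approximation of a long exposure picture by averaging a burst of
# frames: a moving subject leaves a smear while the static background stays
# sharp. Illustration only; the embodiment captures a real long exposure
# picture with a second camera.
import numpy as np

def simulate_long_exposure(frames):
    """frames: list of same-sized uint8 BGR images covering the motion."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for f in frames:
        acc += f.astype(np.float64)
    return (acc / len(frames)).astype(np.uint8)
```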
In the step 103, after the electronic device obtains the N video frame pictures and the long exposure picture, the electronic device may generate the first video based on the long exposure picture and the N video frame pictures.
In the embodiment of the application, generating the first video based on the long exposure picture and the N video frame pictures may be implemented as follows: the electronic device extracts the smear in the long exposure picture, synthesizes the extracted smear into the image area associated with the first shooting object in each video frame picture to obtain N synthesized pictures, and encodes the N synthesized pictures to generate the first video, so that each frame picture in the first video has the smear of the first shooting object, thereby improving the motion effect of the recorded video.
For example, in the process that the person 21 shot by the electronic device moves from the display position 22 shown in fig. 2 to the display position 23 shown in fig. 3, the electronic device may control its camera A to acquire N video frame pictures during the running of the person 21, control its camera B to shoot the long exposure picture shown in fig. 5, extract the smear 51 generated by the person 21 in the long exposure picture, synthesize the smear 51 into the image area behind the person 21 in each of the N video frame pictures, and encode the N video frame pictures synthesized with the smear to generate the first video.
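Purely for illustration, a minimal sketch of the compositing and encoding described above follows. It assumes OpenCV-style BGR arrays and that the smear can be isolated by differencing the long exposure picture with a clean background picture; the embodiment does not prescribe a particular extraction algorithm, and the helper names (extract_smear, composite_smear, generate_first_video) are hypothetical.

```python
# Minimal sketch under stated assumptions: estimate the smear region by
# differencing the long exposure picture with a clean background picture,
# blend it into every video frame picture, and encode the results.
import cv2
import numpy as np

def extract_smear(long_exposure, background, thresh=25):
    """Return the smear source image and a binary mask of where the smear is."""
    diff = cv2.absdiff(long_exposure, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return long_exposure, mask

def composite_smear(frame, smear, mask, alpha=0.6):
    """Blend the smear into one video frame picture inside the masked area."""
    out = frame.copy()
    region = mask > 0
    out[region] = (alpha * smear[region] + (1 - alpha) * frame[region]).astype(np.uint8)
    return out

def generate_first_video(frames, long_exposure, background, path, fps=30):
    """Synthesize the smear into each frame and encode the N synthesized pictures."""
    smear, mask = extract_smear(long_exposure, background)
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in frames:
        writer.write(composite_smear(frame, smear, mask))
    writer.release()
```

One synthesized picture is written per video frame picture, matching the N synthesized pictures that make up the first video.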
The obtaining N video frame pictures may be decoding the recorded video, and in order to avoid errors of the generated first video in the process of generating the first video, the generated N composite pictures may be encoded based on encoding information of the N video frame pictures in the recorded video.
Specifically, the acquiring N video frame pictures may include:
recording a second video of the first shooting object in a motion state;
decoding the second video to obtain N video frame pictures of the second video;
the generating the first video may include:
Synthesizing the smear of the first shooting object to each video frame picture to obtain N synthesized pictures;
and encoding the N synthesized pictures to generate a first video, wherein the encoding information of the synthesized pictures is the encoding information of the video frame pictures corresponding to the synthesized pictures in the second video.
Here, the electronic device may synthesize the smear from N video frame pictures obtained by decoding the recorded video, and encode the synthesized picture corresponding to each video frame picture based on the encoding information of the video frame picture to generate the first video, so that not only can the motion effect of the recorded video be improved, but also the quality of the generated video can be improved.
For example, when the electronic device is in the motion shooting mode, the electronic device may record, through its camera A, a video 1 (i.e., the second video) of the person 21 moving from the display position 22 shown in fig. 2 to the display position 23 shown in fig. 3, and decode the video 1 to obtain N video frame pictures; then, the smear 51 of the person 21 may be extracted from the long exposure picture shot by the camera B, and the smear 51 may be synthesized into each video frame picture obtained by decoding to generate N synthesized pictures; finally, the N synthesized pictures and the encoding information of the video frame picture corresponding to each synthesized picture are acquired, and the N synthesized pictures are encoded according to the acquired encoding information to generate a video 2 (i.e., the first video).
The encoding information of a video frame picture in the second video may be, for example, an encoding sequence number of the video frame picture in the second video.
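For illustration, a sketch of this decode-then-re-encode path is given below, assuming that the encoding information amounts to the frame order (sequence number) of each picture in the second video; the device's actual encoder parameters are not specified in the text, and the function names are hypothetical.

```python
# Sketch only: decode the second video, then write the synthesized pictures
# back out in the same frame order, preserving the original frame rate.
import cv2

def decode_second_video(path):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if the container lacks fps
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames, fps

def reencode_in_original_order(synthesized, fps, path):
    # each synthesized picture keeps the sequence position of the video frame
    # picture it was generated from, so the first video mirrors the second
    h, w = synthesized[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in synthesized:
        writer.write(frame)
    writer.release()
```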
In addition, the N video frame pictures may be pictures collected and cached by the electronic device in the shooting preview mode, so the electronic device may also generate the first video with a motion effect based on the N video frame pictures collected in the shooting preview mode.
Specifically, the acquiring N video frame pictures may include:
collecting N video frame pictures in a video generation preview mode, and obtaining time stamps of the N video frame pictures;
the generating the first video may include:
synthesizing the smear of the first shooting object to each video frame picture to obtain N synthesized pictures;
and encoding each synthesized picture in the N synthesized pictures according to the timestamp of the video frame picture corresponding to the synthesized picture to generate a first video.
Here, the electronic device may synthesize the smear for N video frame pictures collected in the shooting preview mode, and encode the synthesized picture corresponding to each video frame picture based on the timestamp of the video frame picture to generate the first video, so as to not only improve the motion effect of the recorded video, but also improve the quality of the generated video; and the video processing process can be carried out without video decoding, so that the complexity of video processing is reduced, and the processing efficiency is further improved.
For example, when the electronic device is in the motion shooting mode, the electronic device may first acquire, through its camera A, N video frame pictures of the person 21 moving from the display position 22 shown in fig. 2 to the display position 23 shown in fig. 3, and cache the pictures; then, the smear 51 of the person 21 may be extracted from the long exposure picture shot by the camera B, and the smear 51 may be synthesized into each cached video frame picture to generate N synthesized pictures; finally, the N synthesized pictures and the timestamp of the video frame picture corresponding to each synthesized picture are obtained, and the N synthesized pictures are encoded according to the obtained timestamps to generate a video 2 (i.e., the first video).
It should be noted that the timestamp is the time point at which each video frame picture is collected, and encoding the synthesized picture corresponding to each video frame picture based on the timestamp of the video frame picture to generate the first video may be: encoding the N synthesized pictures in the order of the timestamps of the video frame pictures corresponding to the synthesized pictures, to obtain the first video.
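A corresponding sketch for this preview-mode path, in which each synthesized picture is written out in the order of the timestamp of its source video frame picture, might look as follows; the in-memory pairing of timestamps with pictures is an assumed representation, not something fixed by the embodiment.

```python
# Sketch only: sort the synthesized pictures by the timestamps of their source
# video frame pictures, then encode them in that order.
import cv2

def encode_by_timestamp(stamped_pictures, path, fps=30):
    """stamped_pictures: list of (timestamp_seconds, synthesized_picture) pairs."""
    ordered = sorted(stamped_pictures, key=lambda item: item[0])
    h, w = ordered[0][1].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for _, picture in ordered:
        writer.write(picture)
    writer.release()
```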
In the embodiment of the present application, the foregoing synthesizing the smear of the first shooting object into each video frame image to obtain N synthesized images may be synthesizing the smear into a preset area associated with the first shooting object in the video frame image. For example, the smear may be set in an image area located behind the first photographic subject.
Alternatively, since the movement direction of the first shooting object may change during the movement, the electronic device may, according to a certain rule, set the smear in an image area associated with the movement direction.
Specifically, the synthesizing the smear of the first shooting object to each video frame picture to obtain N synthesized pictures includes:
acquiring first position information of a kth frame of video frame pictures in N video frame pictures and second position information of at least one video frame picture acquired before the kth frame of video frame pictures, wherein the first position information and the second position information are position information of the first shooting object;
determining a movement direction of the first shooting object based on the first position information and second position information of at least one video frame picture;
and combining the smear of the first shooting object into an image area associated with the motion direction in the kth frame of video frame picture.
Here, the electronic device may determine, according to the position information of the first shooting object in the kth frame video frame picture and at least one video frame image acquired before the kth frame video frame picture, a motion direction of the first shooting object in the kth frame video frame picture, and synthesize a smear of the first shooting object into an image area associated with the determined motion direction in the kth frame video picture, so that when the motion direction of the first shooting object changes, the image area in which the smear is set can be adjusted in time, and accuracy of increasing a motion effect in the first video is improved.
The first position information and the second position information may be coordinates of a center position of the first shooting object in the video frame picture.
In addition, the image area associated with the motion direction may be an image area determined in the kth frame of video frame picture based on the motion direction according to a preset rule.
For example, as shown in fig. 2, assuming that the current frame picture is the kth frame video frame picture, if it is determined that the person 21 runs to the left, it is possible to determine that the image area behind the person 21 is the image area associated with the movement direction, that is, that the smear is provided in the image area behind the person 21; if it is determined that the person 21 is backing up to the right, it is possible to determine that the image area in front of the person 21 is the image area associated with the above-described movement direction, that is, the image area in front of the person 21 is set with the smear, and so on.
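For illustration only, a sketch of estimating the motion direction from the centre positions of the first shooting object in the k-th frame and an earlier frame, and of shifting the smear to the trailing side, is given below; the shift distance, the blending weight, and the helper names are assumptions introduced for this sketch.

```python
# Sketch only: direction from two centre positions, then composite the smear
# (image plus mask, e.g. as extracted in the earlier sketch) behind the subject.
import cv2
import numpy as np

def motion_direction(first_pos, second_pos):
    """first_pos: centre (x, y) of the subject in the k-th frame;
    second_pos: its centre in an earlier frame."""
    return np.array([first_pos[0] - second_pos[0],
                     first_pos[1] - second_pos[1]], dtype=np.float64)

def place_smear_behind(frame, smear, mask, direction, offset=40, alpha=0.6):
    """Composite the smear into the image area on the trailing side of the subject."""
    norm = np.linalg.norm(direction)
    shift = np.zeros(2) if norm == 0 else -direction / norm * offset  # opposite to motion
    m = np.float32([[1, 0, shift[0]], [0, 1, shift[1]]])
    h, w = frame.shape[:2]
    shifted_smear = cv2.warpAffine(smear, m, (w, h))
    shifted_mask = cv2.warpAffine(mask, m, (w, h))
    out = frame.copy()
    region = shifted_mask > 0
    out[region] = (alpha * shifted_smear[region]
                   + (1 - alpha) * frame[region]).astype(np.uint8)
    return out
```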
It should be noted that, in the video generating method provided by the embodiment of the present application, the execution subject may be a video generating apparatus, or a control module, in the video generating apparatus, for executing the video generating method. In the embodiment of the present application, the video generating apparatus executing the video generating method is taken as an example to describe the video generating apparatus provided in the embodiment of the present application.
Referring to fig. 6, an embodiment of the present application provides a video generating apparatus, as shown in fig. 6, the video generating apparatus 600 includes:
the video frame picture acquisition module 601 is configured to acquire N video frame pictures, where N is an integer greater than 1, and the N video frame pictures are pictures acquired while a first shooting object is in a motion state;
a long exposure picture acquisition module 602, configured to acquire a long exposure picture taken when the first photographic subject is in a motion state, where the long exposure picture includes a smear of the first photographic subject;
the video generating module 603 is configured to generate a first video based on the long exposure picture and the N video frame pictures, where the first video includes N synthesized pictures, and a synthesized picture is a picture generated by synthesizing the smear of the first shooting object into a video frame picture.
Here, the video generating apparatus 600 may automatically generate a video with a better motion effect based on the acquired long exposure picture and N video frame pictures, so as to reduce the operation amount of the user in acquiring the video with the better motion effect, thereby improving the operation convenience of the electronic device.
Optionally, the video frame picture obtaining module 601 includes:
the video recording unit is used for recording a second video of the first shooting object in a motion state;
the video decoding unit is used for decoding the second video to obtain N video frame pictures of the second video;
the video generation module 603 includes:
the first picture synthesis unit is used for synthesizing the smear of the first shooting object to each video frame picture to obtain N synthesized pictures;
and the first video coding unit is used for coding the N synthesized pictures to generate a first video, wherein the coding information of the synthesized pictures is the coding information of the video frame pictures corresponding to the synthesized pictures in the second video.
Here, the video generating apparatus 600 may synthesize a smear from N video frame pictures obtained by decoding a recorded video, and encode a synthesized picture corresponding to each video frame picture based on encoding information of the video frame picture to generate a first video, so that not only a motion effect of the recorded video but also quality of the generated video may be improved.
Optionally, the video frame picture obtaining module 601 is specifically configured to:
Collecting N video frame pictures in a video generation preview mode, and obtaining time stamps of the N video frame pictures;
the video generation module 603 includes:
the second picture synthesis unit is used for synthesizing the smear of the first shooting object to each video frame picture to obtain N synthesized pictures;
and the second video coding unit is used for coding each synthesized picture in the N synthesized pictures according to the time stamp of the video frame picture corresponding to the synthesized picture to generate a first video.
Here, the video generating apparatus 600 may synthesize a smear for N video frame pictures acquired in the photographing preview mode, and encode a synthesized picture corresponding to each video frame picture based on a time stamp of the video frame picture to generate a first video, so that not only a motion effect of the recorded video may be improved, but also quality of the generated video may be improved; and the video processing process can be carried out without video decoding, so that the complexity of video processing is reduced, and the processing efficiency is further improved.
Optionally, the first picture synthesis unit includes:
a position information obtaining subunit, configured to obtain first position information of a kth frame of video frame pictures in the N video frame pictures, and second position information of at least one video frame picture acquired before the kth frame of video frame pictures, where the first position information and the second position information are position information of the first shooting object;
A motion direction determining subunit, configured to determine a motion direction of the first shooting object based on the first location information and second location information of at least one video frame picture;
and the synthesizing subunit is used for synthesizing the smear of the first shooting object into an image area associated with the motion direction in the kth frame of video frame picture.
Here, the video generating apparatus 600 may determine the motion direction of the first shot in the kth frame of video frame picture according to the position information of the first shot in the kth frame of video frame picture and at least one video frame image acquired before the kth frame of video frame picture, and synthesize the smear of the first shot into the image area associated with the determined motion direction in the kth frame of video picture, so that when the motion direction of the first shot changes, the image area in which the smear is set can be adjusted in time, and accuracy of increasing the motion effect in the first video is improved.
Optionally, the N video frame pictures are pictures acquired by the first camera;
the long exposure picture acquisition module 602 is specifically configured to:
and in the process of acquiring N video frame pictures by the first camera, controlling a second camera to shoot long exposure pictures when the first shooting object is in a motion state.
Here, in the process that the first camera collects the N video frame pictures, the video generating apparatus 600 may also simultaneously capture the long exposure picture through the second camera, so that not only the processing efficiency of the video may be improved, but also the scene similarity between the N video frame pictures and the long exposure picture may be ensured, thereby improving the quality of the processed video.
Optionally, the apparatus 600 further includes:
the mode starting module is used for starting the motion shooting mode if a first input is received or a first shooting object in a motion state is detected in the shooting preview interface under the condition that the shooting preview interface is displayed;
the video frame picture obtaining module 601 is specifically configured to:
and collecting N video frame pictures in the motion shooting mode.
Here, the video generating apparatus 600 may generate the first video with the motion effect based on N video frame pictures acquired in real time, so as to improve timeliness of video processing.
The video generating device in the embodiment of the application can be a device, and can also be a component, an integrated circuit or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and embodiments of the present application are not limited in particular.
The video generating apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The video generating apparatus provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 5, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 7, the embodiment of the present application further provides an electronic device 700, including a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and capable of running on the processor 701, where the program or the instruction implements each process of the embodiment of the video generating method when executed by the processor 701, and the process can achieve the same technical effects, and for avoiding repetition, a description is omitted herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: radio frequency unit 801, network module 802, audio output unit 803, input unit 804, sensor 805, display unit 806, user input unit 807, interface unit 808, memory 809, and processor 810.
Those skilled in the art will appreciate that the electronic device 800 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 810 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
Wherein the processor 810 is configured to:
acquiring N video frame pictures, wherein N is an integer greater than 1, and the N video frame pictures are pictures acquired while a first shooting object is in a motion state;
collecting a long exposure picture shot while the first shooting object is in the motion state, wherein the long exposure picture comprises a smear of the first shooting object;
generating a first video based on the long exposure picture and the N video frame pictures, wherein the first video comprises N synthesized pictures, and a synthesized picture is a picture generated by synthesizing the smear of the first shooting object into a video frame picture.
Here, the electronic device 800 may automatically generate a video with a better motion effect based on the acquired long exposure picture and N video frame pictures, so that the operation amount of the user in acquiring the video with the better motion effect may be reduced, and further, the operation convenience of the electronic device may be improved.
Optionally, the processor 810 is specifically configured to:
recording a second video of the first shooting object in a motion state;
decoding the second video to obtain N video frame pictures of the second video;
synthesizing the smear of the first shooting object to each video frame picture to obtain N synthesized pictures;
and encoding the N synthesized pictures to generate a first video, wherein the encoding information of the synthesized pictures is the encoding information of the video frame pictures corresponding to the synthesized pictures in the second video.
Here, the electronic device 800 may synthesize a smear from N video frame pictures obtained by decoding a recorded video, and encode a synthesized picture corresponding to each video frame picture based on encoding information of the video frame picture to generate a first video, so that not only can a motion effect of the recorded video be improved, but also quality of the generated video can be improved.
Optionally, the processor 810 is specifically configured to:
collecting N video frame pictures in a video generation preview mode, and obtaining time stamps of the N video frame pictures;
synthesizing the smear of the first shooting object to each video frame picture to obtain N synthesized pictures;
and encoding each synthesized picture in the N synthesized pictures according to the timestamp of the video frame picture corresponding to the synthesized picture to generate a first video.
Here, the electronic device 800 may synthesize a smear for N video frame pictures acquired in the shooting preview mode, and encode a synthesized picture corresponding to each video frame picture based on a time stamp of the video frame picture to generate a first video, so that not only a motion effect of the recorded video may be improved, but also quality of the generated video may be improved; and the video processing process can be carried out without video decoding, so that the complexity of video processing is reduced, and the processing efficiency is further improved.
Optionally, the processor 810 is specifically configured to:
acquiring first position information of a kth frame of video frame pictures in N video frame pictures and second position information of at least one video frame picture acquired before the kth frame of video frame pictures, wherein the first position information and the second position information are position information of the first shooting object;
Determining a movement direction of the first shooting object based on the first position information and second position information of at least one video frame picture;
and combining the smear of the first shooting object into an image area associated with the motion direction in the kth frame of video frame picture.
Here, the electronic device 800 may determine, according to the position information of the first shooting object in the kth frame video frame picture and at least one video frame image acquired before the kth frame video frame picture, a motion direction of the first shooting object in the kth frame video frame picture, and synthesize a smear of the first shooting object into an image area associated with the determined motion direction in the kth frame video picture, so that when the motion direction of the first shooting object changes, the image area in which the smear is set can be adjusted in time, and accuracy of increasing a motion effect in the first video is improved.
Optionally, the N video frame pictures are pictures acquired by the first camera;
the processor 810 is specifically configured to:
and in the process of acquiring N video frame pictures by the first camera, controlling a second camera to shoot long exposure pictures when the first shooting object is in a motion state.
Here, in the process that the first camera collects the N video frame pictures, the electronic device 800 may also simultaneously capture the long exposure picture through the second camera, so that not only the processing efficiency of the video may be improved, but also the scene similarity between the N video frame pictures and the long exposure picture may be ensured, thereby improving the quality of the processed video.
Optionally, the processor 810 is further configured to:
if a first input is received or a first shooting object in a motion state is detected to exist in the shooting preview interface under the condition that the shooting preview interface is displayed, starting the motion shooting mode;
the obtaining N video frame pictures includes:
and collecting N video frame pictures in the motion shooting mode.
Here, the electronic device 800 may generate the first video with the motion effect based on N video frame pictures acquired in real time, so as to improve timeliness of video processing.
It should be appreciated that in embodiments of the present application, the input unit 804 may include a graphics processor (Graphics Processing Unit, GPU) 8041 and a microphone 8042, with the graphics processor 8041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071 is also referred to as a touch screen. The touch panel 8071 may include two parts, a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. The memory 809 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 810 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 810.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the embodiment of the video generating method, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the video generation method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (8)

1. A video generation method, comprising:
acquiring N video frame pictures, wherein N is an integer greater than 1, and the N video frame pictures are pictures acquired while a first shooting object is in a motion state;
collecting a long exposure picture shot while the first shooting object is in the motion state, wherein the long exposure picture comprises a smear of the first shooting object;
generating a first video based on the long exposure picture and the N video frame pictures;
the generating a first video based on the long exposure picture and the N video frame pictures includes:
extracting a smear in the long exposure picture;
acquiring first position information of a kth frame of video frame pictures in N video frame pictures and second position information of at least one video frame picture acquired before the kth frame of video frame pictures, wherein the first position information and the second position information are position information of the first shooting object;
determining a movement direction of the first shooting object based on the first position information and second position information of at least one video frame picture;
combining the smear of the first shooting object into an image area associated with the motion direction in the kth frame of video frame picture to obtain N frames of synthesized pictures;
And encoding the N frames of synthesized pictures to generate a first video.
2. The method of claim 1, wherein the acquiring N video frame pictures comprises:
recording a second video of the first shooting object in a motion state;
decoding the second video to obtain N video frame pictures of the second video,
wherein the coding information of the synthesized picture is the coding information of the video frame picture corresponding to the synthesized picture in the second video.
3. The method of claim 1, wherein the acquiring N video frame pictures comprises:
collecting N video frame pictures in a video generation preview mode, and obtaining time stamps of the N video frame pictures;
the generating a first video includes:
synthesizing the smear of the first shooting object to each video frame picture to obtain N synthesized pictures;
and encoding each synthesized picture in the N synthesized pictures according to the timestamp of the video frame picture corresponding to the synthesized picture to generate a first video.
4. The method of claim 1, wherein prior to the acquiring the N video frame pictures, the method further comprises:
If a first input is received or a first shooting object in a motion state is detected to exist in the shooting preview interface under the condition that the shooting preview interface is displayed, starting a motion shooting mode;
the obtaining N video frame pictures includes:
and collecting N video frame pictures in the motion shooting mode.
5. A video generating apparatus, comprising:
the video frame picture acquisition module is used for acquiring N video frame pictures, wherein N is an integer greater than 1, and the N video frame pictures are pictures acquired when a first shooting object is in a motion state;
the long exposure picture acquisition module is used for acquiring a long exposure picture shot under the state that the first shooting object is in motion, wherein the long exposure picture comprises a smear of the first shooting object;
the video generation module is used for generating a first video based on the long exposure picture and N video frame pictures;
the video generation module is specifically configured to:
extracting a smear in the long exposure picture; acquiring first position information of a kth frame of video frame pictures in N video frame pictures and second position information of at least one video frame picture acquired before the kth frame of video frame pictures, wherein the first position information and the second position information are position information of the first shooting object; determining a movement direction of the first shooting object based on the first position information and second position information of at least one video frame picture; combining the smear of the first shooting object into an image area associated with the motion direction in the kth frame of video frame picture to obtain N frames of synthesized pictures; and encoding the N frames of synthesized pictures to generate a first video.
6. The apparatus of claim 5, wherein the video frame picture acquisition module comprises:
a video recording unit, configured to record a second video of the first shooting object in a motion state;
a video decoding unit, configured to decode the second video to obtain the N video frame pictures of the second video,
wherein the coding information of each synthesized picture is the coding information of the video frame picture corresponding to that synthesized picture in the second video.
7. The apparatus of claim 5, wherein the video frame picture acquisition module is specifically configured to:
collect the N video frame pictures in a video generation preview mode, and obtain timestamps of the N video frame pictures;
the video generation module comprises:
a second picture synthesis unit, configured to synthesize the smear of the first shooting object into each video frame picture to obtain N synthesized pictures;
and a second video coding unit, configured to encode each of the N synthesized pictures according to the timestamp of the video frame picture corresponding to that synthesized picture to generate the first video.
8. The apparatus of claim 5, wherein the apparatus further comprises:
a mode starting module, configured to, in a case where a shooting preview interface is displayed, start a motion shooting mode if a first input is received or a first shooting object in a motion state is detected in the shooting preview interface;
the video frame picture acquisition module is specifically configured to:
collect the N video frame pictures in the motion shooting mode.
CN202110104965.5A 2021-01-26 2021-01-26 Video generation method and device Active CN112887515B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110104965.5A CN112887515B (en) 2021-01-26 2021-01-26 Video generation method and device

Publications (2)

Publication Number Publication Date
CN112887515A CN112887515A (en) 2021-06-01
CN112887515B true CN112887515B (en) 2023-09-19

Family

ID=76053242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110104965.5A Active CN112887515B (en) 2021-01-26 2021-01-26 Video generation method and device

Country Status (1)

Country Link
CN (1) CN112887515B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113923368B (en) * 2021-11-25 2024-06-18 维沃移动通信有限公司 Shooting method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004186994A (en) * 2002-12-03 2004-07-02 Toshiba Corp Method, device, and program for compositing object image
JP2006146823A (en) * 2004-11-24 2006-06-08 Nippon Hoso Kyokai <Nhk> Video object trajectory adding system and video object trajectory adding program
JP2017111719A (en) * 2015-12-18 2017-06-22 株式会社コーエーテクモゲームス Video processing device, video processing method and video processing program
CN107333056A (en) * 2017-06-13 2017-11-07 努比亚技术有限公司 Image processing method, device and the computer-readable recording medium of moving object
CN107835363A (en) * 2017-10-30 2018-03-23 努比亚技术有限公司 A kind of filming control method, terminal device and storage medium
CN108055477A (en) * 2017-11-23 2018-05-18 北京美摄网络科技有限公司 A kind of method and apparatus for realizing smear special efficacy
CN108282612A (en) * 2018-01-12 2018-07-13 广州市百果园信息技术有限公司 Method for processing video frequency and computer storage media, terminal
CN110189285A (en) * 2019-05-28 2019-08-30 北京迈格威科技有限公司 A kind of frames fusion method and device
CN111654628A (en) * 2020-06-10 2020-09-11 努比亚技术有限公司 Video shooting method and device and computer readable storage medium
CN112135045A (en) * 2020-09-23 2020-12-25 努比亚技术有限公司 Video processing method, mobile terminal and computer storage medium

Also Published As

Publication number Publication date
CN112887515A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN113766313B (en) Video data processing method and device, electronic equipment and storage medium
CN106713768A (en) Person-scenery image synthesis method and system, and computer device
CN112153301B (en) Shooting method and electronic equipment
CN112637500B (en) Image processing method and device
CN112135046A (en) Video shooting method, video shooting device and electronic equipment
CN112770059B (en) Photographing method and device and electronic equipment
CN112669381B (en) Pose determination method and device, electronic equipment and storage medium
CN112422798A (en) Photographing method and device, electronic equipment and storage medium
CN112333382B (en) Shooting method and device and electronic equipment
WO2023006009A1 (en) Photographing parameter determination method and apparatus, and electronic device
CN113852757B (en) Video processing method, device, equipment and storage medium
CN112887515B (en) Video generation method and device
CN113207038B (en) Video processing method, video processing device and electronic equipment
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN113852756B (en) Image acquisition method, device, equipment and storage medium
CN113891018A (en) Shooting method and device and electronic equipment
CN112954212B (en) Video generation method, device and equipment
CN112511743B (en) Video shooting method and device
CN112565603A (en) Image processing method and device and electronic equipment
CN114466140B (en) Image shooting method and device
CN113014799B (en) Image display method and device and electronic equipment
CN112738398B (en) Image anti-shake method and device and electronic equipment
CN114093005A (en) Image processing method and device, electronic equipment and readable storage medium
CN114025100A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112291474A (en) Image acquisition method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant