CN113891018A - Shooting method and device and electronic equipment - Google Patents

Info

Publication number
CN113891018A
CN113891018A (application CN202111393507.4A)
Authority
CN
China
Prior art keywords
images
target
initial
initial images
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111393507.4A
Other languages
Chinese (zh)
Inventor
齐琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111393507.4A priority Critical patent/CN113891018A/en
Publication of CN113891018A publication Critical patent/CN113891018A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Abstract

The application discloses a shooting method, a shooting apparatus, and an electronic device, and belongs to the technical field of photography. The shooting method comprises the following steps: when a target dynamic photo is shot, controlling a camera module to capture N initial images; and performing video synthesis on M target images to obtain a target video corresponding to the target dynamic photo. The M target images are determined from the N initial images, the field-of-view range of the video frame sequence in the target video changes according to a preset rule, N and M are positive integers greater than 1, and M is less than or equal to N.

Description

Shooting method and device and electronic equipment
Technical Field
The present application belongs to the technical field of photography, and in particular relates to a shooting method, a shooting apparatus, and an electronic device.
Background
With the development of electronic devices, the photo shooting modes available on devices such as mobile phones keep increasing. For example, many electronic devices provide a dynamic-photo shooting mode, in which frame data captured shortly before and after the shutter press is synthesized into video data (a so-called short video) corresponding to the dynamic photo; when the dynamic photo is later browsed, the corresponding short video can be played. However, the short video produced by current dynamic-photo shooting methods has an unobvious dynamic effect and a monotonous display effect.
Disclosure of Invention
The embodiments of the present application aim to provide a shooting method, a shooting apparatus, and an electronic device, which can solve the problem that the short video obtained by existing dynamic-photo shooting methods has a monotonous display effect when played.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a shooting method, including:
when a target dynamic photo is shot, controlling a camera module to capture N initial images;
performing video synthesis on the M target images to obtain a target video corresponding to the target dynamic photo;
wherein the M target images are determined from the N initial images, the field-of-view range of the video frame sequence in the target video changes according to a preset rule, N and M are positive integers greater than 1, and M is less than or equal to N.
In a second aspect, an embodiment of the present application provides a shooting apparatus, including:
a control module, configured to control a camera module to capture N initial images when a target dynamic photo is shot;
a synthesis module, configured to perform video synthesis on M target images to obtain a target video corresponding to the target dynamic photo;
wherein the M target images are determined from the N initial images, the field-of-view range of the video frame sequence in the target video changes according to a preset rule, N and M are positive integers greater than 1, and M is less than or equal to N.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the shooting method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the shooting method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the shooting method according to the first aspect.
In the embodiments of the present application, when a target dynamic photo is shot, a camera module is controlled to capture N initial images, and video synthesis is performed on M target images determined from the N initial images to obtain a target video corresponding to the target dynamic photo. Because the field-of-view range of the video frame sequence in the synthesized target video changes according to a preset rule, an obvious dynamic image-change effect is presented when the target video corresponding to the target dynamic photo is played, which solves the problem that the video obtained by current dynamic-photo shooting methods has a monotonous display effect when played.
Drawings
Fig. 1 is the first flowchart of a shooting method according to an embodiment of the present application;
Fig. 2 is the second flowchart of a shooting method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a dynamic-photo shooting interface according to an embodiment of the present application;
Fig. 4 is the third flowchart of a shooting method according to an embodiment of the present application;
Fig. 5 is a block diagram of a shooting apparatus according to an embodiment of the present application;
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present application;
Fig. 7 is a hardware configuration diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish similar elements, and are not necessarily used to describe a particular sequence or chronological order. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one kind, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The shooting method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an embodiment of the present application provides a shooting method, which may specifically include the following steps:
step 11: when a dynamic picture of a target is shot, controlling a camera module to acquire N initial images; wherein N is a positive integer greater than 1.
Optionally, the N initial images may include Na initial images captured before photographing and Nb initial images captured after photographing. Here, "before photographing" means before the user's input to the photographing control is received, and "after photographing" means after that input is received.
For example, when a dynamic photo is taken, a frame storage space (buffer) may be provided for storing and managing video frame data. The buffer cyclically overwrites itself with the preview frame data of a period of time before the photo is taken (for example, the preview frame data within 1.5 seconds before photographing). When a photographing instruction input by the user is received (such as a click on the photographing control), the photo is generated. At the same time, the buffer stops overwrite-storing the preview frame data and switches to continuous storage, so that the preview frame data of a certain period after photographing (for example, within 1.5 seconds after photographing) is stored as well; the N initial images are thereby obtained.
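The buffer behaviour described above can be sketched with a ring buffer. This is a minimal illustration rather than the patented implementation: the integer frame model, the `fps` value, and the `shutter_index` marker are all assumptions standing in for real camera-driver details.

```python
from collections import deque

def capture_dynamic_photo(frame_stream, fps=30, window_s=1.5, shutter_index=None):
    """Collect ~window_s seconds of frames before and after the shutter press.

    frame_stream: iterable of frames; shutter_index: position of the
    shutter press within the stream (a hypothetical driver detail).
    """
    window = int(fps * window_s)      # frames kept on each side of the press
    ring = deque(maxlen=window)       # pre-shutter frames, overwritten cyclically
    post = []
    for i, frame in enumerate(frame_stream):
        if shutter_index is None or i < shutter_index:
            ring.append(frame)        # overwrite-store preview frames
        else:
            post.append(frame)        # shutter pressed: switch to continuous storage
            if len(post) >= window:
                break
    return list(ring) + post          # the N initial images
```

With a 30 fps stream and a 1.5 s window, the result holds the 45 frames before the press followed by the 45 frames after it, i.e. N = 90.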
Step 12: performing video synthesis on the M target images to obtain a target video corresponding to the target dynamic photo; the M target images are determined according to the N initial images, and the field of view range of a video frame sequence in the target video changes according to a preset rule; m is a positive integer greater than 1, and M is less than or equal to N.
Optionally, M may be less than or equal to N; that is, the M target images used to generate the target video corresponding to the target dynamic photo are determined from some or all of the N initial images.
Optionally, the field-of-view range of the video frame sequence in the target video changing according to a preset rule means that the field-of-view range of the frame-by-frame images in the target video increases and/or decreases regularly. For example, it may gradually increase, gradually decrease, first increase and then decrease, or first decrease and then increase; the embodiment of the present application is not limited thereto.
The field-of-view range can be understood as the size of the field of view (FOV): the larger the field angle of the camera module, the larger the field-of-view range of the images it captures, and the smaller the field angle, the smaller that range. Alternatively, the field-of-view range may be understood as how much picture content the captured image contains: the larger the range, the more content it contains (if image A contains object 1 and object 2 while image B contains only object 1, image A has the larger field-of-view range). Alternatively, the field-of-view range may be understood through the ratio of the number of pixels of the target object to the total number of pixels of the image: the larger the range, the smaller that ratio, and the smaller the range, the larger that ratio.
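The first reading above (field-of-view range as field angle) can be made concrete with basic trigonometry. A hedged sketch: the formula below assumes an idealised pinhole camera, and the 120°/80°/30° angles are illustrative values for wide-angle, main, and telephoto modules, not figures from the patent.

```python
import math

def scene_width(fov_deg, distance_m):
    """Horizontal extent captured at a given distance for a given
    field-of-view angle: width = 2 * d * tan(FOV / 2)."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# At 3 m, a wider field angle captures a strictly wider scene.
wide = scene_width(120, 3.0)   # hypothetical wide-angle module
main = scene_width(80, 3.0)    # hypothetical main module
tele = scene_width(30, 3.0)    # hypothetical telephoto module
```

This matches the intuition in the text: the larger the field angle, the larger the field-of-view range of the captured image.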
Optionally, the video synthesis of the M target images to obtain the target video corresponding to the target dynamic photo may specifically include: performing video encoding by calling a native encoder, passing the M target images (i.e., frame data) to the encoder, generating the target video with the encoder, and storing the generated target video in a media database so as to associate the photo with the target video. In this way, when the photo is previewed in an album (or gallery), the dynamic photo can be played by retrieving the target video associated with the photo from the media database; and because the field-of-view range of the video frame sequence in the target video changes according to the preset rule, an obvious dynamic picture-change effect is achieved when the target video is played.
According to the above scheme, when a target dynamic photo is shot, the camera module is controlled to capture N initial images, and video synthesis is performed on the M target images determined from the N initial images to obtain the target video corresponding to the target dynamic photo. Because the field-of-view range of the video frame sequence in the synthesized target video changes according to the preset rule, an obvious dynamic picture-change effect is presented when the target video is played, solving the problem that the video obtained by current dynamic-photo shooting methods has a monotonous display effect when played.
Optionally, before the video synthesis is performed on the M target images to obtain the target video corresponding to the target dynamic photo, the method further includes:
obtaining the M target images from the N initial images; wherein K1 of the M target images have mutually different field-of-view ranges, K1 is a positive integer greater than 1, and K1 is less than or equal to M.
Optionally, K1 may be less than or equal to M; that is, among the M target images used to generate the target video corresponding to the target dynamic photo, all or some of the target images may have different field-of-view ranges. This ensures that the field-of-view range of the video frame sequence in the synthesized target video changes according to the preset rule, so that an obvious dynamic picture-change effect is presented when the target video of the dynamic photo is played.
Specifically, either each of the M target images has a different field-of-view range, or the field-of-view range changes every A target images. For example, with M = 9 and A = 3: the 1st to 3rd target images share one field-of-view range, the 4th to 6th share another, and the 7th to 9th share a third; the range of the 1st to 3rd images differs from that of the 4th to 6th, and the range of the 4th to 6th differs from that of the 7th to 9th, while the range of the 1st to 3rd images may or may not equal that of the 7th to 9th. The embodiment of the present application is not limited thereto.
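The "every A images" schedule from the M = 9, A = 3 example can be sketched as a simple index-to-group mapping. The crop-ratio values in `group_ratios` are hypothetical placeholders; the patent does not specify concrete ratios.

```python
def group_field_schedule(m, group_size, group_ratios):
    """Assign a field-of-view ratio to each of m target frames so that
    frames in the same group of `group_size` share one field-of-view
    range, matching the M = 9, A = 3 example above.
    """
    assert m % group_size == 0 and len(group_ratios) == m // group_size
    # Frame i belongs to group i // group_size and takes that group's ratio.
    return [group_ratios[i // group_size] for i in range(m)]

# Frames 1-3 share ratio 1.0, frames 4-6 share 0.8, frames 7-9 share 0.6.
schedule = group_field_schedule(9, 3, [1.0, 0.8, 0.6])
```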
Optionally, the obtaining M target images according to the N initial images includes:
performing cropping processing on each of K2 initial images to obtain K2 first intermediate images; wherein K2 is a positive integer, and K2 is less than or equal to N;
performing scaling processing on each first intermediate image to obtain K2 second intermediate images; wherein the K2 second intermediate images have the same size;
obtaining the M target images according to the K2 second intermediate images; or obtaining the M target images according to K2 second intermediate images and the N initial images.
As an implementation, determining the M target images according to the K2 second intermediate images may include: if K2 is equal to M, the K2 second intermediate images are determined as the M target images.
For example: m may be less than or equal to N, that is, all or part of the N initial images may be selected, and the cropping processing and the scaling processing may be performed on each selected initial image, respectively, to obtain K2 second intermediate images, and determine the K2 second intermediate images as the M target images.
As another implementation, determining the M target images according to the K2 second intermediate images may include: if K2 > M, determining M second intermediate images of the K2 second intermediate images as the M target images.
For example: k2 may be less than or equal to M, that is, all or part of the N initial images may be selected, and cropping processing and scaling processing may be performed on each selected initial image, resulting in K2 second intermediate images; and M may be less than or equal to K2, i.e., M second intermediate images may be selected from the K2 second intermediate images and determined as the M target images.
As still another implementation, determining the M target images according to the K2 second intermediate images and the N initial images may include: if K2 < M, determining K3 initial images of the K2 second intermediate images and the N initial images as the M target images; wherein, K2+ K3 is M.
Since the K2 second intermediate images are obtained through cropping and enlarging, the K3 initial images may be the K3 images of the N initial images that, in shooting order, precede the images from which the K2 second intermediate images were obtained. For example, when K3 = 1, the first initial image in shooting order is left unprocessed, the cropping and scaling processing is performed on K2 of the N initial images to obtain K2 second intermediate images, and the first initial image together with the K2 second intermediate images is determined as the M target images.
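The three cases above (K2 = M, K2 > M, K2 < M) can be collected into one selection routine. This is a sketch under assumptions: in the K2 > M case the patent does not say which M of the K2 images to keep, so the code simply takes the first M.

```python
def assemble_targets(second_intermediates, initials, m):
    """Pick the M target images from K2 second intermediate images and,
    when K2 < M, the earliest K3 initial images, per the three cases above."""
    k2 = len(second_intermediates)
    if k2 == m:
        return list(second_intermediates)
    if k2 > m:
        return list(second_intermediates[:m])  # one possible selection of M images
    k3 = m - k2                                # K2 + K3 = M
    # The K3 unprocessed initial images precede the processed frames in order.
    return list(initials[:k3]) + list(second_intermediates)
```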
Optionally, the performing a cropping process on the K2 initial images respectively to obtain K2 first intermediate images includes:
cropping off an edge area of each of the K2 initial images according to a target proportion, to obtain the K2 first intermediate images;
wherein, among the K2 initial images, at least some of the initial images are cropped with different corresponding target proportions.
For example, in the shooting order of the K2 initial images, the target proportion may gradually increase, gradually decrease, first increase and then decrease, first decrease and then increase, or alternately increase and decrease according to a predetermined rule; the embodiment of the present application is not limited thereto.
Specifically, when the initial images obtained by shooting are stored in the buffer, the initial images (frame data) can be taken out of the buffer frame by frame and cropped. The cropping rule may be: taking the center point of the original initial image as a fixed center point, crop off an edge area (border) of the target proportion while preserving the original aspect ratio, to obtain new data with the border removed (i.e., the first intermediate image). With the target proportion increased frame by frame, each cropped frame is then enlarged so that all second intermediate images have the same size. In the second intermediate images arranged in shooting order, the field-of-view range therefore gradually decreases; that is, the finally synthesized target video presents an effect of gradually zooming in on the target object, improving the dynamic display effect of the dynamic photo's target video during playback.
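The centre-crop rule described above can be sketched as pure geometry: compute, for each frame, the rectangle left after cropping a growing proportion off every edge while keeping the original aspect ratio fixed around the centre. The per-frame `step` value is an assumption; the patent only requires that the proportion increase frame by frame.

```python
def zoom_crop_rects(width, height, n_frames, step=0.02):
    """Per-frame centre-crop rectangles (x, y, w, h) whose cropped-off
    edge proportion grows frame by frame while preserving the original
    aspect ratio. Each crop is later scaled back up to (width, height),
    so the subject appears to zoom in across the sequence.
    """
    rects = []
    for i in range(n_frames):
        p = step * i                          # proportion cropped from each side
        x, y = int(width * p), int(height * p)
        rects.append((x, y, width - 2 * x, height - 2 * y))
    return rects
```

Each rectangle would then be upscaled (for example with a bilinear or other enlargement algorithm) back to the original width and height, producing same-size second intermediate images whose field-of-view range shrinks frame by frame.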
Optionally, the method further comprises:
before the M target images are obtained from the N initial images, acquiring a movement parameter of the camera module, or acquiring a movement parameter of the camera apparatus; wherein the camera module is disposed on the camera apparatus.
When the movement parameter meets a preset condition, that is, when the movement parameter is greater than a preset threshold (the camera module/apparatus has moved over a large range), the step of obtaining the M target images from the N initial images may be skipped, and video synthesis may be performed directly on some or all of the N initial images to obtain the target video corresponding to the target dynamic photo.
When the movement parameter does not meet the preset condition, that is, when it is less than or equal to the preset threshold (the camera module/apparatus has moved over a small range), the step of obtaining the M target images from the N initial images is performed. Since a small movement range means the field-of-view range of the N initial images changes little, this scheme ensures that the target video corresponding to the target dynamic photo still presents an obvious dynamic effect when played.
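The threshold gate described in the two paragraphs above reduces to a single decision. The scalar movement parameter and threshold are assumptions; on a real device they might be derived from gyroscope or accelerometer readings, which the patent does not specify.

```python
def choose_pipeline(movement_param, threshold):
    """Decide, per the rule above, whether to run the crop/zoom pipeline
    ('process', small movement) or synthesise the initial frames
    directly ('direct', large movement)."""
    return "direct" if movement_param > threshold else "process"
```

For instance, a device that barely moved during capture would take the "process" branch and apply the frame-by-frame cropping, while a panning shot would take the "direct" branch.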
Optionally, the controlling the camera to acquire N initial images includes:
controlling at least two camera modules to capture at least two groups of initial images, respectively, where each camera module captures one group of initial images; the numbers of initial images captured by different camera modules may be the same or different, and each group includes at least one initial image;
obtaining the N initial images based on the at least two groups of initial images; for example, all initial images captured by the camera modules may be determined as the N initial images; wherein different camera modules have different field angles.
Because the at least two camera modules have different field angles, the initial images they capture have different field-of-view ranges. For example, the at least two camera modules may be a camera module with a wide-angle camera, a main camera module, and a camera module with a telephoto camera, where the field angle of the main camera module may lie between those of the wide-angle module and the telephoto module; the embodiment of the present application is, of course, not limited thereto.
For example, when a dynamic photo is taken, three frame storage spaces (buffers) may be set, used respectively to store and manage the initial images (frame data) captured by the three camera modules. Frame data of a period before photographing (for example, the 1.5 seconds before photographing) is read from the wide-angle module, the main module, and the telephoto module and overwrite-stored into their respective buffers. After a photographing instruction input by the user is received, the dynamic photo is generated; the three buffers stop overwrite-storing frame data and switch to continuous storage, and frame data of a period after photographing (for example, the 1.5 seconds after photographing) from each module is stored into its buffer, so that each camera module yields one group of initial images.
Optionally, the obtaining M target images according to the N initial images includes:
acquiring some of the initial images from each group of initial images to obtain the M target images; that is, some initial images are acquired from the group captured by each camera module, and all of the acquired initial images together are determined as the M target images.
In this embodiment, because the initial images captured by different camera modules have different field-of-view ranges, selecting some of each module's initial images for video synthesis yields a target video whose field-of-view range changes obviously and dynamically during playback, thereby improving the dynamic display effect of the dynamic photo's target video.
Optionally, the obtaining of the partial initial images from each group of the initial images respectively includes:
acquiring, from each group of initial images, partial initial images belonging to different shooting time periods; that is, from the groups captured by different camera modules, acquiring the partial initial images corresponding to different shooting time periods.
Specifically, selecting partial initial images of different shooting time periods from the groups captured by different camera modules can be understood as follows: for any two of the at least two camera modules (say a first target module and a second target module), a partial set of initial images corresponding to a first shooting time period is selected from the group captured by the first target module, and a partial set corresponding to a second shooting time period is selected from the group captured by the second target module, the first and second shooting time periods being different. For example, the shooting time periods assigned to the camera modules may be determined in increasing or decreasing order of their field angles; the embodiment of the present application is not limited thereto.
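One possible realisation of the rule above, assuming three camera modules ordered from widest to narrowest field angle, is to give each module its own consecutive time slice: early frames from the wide module, middle frames from the main module, late frames from the telephoto module, so the field-of-view range narrows over the sequence. The equal-slice split is an assumption; the patent only requires that the time periods differ.

```python
def interleave_by_period(groups_by_fov):
    """Concatenate per-camera frame groups so each camera contributes a
    distinct shooting time period, ordered by decreasing field angle
    (wide -> main -> telephoto). Each group is a list of frames ordered
    by capture time.
    """
    n = len(groups_by_fov)
    out = []
    for idx, group in enumerate(groups_by_fov):      # widest module first
        seg = len(group) // n                        # its share of the timeline
        out.extend(group[idx * seg:(idx + 1) * seg]) # that module's own time slice
    return out
```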
Optionally, the method further comprises:
before the M target images are obtained from the N initial images, acquiring a movement parameter of the camera module, or acquiring a movement parameter of the camera apparatus; wherein the camera module is disposed on the camera apparatus.
When the movement parameter does not meet a preset condition, that is, when it is less than or equal to a preset threshold (the camera module/apparatus has moved over a small range), the step of obtaining the M target images from the N initial images may be performed; since a small movement range means the field-of-view range of the N initial images changes little, the above scheme improves the dynamic display effect of the target video corresponding to the target dynamic photo during playback.
Optionally, the obtaining M target images according to the N initial images includes:
when the movement parameter meets the preset condition, acquiring M initial images from the group of initial images captured by a first camera module to obtain the M target images; that is, when the movement parameter is greater than the preset threshold, M initial images are acquired from the group captured by the first camera module and determined as the M target images, where the first camera module is one of the at least two camera modules.
For example: the at least two camera modules can be a camera module with a wide-angle camera, a main camera module and a camera module with a long-focus camera, wherein the angle of view corresponding to the main camera module can be between the angle of view of the camera module with the wide-angle camera and the angle of view of the camera module with the long-focus camera, and the first camera module can be the main camera module.
In this way, when the movement parameter is greater than the preset threshold (the camera module/apparatus has moved over a large range), video synthesis can be performed directly on the initial images captured by the main camera module to obtain the target video of the dynamic photo. Since a large movement range means the field-of-view range of the N initial images already changes considerably, this guarantees the image quality of the target video while still ensuring an obvious dynamic display effect when the target video corresponding to the target dynamic photo is played.
The following describes the shooting method of the present application with reference to specific examples:
as shown in fig. 2, a flowchart of a shooting method is provided, which specifically includes:
step 201: and entering a dynamic photo shooting mode.
Step 202: after entering the dynamic photo shooting mode, a frame storage space (buffer) may be set up; this buffer is used to store and manage video frame data.
Step 203: the buffer cyclically overwrites itself with the preview frame data from the 1.5 seconds immediately before the photo is taken.
Step 204: as shown in fig. 3, the photo control 31 is clicked to generate a photo.
Step 205: while the photo is generated, the buffer stops storing preview frame data in overwrite mode and instead continues storing new preview frame data in append mode.
Step 206: generate the photo and judge whether the "dynamic effect" function menu is enabled. The "dynamic effect" function makes the field of view of the video frame sequence in the video corresponding to the dynamic photo change according to a preset rule.
Step 207: when the "dynamic effect" function is enabled, continue by judging whether the mobile phone moved within the 1.5 seconds before and after the photo was taken (a certain error tolerance can be set for the movement). If the phone moved, go directly to step 210; if it did not move, go to step 208.
Step 208: take the frame data out of the buffer frame by frame and crop it. The cropping rule is: keeping the center point of the initial frame data fixed and preserving its width-to-height ratio, trim a border (i.e. an edge area) of a preset proportion from each frame to obtain new frame data with the border removed. The preset proportion increases frame by frame, so the field of view of each new frame is smaller than that of the previous one.
Step 209: take the cropped frame data out frame by frame and call an upscaling algorithm to enlarge it back to the width and height of the initial frame data, so that a video generated from the cropped and enlarged frames visually presents the target object as being gradually magnified during playback.
Step 210: after the processed frame data is obtained, call a native encoder and pass the frame data in for video encoding.
Step 211: after the encoder generates the video, associate the photo with the video by storing the association in a media database, so that when the photo is played in the gallery the corresponding video can be found through the media database.
In this embodiment, cropping the frame data and enlarging the cropped frames back to the width and height of the initial frame data makes the field of view of the frame sequence change gradually from large to small (an effect similar to a continuous zoom-in), so that the video corresponding to the dynamic photo presents a continuously magnifying picture when played. This addresses the problem that videos obtained by current dynamic photo shooting methods have a relatively single display effect during playback.
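The crop-and-upscale rule of steps 208-209 can be sketched by computing only the per-frame crop rectangles; pixel data and the upscaling call are elided, and the 2% per-frame step size is a hypothetical value chosen for illustration.

```python
# Sketch of the step-208 cropping rule: fixed center, preserved aspect
# ratio, border proportion growing frame by frame (step size is assumed).
def crop_rects(width, height, n_frames, step=0.02):
    """Return one (left, top, right, bottom) crop box per frame."""
    rects = []
    for i in range(n_frames):
        p = step * i                  # trimmed proportion grows per frame
        dx = int(width * p / 2)       # trim both sides equally so the
        dy = int(height * p / 2)      # center point stays fixed
        rects.append((dx, dy, width - dx, height - dy))
    return rects
```

Each cropped frame would then be enlarged back to width x height (step 209), e.g. with an image-resize call, so the encoded video plays back as a steady zoom-in.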
As shown in fig. 4, a flowchart of another shooting method is provided, which specifically includes:
step 401: and entering a dynamic photo shooting mode.
Step 402: after entering the dynamic photo shooting mode, three frame storage spaces (buffers) are set up; the three buffers store and manage the frame data of the three cameras respectively.
Step 403: upon entering the dynamic photo shooting mode, the wide-angle camera, the main camera and the Tele (telephoto) camera are opened.
Step 404: the most recent 1.5 seconds of frame data captured by the wide-angle, main and Tele cameras are read and cyclically overwritten into their respective buffers.
Step 405: as shown in fig. 3, the photo control 31 is clicked to generate a photo.
Step 406: while the photo is generated, the three cameras stop storing frame data into their buffers in overwrite mode; instead, each of the wide-angle, main and Tele cameras continues to store a further 1.5 seconds of frame data into its respective buffer.
Step 407: and judging whether the function menu of the dynamic effect is opened or not.
Step 408: when the "dynamic effect" function menu is not enabled, pass the frame data in the main-camera buffer into a video encoder for encoding.
Step 409: when the "dynamic effect" function menu is enabled, judge whether the mobile phone has moved. If the phone is judged to have moved (a certain error tolerance can be set), go to step 408; if it has not moved, go to step 410.
Step 410: read the first second of frame data from the wide-angle buffer, the middle second from the main-camera buffer and the last second from the Tele buffer, and assemble them into a new buffer.
Step 411: and transmitting the frame data in the composed new buffer into a video encoder for encoding.
Step 412: after the encoder generates the video, associate the photo with the video by storing the association in a media database, so that when the photo is played in the gallery the corresponding video can be found through the media database.
In this embodiment of the application, the wide-angle, main and telephoto cameras are opened and one second of video frame data is taken from each of the three cameras to compose a 3-second video, so that the video of the dynamic photo presents a field of view (FOV) that changes from large to small during playback. By combining the FOV change with the picture-magnification effect when the video in the dynamic photo is played, the problem that videos obtained by current dynamic photo shooting methods have a single display effect during playback is addressed.
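Step 410's splicing can be sketched as plain list slicing. The 30 fps rate and the 3-second buffer length (1.5 s before plus 1.5 s after the shutter) are assumptions made for the sketch.

```python
# Sketch of composing the new buffer from the three camera buffers: first
# second from wide-angle, middle second from main, last second from Tele.
FPS = 30  # assumed frame rate

def splice_buffers(wide, main, tele, fps=FPS):
    """Each buffer is expected to hold at least 3 s of frames."""
    assert min(len(wide), len(main), len(tele)) >= 3 * fps
    return wide[:fps] + main[fps:2 * fps] + tele[2 * fps:3 * fps]
```

The spliced sequence is then handed to the video encoder (step 411); because the three cameras have successively narrower fields of view, playback shows the FOV shrinking from wide-angle to telephoto.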
In the shooting method provided by the embodiments of the present application, the execution subject may be a shooting device, or a control module in the shooting device for executing the shooting method. In the embodiments of the present application, a shooting device executing the shooting method is taken as an example to describe the shooting device provided by the embodiments of the present application.
As shown in fig. 5, an embodiment of the present application provides a shooting device 500, including:
the control module 510 is configured to control the camera module to acquire N initial images when a target dynamic photo is shot;
a synthesizing module 520, configured to perform video synthesis on the M target images to obtain a target video corresponding to the target dynamic photo;
the M target images are determined according to the N initial images, the field of view range of the video frame sequence in the target video changes according to a preset rule, N, M are positive integers larger than 1, and M is smaller than or equal to N.
Optionally, the photographing apparatus 500 further includes:
the processing module is used for obtaining M target images according to the N initial images;
wherein, the corresponding field ranges of K1 target images in the M target images are different; k1 is a positive integer greater than 1, and K1 is equal to or less than M.
Optionally, the processing module includes:
the cutting unit is used for respectively cutting the K2 initial images to obtain K2 first intermediate images; wherein K2 is a positive integer, and K2 is not more than N;
a scaling unit, configured to perform scaling processing on each of the first intermediate images to obtain K2 second intermediate images; wherein the K2 second intermediate images are the same size;
a first processing unit, configured to obtain the M target images according to K2 second intermediate images; or obtaining the M target images according to K2 second intermediate images and the N initial images.
Optionally, the clipping unit is further configured to:
cropping the edge areas of the K2 initial images according to a target proportion to obtain the K2 first intermediate images;
wherein, among the K2 initial images, at least some of the initial images are cropped with different target proportions.
Optionally, the control module 510 includes:
the control unit is used for controlling the at least two camera modules to acquire at least two groups of initial images respectively; one camera module acquires a group of initial images;
the second processing unit is used for obtaining the N initial images based on the at least two groups of initial images; wherein, the field angles of different camera modules are different.
Optionally, the processing module includes:
and the third processing unit is used for respectively acquiring partial initial images from each group of initial images to obtain the M target images.
Optionally, the third processing unit is further configured to:
and acquiring partial initial images of different shooting time periods from each group of initial images respectively.
Optionally, the processing module includes:
the fourth processing unit is configured to acquire M initial images from a group of initial images acquired by the first camera module, to obtain the M target images, under the condition that the movement parameters of the camera meet preset conditions; wherein the first camera module is one of the at least two camera modules.
The shooting device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The photographing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an IOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The shooting device provided by the embodiment of the application can realize each process realized by the method embodiments of fig. 1 to fig. 4, achieve the same technical effect, and is not repeated here to avoid repetition.
According to the shooting device 500, when a target dynamic photo is shot, the camera module is controlled to acquire N initial images, and video synthesis is performed on the M target images determined based on the N initial images to obtain the target video corresponding to the target dynamic photo. Because the field of view of the video frame sequence in the synthesized target video changes according to the preset rule, the target video shows an obvious dynamic picture-change effect when played, which addresses the problem that videos obtained by current dynamic photo shooting methods have a single display effect during playback.
Optionally, as shown in fig. 6, an electronic device 600 is further provided in an embodiment of the present application, and includes a processor 601, a memory 602, and a program or an instruction stored in the memory 602 and executable on the processor 601, where the program or the instruction is executed by the processor 601 to implement each process of the foregoing shooting method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components; the power supply may be logically coupled to the processor 710 via a power management system, so that charging, discharging, and power-consumption management functions are performed via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device: the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not described further here.
Wherein the processor 710 is configured to: when a dynamic picture of a target is shot, controlling a camera module to acquire N initial images; performing video synthesis on the M target images to obtain a target video corresponding to the target dynamic photo; the M target images are determined according to the N initial images, the field of view range of the video frame sequence in the target video changes according to a preset rule, N, M are positive integers larger than 1, and M is smaller than or equal to N.
Optionally, the processor 710 is configured to: obtaining M target images according to the N initial images; wherein, the corresponding field ranges of K1 target images in the M target images are different; k1 is a positive integer greater than 1, and K1 is equal to or less than M.
Optionally, the processor 710 is configured to: cutting the K2 initial images respectively to obtain K2 first intermediate images; wherein K2 is a positive integer, and K2 is not more than N; carrying out scaling processing on each first intermediate image to obtain K2 second intermediate images; wherein the K2 second intermediate images are the same size; obtaining the M target images according to the K2 second intermediate images; or obtaining the M target images according to K2 second intermediate images and the N initial images.
Optionally, the processor 710 is configured to: cutting edge areas of the K2 initial images according to a target scale to obtain K2 first intermediate images; and in the K2 initial images, the corresponding target proportion is different when at least part of the initial images are cut.
Optionally, the processor 710 is configured to: controlling at least two camera modules to respectively acquire at least two groups of initial images; one camera module acquires a group of initial images; obtaining the N initial images based on the at least two groups of initial images; wherein, the field angles of different camera modules are different.
Optionally, the processor 710 is configured to: and respectively acquiring partial initial images from each group of initial images to obtain the M target images.
Optionally, the processor 710 is configured to: and acquiring partial initial images of different shooting time periods from each group of initial images respectively.
Optionally, the processor 710 is configured to: acquire M initial images from a group of initial images acquired by a first camera module, to obtain M target images, under the condition that the movement parameters of the camera meet preset conditions; wherein the first camera module is one of the at least two camera modules.
The electronic device 700 in the embodiments of the application controls the camera module to acquire N initial images when a target dynamic photo is shot, and performs video synthesis on the M target images determined based on the N initial images to obtain the target video corresponding to the target dynamic photo. Because the field of view of the video frame sequence in the synthesized target video changes according to the preset rule, the target video shows an obvious dynamic picture-change effect when played, which addresses the problem that videos obtained by current dynamic photo shooting methods have a single display effect during playback.
It should be understood that in the embodiment of the present application, the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics Processing Unit 7041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. Memory 709 may be used to store software programs as well as various data, including but not limited to applications and operating systems. Processor 710 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. A photographing method, characterized by comprising:
when a dynamic picture of a target is shot, controlling a camera module to acquire N initial images;
performing video synthesis on the M target images to obtain a target video corresponding to the target dynamic photo;
the M target images are determined according to the N initial images, the field of view range of the video frame sequence in the target video changes according to a preset rule, N, M are positive integers larger than 1, and M is smaller than or equal to N.
2. The shooting method according to claim 1, wherein before the video composition of the M target images to obtain the target video corresponding to the target moving picture, the method further comprises:
obtaining M target images according to the N initial images;
wherein, the corresponding field ranges of K1 target images in the M target images are different; k1 is a positive integer greater than 1, and K1 is equal to or less than M.
3. The shooting method according to claim 2, wherein obtaining M target images from the N initial images comprises:
cutting the K2 initial images respectively to obtain K2 first intermediate images; wherein K2 is a positive integer, and K2 is not more than N;
carrying out scaling processing on each first intermediate image to obtain K2 second intermediate images; wherein the K2 second intermediate images are the same size;
obtaining the M target images according to the K2 second intermediate images; or obtaining the M target images according to K2 second intermediate images and the N initial images.
4. The shooting method according to claim 3, wherein the cropping the K2 initial images to obtain K2 first intermediate images comprises:
cutting edge areas of the K2 initial images according to a target scale to obtain K2 first intermediate images;
and in the K2 initial images, the corresponding target proportion is different when at least part of the initial images are cut.
5. The shooting method according to claim 2, wherein the controlling the camera module to acquire N initial images comprises:
controlling at least two camera modules to respectively acquire at least two groups of initial images; one camera module acquires a group of initial images;
obtaining the N initial images based on the at least two groups of initial images; wherein, the field angles of different camera modules are different.
6. The shooting method according to claim 5, wherein the obtaining M target images from the N initial images comprises:
and respectively acquiring partial initial images from each group of initial images to obtain the M target images.
7. The shooting method according to claim 6, wherein the obtaining of the partial initial images from each group of the initial images respectively comprises:
and acquiring partial initial images of different shooting time periods from each group of initial images respectively.
8. The shooting method according to claim 5, applied to a shooting device, wherein the obtaining M target images according to the N initial images comprises:
acquiring M initial images from a group of initial images acquired by a first camera module under the condition that the movement parameters of the camera meet preset conditions, to obtain M target images; wherein the first camera module is one of the at least two camera modules.
9. A camera, comprising:
the control module is used for controlling the camera module to acquire N initial images when shooting a dynamic picture of a target;
the synthesis module is used for carrying out video synthesis on the M target images to obtain a target video corresponding to the target dynamic photo;
the M target images are determined according to the N initial images, the field of view range of the video frame sequence in the target video changes according to a preset rule, N, M are positive integers larger than 1, and M is smaller than or equal to N.
10. The photographing apparatus according to claim 9, further comprising:
the processing module is used for obtaining M target images according to the N initial images;
wherein, the corresponding field ranges of K1 target images in the M target images are different; k1 is a positive integer greater than 1, and K1 is equal to or less than M.
11. The camera of claim 10, wherein the processing module comprises:
the cutting unit is used for respectively cutting the K2 initial images to obtain K2 first intermediate images; wherein K2 is a positive integer, and K2 is not more than N;
a scaling unit, configured to perform scaling processing on each of the first intermediate images to obtain K2 second intermediate images; wherein the K2 second intermediate images are the same size;
a first processing unit, configured to obtain the M target images according to K2 second intermediate images; or obtaining the M target images according to K2 second intermediate images and the N initial images.
12. The camera of claim 11, wherein the cropping unit is further configured to:
cutting edge areas of the K2 initial images according to a target scale to obtain K2 first intermediate images;
and in the K2 initial images, the corresponding target proportion is different when at least part of the initial images are cut.
13. The camera of claim 10, wherein the control module comprises:
the control unit is used for controlling the at least two camera modules to acquire at least two groups of initial images respectively; one camera module acquires a group of initial images;
the second processing unit is used for obtaining the N initial images based on the at least two groups of initial images; wherein, the field angles of different camera modules are different.
14. The camera of claim 13, wherein the processing module comprises:
and the third processing unit is used for respectively acquiring partial initial images from each group of initial images to obtain the M target images.
15. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the photographing method according to any one of claims 1 to 8.
CN202111393507.4A 2021-11-23 2021-11-23 Shooting method and device and electronic equipment Pending CN113891018A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111393507.4A CN113891018A (en) 2021-11-23 2021-11-23 Shooting method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113891018A true CN113891018A (en) 2022-01-04

Family

ID=79015473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111393507.4A Pending CN113891018A (en) 2021-11-23 2021-11-23 Shooting method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113891018A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114615426A (en) * 2022-02-17 2022-06-10 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105259664A (en) * 2015-11-13 2016-01-20 苏州苏大维格光电科技股份有限公司 Light field imaging and printing device and thin film with three-dimensional floating images
CN109997171A (en) * 2016-09-30 2019-07-09 株式会社尼康 Display device and program
WO2019176804A1 (en) * 2018-03-16 2019-09-19 富士フイルム株式会社 Image processing device, image capturing device, and image processing method
US20200068138A1 (en) * 2018-08-21 2020-02-27 Gopro, Inc. Field of view adjustment
CN113286064A (en) * 2021-04-30 2021-08-20 深圳市沃特沃德信息有限公司 All-round-view image acquisition method and device, mobile terminal and storage medium



Similar Documents

Publication Publication Date Title
CN111601040B (en) Camera control method and device and electronic equipment
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
CN112135046A (en) Video shooting method, video shooting device and electronic equipment
CN112637500B (en) Image processing method and device
CN112954199B (en) Video recording method and device
CN113794829B (en) Shooting method and device and electronic equipment
CN113676655B (en) Shooting method and device, mobile terminal and chip system
CN112954193B (en) Shooting method, shooting device, electronic equipment and medium
CN112738397A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114449174A (en) Shooting method and device and electronic equipment
CN113866782A (en) Image processing method and device and electronic equipment
CN112911059B (en) Photographing method and device, electronic equipment and readable storage medium
CN113709368A (en) Image display method, device and equipment
CN113891018A (en) Shooting method and device and electronic equipment
CN112508820A (en) Image processing method and device and electronic equipment
CN113852756B (en) Image acquisition method, device, equipment and storage medium
CN113852757B (en) Video processing method, device, equipment and storage medium
CN112887515B (en) Video generation method and device
CN112261262B (en) Image calibration method and device, electronic equipment and readable storage medium
CN114339051A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112399092A (en) Shooting method and device and electronic equipment
CN114285978A (en) Video processing method, video processing device and electronic equipment
CN113873147A (en) Video recording method and device and electronic equipment
CN113473012A (en) Virtualization processing method and device and electronic equipment
CN114286003A (en) Shooting method, shooting device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination