CN107872631B - Image shooting method and device based on double cameras and mobile terminal - Google Patents


Info

Publication number
CN107872631B
CN107872631B (granted from application CN201711275279.4A)
Authority
CN
China
Prior art keywords: image, camera, shot, cameras, subject
Prior art date
Legal status
Active
Application number
CN201711275279.4A
Other languages
Chinese (zh)
Other versions
CN107872631A (en)
Inventor
何新兰
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711275279.4A
Publication of CN107872631A
Application granted
Publication of CN107872631B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/50 Control of the SSIS exposure
    • H04N 25/57 Control of the dynamic range
    • H04N 25/58 Control of the dynamic range involving two or more exposures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Abstract

The application provides an image shooting method and device based on dual cameras, and a mobile terminal. The image shooting method based on dual cameras comprises the following steps: receiving a shooting instruction, and controlling a first camera and a second camera according to the shooting instruction to obtain multiple frames of continuously shot image data, wherein the image data comprises a first shot image shot by the first camera and a second shot image shot by the second camera; determining depth information of the multiple frames of continuously shot image data based on a preset algorithm; extracting feature information of a shot subject in the multiple frames of continuously shot image data according to the depth information; and synthesizing the feature information of the shot subject in a preset manner to generate a multiple exposure image. The method, device and mobile terminal can conveniently generate multiple exposure images, are simple to operate, and at the same time ensure image quality.

Description

Image shooting method and device based on double cameras and mobile terminal
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for image capturing based on two cameras, and a mobile terminal.
Background
Multiple exposure is a technique in photography that combines two or more separate exposures, which are then superimposed to form a single photograph. Because the parameters of each exposure differ, the subject changes between exposures, and the resulting picture has a unique visual effect. At present, multiple exposure is mainly available on digital cameras; mobile terminals have no mature technical scheme for it. If a user wants to obtain a multiple exposure image on a mobile terminal, the image must be composed with post-processing retouching software, which has a high technical threshold, is complex to operate, and reduces image quality.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the present application provides an image shooting method based on dual cameras, which can conveniently generate multiple exposure images, is simple to operate, and at the same time ensures image quality.
The application provides an image shooting device based on two cameras.
The application provides a mobile terminal.
The present application provides a computer-readable storage medium.
In order to achieve the above object, an embodiment of a first aspect of the present application provides an image capturing method based on dual cameras, where the dual cameras include a first camera and a second camera, and the method includes the following steps:
receiving a shooting instruction, and controlling the first camera and the second camera to obtain a plurality of frames of continuously shot image data according to the shooting instruction, wherein the image data comprises a first shot image shot by the first camera and a second shot image shot by the second camera;
determining the depth information of the multi-frame continuously shot image data based on a preset algorithm;
extracting feature information of a subject in the multi-frame continuously shot image data according to the depth information; and
and synthesizing the characteristic information of the shot main body in a preset mode to generate a multiple exposure image.
According to the image shooting method based on dual cameras, a shooting instruction is received, and the first camera and the second camera are controlled according to the shooting instruction to obtain multiple frames of continuously shot image data. Depth information of the multiple frames of continuously shot image data is determined based on a preset algorithm, feature information of the shot subject is extracted from the image data according to the depth information, and the feature information of the shot subject is synthesized in a preset manner to generate a multiple exposure image. In this way, a multiple exposure image can be conveniently generated; the operation is simple, and image quality is ensured.
In order to achieve the above object, an embodiment of a second aspect of the present application provides an image capturing apparatus based on dual cameras, where the dual cameras include a first camera and a second camera, the apparatus includes:
the acquisition module is used for receiving a shooting instruction and controlling the first camera and the second camera to acquire a plurality of frames of continuously shot image data according to the shooting instruction, wherein the image data comprises a first shot image shot by the first camera and a second shot image shot by the second camera;
the determining module is used for determining the depth information of the multi-frame continuously shot image data based on a preset algorithm;
the extraction module is used for extracting the characteristic information of the shot subject in the multi-frame continuously shot image data according to the depth information; and
and the synthesis module is used for synthesizing the characteristic information of the shot main body in a preset mode so as to generate a multiple exposure image.
The image shooting device based on dual cameras receives a shooting instruction and controls the first camera and the second camera according to the shooting instruction to obtain multiple frames of continuously shot image data. It determines depth information of the multiple frames of continuously shot image data based on a preset algorithm, extracts feature information of the shot subject from the image data according to the depth information, and synthesizes the feature information of the shot subject in a preset manner to generate a multiple exposure image. In this way, a multiple exposure image can be conveniently generated; the operation is simple, and image quality is ensured.
To achieve the above object, a third aspect of the present application provides a mobile terminal, including: the system comprises a first camera, a second camera, a memory, a processor and a computer program which is stored on the memory and can run on the processor; the processor, when executing the program, implements the dual-camera based image capture method as described in the first aspect.
In order to achieve the above object, a fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored; the program, when executed by a processor, implements the dual-camera based image capturing method according to the first aspect.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of an image shooting method based on two cameras according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating the effect of the triangulation principle;
FIG. 3 is a schematic diagram illustrating the effect of a disparity map of two cameras;
FIG. 4 is a schematic diagram illustrating the effect of multiple exposure images;
fig. 5 is a schematic flowchart of another image capturing method based on two cameras according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image capturing apparatus based on two cameras according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
FIG. 8 is a diagram of an image processing circuit according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The following describes an image capturing method and apparatus based on dual cameras and a mobile terminal according to an embodiment of the present application with reference to the drawings.
The execution device of the image shooting method may be a hardware device with dual cameras, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device; the wearable device may be a smart band, a smart watch, or smart glasses.
A hardware device with dual cameras contains a camera module that includes a first camera and a second camera. The first camera and the second camera each have an independent lens, image sensor, and voice coil motor. Both cameras are connected to a camera connector, and the voice coil motors are driven according to the current value provided by the camera connector, so that each camera adjusts the distance between its lens and image sensor under the drive of its voice coil motor, thereby achieving focusing.
Fig. 1 is a schematic flowchart of an image capturing method based on two cameras according to an embodiment of the present application.
As shown in fig. 1, the dual-camera based image photographing method includes the steps of:
and S101, receiving a shooting instruction, and controlling the first camera and the second camera to obtain multi-frame continuously shot image data according to the shooting instruction.
At present, multiple exposure is mainly achieved with post-processing retouching software. This embodiment therefore provides an image shooting method that avoids such complicated retouching operations.
In one embodiment of the application, when a user shoots with the dual-camera terminal, the user can open the camera to frame the scene and press the virtual shutter key to issue a shooting instruction. For example, the user may choose to enter a multiple exposure mode, in which multiple frames of continuously shot image data are obtained by continuously holding down the virtual shutter key. The image data includes a first captured image captured by the first camera and a second captured image captured by the second camera. In this embodiment, the first camera and the second camera frame and shoot simultaneously, respectively obtaining a first captured image used for imaging and a second captured image used for calculating depth information.
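The capture step can be sketched as follows. The camera interface (`StubCamera`, `capture`) is hypothetical, since the patent does not specify a driver API; it stands in for the real first and second cameras, and the sketch only illustrates pairing each first-camera frame with a simultaneously shot second-camera frame.

```python
class StubCamera:
    """Hypothetical stand-in for a real camera driver (not from the patent)."""
    def __init__(self, name):
        self.name = name

    def capture(self):
        # A real driver would return sensor image data; a tag suffices here.
        return f"{self.name}-frame"

def capture_burst(first_cam, second_cam, n_frames):
    """Obtain n_frames of continuously shot image data; each entry pairs the
    first camera's image (used for imaging) with the second camera's image
    (used for depth calculation), as in step S101."""
    return [(first_cam.capture(), second_cam.capture()) for _ in range(n_frames)]

burst = capture_burst(StubCamera("first"), StubCamera("second"), n_frames=3)
```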
And S102, determining the depth information of the image data continuously shot by multiple frames based on a preset algorithm.
Specifically, an imaging image may be generated from a first captured image, and then depth information of the imaging image may be calculated from the first captured image and a second captured image using a preset algorithm. The preset algorithm may be a triangulation principle.
Because the first captured image and the second captured image are shot by different cameras separated by a certain distance, there is parallax between them. According to the principle of triangulation, the depth information of the same subject in the first captured image and the second captured image, that is, the distance between the subject and the plane where the first camera and the second camera are located, can be calculated from this parallax.
The following briefly introduces the principle of triangulation:
In an actual scene, two cameras resolve the depth of the scenery in the same way that human eyes do, relying mainly on binocular vision. In this embodiment, the depth information of the imaged image is calculated with the help of the second captured image, relying mainly on the principle of triangulation.
As shown in fig. 2, the imaged object, the positions O_R and O_T of the two cameras, and the focal planes of the two cameras are drawn in real space. The distance between the focal plane and the plane where the two cameras are located is f. The two cameras form images at the focal plane position, thereby obtaining two captured images.
P and P' are the positions of the same subject in the two different captured images. The distance from point P to the left boundary of its captured image is X_R, and the distance from point P' to the left boundary of its captured image is X_T. O_R and O_T denote the two cameras, which lie in the same plane at a distance B from each other.
Based on the principle of triangulation, the distance Z between the object and the plane where the two cameras are located in fig. 2 satisfies:

(B - (X_R - X_T)) / B = (Z - f) / Z

From this, it can be derived that:

Z = B * f / (X_R - X_T) = B * f / d

where d = X_R - X_T is the difference between the positions of the same object in the two captured images. Since B and f are constants, the distance Z of the object can be determined from d.
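The relationship above, with Z determined from the baseline B, the focal distance f, and the disparity d, can be written as a small helper. The units are an assumption made only for the example (baseline in millimetres, focal length and disparity in pixels):

```python
def depth_from_disparity(d, baseline, focal):
    """Triangulation: Z = B * f / d. Nearer objects have larger disparity."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return baseline * focal / d

# Example: baseline 60 mm, focal length 700 px, disparity 10 px -> 4200 mm.
z = depth_from_disparity(10, baseline=60, focal=700)
```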
Of course, in addition to the triangulation method, other methods may also be used to calculate the depth information of the image. For example, when the main camera and the sub-camera photograph the same scene, the distance between an object in the scene and the cameras is related to the displacement difference, attitude difference, and the like between the images formed by the main camera and the sub-camera. Therefore, in an embodiment of the present application, the distance Z may be obtained from such a relationship.
For example, as shown in fig. 3, a map of the positional differences of corresponding points is calculated from a main image acquired by the main camera and a sub image acquired by the sub camera. This map is called a disparity map, and it represents the displacement difference of the same point between the two images. Since, by the triangulation relationship, this displacement difference is inversely proportional to Z, the disparity map is often directly used as a depth map carrying depth information.
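A disparity map of this kind can be sketched with naive sum-of-absolute-differences block matching. This is only an illustration under strong assumptions (rectified grayscale images, a purely horizontal shift between views); the patent does not detail the matching algorithm, and production pipelines use calibrated stereo matching.

```python
import numpy as np

def disparity_map(left, right, patch=3, max_disp=8):
    """Naive SAD block matching: for each pixel of the left image, find the
    horizontal shift d into the right image whose patch has the lowest sum
    of absolute differences. Convention: the point at column x in the left
    image appears at column x - d in the right image."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            best_sad, best_d = None, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                sad = int(np.abs(ref - cand).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Synthetic rectified pair: the right view is the left view shifted by 2 px,
# so the interior of the disparity map should be 2 everywhere.
row = np.array([(x * 7) % 251 for x in range(16)], dtype=np.int32)
left = np.tile(row, (5, 1))
right = np.roll(left, -2, axis=1)
disp = disparity_map(left, right, patch=3, max_disp=4)
```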
Based on the above analysis, when depth information is acquired with two cameras, the positions of the subject in the different captured images need to be obtained. Therefore, if the two images used for acquiring depth information are framed more similarly, the efficiency and accuracy of acquiring depth information can be improved.
And S103, extracting characteristic information of the shot subject in the image data continuously shot by multiple frames according to the depth information.
Specifically, after the depth information of the imaged image is calculated, feature information of the shot subject can be extracted from the multiple frames of continuously shot image data according to the depth information, so that the subject, the foreground, and the background in the imaged image are identified. Here, the foreground has a smaller depth than the subject, and the background has a larger depth than the subject.
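The separation into subject, foreground, and background by depth can be sketched with simple thresholds around the subject's depth. The tolerance parameter `tol` is an assumption made for illustration; the patent does not specify how the subject's depth range is chosen.

```python
import numpy as np

def segment_by_depth(depth_map, subject_depth, tol):
    """Split a depth map into subject / foreground / background masks.
    Pixels within tol of the subject's depth belong to the subject;
    foreground is nearer (smaller depth), background farther (larger)."""
    subject = np.abs(depth_map - subject_depth) <= tol
    foreground = depth_map < subject_depth - tol
    background = depth_map > subject_depth + tol
    return subject, foreground, background

# Example: depths 1, 3, 5 with the subject at depth 3 and tolerance 0.5.
depth = np.array([[1.0, 3.0, 5.0]])
s, f, b = segment_by_depth(depth, 3.0, 0.5)
```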
And S104, synthesizing the characteristic information of the shot subject in a preset mode to generate a multiple exposure image.
Specifically, the feature information of the subject to be photographed is synthesized in a preset manner by using the transparency as a dimension, or by using the brightness as a dimension.
When synthesizing with transparency as the dimension, if the display effect of the subject in the last frame image is to be enhanced while the effect of the other images is gradually weakened, the feature information of the subject in the first through Nth frame images can be merged with progressively decreasing transparency (the transparency of the first frame is highest and that of the last frame is lowest). Alternatively, the feature information of the subject in the first through Nth frame images may be merged with equal transparency (the transparency of the N frames is kept consistent). Of course, if the display effect of the subject in the first frame image is to be enhanced while the effect of the other images is gradually weakened, the feature information of the subject in the first through Nth frame images may be merged with progressively increasing transparency (the transparency of the first frame is lowest and that of the last frame is highest). Here N is a natural number greater than 1.
Similarly, when synthesizing with brightness as the dimension, if the display effect of the subject in the first frame image is to be enhanced while the effect of the other images is gradually weakened, the feature information of the subject in the first through Nth frame images may be merged with progressively decreasing brightness (the brightness of the first frame is highest and that of the last frame is lowest). Alternatively, the feature information of the subject in the first through Nth frame images may be merged with equal brightness (the brightness of the N frames is kept uniform). Of course, if the display effect of the subject in the last frame image is to be enhanced while the effect of the other images is gradually weakened, the feature information of the subject in the first through Nth frame images may be merged with progressively increasing brightness (the brightness of the first frame is lowest and that of the last frame is highest).
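The transparency-based merging described above can be sketched with per-frame opacity weights. The linear weight ramp and the normalization are assumptions of this sketch; the patent only specifies that transparency decreases, stays equal, or increases from the first to the Nth frame.

```python
import numpy as np

def merge_subjects(subject_layers, mode="decreasing"):
    """Blend N subject layers (float arrays in [0, 1]) with per-frame
    opacity weights.

    mode="decreasing": transparency decreases frame by frame, so opacity
    rises from the first frame to the Nth, emphasizing the last subject.
    mode="increasing": the reverse, emphasizing the first subject.
    mode="equal": all frames get the same opacity.
    """
    n = len(subject_layers)
    if mode == "decreasing":
        weights = np.arange(1, n + 1, dtype=float)   # opacity ramps up
    elif mode == "increasing":
        weights = np.arange(n, 0, -1, dtype=float)   # opacity ramps down
    else:
        weights = np.ones(n)                         # equal opacity
    weights /= weights.sum()                         # normalize to sum to 1
    stack = np.stack([np.asarray(layer, dtype=float) for layer in subject_layers])
    return np.tensordot(weights, stack, axes=1)      # weighted sum of layers
```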
Based on the above combination manner, multiple exposure images with various effects can be obtained.
It can be understood that acquiring the depth information of the image data with two cameras improves the accuracy of extracting the feature information of the shot subject, so that image quality is improved at the time of synthesis. In addition, because the shooting is continuous, the foreground and background of the multiple frames of image data are the same, so the foreground or background of any one frame can be selected from the multiple frames as the foreground or background of the synthesized multiple exposure image. The resulting multiple exposure image can be seen in fig. 4.
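Composing the final image can then be sketched as pasting the blended subject onto the background of any one frame (the frames share the same background because they are shot continuously). The mask-based paste is an illustrative choice, not the patent's prescribed method.

```python
import numpy as np

def compose_multiple_exposure(background, merged_subject, subject_mask):
    """Paste the blended subject onto a background chosen from any one of
    the continuously shot frames (they share the same background)."""
    out = np.asarray(background, dtype=float).copy()
    out[subject_mask] = np.asarray(merged_subject, dtype=float)[subject_mask]
    return out

# Example: a black background with the blended subject pasted on the diagonal.
bg = np.zeros((2, 2))
subj = np.ones((2, 2))
mask = np.array([[True, False], [False, True]])
result = compose_multiple_exposure(bg, subj, mask)
```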
According to the image shooting method based on dual cameras, a shooting instruction is received, the first camera and the second camera are controlled according to the shooting instruction to obtain multiple frames of continuously shot image data, the depth information of the multiple frames of continuously shot image data is determined based on a preset algorithm, the feature information of the shot subject is extracted from the image data according to the depth information, and the feature information of the shot subject is synthesized in a preset manner to generate a multiple exposure image. The method can conveniently generate multiple exposure images, is simple to operate, and at the same time ensures image quality.
In order to clearly illustrate the previous embodiment, this embodiment provides another image capturing method based on two cameras, and fig. 5 is a schematic flow chart of the another image capturing method based on two cameras provided in this embodiment of the present application.
As shown in fig. 5, the method may include the steps of:
s501, receiving a shooting instruction, controlling a first camera to shoot a first shot image according to the shooting instruction, and controlling a second camera to shoot a second shot image.
In this embodiment, the first camera and the second camera are simultaneously used for framing shooting, and a first shot image for imaging and a second shot image for calculating depth information are obtained respectively.
And S502, judging whether the field angle of the first camera is smaller than or equal to that of the second camera, if so, executing S503, and otherwise, executing S504.
The field angle (field of view, FOV) refers to the maximum angle that the lens can cover; a scene whose angle to the camera exceeds this angle cannot be imaged. In this embodiment, the field angles of the first camera and the second camera may be the same or different. However, when the field angles differ, the framing of the first captured image and the framing of the second captured image do not coincide, so some objects are imaged in only one of the two captured images, and depth information cannot be calculated for those objects. In order to facilitate the calculation of depth information, in this embodiment the portion framed by both the first captured image and the second captured image is cut out as the imaged image as far as possible, so as to ensure the accuracy of the depth information of the imaged image.
And S503, if the field angle of the first camera is smaller than or equal to the field angle of the second camera, taking the first shot image as an imaging image.
Specifically, if the field angle of the first camera is smaller than or equal to the field angle of the second camera, the first camera and the second camera are usually located on the same plane, and therefore the viewing range of the first camera is smaller than or equal to the viewing range of the second camera. Therefore, each object in the first shot image shot by the first camera is covered by the second shot image shot by the second camera, and in this case, the first shot image shot by the first camera does not need to be cut, and the first shot image can be directly used as the imaging image.
And S504, if the field angle of the first camera is larger than that of the second camera, cutting the same area as the framing picture of the second photographed image from the first photographed image to obtain an imaged image.
Specifically, if the field angle of the first camera is larger than that of the second camera, the first camera and the second camera are usually located on the same plane, so the viewing range of the first camera is larger than that of the second camera. Based on this, each object in the first captured image captured by the first camera may not be captured in its entirety by the second camera. In this case, it is necessary to crop the first captured image captured by the first camera and cut out the same area as the finder screen of the second captured image as an imaged image.
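The cropping of S504 can be sketched under a pinhole-camera assumption with both cameras sharing an optical axis: the side length of the region to cut from the wider-FOV first image scales by tan(narrow FOV / 2) / tan(wide FOV / 2). The shared-axis assumption is mine, not the patent's; in practice the crop would also account for the baseline offset between the two cameras.

```python
import math
import numpy as np

def crop_to_fov(image, fov_wide_deg, fov_narrow_deg):
    """Cut the central region of the wider-FOV first image so that its
    viewing area matches the narrower-FOV second image. Under a pinhole
    model with a shared optical axis, the linear crop ratio is
    tan(narrow / 2) / tan(wide / 2)."""
    ratio = (math.tan(math.radians(fov_narrow_deg) / 2)
             / math.tan(math.radians(fov_wide_deg) / 2))
    h, w = image.shape[:2]
    ch, cw = int(round(h * ratio)), int(round(w * ratio))
    top, left = (h - ch) // 2, (w - cw) // 2
    return image[top:top + ch, left:left + cw]

# Example: crop a 100x100 image from a 90-degree lens down to a 60-degree view.
cropped = crop_to_fov(np.zeros((100, 100)), fov_wide_deg=90.0, fov_narrow_deg=60.0)
```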
And S505, calculating the depth information of the imaging image by using a preset algorithm according to the first shooting image and the second shooting image.
Specifically, the depth information of the imaged image is determined from the positional deviation of the same subject between the first captured image and the second captured image, together with the parameters of the two cameras.
The specific calculation process is consistent with the description of the previous embodiment, and is not described again in this embodiment.
And S506, extracting characteristic information of the shot subject in the image data continuously shot by multiple frames according to the depth information.
This step is consistent with the description of the previous embodiment, and is not described again in this embodiment.
S507, synthesizing the characteristic information of the subject in a preset manner to generate a multiple exposure image.
Specifically, the feature information of the subject to be photographed is synthesized in a preset manner by using the transparency as a dimension, or by using the brightness as a dimension.
In order to implement the above embodiments, the present application further provides an image capturing apparatus based on two cameras.
Fig. 6 is a schematic structural diagram of an image capturing device based on two cameras according to an embodiment of the present application. The device can be applied to a mobile terminal with double cameras. The double cameras comprise a first camera and a second camera.
As shown in fig. 6, the apparatus includes: an obtaining module 610, a determining module 620, an extracting module 630, and a synthesizing module 640.
The obtaining module 610 is configured to receive a shooting instruction, and control the first camera and the second camera to obtain multiple frames of continuously shot image data according to the shooting instruction.
The image data includes a first captured image captured by the first camera and a second captured image captured by the second camera.
And a determining module 620, configured to determine depth information of multiple frames of continuously captured image data based on a preset algorithm.
An extracting module 630, configured to extract feature information of the subject in the image data obtained by continuously capturing multiple frames according to the depth information.
And a synthesizing module 640, configured to synthesize the feature information of the subject in a preset manner to generate a multiple exposure image.
It should be noted that the foregoing explanation of the method embodiment is also applicable to the apparatus of this embodiment, and is not repeated herein.
The image shooting device based on dual cameras receives a shooting instruction and controls the first camera and the second camera according to the shooting instruction to obtain multiple frames of continuously shot image data. It determines the depth information of the multiple frames of continuously shot image data based on a preset algorithm, extracts the feature information of the shot subject from the image data according to the depth information, and synthesizes the feature information of the shot subject in a preset manner to generate a multiple exposure image. In this way, a multiple exposure image can be conveniently generated; the operation is simple, and image quality is ensured.
In order to implement the foregoing embodiments, the present application further provides a mobile terminal, and fig. 7 is a schematic structural diagram of the mobile terminal according to an embodiment of the present application, and as shown in fig. 7, the mobile terminal 700 includes: a housing 710 and a first camera 711, a second camera 712, a memory 713, and a processor 714 located within the housing 710.
Wherein the memory 713 stores executable program code; the processor 714 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 713 for executing the dual-camera based image photographing method as the aforementioned method embodiment.
The method comprises the following steps:
and S101', receiving a shooting instruction, and controlling the first camera and the second camera to obtain multiple frames of continuously shot image data according to the shooting instruction.
In one embodiment of the application, when a user shoots with the dual-camera terminal, the user can open the camera to frame the scene and press the virtual shutter key to issue a shooting instruction. For example, the user may choose to enter a multiple exposure mode, in which multiple frames of continuously shot image data are obtained by continuously holding down the virtual shutter key. The image data includes a first captured image captured by the first camera and a second captured image captured by the second camera. In this embodiment, the first camera and the second camera frame and shoot simultaneously, respectively obtaining a first captured image used for imaging and a second captured image used for calculating depth information.
And S102', determining the depth information of the image data continuously shot by multiple frames based on a preset algorithm.
Specifically, an imaging image may be generated from a first captured image, and then depth information of the imaging image may be calculated from the first captured image and a second captured image using a preset algorithm. The preset algorithm may be a triangulation principle.
Because the first captured image and the second captured image are shot by different cameras separated by a certain distance, there is parallax between them. According to the principle of triangulation, the depth information of the same subject in the first captured image and the second captured image, that is, the distance between the subject and the plane where the first camera and the second camera are located, can be calculated from this parallax.
S103': extracting feature information of the shot subject in the multiple frames of continuously shot image data according to the depth information.
Specifically, after the depth information of the imaging image is calculated, the feature information of the subject in the multiple frames of continuously captured image data may be extracted according to that depth information, so that the subject, the foreground, and the background in the imaging image are identified. Here the foreground has a smaller depth than the subject, and the background a larger depth than the subject.
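The foreground/subject/background split described above can be sketched as simple depth thresholding. This is a minimal illustration assuming the subject's depth range is already known; all names and values are invented for the example:

```python
import numpy as np

def segment_by_depth(depth_map, subject_near, subject_far):
    """Label pixels 0 = foreground (closer than the subject),
    1 = subject (within [subject_near, subject_far]),
    2 = background (farther than the subject)."""
    labels = np.ones(depth_map.shape, dtype=np.uint8)  # default: subject
    labels[depth_map < subject_near] = 0
    labels[depth_map > subject_far] = 2
    return labels

depth = np.array([[0.5, 1.2],
                  [1.5, 3.0]])            # depths in metres
print(segment_by_depth(depth, 1.0, 2.0))  # [[0 1]
                                          #  [1 2]]
```

A real implementation would derive the subject's depth range automatically (e.g. from the focus point), but the principle, comparing per-pixel depth against the subject's depth, is the same.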
S104': synthesizing the feature information of the shot subject in a preset manner to generate a multiple exposure image.
Specifically, the feature information of the shot subject is synthesized in a preset manner, using either transparency or brightness as the blending dimension.
When transparency is the blending dimension: to emphasize the subject in the last frame while progressively weakening the earlier frames, the feature information of the subject in the first through Nth frame images may be merged with progressively decreasing transparency (the first frame most transparent, the last frame least). The feature information may instead be merged at equal transparency (the transparency of all N frames kept consistent). Conversely, to emphasize the subject in the first frame while progressively weakening the later frames, the feature information may be merged with progressively increasing transparency (the first frame least transparent, the last frame most). Here N is a natural number greater than 1.
Similarly, when brightness is the blending dimension: to emphasize the subject in the first frame while progressively weakening the later frames, the feature information of the subject in the first through Nth frame images may be merged with progressively decreasing brightness (the first frame brightest, the last frame dimmest). The feature information may instead be merged at equal brightness (the brightness of all N frames kept consistent). Conversely, to emphasize the subject in the last frame while progressively weakening the earlier frames, the feature information may be merged with progressively increasing brightness (the first frame dimmest, the last frame brightest).
Based on the above combination manner, multiple exposure images with various effects can be obtained.
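The transparency/brightness weighting schemes above can be sketched as a per-frame alpha blend of the subject regions onto a shared background. This is an illustrative reconstruction under stated assumptions (equally spaced weights, binary subject masks, invented names), not the patent's implementation:

```python
import numpy as np

def compose_multiple_exposure(subject_masks, frames, mode="decreasing"):
    """Blend the subject regions of N frames onto the first frame's
    background. 'decreasing' transparency means later frames are more
    opaque; 'increasing' means earlier frames are; 'equal' weights all
    frames the same."""
    n = len(frames)
    if mode == "decreasing":        # last frame's subject most visible
        weights = np.linspace(1.0 / n, 1.0, n)
    elif mode == "increasing":      # first frame's subject most visible
        weights = np.linspace(1.0, 1.0 / n, n)
    else:                           # equal transparency for all frames
        weights = np.full(n, 1.0)
    out = frames[0].astype(float)   # shared background from frame 1
    for w, mask, frame in zip(weights, subject_masks, frames):
        m = mask.astype(float) * w  # per-pixel opacity of this subject
        out = out * (1.0 - m) + frame.astype(float) * m
    return out
```

With two 1x1 frames of values 100 and 200 and full-image masks, "decreasing" yields 200 (the last frame dominates) while "increasing" yields 150, matching the intended emphasis.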
It should be understood that acquiring depth information of the image data through two cameras improves the accuracy of extracting the subject's feature information, and therefore the image quality at composition time. Moreover, because the frames are shot continuously, their foreground and background are essentially the same, so the foreground or background of any one frame can be selected to serve as the foreground or background of the synthesized multiple exposure image.
In summary, the mobile terminal receives a shooting instruction, controls the first and second cameras according to it to obtain multiple frames of continuously shot image data, determines the depth information of those frames based on a preset algorithm, extracts the feature information of the shot subject according to the depth information, and synthesizes that feature information in a preset manner to generate a multiple exposure image.
In order to implement the above-mentioned embodiments, the present application also proposes a computer-readable storage medium having stored thereon a computer program that, when executed by a processor of a mobile terminal, implements a dual-camera based image capturing method as in the foregoing embodiments.
The mobile terminal may further include an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units that define an ISP (Image Signal Processing) pipeline. FIG. 8 is a schematic diagram of the image processing circuit in one embodiment. For convenience of explanation, FIG. 8 shows only the aspects of the image processing technology related to the embodiments of the present application.
As shown in fig. 8, the image processing circuit includes an ISP processor 840 and control logic 850. Image data captured by imaging device 810 is first processed by ISP processor 840, and ISP processor 840 analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of imaging device 810. Imaging device 810 may specifically include two cameras, each of which may include one or more lenses 812 and an image sensor 814. Image sensor 814 may include an array of color filters (e.g., Bayer filters), and image sensor 814 may acquire light intensity and wavelength information captured with each imaging pixel of image sensor 814 and provide a set of raw image data that may be processed by ISP processor 840. The sensor 820 may provide raw image data to the ISP processor 840 based on the sensor 820 interface type. The sensor 820 interface may utilize an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
The ISP processor 840 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 840 may perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
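As an illustration only (not code from the patent), raw pixel values at the different bit depths mentioned above can be brought to a common scale before further ISP stages run at uniform precision:

```python
def normalize_raw(pixel, bit_depth):
    """Scale a raw sensor value at 8/10/12/14-bit depth to [0.0, 1.0],
    so downstream processing is independent of the sensor's bit depth."""
    max_val = (1 << bit_depth) - 1  # e.g. 255 for 8-bit, 1023 for 10-bit
    return pixel / max_val

print(normalize_raw(255, 8))   # 1.0
print(normalize_raw(512, 10))  # ~0.5005
```

An actual ISP would typically do this in fixed-point hardware; the float version just shows the idea of processing at "the same or different bit depth precision".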
ISP processor 840 may also receive pixel data from image memory 830. For example, raw pixel data is sent from the sensor 820 interface to the image memory 830, and the raw pixel data in the image memory 830 is then provided to the ISP processor 840 for processing. The image Memory 830 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 820 interface or from the image memory 830, the ISP processor 840 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 830 for additional processing before being displayed. ISP processor 840 receives the processed data from image memory 830 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 870 for viewing by a user and/or for further processing by a Graphics Processing Unit (GPU). Further, the output of ISP processor 840 may also be sent to image memory 830, and display 870 may read image data from image memory 830. In one embodiment, image memory 830 may be configured to implement one or more frame buffers. In addition, the output of ISP processor 840 may be transmitted to encoder/decoder 860 for encoding/decoding image data. The encoded image data may be saved and decompressed before being displayed on the display 870. The encoder/decoder 860 may be implemented by a CPU, GPU, or coprocessor.
The statistics determined by ISP processor 840 may be sent to the control logic 850 unit. For example, the statistics may include image sensor 814 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, and lens 812 shading correction. Control logic 850 may include a processor and/or microcontroller executing one or more routines (e.g., firmware) that determine, based on the received statistical data, control parameters of imaging device 810 and control parameters of ISP processor 840. For example, the control parameters may include sensor 820 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 812 control parameters (e.g., focal length for focusing or zooming), or combinations of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 812 shading correction parameters.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flowcharts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing steps of a custom logic function or process. Alternate implementations are included within the scope of the preferred embodiments of the present application, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware that is related to instructions of a program, and the program may be stored in a computer-readable storage medium, and when executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application; variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (6)

1. An image shooting method based on dual cameras, characterized in that the dual cameras comprise a first camera and a second camera, and the method comprises the following steps:
when the dual cameras are in a multiple exposure mode, acquiring a shooting instruction issued by a user by continuously pressing a virtual shutter key, and controlling the first camera and the second camera according to the shooting instruction to obtain multiple frames of continuously shot image data, wherein the image data comprises a first captured image shot by the first camera and a second captured image shot by the second camera;
judging whether the field angle of the first camera is smaller than or equal to the field angle of the second camera;
if the field angle of the first camera is smaller than or equal to the field angle of the second camera, taking the first captured image as an imaging image;
if the field angle of the first camera is larger than the field angle of the second camera, intercepting from the first captured image the area corresponding to the framing picture of the second captured image to obtain the imaging image;
obtaining, from the imaging image and the second captured image, the distance difference of each object between the two images;
acquiring depth information of the imaging image according to (B x f)/d, wherein B is the distance between the first camera and the second camera, f is the distance from the focal planes of the first camera and the second camera to the plane in which the two cameras are located, and d is the distance difference;
extracting feature information of the shot subject in the multiple frames of continuously shot imaging images according to the depth information; and
synthesizing the feature information of the shot subject in a preset manner to generate a multiple exposure image.
2. The method of claim 1, wherein synthesizing the feature information of the shot subject in a preset manner to generate a multiple exposure image comprises:
merging the feature information of the shot subject in the first through Nth frame images with progressively decreasing transparency or brightness, wherein N is a natural number greater than 1; or
merging the feature information of the shot subject in the first through Nth frame images with the same transparency or the same brightness; or
merging the feature information of the shot subject in the first through Nth frame images with progressively increasing transparency or brightness.
3. An image shooting device based on dual cameras, characterized in that the dual cameras comprise a first camera and a second camera, and the device comprises:
an obtaining module, configured to: acquire a shooting instruction issued by a user by continuously pressing a virtual shutter key when the dual cameras are in a multiple exposure mode; control the first camera and the second camera according to the shooting instruction to obtain multiple frames of continuously shot image data, wherein the image data comprises a first captured image shot by the first camera and a second captured image shot by the second camera; judge whether the field angle of the first camera is smaller than or equal to the field angle of the second camera; if the field angle of the first camera is smaller than or equal to the field angle of the second camera, take the first captured image as an imaging image; and if the field angle of the first camera is larger than the field angle of the second camera, intercept from the first captured image the area corresponding to the framing picture of the second captured image to obtain the imaging image;
a determining module, configured to obtain, from the imaging image and the second captured image, the distance difference of each object between the two images, and to acquire depth information of the imaging image according to (B x f)/d, wherein B is the distance between the first camera and the second camera, f is the distance from the focal planes of the first camera and the second camera to the plane in which the two cameras are located, and d is the distance difference;
an extraction module, configured to extract feature information of the shot subject in the multiple frames of continuously shot imaging images according to the depth information; and
a synthesis module, configured to synthesize the feature information of the shot subject in a preset manner to generate a multiple exposure image.
4. The apparatus of claim 3, wherein the synthesis module is specifically configured to:
merge the feature information of the shot subject in the first through Nth frame images with progressively decreasing transparency or brightness, wherein N is a natural number greater than 1; or
merge the feature information of the shot subject in the first through Nth frame images with the same transparency or the same brightness; or
merge the feature information of the shot subject in the first through Nth frame images with progressively increasing transparency or brightness.
5. A mobile terminal, comprising: the system comprises a first camera, a second camera, a memory, a processor and a computer program which is stored on the memory and can run on the processor; the processor, when executing the program, implements the dual-camera based image photographing method as claimed in claim 1 or 2.
6. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the dual-camera based image capturing method according to claim 1 or 2.
CN201711275279.4A 2017-12-06 2017-12-06 Image shooting method and device based on double cameras and mobile terminal Active CN107872631B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711275279.4A CN107872631B (en) 2017-12-06 2017-12-06 Image shooting method and device based on double cameras and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711275279.4A CN107872631B (en) 2017-12-06 2017-12-06 Image shooting method and device based on double cameras and mobile terminal

Publications (2)

Publication Number Publication Date
CN107872631A CN107872631A (en) 2018-04-03
CN107872631B true CN107872631B (en) 2020-05-19

Family

ID=61755330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711275279.4A Active CN107872631B (en) 2017-12-06 2017-12-06 Image shooting method and device based on double cameras and mobile terminal

Country Status (1)

Country Link
CN (1) CN107872631B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648166A (en) * 2018-06-06 2018-10-12 中山新诺科技股份有限公司 The method and system of exposure figure processing
CN108933905A (en) * 2018-07-26 2018-12-04 努比亚技术有限公司 video capture method, mobile terminal and computer readable storage medium
CN109582811B (en) * 2018-12-17 2021-08-31 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110958400B (en) * 2019-12-13 2021-11-23 上海海鸥数码照相机有限公司 System, method and device for keeping exposure of continuously shot pictures consistent
CN111726522B (en) * 2020-06-12 2021-06-11 上海摩勤智能技术有限公司 Face recognition equipment control method and device

Citations (3)

Publication number Priority date Publication date Assignee Title
CN106851125A (en) * 2017-03-31 2017-06-13 努比亚技术有限公司 A kind of mobile terminal and multiple-exposure image pickup method
CN107071275A (en) * 2017-03-22 2017-08-18 努比亚技术有限公司 A kind of image combining method and terminal
CN107194963A (en) * 2017-04-28 2017-09-22 努比亚技术有限公司 A kind of dual camera image processing method and terminal

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN103327253B (en) * 2013-06-26 2015-05-13 努比亚技术有限公司 Multiple exposure method and camera shooting device
CN106060422B (en) * 2016-07-06 2019-02-22 维沃移动通信有限公司 A kind of image exposure method and mobile terminal
CN106412428B (en) * 2016-09-27 2019-08-02 Oppo广东移动通信有限公司 Image pickup method, device and mobile terminal
CN107295269A (en) * 2017-07-31 2017-10-24 努比亚技术有限公司 A kind of light measuring method and terminal, computer-readable storage medium

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN107071275A (en) * 2017-03-22 2017-08-18 努比亚技术有限公司 A kind of image combining method and terminal
CN106851125A (en) * 2017-03-31 2017-06-13 努比亚技术有限公司 A kind of mobile terminal and multiple-exposure image pickup method
CN107194963A (en) * 2017-04-28 2017-09-22 努比亚技术有限公司 A kind of dual camera image processing method and terminal

Also Published As

Publication number Publication date
CN107872631A (en) 2018-04-03

Similar Documents

Publication Publication Date Title
CN107948519B (en) Image processing method, device and equipment
KR102306304B1 (en) Dual camera-based imaging method and device and storage medium
CN108055452B (en) Image processing method, device and equipment
CN107977940B (en) Background blurring processing method, device and equipment
EP3499863B1 (en) Method and device for image processing
CN108154514B (en) Image processing method, device and equipment
CN108111749B (en) Image processing method and device
CN107872631B (en) Image shooting method and device based on double cameras and mobile terminal
CN108024054B (en) Image processing method, device, equipment and storage medium
CN107945105B (en) Background blurring processing method, device and equipment
CN108712608B (en) Terminal equipment shooting method and device
JP2020528700A (en) Methods and mobile terminals for image processing using dual cameras
CN108156369B (en) Image processing method and device
KR102304784B1 (en) Double camera-based imaging method and apparatus
CN108024057B (en) Background blurring processing method, device and equipment
CN108053438B (en) Depth of field acquisition method, device and equipment
CN110177212B (en) Image processing method and device, electronic equipment and computer readable storage medium
KR20180107346A (en) Photographing apparatus and method for controlling the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

GR01 Patent grant