CN110213494B - Photographing method and device, electronic equipment and computer readable storage medium - Google Patents

Photographing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN110213494B
CN110213494B
Authority
CN
China
Prior art keywords
image
camera
target
area
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910593943.2A
Other languages
Chinese (zh)
Other versions
CN110213494A (en)
Inventor
陈伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910593943.2A
Publication of CN110213494A
Application granted
Publication of CN110213494B

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 — Control of cameras or camera modules
    • H04N 23/62 — Control of parameters via user interfaces
    • H04N 23/65 — Control of camera operation in relation to power supply
    • H04N 23/651 — Control of camera operation in relation to power supply for reducing power consumption by affecting camera operations, e.g. sleep mode, hibernation mode or power off of selective parts of the camera
    • H04N 23/70 — Circuitry for compensating brightness variation in the scene
    • H04N 23/71 — Circuitry for evaluating the brightness variation
    • H04N 23/73 — Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 23/80 — Camera processing pipelines; Components thereof
    • H04W — WIRELESS COMMUNICATION NETWORKS
    • H04W 52/00 — Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W 52/02 — Power saving arrangements
    • H04W 52/0209 — Power saving arrangements in terminal devices
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 — Reducing energy consumption in communication networks
    • Y02D 30/70 — Reducing energy consumption in communication networks in wireless communication networks

Abstract

The application relates to a shooting method, a shooting device, electronic equipment and a computer-readable storage medium. The method comprises the following steps: controlling a first camera to acquire a first preview image; performing photometric processing on the first preview image to obtain a first exposure parameter, and sending the first exposure parameter to the second camera; and controlling the first camera and the at least one second camera to shoot based on the first exposure parameter. The shooting method, the shooting device, the electronic equipment and the computer readable storage medium can reduce power consumption of the electronic equipment during shooting.

Description

Photographing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a shooting method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, people have increasingly high requirements for the images captured by electronic devices. The number of cameras on electronic devices has also evolved from the original single camera to dual cameras, triple cameras, and even more. When an electronic device with multiple cameras takes a photograph, all of the cameras are usually started, and the captured images are combined to obtain a composite image.
However, the conventional photographing method has a problem of high power consumption.
Disclosure of Invention
The embodiment of the application provides a shooting method, a shooting device, electronic equipment and a computer readable storage medium, which can reduce power consumption of the electronic equipment during shooting.
A shooting method is applied to an electronic device comprising a first camera and at least one second camera, and comprises the following steps:
controlling the first camera to acquire a first preview image;
performing photometric processing on the first preview image to obtain a first exposure parameter, and sending the first exposure parameter to the second camera;
and controlling the first camera and the at least one second camera to shoot based on the first exposure parameter.
A shooting method is applied to electronic equipment comprising at least two cameras and comprises the following steps:
controlling the at least two cameras to respectively acquire preview images;
selecting at least two candidate preview images from the preview images for matching to obtain a target overlapping area; the at least two candidate preview images comprise a preview image corresponding to each camera;
performing photometric processing on the target overlapping area to obtain a second exposure parameter, and sending the second exposure parameter to the at least two cameras;
and controlling the at least two cameras to shoot based on the second exposure parameters.
A shooting device is applied to an electronic device comprising a first camera and at least one second camera, and comprises:
the preview image acquisition module is used for controlling the first camera to acquire a first preview image;
the light metering processing module is used for performing light metering processing on the first preview image to obtain a first exposure parameter and sending the first exposure parameter to the second camera;
and the shooting module is used for controlling the first camera and the at least one second camera to shoot based on the first exposure parameter.
A shooting device is applied to an electronic device comprising at least two cameras, and comprises:
the preview image acquisition module is used for controlling the at least two cameras to respectively acquire preview images;
the matching module is used for selecting at least two candidate preview images from the preview images to match to obtain a target overlapping area; the at least two candidate preview images comprise a preview image corresponding to each camera;
the photometric processing module is used for performing photometric processing on the target overlapping area to obtain a second exposure parameter, and sending the second exposure parameter to the at least two cameras;
and the shooting module is used for controlling the at least two cameras to shoot based on the second exposure parameters.
An electronic device includes a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to execute the steps of the shooting method.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
The shooting method, the shooting device, the electronic equipment and the computer readable storage medium control the first camera to acquire the first preview image, perform photometric processing on the first preview image to obtain the first exposure parameter, and send the first exposure parameter to the second camera. Because the first exposure parameter is obtained by performing photometric processing only on the first preview image of the first camera, the second camera does not need to perform photometric processing on its own preview image to obtain an exposure parameter. After the at least one second camera receives the first exposure parameter, the first camera and the at least one second camera can be controlled to shoot based on the first exposure parameter, which reduces the power consumption of the electronic device during shooting.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a diagram of an application environment of a photographing method in one embodiment;
FIG. 2 is a schematic diagram of an image processing circuit in one embodiment;
FIG. 3 is a flow diagram of a method of capturing in one embodiment;
FIG. 4 is a flow diagram of the steps in one embodiment for determining contrast;
FIG. 5 is a flow diagram of image synthesis in one embodiment;
FIG. 6 is a flowchart of a photographing method in another embodiment;
FIG. 7 is a diagram showing a photographing method according to the prior art in one embodiment;
FIG. 8 is a diagram showing a photographing method in another embodiment;
FIG. 9 is a block diagram showing the configuration of a photographing apparatus according to an embodiment;
FIG. 10 is a block diagram showing the construction of a photographing apparatus according to another embodiment;
fig. 11 is a schematic diagram of an internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first camera may be referred to as a second camera, and similarly, a second camera may be referred to as a first camera, without departing from the scope of the present application. The first camera and the second camera are both cameras, but they are not the same camera.
Fig. 1 is a schematic diagram of an application environment of the photographing method in one embodiment. As shown in fig. 1, the application environment includes an electronic device 10 and an object 12, a first camera 102 and at least one second camera 104 are installed on the electronic device 10, and the first camera 102 is controlled to acquire a first preview image; performing photometric processing on the first preview image to obtain a first exposure parameter, and sending the first exposure parameter to the second camera 104; and controlling the first camera 102 and the at least one second camera 104 to shoot based on the first exposure parameter. The electronic device 10 may be a mobile phone, a computer, a wearable device, a personal digital assistant, and the like, which is not limited herein.
The embodiment of the application also provides the electronic equipment. The electronic device includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 2 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 2, for convenience of explanation, only aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 2, the image processing circuit includes a first ISP processor 230, a second ISP processor 240 and control logic 250. The first camera 210 includes one or more first lenses 212 and a first image sensor 214. The first image sensor 214 may include a color filter array (e.g., a Bayer filter), and the first image sensor 214 may acquire light intensity and wavelength information captured with each imaging pixel of the first image sensor 214 and provide a set of image data that may be processed by the first ISP processor 230. The second camera 220 includes one or more second lenses 222 and a second image sensor 224. The second image sensor 224 may include a color filter array (e.g., a Bayer filter), and the second image sensor 224 may acquire light intensity and wavelength information captured with each imaging pixel of the second image sensor 224 and provide a set of image data that may be processed by the second ISP processor 240.
The first image collected by the first camera 210 is transmitted to the first ISP processor 230 for processing, after the first ISP processor 230 processes the first image, the statistical data of the first image (such as the brightness of the image, the optical ratio of the image, the contrast value of the image, the color of the image, etc.) may be sent to the control logic 250, and the control logic 250 may determine the control parameter of the first camera 210 according to the statistical data, so that the first camera 210 may perform operations such as auto-focus and auto-exposure according to the control parameter. The first image may be stored in the image memory 260 after being processed by the first ISP processor 230, and the first ISP processor 230 may also read the image stored in the image memory 260 for processing. In addition, the first image may be directly transmitted to the display 270 for display after being processed by the ISP processor 230, or the display 270 may read and display the image in the image memory 260.
Wherein the first ISP processor 230 processes the image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 230 may perform one or more image processing operations on the image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The image Memory 260 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving image data from the interface of the first image sensor 214, the first ISP processor 230 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 260 for additional processing before being displayed. The first ISP processor 230 receives the processed data from the image memory 260 and performs image data processing in the RGB and YCbCr color spaces on the processed data. The image data processed by the first ISP processor 230 may be output to a display 270 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the first ISP processor 230 may also be transmitted to the image memory 260, and the display 270 may read image data from the image memory 260. In one embodiment, image memory 260 may be configured to implement one or more frame buffers.
The statistics determined by the first ISP processor 230 may be sent to the control logic 250. For example, the statistical data may include first image sensor 214 statistical information such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, first lens 212 shading correction, and the like. Control logic 250 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters for first camera 210 and control parameters for first ISP processor 230 based on the received statistical data. For example, the control parameters of the first camera 210 may include gain, integration time of exposure control, anti-shake parameters, flash control parameters, first lens 212 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters, and the like. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as first lens 212 shading correction parameters.
Similarly, the second image collected by the second camera 220 is transmitted to the second ISP processor 240 for processing. After the second ISP processor 240 processes the second image, the statistical data of the second image (such as the brightness of the image, the contrast value of the image, the color of the image, etc.) may be sent to the control logic 250, and the control logic 250 may determine the control parameter of the second camera 220 according to the statistical data, so that the second camera 220 may perform operations such as auto-focus and auto-exposure according to the control parameter. The second image may be stored in the image memory 260 after being processed by the second ISP processor 240, and the second ISP processor 240 may also read the image stored in the image memory 260 for processing. In addition, the second image may be directly transmitted to the display 270 for display after being processed by the second ISP processor 240, or the display 270 may read and display the image in the image memory 260. The second camera 220 and the second ISP processor 240 may also implement the processes described for the first camera 210 and the first ISP processor 230.
In one embodiment, the electronic device includes a first camera 210 and at least one second camera 220. When the first camera 210 is turned on, the control logic 250 controls the first camera 210 to acquire a first preview image and transmit the first preview image to the first ISP processor 230. The first ISP processor 230 performs a light metering process on the first preview image to obtain a first exposure parameter, and sends the first exposure parameter to the control logic 250. The control logic 250 controls both the first camera 210 and the second camera 220 to perform photographing based on the first exposure parameter.
In one embodiment, the first ISP processor 230 may process the first preview image to obtain a contrast of the first preview image and compare the contrast to a contrast threshold. When the contrast is smaller than the contrast threshold, performing photometric processing on the first preview image; and when the contrast is greater than or equal to the contrast threshold, acquiring a target area in the first preview image, and performing photometric processing on the target area to obtain a first exposure parameter.
In one embodiment, when at least two cameras are included in the electronic device, the cameras may be the first camera 210 and the second camera 220, and the control logic 250 controls the first camera 210 and the second camera 220 to respectively acquire the preview images. The first camera 210 transmits the acquired preview image to the first ISP processor 230 and the second camera 220 transmits the acquired preview image to the second ISP processor 240. The first ISP processor 230 and the second ISP processor 240 obtain at least two candidate preview images from the preview images for matching, so as to obtain a target overlapping area. The first ISP processor 230 performs photometry on the target overlapping area in the candidate preview image corresponding to the first camera 210, and the second ISP processor 240 performs photometry on the target overlapping area in the candidate preview image corresponding to the second camera 220, so as to obtain a consistent second exposure parameter, and send the consistent second exposure parameter to the control logic 250. The control logic 250 controls both the first camera 210 and the second camera 220 to perform photographing based on the second exposure parameter.
In one embodiment, the first camera 210 may be a wide-angle camera and the second camera 220 may be a tele camera.
The images processed by the first ISP processor 230 and the second ISP processor 240 may be stored in the image memory 260, or may be transmitted to the display 270, so that the images are displayed on the display interface of the electronic device. The first ISP processor 230 and the second ISP processor 240 may acquire images from the image memory 260 and process the images, and store the processed images in the image memory 260 or transmit the processed images to the display 270, but is not limited thereto.
Fig. 3 is a flowchart of a photographing method in one embodiment. The shooting method in this embodiment is described by taking the electronic device in fig. 1 as an example. As shown in fig. 3, the photographing method is applied to an electronic device including a first camera and at least one second camera, and includes steps 302 to 306.
Step 302, controlling a first camera to acquire a first preview image.
The electronic device may be provided with cameras, and the number of cameras provided is at least two, namely a first camera and at least one second camera. For example, one first camera and one second camera may be provided, or one first camera and two second cameras, and so on, which is not limited herein. The manner in which the cameras are installed in the electronic device is not limited either; for example, a camera may be built into the electronic device or externally connected to it, and it may be a front camera or a rear camera.
In the embodiments provided herein, the first camera and the second camera may be any type of camera. For example, the first camera and the second camera may be a color camera, a black and white camera, a depth camera, or the like, without being limited thereto. The first camera and the second camera may be different types of cameras, respectively, for example, the first camera is a color camera and the second camera is a depth camera. For another example, the first camera may be a telephoto camera or a wide-angle camera, and similarly, the second camera may be a telephoto camera or a wide-angle camera.
Correspondingly, the color image is acquired by the color camera, the black-and-white image is acquired by the black-and-white camera, the depth image is acquired by the depth camera, the tele image is acquired by the tele camera, and the wide image is acquired by the wide camera, but the method is not limited thereto.
It is understood that the first camera and the at least one second camera are located on the same side of the electronic device and capture the scene in the same direction. The field of view of the first camera and the field of view of the at least one second camera may be completely the same, so that the image shot by the first camera and the image shot by the at least one second camera completely correspond to each other; alternatively, the field of view of the first camera may only partially coincide with the field of view of the at least one second camera, so that the image captured by the first camera and the image captured by the at least one second camera partially coincide.
And step 304, performing photometric processing on the first preview image to obtain a first exposure parameter, and sending the first exposure parameter to the second camera.
The photometric processing refers to measuring the brightness of the subject to be photographed. Exposure refers to the process in which, when the camera opens its shutter, light enters and is projected onto the photosensitive layer of the camera, thereby generating an image. The first exposure parameter may be an exposure time, an EV (exposure value), an aperture value, or the like, but is not limited thereto. It is understood that the longer the exposure time, the greater the exposure amount; the larger the aperture, the greater the exposure amount. Generally, the larger the first exposure parameter, the brighter the acquired image.
After the brightness of the subject is obtained through the photometric processing, the electronic device may obtain the first exposure parameter from the brightness. Specifically, a reference reflectance is set in advance in the electronic device; when the target reflectance of the subject is obtained through photometry, the target reflectance is compared with the reference reflectance, and the aperture value and the shutter value are adjusted according to the comparison result; the first exposure parameter is then obtained according to the target reflectance, the adjusted aperture value, and the adjusted shutter value.
The aperture value is a relative value (the reciprocal of the relative aperture) obtained by dividing the focal length of the lens by the light-passing diameter of the lens. The shutter is a device that controls the effective exposure time of the photosensitive element. The shutter value may be the shutter speed, the lag time of the shutter, or the like.
When the target reflectance is greater than the reference reflectance, the electronic device may reduce the aperture value and the shutter value, so that the camera gathers less light when shooting an image. However, because less light is gathered, the captured image deviates from the actual subject, that is, the captured image is darker than the actual subject, and the shooting accuracy is low. Therefore, the exposure parameter needs to be increased to obtain the first exposure parameter, so that the camera gathers more light when shooting an image and can capture a more accurate image.
For example, suppose the reference reflectance preset in the electronic device is 18%. When the target reflectance of the subject obtained through photometry is 50%, indicating that the subject is bright, the camera may decrease the aperture value and the shutter value according to the target reflectance, so that the electronic device gathers less light and the effective reflectance of the subject is reduced. However, when the reflectance of the subject is reduced, the captured image becomes darker than the actual subject. Therefore, the exposure parameter is increased, resulting in the first exposure parameter.
Similarly, when the target reflectance is smaller than the reference reflectance, the electronic device may increase the aperture value and the shutter value, so that the camera gathers more light when capturing an image. However, gathering more light also causes the captured image to deviate from the actual subject, that is, the captured image is brighter than the actual subject, which lowers the shooting accuracy. Therefore, the exposure parameter needs to be reduced to obtain the first exposure parameter, so that the camera gathers less light when shooting an image and can capture a more accurate image.
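For illustration only, the following is a minimal sketch of the reflectance-based adjustment described above, written in Python. The reference reflectance of 18%, the adjustment step sizes, and the function name are assumptions chosen for demonstration and are not limited in this application.

```python
# Illustrative sketch of the reflectance-based exposure adjustment described above.
# The reference reflectance, step sizes, and scaling are assumptions for demonstration.

REFERENCE_REFLECTANCE = 0.18  # assumed 18% reference reflectance, as in the example

def derive_exposure_parameter(target_reflectance, aperture_value, shutter_value, exposure_value):
    """Return (aperture, shutter, exposure) adjusted from the metered target reflectance."""
    if target_reflectance > REFERENCE_REFLECTANCE:
        # Bright subject: reduce aperture/shutter so less light is gathered,
        # then raise the exposure parameter so the capture is not darker than the scene.
        aperture_value *= 0.8
        shutter_value *= 0.8
        exposure_value *= 1.0 + (target_reflectance - REFERENCE_REFLECTANCE)
    elif target_reflectance < REFERENCE_REFLECTANCE:
        # Dark subject: gather more light, then lower the exposure parameter
        # so the capture is not brighter than the scene.
        aperture_value *= 1.2
        shutter_value *= 1.2
        exposure_value *= 1.0 - (REFERENCE_REFLECTANCE - target_reflectance)
    return aperture_value, shutter_value, exposure_value
```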
The manner of photometry processing may include one or more of center averaging photometry, center partial photometry, spot photometry, multipoint photometry, and evaluative photometry. And the first camera performs photometric processing on the first preview image to obtain a first exposure parameter, and then sends the first exposure parameter to the second camera.
And step 306, controlling the first camera and the at least one second camera to shoot based on the first exposure parameter.
After the second camera receives the first exposure parameter sent by the first camera, the at least one second camera can be started, and the first camera and the at least one second camera are controlled to shoot based on the first exposure parameter, so that the second camera is prevented from performing photometric processing on a preview image to obtain the exposure parameter, and the power consumption is reduced.
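For illustration, a minimal sketch of the overall flow of steps 302 to 306 is given below. The camera interface (get_preview, set_exposure, capture) and the meter function are hypothetical names used only to show that photometric processing runs once on the first camera and the result is shared with the second cameras.

```python
# Minimal sketch of steps 302-306: only the first camera meters, and the resulting
# first exposure parameter is shared with every second camera before capture.
# The Camera interface (get_preview / set_exposure / capture) is hypothetical.

def shoot(first_camera, second_cameras, meter):
    preview = first_camera.get_preview()            # step 302
    first_exposure = meter(preview)                 # step 304: photometry on the first preview only
    first_camera.set_exposure(first_exposure)
    for cam in second_cameras:
        cam.set_exposure(first_exposure)            # second cameras skip their own metering
    return [cam.capture() for cam in [first_camera, *second_cameras]]  # step 306
```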
According to the shooting method, the first camera is controlled to obtain the first preview image, the first preview image is subjected to photometric processing to obtain the first exposure parameter, and the first exposure parameter is sent to the second camera. The first exposure parameter is obtained by only performing photometric processing on the first preview image through the first camera, so that the process that the exposure parameter is obtained by performing photometric processing on the preview image through the second camera is avoided. After the at least one second camera receives the first exposure parameter, the first camera and the at least one second camera can be controlled to shoot based on the first exposure parameter, and power consumption of the electronic equipment during shooting is reduced.
In one embodiment, as shown in fig. 4, before performing the photometric processing on the first preview image to obtain the first exposure parameter, the method further includes:
step 402, determining a contrast of the first preview image according to the first preview image.
Contrast refers to the range of luminance gradations in an image from the brightest region to the darkest region. The larger the contrast value, the more gradation levels there are between the brightest and darkest regions of the image, and the greater the contrast of the image.
Further, a partial region may be acquired from the first preview image, the contrast of the partial region may be determined, and this contrast may be taken as the contrast of the first preview image. Because the contrast is computed from only a partial area of the first preview image, the efficiency of the contrast calculation can be improved and the power consumption of the electronic device can be reduced.
Generally, the center of the image is its most important region, so the center of the first preview image may be acquired as the partial region, which improves the accuracy of the calculated contrast.
It is understood that the larger the proportion of the acquired partial region in the first preview image, the closer the contrast of the partial region is to the true contrast of the first preview image.
And step 404, when the contrast is smaller than the contrast threshold, executing photometric processing on the first preview image to obtain a first exposure parameter.
When the contrast is smaller than the contrast threshold, it indicates that the contrast of the first preview image is small, and the first preview image may be directly subjected to photometric processing to obtain the first exposure parameter. The exposure of the image shot based on the first exposure parameter is in a reasonable range, and the problem of overexposure or over darkness of the shot image is avoided.
According to the shooting method, the contrast of the first preview image is determined according to the first preview image, when the contrast is smaller than the contrast threshold, photometric processing is performed on the first preview image to obtain the first exposure parameter, and the exposure of the image shot based on the first exposure parameter is in a reasonable range, so that the problem of overexposure or over-darkness of the shot image is avoided, and the shooting accuracy is improved.
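For illustration, a minimal sketch of the contrast check in steps 402 and 404 is given below, assuming the first preview image is available as a grayscale NumPy array. The 50% center crop and the (max - min)/(max + min) contrast measure are assumptions for demonstration and are not limited in this application.

```python
import numpy as np

# Sketch of steps 402-404: estimate contrast from a central partial region of the
# first preview image, then decide whether the whole image or only a target area
# should be metered. Crop fraction, contrast definition, and threshold are assumptions.

def center_region(gray, fraction=0.5):
    h, w = gray.shape
    dh, dw = int(h * fraction / 2), int(w * fraction / 2)
    return gray[h // 2 - dh:h // 2 + dh, w // 2 - dw:w // 2 + dw]

def region_contrast(gray):
    region = center_region(gray).astype(np.float64)
    bright, dark = region.max(), region.min()
    return (bright - dark) / (bright + dark + 1e-6)

def needs_target_area_metering(preview_gray, contrast_threshold=0.6):
    # True  -> contrast >= threshold: meter only a target area in the first preview image.
    # False -> contrast <  threshold: meter the whole first preview image.
    return region_contrast(preview_gray) >= contrast_threshold
```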
In one embodiment, the above shooting method further includes: when the contrast is greater than or equal to the contrast threshold, acquiring a target area in the first preview image; and performing photometric processing on a target area in the first preview image to obtain a first exposure parameter.
When the contrast is greater than or equal to the contrast threshold, it indicates that the first preview image has greater contrast, i.e., the image contains both lighter and darker areas. If the photometric processing is performed on a bright area in the first preview image, the obtained first exposure parameter is large, and a dark area in an image obtained by shooting based on the first exposure parameter is brighter than a corresponding area in an actual scene. If the light metering processing is performed on the darker area in the first preview image, the obtained first exposure parameter is smaller, and the brighter area in the image obtained based on the first exposure parameter is darker than the corresponding area in the actual scene.
That is, when the contrast of the first preview image is large, the photometric processing is performed on a bright area in the first preview image, and when the subject actually photographed by the user is a dark area, the image photographed based on the first exposure parameter obtained by the photometric processing is not accurate for the user. Similarly, if the light metering process is performed on a dark area in the first preview image and the user actually photographs a bright area, the image photographed based on the first exposure parameter obtained by the light metering process is also inaccurate for the user.
Thus, when the contrast is greater than or equal to the contrast threshold, the target region in the first preview image is acquired. The target area may be the center position of the first preview image, may also be a focus area selected by the user, and may also be obtained based on the size ratio of the bright area and the dark area in the first preview image, which is not limited to this.
Specifically, the area of the bright region and the area of the dark region in the first preview image may be acquired and compared, and the region with the larger area is taken as the target area. The brightness value of each pixel in the first preview image may be detected: when the brightness value is greater than or equal to a first threshold, the pixel is regarded as a bright pixel; when the brightness value is smaller than or equal to a second threshold, the pixel is regarded as a dark pixel. The first threshold is greater than the second threshold. The region formed by the bright pixels is then taken as the bright region, and the region formed by the dark pixels is taken as the dark region.
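For illustration, the bright/dark classification described above can be sketched as follows, assuming a grayscale NumPy array and illustrative threshold values; the region with the larger area is returned as the target area for photometry.

```python
import numpy as np

# Sketch of the bright/dark split described above. The thresholds (first > second)
# are illustrative; pixels between the two thresholds belong to neither region.

def select_target_area_mask(gray, first_threshold=180, second_threshold=80):
    bright_mask = gray >= first_threshold   # bright pixels
    dark_mask = gray <= second_threshold    # dark pixels
    # Compare the area (pixel count) of the two regions and keep the larger one
    # as the target area on which photometric processing is performed.
    return bright_mask if bright_mask.sum() >= dark_mask.sum() else dark_mask
```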
According to the shooting method, when the contrast is larger than or equal to the contrast threshold, the target area in the first preview image is obtained, photometric processing is carried out on the target area in the first preview image to obtain the first exposure parameter, the more accurate first exposure parameter can be obtained, and the more accurate image is shot based on the first exposure parameter.
In one embodiment, the above shooting method further includes: acquiring a first image obtained by the first camera shooting based on the first exposure parameter and a second image obtained by the at least one second camera shooting based on the first exposure parameter; matching the first image with the second image to obtain the overlapping area of the first image and the overlapping area of the second image; obtaining a target area according to the overlapping area of the first image and the overlapping area of the second image; and synthesizing the target area, the first image and the second image to obtain a target image.
The first image and the second image may be matched through contours in the images, or may be matched through feature points in the images, or may be matched through depth information, RGB three-channel information, and the like, without being limited thereto.
The first image is matched with the second image to obtain the overlapping area of the first image and the overlapping area of the second image, that is, the overlapping area of the first image corresponds to the overlapping area of the second image. In one embodiment, one of the two overlapping areas may be acquired as the target area. In another embodiment, the target area may also be obtained by averaging the pixel values in the overlapping area of the first image with the corresponding pixel values in the overlapping area of the second image. In other embodiments, the target area may be obtained by other calculation methods, which are not limited thereto.
After the target area is obtained, the target area, the area of the first image except the overlapping area of the first image, and the area of the second image except the overlapping area of the second image may be stitched to obtain the target image.
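For illustration, a minimal sketch of this synthesis is given below, assuming a horizontal layout in which the right part of the first image overlaps the left part of the second image and the overlap width is already known from matching; the overlap is averaged to form the target area and the non-overlapping parts are stitched around it. The layout and the averaging variant are assumptions for demonstration.

```python
import numpy as np

# Sketch of the synthesis described above for a horizontal layout: the right part
# of the first image overlaps the left part of the second image. The overlap width
# is assumed to be known from the earlier matching step.

def synthesize_target_image(first_image, second_image, overlap_width):
    overlap_first = first_image[:, -overlap_width:]
    overlap_second = second_image[:, :overlap_width]
    # Target area: average the corresponding pixel values of the two overlapping areas.
    target_area = ((overlap_first.astype(np.float64)
                    + overlap_second.astype(np.float64)) / 2).astype(first_image.dtype)
    # Stitch: first image without its overlap, then the target area,
    # then the second image without its overlap.
    return np.hstack([first_image[:, :-overlap_width],
                      target_area,
                      second_image[:, overlap_width:]])
```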
It can be understood that the first image and the second image are both obtained by shooting based on the first exposure parameter, so that the parameters such as the exposure amount of the first image and the second image are the same, and the parameters such as the exposure amount of each area in the target image obtained by synthesizing the first image and the second image are the same, so that the accuracy of the synthesized target image is improved.
In one embodiment, as shown in fig. 5, obtaining the target area according to the overlapping area of the first image and the overlapping area of the second image includes:
step 502, dividing the overlapping area of the first image into at least two first sub-areas, and dividing the overlapping area of the second image into at least two second sub-areas, wherein the first sub-areas and the second sub-areas are in one-to-one correspondence and completely overlap.
It will be appreciated that the finer the division of the overlapping region of the first image and the overlapping region of the second image, i.e. the more first and second sub-regions there are, the more accurate the resulting target region.
Step 504, determining the attribute value of each first sub-region and the attribute value of each second sub-region corresponding to each first sub-region.
In this embodiment, the attribute value may be at least one of definition, color, gray scale value, depth information, and the like, and the specific case may be set according to the user requirement, which is not limited thereto.
Step 506, comparing the attribute value of the first sub-region with the attribute value of the corresponding second sub-region to obtain a comparison result.
And the first sub-areas and the second sub-areas are in one-to-one correspondence and completely overlapped, and the attribute values of the first sub-areas and the corresponding attribute values of the second sub-areas are compared to obtain a comparison result. The comparison result may be that the attribute value of the first sub-region is greater than the attribute value of the corresponding second sub-region, or that the attribute value of the second sub-region is greater than the attribute value of the corresponding first sub-region, but is not limited thereto.
And step 508, acquiring each target sub-region according to the comparison result.
In one embodiment, when the attribute value is a luminance value, a region having a larger attribute value may be acquired as the target sub-region. When the attribute value is a gray value, a region with a smaller attribute value may be acquired as the target sub-region. The specific situation can be set according to the user requirement, but is not limited to this.
And step 510, synthesizing all the target sub-regions to obtain a target region.
In one embodiment, the obtained target sub-regions may be stitched to obtain the target region. In another embodiment, in order to ensure the continuity of the edges of the obtained sub-regions, the sub-regions may be stitched first, and filtering may then be applied to remove noise, thereby obtaining the target region.
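For illustration, a minimal sketch of steps 502 to 510 is given below, assuming the two overlapping areas are NumPy arrays of equal size, using a 4 x 4 grid of sub-regions and grayscale variance as a stand-in attribute value for sharpness; both choices are assumptions for demonstration.

```python
import numpy as np

# Sketch of steps 502-510: split the overlapping areas into a grid of sub-regions,
# compare an attribute value per sub-region (here: variance, as a stand-in for
# sharpness), keep the better sub-region, and stitch the winners into the target area.

def build_target_area(overlap_first, overlap_second, grid=(4, 4)):
    target = np.empty_like(overlap_first)
    h, w = overlap_first.shape[:2]
    rows, cols = grid
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            sub1, sub2 = overlap_first[ys, xs], overlap_second[ys, xs]
            # Steps 506/508: compare attribute values and keep the better sub-region.
            target[ys, xs] = sub1 if sub1.var() >= sub2.var() else sub2
    return target  # step 510: the stitched target region
```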
According to the shooting method, the overlapping area of the first image is divided into at least two first sub-areas, the overlapping area of the second image is divided into at least two second sub-areas, the attribute value of each first sub-area and the attribute value of each second sub-area corresponding to each first sub-area are determined, the attribute value of each first sub-area is compared with the attribute value of the corresponding second sub-area, each target sub-area is obtained according to the comparison result, and the target sub-areas are combined to obtain a more accurate target area.
Fig. 6 is a flowchart of a photographing method in one embodiment. The shooting method in this embodiment is described by taking the electronic device in fig. 1 as an example. As shown in fig. 6, the shooting method is applied to an electronic device including at least two cameras, and includes steps 602 to 608.
Step 602, controlling at least two cameras to respectively acquire preview images.
The electronic device may be provided with cameras, and the number of cameras provided is at least two. The manner in which the cameras are installed in the electronic device is not limited; for example, a camera may be built into the electronic device or externally connected to it, and it may be a front camera or a rear camera.
In the embodiments provided herein, the at least two cameras may be any type of camera. For example, the at least two cameras may be color cameras, black-and-white cameras, depth cameras, and the like, without being limited thereto. The at least two cameras may also be cameras of different types; for example, one of the cameras may be a color camera while the other cameras are depth cameras. As another example, one of the cameras may be a telephoto camera or a wide-angle camera, and similarly, the other cameras may be telephoto cameras or wide-angle cameras.
Correspondingly, the color image is acquired by the color camera, the black-and-white image is acquired by the black-and-white camera, the depth image is acquired by the depth camera, the tele image is acquired by the tele camera, and the wide image is acquired by the wide camera, but the method is not limited thereto.
It is understood that the at least two cameras are located on the same side of the electronic device and capture the scene in the same direction. The fields of view of the at least two cameras may be completely the same, so that the images shot by the at least two cameras completely correspond to each other; alternatively, the fields of view of the at least two cameras may only partially overlap, so that the images captured by the at least two cameras have partially overlapping regions.
Step 604, selecting at least two candidate preview images from the preview images to match, and obtaining a target overlapping area; the at least two candidate preview images include a preview image corresponding to each camera.
The candidate preview images refer to images obtained from the preview images, and the at least two candidate preview images include a preview image corresponding to each camera, that is, a preview image is obtained from the preview image obtained from each camera as a candidate preview image.
And selecting at least two candidate preview images from the preview images for matching to obtain a target overlapping area, wherein the target overlapping area exists in each candidate preview image, namely the candidate preview image corresponding to each camera comprises the target overlapping area.
The at least two candidate preview images may be matched through contours in the images, or through feature points in the images, or through depth information, RGB three-channel information, or the like, without being limited thereto.
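As one possible implementation of feature-point matching, the sketch below uses OpenCV's ORB detector and a RANSAC homography to estimate, in one candidate preview image, the region covered by another candidate preview image. OpenCV, ORB, and homography estimation are an assumed tool choice for illustration; the application does not mandate any particular matching method.

```python
import cv2
import numpy as np

# Illustrative sketch: estimate the overlapping area between two candidate preview
# images via ORB feature matching and a RANSAC homography.

def overlap_bounding_box(img_a_gray, img_b_gray):
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a_gray, None)
    kp_b, des_b = orb.detectAndCompute(img_b_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Project the corners of image A into image B; their bounding box, clipped to
    # image B, approximates the overlapping area as seen in image B.
    h, w = img_a_gray.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(corners, homography).reshape(-1, 2)
    x0, y0 = np.clip(projected.min(axis=0), 0, [img_b_gray.shape[1], img_b_gray.shape[0]])
    x1, y1 = np.clip(projected.max(axis=0), 0, [img_b_gray.shape[1], img_b_gray.shape[0]])
    return int(x0), int(y0), int(x1), int(y1)
```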
And 606, performing photometric processing on the target overlapping area to obtain a second exposure parameter, and sending the second exposure parameter to the at least two cameras.
The photometric processing refers to measuring the brightness of the subject to be photographed. Exposure refers to the process in which, when the camera opens its shutter, light enters and is projected onto the photosensitive layer of the camera, thereby generating an image. The second exposure parameter may be an exposure time, an EV (exposure value), an aperture value, or the like, but is not limited thereto. It is understood that the longer the exposure time, the greater the exposure amount; the larger the aperture, the greater the exposure amount. Generally, the larger the second exposure parameter, the brighter the acquired image.
After the brightness of the subject is obtained through the photometric processing, the electronic device may obtain the second exposure parameter from the brightness. Specifically, a reference reflectance is set in advance in the electronic device; when the target reflectance of the subject is obtained through photometry, the target reflectance is compared with the reference reflectance, and the aperture value and the shutter value are adjusted according to the comparison result; the second exposure parameter is then obtained according to the target reflectance, the adjusted aperture value, and the adjusted shutter value.
The aperture value is a relative value (the reciprocal of the relative aperture) obtained by dividing the focal length of the lens by the light-passing diameter of the lens. The shutter is a device that controls the effective exposure time of the photosensitive element. The shutter value may be the shutter speed, the lag time of the shutter, or the like.
When the target reflectance is greater than the reference reflectance, the electronic device may reduce the aperture value and the shutter value, so that the camera gathers less light when shooting an image. However, because less light is gathered, the captured image deviates from the actual subject, that is, the captured image is darker than the actual subject, and the shooting accuracy is low. Therefore, the exposure parameter needs to be increased to obtain the second exposure parameter, so that the camera gathers more light when shooting an image and can capture a more accurate image.
For example, suppose the reference reflectance preset in the electronic device is 18%. When the target reflectance of the subject obtained through photometry is 50%, indicating that the subject is bright, the camera may decrease the aperture value and the shutter value according to the target reflectance, so that the electronic device gathers less light and the effective reflectance of the subject is reduced. However, when the reflectance of the subject is reduced, the captured image becomes darker than the actual subject. Therefore, the exposure parameter is increased, resulting in the second exposure parameter.
Similarly, when the target reflectance is smaller than the reference reflectance, the electronic device may increase the aperture value and the shutter value, so that the camera gathers more light when capturing an image. However, gathering more light also causes the captured image to deviate from the actual subject, that is, the captured image is brighter than the actual subject, which lowers the shooting accuracy. Therefore, the exposure parameter needs to be reduced to obtain the second exposure parameter, so that the camera gathers less light when shooting an image and can capture a more accurate image.
The manner of photometry processing may include one or more of center averaging photometry, center partial photometry, spot photometry, multipoint photometry, and evaluative photometry.
As shown in fig. 7, there are four cameras in the electronic device, namely camera 1, camera 2, camera 3, and camera 4. In the conventional shooting method, exposure parameter 1 is obtained by performing photometric processing on region 704 in the preview image 702 of camera 1, exposure parameter 2 is obtained by performing photometric processing on region 708 in the preview image 706 of camera 2, exposure parameter 3 is obtained by performing photometric processing on region 712 in the preview image 710 of camera 3, and exposure parameter 4 is obtained by performing photometric processing on region 716 in the preview image 714 of camera 4. Regions 704, 708, 712, and 716 are different regions of the actual scene, so the obtained exposure parameters 1, 2, 3, and 4 are different from one another. The cameras in the electronic device each acquire an image based on a different exposure parameter, and the images are synthesized to obtain a target image. The exposure of each area in the target image therefore differs, the picture of the synthesized target image is not uniform, and the accuracy is low.
In the application, each candidate preview image includes a target overlapping area, and then, photometry processing is performed on the target overlapping area to obtain a second exposure parameter, and the second exposure parameter is sent to at least two cameras.
In one embodiment, as shown in fig. 8, four cameras, that is, a camera 1, a camera 2, a camera 3, and a camera 4, exist in the electronic device, and the camera 1 acquires a corresponding candidate preview image 802, the camera 2 acquires a corresponding candidate preview image 804, the camera 3 acquires a corresponding candidate preview image 806, the camera 4 acquires a corresponding candidate preview image 808, and the candidate preview images 802, 804, 806, and 808 are matched to obtain a target overlapping area 810. When the electronic device performs photometry processing on the target overlapping area 810, the camera 1, the camera 2, the camera 3, and the camera 4 can obtain a consistent light reflection rate, so that a consistent second exposure parameter can be obtained. And sending the second exposure parameters to the camera 1, the camera 2, the camera 3 and the camera 4. The camera 1, the camera 2, the camera 3 and the camera 4 can acquire images with the same exposure amount based on the second exposure parameters, so that more accurate target images can be obtained through synthesis.
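For illustration, the flow of FIG. 8 can be sketched as follows: the target overlapping area is metered once to obtain a consistent second exposure parameter, which is then sent to every camera. The camera interface and the meter function are hypothetical names used only for demonstration.

```python
# Sketch of the FIG. 8 flow: every camera shoots with the same second exposure
# parameter, obtained by metering only the shared target overlapping area.
# The camera interface (set_exposure / capture) and meter() are hypothetical.

def shoot_with_shared_exposure(cameras, target_overlap_crops, meter):
    # Each crop is the target overlapping area taken from one camera's candidate
    # preview image; metering them yields a consistent reflectance and hence a
    # consistent second exposure parameter for all cameras.
    second_exposure = meter(target_overlap_crops)
    for cam in cameras:
        cam.set_exposure(second_exposure)
    return [cam.capture() for cam in cameras]
```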
And step 608, controlling at least two cameras to shoot based on the second exposure parameters.
And the at least two cameras shoot based on the second exposure parameters, and the exposure of the images shot by the at least two cameras is the same. Further, images with the same exposure are synthesized to obtain a more accurate target image, so that the accuracy is improved.
The shooting method controls at least two cameras to respectively acquire preview images, acquires at least two candidate preview images from the preview images for matching, and obtains a target overlapping area, wherein the target overlapping area exists in the candidate preview images of each camera. Therefore, the light metering processing is carried out on the target overlapping area, each camera can obtain the consistent second exposure parameter, so that at least two cameras are controlled to shoot based on the second exposure parameter, the shooting accuracy is improved, images with the same exposure can be obtained, and the target images with the consistent exposure of all areas can be synthesized.
In one embodiment, at least two candidate images captured by the at least two cameras based on the second exposure parameter are obtained; the at least two candidate images are matched to obtain the overlapping area of each candidate image; a target area is obtained according to the overlapping areas of the candidate images; and the target area and the candidate images are synthesized to obtain a target image.
At least two candidate images can be matched through the contour in the image, or can be matched through the feature points in the image, or can be matched through depth information, RGB three-channel information, and the like, but not limited to this.
It is understood that the overlapping regions of the candidate images may or may not completely correspond to each other. For example, the overlapping area a of the candidate image 1 completely corresponds to the overlapping area B of the candidate image 2. As another example, the overlapping area a of the candidate image 1 may correspond to the overlapping area B of the candidate image 2, and the overlapping area C of the candidate image 3 may correspond to the overlapping area D of the candidate image 4.
And matching the overlapping areas of the corresponding candidate images to obtain a target area. In one embodiment, one of the coincident regions of the corresponding candidate images may be acquired as a target region. In another embodiment, the pixel values of the pixels in the overlapping region of the corresponding candidate image may also be obtained, and the corresponding pixel values are averaged to obtain the target region. In other embodiments, the target area may be obtained by other calculation methods, but is not limited thereto.
After the target area is obtained, the target area and areas except the overlapping area of the candidate image in each candidate image can be spliced to obtain the target image.
In an embodiment, the overlapping region of the corresponding candidate image may be divided into a plurality of identical sub-regions, the attribute values of the sub-regions are obtained, and the attribute values of the corresponding sub-regions are compared to obtain the target sub-region.
The attribute value may be at least one of definition, color, gray value, depth information, and the like, and the specific condition may be set according to a user requirement, which is not limited thereto.
In one embodiment, when the attribute value is a luminance value, a region having a larger attribute value may be acquired as the target sub-region. When the attribute value is a gray value, a region with a smaller attribute value may be acquired as the target sub-region. The specific situation can be set according to the user requirement, but is not limited to this.
It can be understood that the finer the overlapping regions of the corresponding candidate images are divided, i.e. the more sub-regions there are, the more accurate the resulting target region.
In one embodiment, the obtained target sub-regions may be stitched to obtain the target region. In another embodiment, in order to ensure the continuity of the edges of the obtained sub-regions, the sub-regions may be stitched first, and filtering may then be applied to remove noise, thereby obtaining the target region.
In one embodiment, the at least two cameras in the electronic device are all telephoto cameras, the electronic device further includes a wide-angle camera, and the method further includes: controlling the wide-angle camera to acquire a wide-angle preview image; matching the at least two candidate preview images with the wide-angle preview image respectively to obtain at least two overlapping areas of the wide-angle preview image; and matching the at least two overlapping areas to obtain a target overlapping area of the wide-angle preview image, wherein a region corresponding to the target overlapping area exists in each of the at least two candidate preview images.
The wide-angle camera is a camera with a focal length shorter than that of a standard camera and a visual angle larger than that of the standard camera. The long-focus camera is a camera with a focal length longer than that of a standard camera and a visual angle smaller than that of the standard camera. The wide-angle camera has a large field angle, that is, an image captured by the wide-angle camera can contain a wide shooting scene. The long-focus camera has a small field angle and a long focal length, and can shoot clear images of long-distance objects.
The wide-angle camera and the telephoto camera are arranged on the same side of the electronic equipment and shoot scenes in the same direction. Therefore, there is a partial overlapping area between the wide preview image acquired by the wide camera and the preview image candidate acquired by the telephoto camera.
The manner of matching the at least two candidate preview images with the wide-angle preview image is not limited, and the matching may be performed by a contour in the image, a feature point in the image, depth information, RGB three-channel information, or the like, but is not limited thereto.
The at least two candidate preview images are matched with the wide-angle preview image to obtain at least two overlapping areas of the wide-angle preview image, and those at least two overlapping areas are then matched to obtain a target overlapping area of the wide-angle preview image. An area corresponding to the target overlapping area exists in each of the at least two candidate preview images; that is, the target overlapping area is the area where the at least two candidate preview images overlap. In the present application, the at least two candidate preview images can thus be referenced against the wide-angle preview image, thereby obtaining the target overlapping area of the at least two candidate preview images.
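Assuming each candidate preview image has already been located in the wide-angle preview (for instance with the feature-matching sketch shown earlier) as an axis-aligned rectangle, the target overlapping area is simply the intersection of those rectangles; a minimal sketch with illustrative names:

```python
def target_overlap(overlap_rects):
    """Intersect the overlap rectangles (x0, y0, x1, y1), all expressed in
    wide-angle preview coordinates, to obtain the target overlapping area
    shared by every candidate preview image."""
    x0 = max(r[0] for r in overlap_rects)
    y0 = max(r[1] for r in overlap_rects)
    x1 = min(r[2] for r in overlap_rects)
    y1 = min(r[3] for r in overlap_rects)
    if x0 >= x1 or y0 >= y1:
        return None   # no area common to all candidate preview images
    return (x0, y0, x1, y1)
```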
According to the above shooting method, the wide-angle camera is controlled to acquire a wide-angle preview image, the at least two candidate preview images are matched with the wide-angle preview image respectively to obtain at least two overlapping areas of the wide-angle preview image, and the at least two overlapping areas are then matched to obtain the target overlapping area. That is, the overlapping area of the at least two candidate preview images is obtained with reference to the wide-angle preview image, which can improve the accuracy of image matching.
It should be understood that, although the steps in the flowcharts of Figs. 3 to 6 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in Figs. 3 to 6 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Fig. 9 is a block diagram showing the structure of a photographing apparatus according to an embodiment. As shown in Fig. 9, there is provided a photographing apparatus 900 applied to an electronic device including a first camera and at least one second camera, the apparatus including a preview image acquisition module 902, a photometry processing module 904, and a photographing module 906, wherein:
a preview image acquisition module 902, configured to control the first camera to acquire a first preview image;
a photometry processing module 904, configured to perform photometric processing on the first preview image to obtain a first exposure parameter, and to send the first exposure parameter to the second camera; and
a photographing module 906, configured to control the first camera and the at least one second camera to shoot based on the first exposure parameter.
The above photographing apparatus controls the first camera to acquire the first preview image, performs photometric processing on the first preview image to obtain the first exposure parameter, and sends the first exposure parameter to the second camera. Since the first exposure parameter is obtained by performing photometric processing only on the first preview image of the first camera, the second camera is spared the process of performing photometric processing on its own preview image to obtain an exposure parameter. After the at least one second camera receives the first exposure parameter, the first camera and the at least one second camera can be controlled to shoot based on the first exposure parameter, which reduces the power consumption of the electronic device during shooting.
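As a rough, non-authoritative sketch of what such photometric processing on the first preview image might look like (center-weighted mean-luma metering that scales an exposure time toward a mid-gray target; the helper name, the weighting, and second_camera.apply_exposure are all assumptions, not part of the patent):

```python
import numpy as np

def meter_first_preview(preview_bgr, current_exposure_us, target_luma=118.0):
    """Center-weighted metering of the first preview; returns a new exposure
    time to be used by the first camera and sent to the second camera."""
    gray = preview_bgr.mean(axis=2)                       # crude luma approximation
    h, w = gray.shape
    center = gray[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    measured = 0.7 * center.mean() + 0.3 * gray.mean()
    gain = np.clip(target_luma / max(measured, 1e-3), 0.25, 4.0)
    return int(current_exposure_us * gain)

# first_exposure = meter_first_preview(first_preview, current_exposure_us=10000)
# second_camera.apply_exposure(first_exposure)            # hypothetical camera API
```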
In an embodiment, the photographing apparatus 900 further includes a contrast acquisition module, configured to determine the contrast of the first preview image according to the first preview image; when the contrast is smaller than a contrast threshold, photometric processing is performed on the first preview image to obtain the first exposure parameter.
In one embodiment, the photographing apparatus 900 further comprises a target area acquisition module, configured to acquire a target area in the first preview image when the contrast is greater than or equal to the contrast threshold; photometric processing is then performed on the target area in the first preview image to obtain the first exposure parameter.
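A small sketch of the contrast-based branching described by these two modules; RMS contrast, the 0.35 threshold, and the target_rect parameter are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def choose_metering_region(preview_gray, contrast_threshold=0.35, target_rect=None):
    """Return the region to meter: the full first preview when its contrast is
    below the threshold, otherwise only the target area (x0, y0, x1, y1)."""
    norm = preview_gray.astype(np.float32) / 255.0
    rms_contrast = norm.std() / max(float(norm.mean()), 1e-6)

    if rms_contrast < contrast_threshold or target_rect is None:
        return preview_gray                       # meter the whole first preview
    x0, y0, x1, y1 = target_rect
    return preview_gray[y0:y1, x0:x1]             # meter only the target area
```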
In an embodiment, the photographing apparatus 900 further includes a synthesis module, configured to: acquire a first image captured by the first camera based on the first exposure parameter and a second image captured by the at least one second camera based on the first exposure parameter; match the first image with the second image to obtain an overlapping area of the first image and an overlapping area of the second image; obtain a target area according to the overlapping area of the first image and the overlapping area of the second image; and synthesize the target area, the first image and the second image to obtain a target image.
In one embodiment, the synthesis module is further configured to: divide the overlapping area of the first image into at least two first sub-regions and divide the overlapping area of the second image into at least two second sub-regions, where the first sub-regions and the second sub-regions are in one-to-one correspondence and completely overlap; determine the attribute value of each first sub-region and the attribute value of the second sub-region corresponding to each first sub-region; compare the attribute value of each first sub-region with the attribute value of the corresponding second sub-region to obtain a comparison result; obtain each target sub-region according to the comparison result; and synthesize all the target sub-regions to obtain the target area.
Fig. 10 is a block diagram showing the structure of a photographing apparatus according to another embodiment. As shown in Fig. 10, there is provided a photographing apparatus 1000 applied to an electronic device including at least two cameras, the apparatus including a preview image acquisition module 1002, a matching module 1004, a photometry processing module 1006, and a photographing module 1008, wherein:
a preview image acquisition module 1002, configured to control the at least two cameras to acquire preview images respectively;
a matching module 1004, configured to select at least two candidate preview images from the preview images and match them to obtain a target overlapping area, the at least two candidate preview images including a preview image corresponding to each camera;
a photometry processing module 1006, configured to perform photometric processing on the target overlapping area to obtain a second exposure parameter, and to send the second exposure parameter to the at least two cameras; and
a photographing module 1008, configured to control the at least two cameras to shoot based on the second exposure parameter.
The above photographing apparatus controls the at least two cameras to acquire preview images respectively, selects at least two candidate preview images from the preview images for matching, and obtains a target overlapping area that exists in the candidate preview image of each camera. By performing photometric processing on the target overlapping area, each camera obtains the same second exposure parameter, and the at least two cameras are then controlled to shoot based on the second exposure parameter. This improves shooting accuracy: images with the same exposure can be obtained, and a target image with consistent exposure across all areas can be synthesized.
In one embodiment, the matching module 1004 is further configured to: control the wide-angle camera to acquire a wide-angle preview image; match the at least two candidate preview images with the wide-angle preview image respectively to obtain at least two overlapping areas of the wide-angle preview image; and match the at least two overlapping areas to obtain a target overlapping area of the wide-angle preview image, wherein an area corresponding to the target overlapping area exists in each of the at least two candidate preview images.
The division of the modules in the above photographing apparatus is for illustration only; in other embodiments, the photographing apparatus may be divided into different modules as needed to implement all or part of its functions.
For the specific definition of the photographing apparatus, reference may be made to the definition of the photographing method above, which is not repeated here. Each module in the photographing apparatus can be implemented wholly or partially by software, hardware, or a combination thereof. The modules can be embedded in or independent of a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them to perform the operations corresponding to each module.
Fig. 11 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in Fig. 11, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capabilities to support the operation of the whole electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the photographing method provided in the embodiments of the present application. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
Each module in the photographing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules constituted by the computer program may be stored in the memory of the terminal or the server. When the computer program is executed by a processor, the steps of the method described in the embodiments of the present application are performed.
The embodiments of the present application also provide a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the photographing method.
A computer program product containing instructions is also provided which, when run on a computer, causes the computer to perform the photographing method.
Any reference to memory, storage, a database, or other media used herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above-described embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A photographing method applied to an electronic device comprising a first camera and at least one second camera, characterized in that the method comprises:
controlling the first camera to acquire a first preview image;
determining the contrast of the first preview image according to the first preview image;
when the contrast is smaller than a contrast threshold, performing photometric processing on the first preview image to obtain a first exposure parameter, and sending the first exposure parameter to the second camera; when the contrast is greater than or equal to the contrast threshold, acquiring a target area in the first preview image, and performing photometric processing on the target area in the first preview image to obtain the first exposure parameter;
controlling the first camera and the at least one second camera to shoot based on the first exposure parameter;
acquiring a first image captured by the first camera based on the first exposure parameter and a second image captured by the at least one second camera based on the first exposure parameter;
matching the first image with the second image to obtain an overlapping area of the first image and an overlapping area of the second image;
dividing the overlapping area of the first image into at least two first sub-areas, and dividing the overlapping area of the second image into at least two second sub-areas, wherein the first sub-areas and the second sub-areas are in one-to-one correspondence and completely overlap;
determining an attribute value of each first sub-area and an attribute value of each second sub-area corresponding to each first sub-area;
comparing the attribute value of the first sub-area with the attribute value of the corresponding second sub-area to obtain a comparison result;
obtaining each target sub-area according to the comparison result;
stitching the target sub-areas, and filtering out noise to obtain a target area;
and synthesizing the target area, the first image and the second image to obtain a target image.
2. The method of claim 1, wherein determining the contrast of the first preview image from the first preview image comprises:
and acquiring a partial region from the first preview image, determining the contrast of the partial region, and taking the contrast of the partial region as the contrast of the first preview image.
3. The method of claim 1, wherein the first exposure parameter is an exposure time, an EV value, or an aperture value.
4. A photographing method applied to an electronic device comprising at least two cameras, wherein the at least two cameras in the electronic device are telephoto cameras, the electronic device further comprises a wide-angle camera, and the wide-angle camera and the telephoto cameras are disposed on the same side of the electronic device and shoot scenes in the same direction, characterized in that the method comprises:
controlling the at least two cameras to respectively acquire preview images; controlling the wide-angle camera to acquire a wide-angle preview image;
selecting at least two candidate preview images from the preview images, and respectively matching the at least two candidate preview images with the wide-angle preview image to obtain at least two overlapping areas of the wide-angle preview image; the at least two candidate preview images comprise a preview image corresponding to each camera in the at least two cameras;
matching the at least two overlapping areas to obtain a target overlapping area of the wide-angle preview image, wherein an area corresponding to the target overlapping area exists in each of the at least two candidate preview images;
performing photometric processing on the target overlapping area to obtain a second exposure parameter, and sending the second exposure parameter to the at least two cameras;
and controlling the at least two cameras to shoot based on the second exposure parameter.
5. The method of claim 4, further comprising:
acquiring at least two candidate images shot by the at least two cameras based on the second exposure parameter;
matching the at least two candidate images to obtain an overlapping area of each candidate image;
obtaining a target area according to the overlapping area of each candidate image;
and synthesizing the target area and each candidate image to obtain a target image.
6. The method of claim 5, further comprising:
dividing the corresponding overlapping areas of the candidate images each into a plurality of identical sub-areas, acquiring the attribute values of the sub-areas, and comparing the attribute values of the corresponding sub-areas to obtain a target sub-area;
and stitching the obtained target sub-areas to obtain a target area.
7. A photographing apparatus applied to an electronic device comprising a first camera and at least one second camera, characterized by comprising:
a preview image acquisition module, configured to control the first camera to acquire a first preview image;
a contrast acquisition module, configured to determine the contrast of the first preview image according to the first preview image;
a photometry processing module, configured to perform photometric processing on the first preview image to obtain a first exposure parameter and send the first exposure parameter to the second camera when the contrast is smaller than a contrast threshold, and, when the contrast is greater than or equal to the contrast threshold, to acquire a target area in the first preview image and perform photometric processing on the target area in the first preview image to obtain the first exposure parameter;
a photographing module, configured to control the first camera and the at least one second camera to shoot based on the first exposure parameter;
a synthesis module, configured to: acquire a first image captured by the first camera based on the first exposure parameter and a second image captured by the at least one second camera based on the first exposure parameter; match the first image with the second image to obtain an overlapping area of the first image and an overlapping area of the second image; divide the overlapping area of the first image into at least two first sub-areas and divide the overlapping area of the second image into at least two second sub-areas, wherein the first sub-areas and the second sub-areas are in one-to-one correspondence and completely overlap; determine an attribute value of each first sub-area and an attribute value of the second sub-area corresponding to each first sub-area; compare the attribute value of each first sub-area with the attribute value of the corresponding second sub-area to obtain a comparison result; obtain each target sub-area according to the comparison result; stitch the target sub-areas and filter out noise to obtain a target area; and synthesize the target area, the first image and the second image to obtain a target image.
8. A photographing apparatus applied to an electronic device comprising at least two cameras, wherein the at least two cameras in the electronic device are telephoto cameras, the electronic device further comprises a wide-angle camera, and the wide-angle camera and the telephoto cameras are disposed on the same side of the electronic device and shoot scenes in the same direction, characterized in that the apparatus comprises:
a preview image acquisition module, configured to control the at least two cameras to respectively acquire preview images, and to control the wide-angle camera to acquire a wide-angle preview image;
a matching module, configured to select at least two candidate preview images from the preview images and match the at least two candidate preview images with the wide-angle preview image respectively to obtain at least two overlapping areas of the wide-angle preview image, the at least two candidate preview images comprising a preview image corresponding to each camera in the at least two cameras, and to match the at least two overlapping areas to obtain a target overlapping area of the wide-angle preview image, wherein an area corresponding to the target overlapping area exists in each of the at least two candidate preview images;
a photometry processing module, configured to perform photometric processing on the target overlapping area to obtain a second exposure parameter, and to send the second exposure parameter to the at least two cameras; and
a photographing module, configured to control the at least two cameras to shoot based on the second exposure parameter.
9. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the photographing method according to any of claims 1 to 6.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201910593943.2A 2019-07-03 2019-07-03 Photographing method and device, electronic equipment and computer readable storage medium Active CN110213494B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910593943.2A CN110213494B (en) 2019-07-03 2019-07-03 Photographing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910593943.2A CN110213494B (en) 2019-07-03 2019-07-03 Photographing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110213494A CN110213494A (en) 2019-09-06
CN110213494B true CN110213494B (en) 2021-05-11

Family

ID=67795986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910593943.2A Active CN110213494B (en) 2019-07-03 2019-07-03 Photographing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110213494B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021051307A1 (en) * 2019-09-18 2021-03-25 深圳市大疆创新科技有限公司 Image color calibration method, camera devices, and image color calibration system
CN112335228A (en) * 2019-11-22 2021-02-05 深圳市大疆创新科技有限公司 Image processing method, image acquisition device, movable platform and storage medium
CN111654609B (en) * 2020-06-12 2021-10-26 杭州海康威视数字技术股份有限公司 Control method for realizing low power consumption of camera and camera
CN112954210B (en) * 2021-02-08 2023-04-18 维沃移动通信(杭州)有限公司 Photographing method and device, electronic equipment and medium
CN113438411A (en) * 2021-05-21 2021-09-24 上海闻泰电子科技有限公司 Image shooting method, image shooting device, computer equipment and computer readable storage medium
CN113572970A (en) * 2021-06-24 2021-10-29 维沃移动通信(杭州)有限公司 Image processing method and device and electronic equipment
CN113630558B (en) * 2021-07-13 2022-11-29 荣耀终端有限公司 Camera exposure method and electronic equipment
CN115802153A (en) * 2021-09-09 2023-03-14 哲库科技(上海)有限公司 Image shooting method and device, computer equipment and storage medium
CN114422665A (en) * 2021-12-23 2022-04-29 广东未来科技有限公司 Shooting method based on multiple cameras and related device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015028549A (en) * 2013-07-30 2015-02-12 キヤノン株式会社 Imaging device
CN108737738A (en) * 2018-04-20 2018-11-02 深圳岚锋创视网络科技有限公司 A kind of panorama camera and its exposure method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5930792B2 (en) * 2012-03-26 2016-06-08 キヤノン株式会社 Imaging apparatus and control method thereof
JP6384000B1 (en) * 2017-05-24 2018-09-05 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd Control device, imaging device, imaging system, moving object, control method, and program
JP7043219B2 (en) * 2017-10-26 2022-03-29 キヤノン株式会社 Image pickup device, control method of image pickup device, and program
CN108012062A (en) * 2017-12-22 2018-05-08 信利光电股份有限公司 A kind of dual camera signal processing system and method
JP2018186514A (en) * 2018-06-13 2018-11-22 パナソニックIpマネジメント株式会社 Image processing device, imaging system having the same and image processing method

Also Published As

Publication number Publication date
CN110213494A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110213494B (en) Photographing method and device, electronic equipment and computer readable storage medium
CN109767467B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109089047B (en) Method and device for controlling focusing, storage medium and electronic equipment
CN110225248B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN107948519B (en) Image processing method, device and equipment
KR102279436B1 (en) Image processing methods, devices and devices
US11431915B2 (en) Image acquisition method, electronic device, and non-transitory computer readable storage medium
CN107948538B (en) Imaging method, imaging device, mobile terminal and storage medium
CN107846556B (en) Imaging method, imaging device, mobile terminal and storage medium
CN108198152B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN107509044B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN112004029B (en) Exposure processing method, exposure processing device, electronic apparatus, and computer-readable storage medium
CN110475067B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109146906B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110166705B (en) High dynamic range HDR image generation method and device, electronic equipment and computer readable storage medium
CN107948617B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN110213498B (en) Image generation method and device, electronic equipment and computer readable storage medium
CN111246100B (en) Anti-shake parameter calibration method and device and electronic equipment
CN110049240B (en) Camera control method and device, electronic equipment and computer readable storage medium
CN107872631B (en) Image shooting method and device based on double cameras and mobile terminal
US11601600B2 (en) Control method and electronic device
CN112087571A (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN112019734B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN108600631B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant