CN117710265A - Image processing method and related device

Image processing method and related device

Info

Publication number: CN117710265A
Application number: CN202311507855.9A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 杨东玥
Current Assignee: Honor Device Co Ltd
Original Assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd

Classifications

    • G06T5/90
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10024 Color image
    • G06T2207/20208 High dynamic range [HDR] image processing
    • G06T2207/20221 Image fusion; Image merging


Abstract

The application provides an image processing method and a related device, relating to the field of image processing, wherein the method comprises the following steps: detecting a first operation; in response to the first operation, turning on a camera; determining that a first preset condition is met, and collecting multiple frames of images, wherein the multiple frames of images comprise an ambient light image and a first image; determining a first group of channel images corresponding to the ambient light image and a second group of channel images corresponding to the first image; determining an exposure gain value according to a first channel image in the first group of channel images and a first channel image in the second group of channel images; and determining a first target image corresponding to the first image according to the exposure gain value. The image processing method provided by the application can effectively improve the color and color temperature of the image obtained by fusing multiple frames of images collected in a dim light scene or a high dynamic range scene.

Description

Image processing method and related device
The present application is a divisional application of Chinese patent application No. 202210089124.6, filed on January 25, 2022, and entitled "Image processing method and related apparatus".
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and related apparatus.
Background
With the widespread use of electronic devices, photographing with electronic devices has become part of people's daily lives. Taking a mobile phone as an example of an electronic device, various algorithms have been developed to improve image quality, for example, multi-frame image fusion.
In some high dynamic range (HDR) scenes or scenes with darker light, for example, night scenes, the illumination is low. When a mobile phone shoots and images, a white flash lamp is therefore used for light filling, and a white flash image is generated correspondingly; the white flash image is then fused with an ambient light image shot without light filling, so that the quality of the finally obtained fused image is improved. The white flash image and the ambient light image are images shot of the same scene to be shot.
However, since the responses of the color channels included in the white flash image and those in the ambient light image do not match, the two images are difficult to register; the color of the generated fused image therefore deviates and its color temperature is inaccurate, and the related art cannot effectively solve this problem. Therefore, how to effectively register the flash image collected during flash light filling with the ambient light image collected without light filling, and how to correct the color of the image collected during flash light filling, are problems to be solved.
Disclosure of Invention
The application provides an image processing method and a related device, which can correct the color and color temperature of an image acquired during flash light supplementing in a scene with low illumination or a scene with a high dynamic range, thereby improving image quality and user experience.
In order to achieve the above purpose, the present application adopts the following technical scheme:
in a first aspect, there is provided an image processing method, the method comprising:
detecting a first operation; in response to the first operation, turning on the camera; determining that a first preset condition is met, and collecting multiple frames of images, wherein the multiple frames of images comprise an ambient light image and a dark red flash image, the ambient light image is an image shot by the camera when the flash lamp is not turned on, and the dark red flash image is an image shot by the camera when the flash lamp is turned on; and obtaining a first target image according to the multiple frames of images.
The wavelength range corresponding to the flash lamp emitting deep red light is 660nm to 700nm, for example.
The embodiment of the application provides an image processing method in which a deep red flash lamp is used for light supplementing: a deep red flash image is collected when the flash lamp emits deep red light, and an ambient light image is collected when the flash lamp emits no light; the multiple frames of images comprising the deep red flash image and the ambient light image are then processed, so that the color and color temperature of the image obtained by fusing multiple frames of images collected in a dim light scene or a high dynamic range scene can be effectively improved.
In a possible implementation manner of the first aspect, obtaining a first target image according to the multi-frame image includes:
determining a first red channel image, a first green channel image and a first blue channel image corresponding to the dark red flash image, and a second red channel image corresponding to the ambient light image;
determining an exposure gain value according to the first red channel image and the second red channel image;
determining a green channel enhanced image according to the first green channel image and the exposure gain value;
determining a blue channel enhanced image according to the first blue channel image and the exposure gain value;
the first target image is determined from the first red channel image, the green channel enhanced image, and the blue channel enhanced image.
The first red channel image is used for representing an image formed by a red channel signal in each pixel included in the dark red flash image, the first green channel image is used for representing an image formed by a green channel signal in each pixel included in the dark red flash image, and the first blue channel image is used for representing an image formed by a blue channel signal in each pixel included in the dark red flash image. The second red channel image is used to represent the image formed by the red channel signal in each pixel comprised by the ambient light image.
In this implementation, since the light emitted when the flash is turned on is deep red light, only the red channel is affected while the green channel and the blue channel are not. The multiple by which the brightness increases when the flash is turned on can therefore be determined from the first red channel image and the second red channel image, and this increase multiple can then be applied, by reference, to the other color channels that were not increased, so as to correct the color and tone of the subsequent image.
In a possible implementation manner of the first aspect, the method further includes:
determining a second green channel image and a second blue channel image corresponding to the ambient light image;
registering the first green channel image with the second green channel image and registering the first blue channel image with the second blue channel image;
determining a green channel enhanced image from the first green channel image and the exposure gain value, comprising:
determining a green channel enhanced image according to the registered first green channel image and the exposure gain value;
determining a blue channel enhanced image according to the first blue channel image and the exposure gain value, including:
And determining a blue channel enhanced image according to the registered first blue channel image and the exposure gain value.
Wherein the second green channel image is used to represent an image formed by the green channel signal in each pixel included in the ambient light image, and the second blue channel image is used to represent an image formed by the blue channel signal in each pixel included in the ambient light image.
In this implementation, since the light emitted when the flash is turned on is deep red light, only the red channel is affected, and the green channel and the blue channel are not affected, the first green channel image and the second green channel image may be directly registered, and the first blue channel image and the second blue channel image may be registered, so as to improve the quality of the subsequent images.
In a possible implementation manner of the first aspect, obtaining a first target image according to the multi-frame image includes:
determining a first red channel image and a first green-blue channel image corresponding to the dark red flash image and a second red channel image corresponding to the ambient light image;
determining an exposure gain value according to the first red channel image and the second red channel image;
Determining a green-blue channel enhanced image according to the first green-blue channel image and the exposure gain value;
and determining the first target image according to the first red channel image and the green-blue channel enhanced image.
The first red channel image is used for representing an image formed by a red channel signal in each pixel included in the dark red flash image, and the first green-blue channel image is used for representing an image formed by a green channel signal and a blue channel signal in each pixel included in the dark red flash image. The second red channel image is used to represent the image formed by the red channel signal in each pixel comprised by the ambient light image.
In this implementation, since the light emitted when the flash is turned on is deep red light, only the red channel is affected while the green-blue channel is not. The multiple by which the brightness increases when the flash is turned on can therefore be determined from the first red channel image and the second red channel image, and this increase multiple can then be applied, by reference, to the other color channels that were not increased, so as to correct the color of the subsequent image. Here, since the light of the flash lamp has no influence on the green channel and the blue channel, when the dark red flash image and the ambient light image are split, the green channel and the blue channel are combined into a green-blue channel image carrying the information of both channels, which reduces the subsequent calculation amount and the power consumption.
In a possible implementation manner of the first aspect, the method further includes:
determining a second green-blue channel image corresponding to the ambient light image;
registering the first green-blue channel image with the second green-blue channel image;
determining a green-blue channel enhanced image according to the first green-blue channel image and the exposure gain value includes:
and determining a green-blue channel enhanced image according to the registered first green-blue channel image and the exposure gain value.
Wherein the second green-blue channel image is used to represent an image formed by the green channel signal and the blue channel signal in each pixel included in the ambient light image.
In this implementation, since the light emitted when the flash is turned on is deep red light, only the red channel is affected while the green channel and the blue channel are not, so the first green-blue channel image and the second green-blue channel image may be directly registered to improve the quality of the subsequent images.
In a possible implementation manner of the first aspect, the method further includes:
and performing tone mapping on the ambient light image and the first target image to determine a second target image.
In this implementation, by tone mapping the first target image and the ambient light image, the fused second target image can be a nonlinear RGB image, so as to improve the visual experience of the user.
In a possible implementation manner of the first aspect, determining an exposure gain value according to the first red channel image and the second red channel image includes:
determining the average value of all red channel signals in the first red channel image as a first value;
determining the average value of all red channel signals in the second red channel image as a second value;
and determining the ratio of the first value to the second value as the exposure gain value.
In a possible implementation manner of the first aspect, the registration method is any one of KLT (Kanade-Lucas-Tomasi), SIFT (scale-invariant feature transform), an optical flow method, and the like.
In a second aspect, there is provided an image processing apparatus comprising means for performing the steps of the first aspect above or any possible implementation of the first aspect.
In a third aspect, an electronic device is provided, comprising: one or more processors and memory;
the memory is coupled with one or more processors for storing computer program code comprising computer instructions that are invoked by the one or more processors to cause the electronic device to perform the steps of processing in the image processing method as provided in the first aspect or any possible implementation of the first aspect.
In a fourth aspect, a chip is provided, comprising: a processor for calling and running a computer program from a memory, causing a chip-mounted device to perform the steps of processing in an image processing method as provided in the first aspect or any possible implementation of the first aspect.
In a fifth aspect, there is provided a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the steps of processing in an image processing method as provided in the first aspect or any possible implementation of the first aspect.
In a sixth aspect, a computer program product is provided, the computer program product comprising a computer readable storage medium storing a computer program for causing a computer to perform the steps of processing in an image processing method as provided in the first aspect or any possible implementation of the first aspect.
The advantages of the second aspect to the sixth aspect may be referred to the advantages of the first aspect, and are not described here again.
Drawings
Fig. 1 is an application scenario provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a channel response provided by an embodiment of the present application;
FIG. 3 is a flow chart of an image processing method provided in the related art;
fig. 4 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another channel response provided by an embodiment of the present application;
FIG. 6 is a flowchart of another image processing method according to an embodiment of the present disclosure;
FIG. 7 is a flowchart of another image processing method according to an embodiment of the present disclosure;
FIG. 8 is a flowchart of another image processing method according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a hardware system suitable for use with the electronic device of the present application;
FIG. 10 is a schematic diagram of a software system suitable for use with the electronic device of the present application;
fig. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
The technical solutions in the present application will be described below with reference to the accompanying drawings.
In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, in the description of the embodiments of the present application, "plurality" means two or more.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
First, some terms in the embodiments of the present application are explained for easy understanding by those skilled in the art.
1. RGB (red, green, blue) color space, or RGB domain, refers to a color model that relates to the structure of a human visual system. All colors are considered to be different combinations of red, green and blue depending on the structure of the human eye.
2. YUV color space refers to a color coding method, where Y represents luminance (luminance or luma), and U and V represent chrominance (chroma). The above RGB color space focuses on the color sensing of human eyes, and the YUV color space focuses on the sensitivity of vision to brightness, and the RGB color space and the YUV color space can be mutually converted.
3. Pixel values refer to a set of color components corresponding to each pixel in a color image in the RGB color space. For example, each pixel corresponds to a set of three primary color components, wherein the three primary color components are red component R, green component G, and blue component B, respectively.
4. Bayer image: an image output by an image sensor based on a bayer-format color filter array. Pixels of a plurality of colors in the image are arranged in the bayer format, and each pixel in a bayer-format image corresponds to the channel signal of only one color. For example, since human vision is sensitive to green, it may be set that green pixels (pixels corresponding to green channel signals) account for 50% of all pixels, while blue pixels (pixels corresponding to blue channel signals) and red pixels (pixels corresponding to red channel signals) each account for 25% of all pixels. The minimum repeating unit of a bayer-format image is one red pixel, two green pixels, and one blue pixel arranged in a 2×2 manner. Images arranged in the bayer format can be considered to be located in the RAW domain. An indexing sketch of this arrangement is given after this list of terms.
5. Registration (image registration) refers to matching the coordinates of different images of the same region obtained by different imaging means. It comprises three steps: geometric correction, projective transformation, and scale unification.
6. A Luminance (LV) value is used to estimate the ambient luminance. One commonly used formula is:
LV = 10 × log2 (Aperture² / Exposure × 100 / ISO × Luma / 46)
wherein Exposure is the exposure time, Aperture is the aperture size, ISO is the sensitivity, and Luma is the average value of Y in the XYZ color space.
7. A dynamic range (DR) value is used to represent the proportion of the overexposed area in the preview image obtained by the camera.
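As an illustration of the bayer arrangement described in term 4, the following is a minimal, hypothetical sketch; the RGGB ordering is one common bayer layout assumed here purely for illustration, and the function name is not taken from this application:

```python
import numpy as np

def split_bayer_rggb(raw: np.ndarray):
    """Split a bayer-format RAW frame into its color planes.

    Assumes the 2x2 repeating unit is [[R, G], [G, B]] (RGGB); other
    sensors may use GRBG, GBRG, or BGGR orderings instead.
    """
    r = raw[0::2, 0::2]    # red pixels: even rows, even columns (25%)
    g1 = raw[0::2, 1::2]   # green pixels on the red rows
    g2 = raw[1::2, 0::2]   # green pixels on the blue rows (g1 + g2 = 50%)
    b = raw[1::2, 1::2]    # blue pixels: odd rows, odd columns (25%)
    return r, g1, g2, b
```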
The foregoing is a simplified description of the terminology involved in the embodiments of the present application, and is not described in detail below.
With the widespread use of electronic devices, photographing with electronic devices has become part of people's daily lives. Taking a mobile phone as an example of an electronic device, various algorithms have been developed to improve image quality, for example, multi-frame image fusion.
In some high dynamic range scenes or scenes with darker light, for example, night scenes, the illumination is low. When a mobile phone shoots and images, a white flash lamp is therefore used for light filling, and a white flash image is generated correspondingly; the white flash image is then fused with an ambient light image shot without light filling, so that the quality of the finally obtained fused image is improved. The white flash image and the ambient light image are images shot of the same scene to be shot.
However, on the one hand, the brightness difference between the white flash image and the ambient light image is relatively large; on the other hand, the brightness ratio of each color channel in the white flash image differs from that in the ambient light image, that is, their responses do not match. The white flash image and the ambient light image are therefore difficult to register, which in turn causes the fusion algorithm to fail, so that the generated fused image has color deviation and inaccurate color temperature, and problems such as ghosts (artifacts) may arise.
Fig. 1 illustrates an application scenario to which the embodiments of the present application are applicable. Fig. 2 shows a schematic diagram of a channel response.
As shown in (a) of fig. 1, in a night scene, the light emitted by a street lamp is yellow, and the illuminated ground, wall surfaces, and the like reflect the yellow light; at this time, if shooting is performed without turning on the flash, the illuminated area in the obtained ambient light image is correspondingly yellow, and the overall color temperature is warm. However, if the white flash is turned on for light filling, as shown in (b) of fig. 1, the white light irradiates the environment to be photographed; first, the overall brightness is improved, and second, the overall color temperature shifts from warm toward cold.
Illustratively, when the white flash is turned on for light filling, the channel response distribution of the three primary color components (the second blue component B2, the second green component G2, and the second red component R2) included in the resulting white flash image photographed by the electronic device is as shown in (a) of fig. 2. When the flash is not turned on for light filling, the channel response distribution of the three primary color components (the first blue component B1, the first green component G1, and the first red component R1) included in the obtained ambient light image photographed by the electronic device is as shown in (b) of fig. 2.
Therefore, when the white flash image and the ambient light image, whose color channel responses do not match, are registered, registration is difficult, because the registration method is based on the brightness of local areas of the images.
In this regard, fig. 3 shows a schematic flow chart of an image processing method provided in the prior art.
As shown in fig. 3, in the prior art, in order to facilitate fusion of the white flash image and the ambient light image, the two images are first registered. Because the illumination difference between the white flash image and the ambient light image is relatively large, before registration, a first gray-scale image corresponding to the white flash image and a second gray-scale image corresponding to the ambient light image are determined first; then, the first gray-scale image and the second gray-scale image are adjusted to a similar gray-scale range or a similar dynamic range; then, a first intermediate image corresponding to the white flash image is determined according to the adjusted first gray-scale image, and a second intermediate image corresponding to the ambient light image is determined according to the adjusted second gray-scale image. In this way, when features in the first intermediate image and the second intermediate image are registered, the number and the spatial distribution of the features extracted from the two images by the registration algorithm are more similar, which improves the registration quality of the registration algorithm.
The gray-scale images corresponding to the white flash image and the ambient light image may be determined using the following formula (one).
Q(λ) = r(λ) × R + g(λ) × G + b(λ) × B    formula (one)
Wherein R, G, and B indicate the red, green, and blue components of any pixel in the white flash image or the ambient light image; Q indicates the gray-scale value of the pixel at the same position; and r(λ), g(λ), and b(λ) indicate the weights corresponding to the red, green, and blue components, respectively. In formula (one), the ratio of the three weights r(λ), g(λ), and b(λ) is fixed.
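As an illustrative sketch of formula (one) only, using the BT.601 luma coefficients as one representative fixed weight set (the related art does not mandate these particular values):

```python
import numpy as np

# Fixed weights r(λ), g(λ), b(λ); the BT.601 luma coefficients are
# assumed here purely as an example of one fixed ratio.
R_W, G_W, B_W = 0.299, 0.587, 0.114

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Apply Q = r(λ)·R + g(λ)·G + b(λ)·B per pixel, per formula (one)."""
    return R_W * rgb[..., 0] + G_W * rgb[..., 1] + B_W * rgb[..., 2]
```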
With the above-described method, although the gray-scale images corresponding to the white flash image and the ambient light image can be determined and adjusted to a similar gray-scale range, this amounts to a single adjustment applied to the red component, the green component, and the blue component as a whole. Thus, even if the adjusted first intermediate image and the adjusted second intermediate image have the same gray-scale value at a certain same position, the corresponding red, green, and blue components of the two images at that position are not the same. Therefore, when the first intermediate image and the second intermediate image are registered, the components corresponding to the same color are not the same; in other words, the responses of the same color channel do not match, so the registration is not accurate enough, which affects the quality of the fused target image. Moreover, the fused target image may have color cast or false color. For example, (c) of fig. 2 shows the channel response distribution of the three primary color components (the third blue component B3, the third green component G3, and the third red component R3) included in the fused image.
In addition, although the illumination intensity in the scene to be photographed can be increased when the electronic device turns on the white flash for light filling, the color temperature of the white light emitted by the white flash differs from the color temperature of the ambient light, so the color temperature of the white flash image acquired during light filling differs from that of the ambient light image acquired without light filling. When the white flash image and the ambient light image, which have different color temperatures, are subsequently fused, the color temperature of the obtained fused image is also inaccurate relative to the color temperature of the real environment.
In view of this, an embodiment of the present application provides an image processing method: a deep red flash is used for light filling, a deep red flash image is collected when the flash emits deep red light, and an ambient light image is collected when the flash emits no light; the multiple frames of images comprising the deep red flash image and the ambient light image are then registered and fused, and the accuracy of the registration algorithm is improved while the exposure gain of each color channel is adjusted, so that the color and color temperature of the image obtained by fusing multiple frames collected in a dark light scene or a high dynamic range scene can be effectively improved.
It should be understood that the scenario shown in fig. 1 is an illustration of an application scenario, and is not limited in any way to the application scenario of the present application. The image processing method provided by the embodiment of the application can be applied to but not limited to the following scenes:
shooting images, recording videos, video calls, video conferencing applications, long- and short-video applications, live video streaming applications, online video class applications, intelligent camera-movement application scenarios, and shooting scenarios such as the video recording function of a system camera, video surveillance, smart peepholes, and the like.
The image processing method provided in the embodiment of the present application is described in detail below with reference to the accompanying drawings.
Fig. 4 shows a flowchart of an image processing method provided in an embodiment of the present application. The method is applied to an electronic device comprising a camera and a flash. The flash emits deep red light when turned on. The deep red flash is friendly to human eyes and causes little visual stimulation.
Alternatively, when the flash lamp emits deep red light, the corresponding internal structure may include a white light source and a deep red filter: white light emitted by the white light source irradiates the deep red filter, and the transmitted deep red light serves as the light emitted by the flash lamp. Alternatively, the internal structure of the flash lamp may include a deep red diode for emitting deep red light.
The wavelength range corresponding to the flash lamp emitting deep red light is 660nm to 700nm, for example, and of course, other wavelength ranges are also possible, which is not limited in any way in the embodiment of the present application.
As shown in fig. 4, the image processing method provided in the embodiment of the present application may include the following S110 to S160. These steps are described in detail below.
S110, acquiring multi-frame images. The multi-frame image includes an ambient light image and a dark red flash image.
The environment light image is an image shot by the camera when the flash lamp is not started, and the dark red flash image is an image shot by the camera when the flash lamp is started.
In some embodiments, one or more image sensors may be included in the electronic device, and then the electronic device may control the one or more image sensors to capture multiple frames of images. In other embodiments, the electronic device may obtain multi-frame images from a local store or from other devices, whether or not an image sensor is included in the electronic device. For example, the user may capture a multi-frame image by using the first electronic device D1, and then send the multi-frame image to the second electronic device D2, where after receiving the multi-frame image, the second electronic device D2 may execute the image processing method provided in the embodiment of the present application to perform image processing. Of course, in the practical application process, the electronic device may also acquire the multi-frame image in other manners, which is not limited in any way in the embodiment of the present application.
Before acquiring the multi-frame images, the electronic device may also first judge the scene to be shot, determine whether multi-frame images need to be collected for the scene, and collect the multi-frame images when needed.
For example, the electronic device may determine whether the scene to be photographed is a dim light scene or a high dynamic range (HDR) scene. When the scene to be shot is determined to be a dim light scene or an HDR scene, multi-frame images are acquired for the scene to be shot; when it is neither a dim light scene nor an HDR scene, no multi-frame images are acquired.
The dim light scene may be determined from a luminance (LV) value; the process may specifically comprise: when the brightness value corresponding to the scene to be shot is smaller than a brightness value threshold, determining that the scene is a dim light scene; otherwise, determining that it is a non-dim-light scene. Here, the example condition that the brightness value is smaller than the brightness value threshold is a first preset condition; in other words, the electronic device will collect multiple frames of images only when the first preset condition is met.
In addition, the HDR scene may be determined according to a dynamic range (DR) value; the process may specifically include: when the dynamic range value corresponding to the scene to be shot is greater than a preset dynamic range value, determining that the scene is an HDR scene; otherwise, determining that it is a non-HDR scene. Of course, other manners may be used to determine whether acquisition of multiple frames of images is required, which is not limited in this embodiment of the present application. Here, the example condition that the dynamic range value is greater than the preset dynamic range value is likewise a first preset condition; in other words, the electronic device will collect multiple frames of images only when the first preset condition is met.
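A minimal sketch of the first preset condition described above; the threshold names and numeric values are illustrative assumptions, not values given in this application:

```python
LV_THRESHOLD = 30.0   # illustrative brightness value threshold
DR_THRESHOLD = 0.05   # illustrative preset dynamic range value

def should_capture_multiframe(lv: float, dr: float) -> bool:
    """First preset condition: dim light scene (low LV) or HDR scene (high DR)."""
    is_dim_light = lv < LV_THRESHOLD   # dim light scene check
    is_hdr = dr > DR_THRESHOLD         # HDR scene check
    return is_dim_light or is_hdr
```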
Before multi-frame images are acquired, a scene to be shot is judged by utilizing a first preset condition, a dim light scene and a high dynamic range scene can be screened out from a plurality of scenes, only the dim light scene and the high dynamic range scene are supplemented by utilizing a dark red flash lamp, and the method provided by the embodiment of the application is utilized for processing, so that the problems occurring when the white flash lamp is used for supplementing light under the two scenes are solved better, and the quality of fused images acquired under the two scenes is improved. In contrast, the image processing method provided by the embodiment of the application does not need to be called for other scenes, so that the power consumption can be reduced, and the cost can be reduced.
It should be appreciated that the multi-frame image may be an image generated directly by the image sensor or may be an image obtained after one or more processing operations have been performed on the image.
It should be understood that the multiple frames of images include 2 or more frames. When more than 2 frames are included, they may include 1 frame of ambient light image and multiple frames of dark red flash images, or multiple frames of ambient light images and 1 frame of dark red flash image, or multiple frames of ambient light images and multiple frames of dark red flash images; the embodiment of the present application does not impose any limitation on this. The multiple frames of images may all be bayer-format images, that is, images in the RAW domain; alternatively, they may all be RGB images, that is, images in the RGB domain. Fig. 4 illustrates an example in which the multiple frames of images are all RGB images.
It should be understood that a multi-frame image is an image continuously photographed on the same scene to be photographed. When shooting is continuously carried out, the flash lamp can be started to collect the dark red flash image, then the flash lamp is closed to collect the environment light image, or the flash lamp can not be started to collect the environment light image, and then the flash lamp is opened to collect the dark red flash image. In this regard, the embodiments of the present application are not limited in any way.
S120, determining a first single-channel image corresponding to each color in the dark red flash image and a second single-channel image corresponding to each color in the ambient light image.
The first single-channel image is used for representing an image formed by channel signals belonging to the same color in each pixel included in the dark red flash image, and the second single-channel image is used for representing an image formed by channel signals belonging to the same color in each pixel included in the ambient light image.
It will be appreciated that the deep red flash image and the ambient light image are RGB images, each pixel in which comprises three primary color components, i.e. each pixel comprises a red channel signal, a green channel signal and a blue channel signal.
Thus, the first single-channel image corresponding to each color in the determined dark red flash image includes a first red channel image, a first green channel image and a first blue channel image, wherein the first red channel image is used for representing an image formed by a red channel signal in each pixel included in the dark red flash image, the first green channel image is used for representing an image formed by a green channel signal in each pixel included in the dark red flash image, and the first blue channel image is used for representing an image formed by a blue channel signal in each pixel included in the dark red flash image. It will be appreciated that each pixel in the first red channel image corresponds to only the red channel signal at the same location in the magenta flash image, each pixel in the first green channel image corresponds to only the green channel signal at the same location in the magenta flash image, and each pixel in the first blue channel image corresponds to only the blue channel signal at the same location in the magenta flash image. It is also understood that the dark red flash image is split into three images containing different channel information.
The determined second channel image corresponding to each color in the ambient light image comprises a second red channel image, a second green channel image and a second blue channel image, wherein the second red channel image is used for representing an image formed by a red channel signal in each pixel included in the ambient light image, the second green channel image is used for representing an image formed by a green channel signal in each pixel included in the ambient light image, and the second blue channel image is used for representing an image formed by a blue channel signal in each pixel included in the ambient light image. It will be appreciated that each pixel in the second red channel image corresponds to only the red channel signal at the same location in the ambient light image, each pixel in the second green channel image corresponds to only the green channel signal at the same location in the ambient light image, and each pixel in the second blue channel image corresponds to only the blue channel signal at the same location in the ambient light image. It is also understood that the ambient light image is split into three images containing different channel information.
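A minimal sketch of S120, assuming both frames are available as H×W×3 RGB arrays (the function and variable names are illustrative):

```python
import numpy as np

def split_channels(rgb: np.ndarray):
    """Split an RGB frame into three single-channel images (S120).

    Each returned plane keeps, at every pixel position, only the
    channel signal of one color.
    """
    return rgb[..., 0], rgb[..., 1], rgb[..., 2]

# first_* from the dark red flash image, second_* from the ambient light image:
# first_red, first_green, first_blue = split_channels(flash_rgb)
# second_red, second_green, second_blue = split_channels(ambient_rgb)
```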
S130, determining an exposure gain value between the first red channel image and the second red channel image.
Alternatively, the average value of all red channel signals in the first red channel image is determined as a first value, the average value of all red channel signals in the second red channel image is determined as a second value, and the ratio of the first value to the second value is then determined as the exposure gain value.
Of course, the exposure gain value may also be determined as a ratio of other values: for example, the mode of all red channel signals in the first red channel image may be determined as the first value, the mode of all red channel signals in the second red channel image may be determined as the second value, and the ratio of the first value to the second value may then be determined as the exposure gain value. This is just one example, and the embodiment of the present application does not limit this in any way.
It will be appreciated that since the light emitted when the flash is on is a deep red light, which affects only the red channel and not the green and blue channels, the multiple of the increase in brightness when the flash is on can be determined by the first and second red channel images and can be referenced to the other color channels that are not increased to correct for the color and hue of the subsequent image.
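A minimal sketch of S130 using the mean-ratio variant (the mode-based first and second values mentioned above would be computed analogously); names and the zero-division guard are illustrative assumptions:

```python
import numpy as np

def exposure_gain(first_red: np.ndarray, second_red: np.ndarray) -> float:
    """Exposure gain value (S130): ratio of the red channel means.

    first_red  -- first red channel image (from the dark red flash image)
    second_red -- second red channel image (from the ambient light image)
    """
    first_value = float(first_red.mean())    # mean of all red signals, flash on
    second_value = float(second_red.mean())  # mean of all red signals, flash off
    return first_value / max(second_value, 1e-6)  # guard against division by zero
```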
And S140, registering the first green channel image with the second green channel image, and registering the first blue channel image with the second blue channel image.
Alternatively, the registration method may be KLT (Kanade-Lucas-Tomasi), SIFT (Scale-invariant feature transform), optical flow, or the like.
It will be appreciated that since the light emitted when the flash is turned on is deep red, it produces a significant gain on the red channel and has no or negligible effect on the green and blue channels. Therefore, the first green channel image and the second green channel image may be registered directly here, and the first blue channel image and the second blue channel image may be registered, to improve the quality of the subsequent images.
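A registration sketch for S140 using SIFT features and a RANSAC homography, one of the options named above (KLT or an optical flow method would be drop-in alternatives); it assumes OpenCV is available and that the inputs are single-channel uint8 images:

```python
import cv2
import numpy as np

def register(moving: np.ndarray, fixed: np.ndarray) -> np.ndarray:
    """Warp `moving` (e.g. the first green channel image) onto `fixed`
    (e.g. the second green channel image) for S140."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(moving, None)
    kp2, des2 = sift.detectAndCompute(fixed, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(moving, H, (fixed.shape[1], fixed.shape[0]))
```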
S150, applying an exposure gain value to the registered first green channel image to obtain a green channel enhanced image; and applying the exposure gain value to the registered first blue channel image to obtain a blue channel enhanced image. And obtaining a first target image according to the first red channel image, the green channel enhanced image and the blue channel enhanced image.
It will be appreciated that applying the exposure gain value to the registered first green channel image means multiplying each green channel signal in the registered first green channel image by the exposure gain value, so that it is enlarged by the same multiple as the red channel signal.
Likewise, applying the exposure gain value to the registered first blue channel image means multiplying each blue channel signal in the registered first blue channel image by the exposure gain value, so that it is enlarged by the same multiple as the red channel signal.
Here, the color temperature of the first target image can be corrected by equally enlarging both the green channel and the blue channel in accordance with the enlargement factor of the red channel, so that the hue of the first target image is kept consistent with the hue of the ambient light image.
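A minimal sketch of S150, assuming float images normalized to [0, 1] and the registered planes and exposure gain value from the previous steps:

```python
import numpy as np

def first_target_image(first_red: np.ndarray,
                       reg_first_green: np.ndarray,
                       reg_first_blue: np.ndarray,
                       gain: float) -> np.ndarray:
    """Build the first target image (S150): green and blue are multiplied
    by the exposure gain value so that they are enlarged by the same
    factor as the red channel, keeping the hue consistent with that of
    the ambient light image."""
    green_enhanced = np.clip(reg_first_green * gain, 0.0, 1.0)
    blue_enhanced = np.clip(reg_first_blue * gain, 0.0, 1.0)
    return np.stack([first_red, green_enhanced, blue_enhanced], axis=-1)
```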
By way of example, fig. 5 illustrates a channel response diagram. Wherein the horizontal axis represents brightness, and the vertical axis represents statistical frequency.
As shown in (b) of fig. 5, which is the channel response diagram corresponding to the ambient light image collected when the flash is not turned on, it can be seen that the red component (r1), the green component (g1), and the blue component (b1) in the collected ambient light image are distributed uniformly, and the color is relatively normal.
As shown in (a) of fig. 5, which is the channel response diagram corresponding to the dark red flash image collected when the flash is turned on, it can be seen that, because the flash adds deep red light to the environment, the red component (r2) in the collected dark red flash image is relatively brighter than the green component (g2) and the blue component (b2), and the color tone is warmer.
Based on this, a corresponding second red channel image is extracted from the ambient light image, as shown in (d) of fig. 5, for representing only red channel information in each pixel in the ambient light image; the corresponding first red channel image is extracted from the deep red flash image, as shown in (c) of fig. 5, and is used only to represent red channel information in each pixel in the deep red flash image.
Then, an exposure gain value can be determined from the second red channel image and the first red channel image, and this gain is then applied by the same factor to the green channel and the blue channel in the dark red flash image, whereby a first target image can be obtained as shown in (e) of fig. 5, which includes a red component (r3), a green component (g3), and a blue component (b3), with a hue consistent with that of the ambient light image.
S160, tone mapping is performed on the first target image and the ambient light image to determine a second target image.
Tone mapping refers to mapping transformation of image colors, for example, the weights of the same color channels of pixels at the same position in a first target image and an ambient light image can be adjusted through tone mapping, so that the color change of a fused second target image is finer, and the second target image after tone mapping treatment can better express information and characteristics in an original image.
Here, after both the registered first green channel signal and the registered first blue channel signal are increased using the exposure gain value, the color tone of the first target image obtained from the first red channel image, the green channel enhanced image, and the blue channel enhanced image can be consistent with that of the ambient light image. However, this process is a linear change, and the obtained first target image only meets the display requirement of the electronic device, not the viewing requirement of human eyes; it may be considered a linear RGB image. Therefore, tone mapping also needs to be performed on the first target image and the ambient light image, so that the fused second target image can be a nonlinear RGB image, improving the visual experience of the user.
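The application does not fix a particular tone mapping operator; as one hedged illustration of S160, the sketch below blends the two linear RGB frames with a fixed weight and then applies a simple global gamma curve. Both the blend weight and the gamma value are assumptions, and a real pipeline would typically use per-pixel, per-channel weights and a more elaborate curve:

```python
import numpy as np

def tone_map_fuse(first_target: np.ndarray, ambient: np.ndarray,
                  weight: float = 0.5, gamma: float = 1.0 / 2.2) -> np.ndarray:
    """Sketch of S160: blend the linear RGB frames, then apply a nonlinear
    curve so the fused second target image is a nonlinear RGB image."""
    fused_linear = weight * first_target + (1.0 - weight) * ambient
    return np.clip(fused_linear, 0.0, 1.0) ** gamma
```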
It should be understood that the second target image will be displayed on the interface of the electronic device as a captured image, or simply stored, and specifically transmitted as needed, which is not limited in any way by the embodiments of the present application.
It should also be understood that the above process is only an example, and the steps may be added or subtracted, and the embodiment of the present application is not limited in any way, and the sequential adjustment may be specifically performed as needed.
The embodiment of the application provides an image processing method in which a flash lamp emitting deep red light is used for light supplementing: a deep red flash image is collected, and an ambient light image is collected when the flash lamp does not emit light. Then, based on the first red channel image corresponding to the dark red flash image and the second red channel image corresponding to the ambient light image, an exposure gain value corresponding to the red channel signal under light supplementing is determined. The exposure gain value is then applied to the green channel and the blue channel corresponding to the dark red flash image, and the two channels are increased by the same multiple to obtain a green channel enhanced image and a blue channel enhanced image. Finally, the first target image determined based on the first red channel image, the green channel enhanced image, and the blue channel enhanced image is tone-mapped together with the ambient light image, so that a second target image with the same tone as the ambient light image and a high dynamic range can be obtained, avoiding the problems of color cast, false color, and inaccurate color temperature of images acquired in a dark light scene or a high dynamic range scene.
Fig. 6 shows a flowchart of another image processing method provided in an embodiment of the present application. The method is applied to an electronic device comprising a camera and a flash that emits deep red light when turned on.
As shown in fig. 6, the image processing method provided in the embodiment of the present application may include the following S210 to S260. These steps are described in detail below.
S210, acquiring multi-frame images. The multi-frame image includes an ambient light image and a dark red flash image.
Here, the description of S210 is the same as that of S110 described above, and will not be repeated here.
S220, determining a first red channel image and a first green-blue channel image corresponding to the dark red flash image, and determining a second red channel image and a second green-blue channel image corresponding to the ambient light image.
It will be appreciated that the deep red flash image and the ambient light image are RGB images, each pixel in which comprises three primary color components, i.e. each pixel comprises a red channel signal, a green channel signal and a blue channel signal.
The first red channel image is used for representing an image formed by a red channel signal in each pixel included in the dark red flash image, and the first green-blue channel image is used for representing an image formed by a green channel signal and a blue channel signal in each pixel included in the dark red flash image. It will be appreciated that each pixel in the first red channel image corresponds to only the red channel signal at the same location in the dark red flash image and each pixel in the first green-blue channel image corresponds to both the green and blue channel signals at the same location in the dark red flash image. It is also understood that the dark red flash image is split into two images containing different channel information.
The second red channel image is used to represent an image formed by the red channel signal in each pixel included in the ambient light image, and the second green-blue channel image is used to represent an image formed by the green channel signal and the blue channel signal in each pixel included in the ambient light image. It will be appreciated that each pixel in the second red channel image corresponds to only the red channel signal at the same location in the ambient light image and each pixel in the second green-blue channel image corresponds to both the green and blue channel signals at the same location in the ambient light image. It is also understood that the ambient light image is split into two images containing different channel information.
S230, determining an exposure gain value between the first red channel image and the second red channel image.
The method for determining the exposure gain value may refer to the description in S130, and will not be described herein.
It will be appreciated that since the light emitted when the flash is turned on is deep red, which affects only the red channel and not the green-blue channel, the multiple of the increase in brightness when the flash is turned on can be determined by the first red channel image and the second red channel image, and this multiple of the increase can be referenced to the other color channels that are not increased to correct the color of the subsequent image. Here, since the light of the flash lamp has no influence on the green channel and the blue channel, when the dark red flash image and the ambient light image are split, the green channel and the blue channel are set to be a green-blue channel image capable of carrying two channel information, so that the subsequent calculation amount is reduced, and the power consumption is reduced.
S240, registering the first green-blue channel image with the second green-blue channel image.
The registration method may refer to the description in S140, and will not be described herein.
It should be appreciated that since the light emitted when the flash is turned on is deep red light, which only affects the red channel and does not affect the green and blue channels, the first green-blue channel image and the second green-blue channel image may be directly registered to improve the quality of subsequent images.
S250, applying the exposure gain value to the registered first green-blue channel image to obtain a green-blue channel enhanced image. And obtaining a first target image according to the first red channel image and the green-blue channel enhanced image.
It will be appreciated that applying the exposure gain value to the registered first green-blue channel image means multiplying each green channel signal and each blue channel signal in the registered first green-blue channel image by the exposure gain value, so that they are enlarged by the same multiple as the red channel signal.
Here, the color temperature of the first target image can be corrected by equally enlarging both the green channel and the blue channel in accordance with the enlargement factor of the red channel, so that the hue in the first target image is kept consistent with the hue of the ambient light image.
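A minimal sketch of the combined green-blue handling used in S220 to S250, assuming float RGB arrays; stacking the two planes lets registration and gain application run once over a single two-channel image:

```python
import numpy as np

def green_blue_image(rgb: np.ndarray) -> np.ndarray:
    """Stack the green and blue planes into one H x W x 2 green-blue
    channel image (S220)."""
    return np.stack([rgb[..., 1], rgb[..., 2]], axis=-1)

# Applying the exposure gain once enhances both channels together (S250):
# green_blue_enhanced = np.clip(registered_green_blue * gain, 0.0, 1.0)
```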
S260, performing tone mapping on the first target image and the ambient light image to determine a second target image.
The tone mapping method may refer to the description in S160, and will not be described herein.
Here, after the registered first green-blue channel image is increased using the exposure gain value, the hue of the first target image obtained from the first red channel image and the green-blue channel enhanced image can be consistent with that of the ambient light image. However, this process is a linear change, and the obtained first target image only meets the display requirement of the electronic device, not the viewing requirement of human eyes; it may be considered a linear RGB image. Therefore, tone mapping further needs to be performed on the first target image and the ambient light image, so that the fused second target image can be a nonlinear RGB image, improving the visual experience of the user.
It should be understood that the second target image will be displayed on the interface of the electronic device as a captured image, or simply stored, and specifically transmitted as needed, which is not limited in any way by the embodiments of the present application.
It should also be understood that the above process is only an example, and the steps may be added or subtracted, and the embodiment of the present application is not limited in any way, and the sequential adjustment may be specifically performed as needed.
The embodiment of the application provides an image processing method, in which a flash lamp emitting deep red light is used for light supplementing: a dark red flash image is collected, and an ambient light image is collected when the flash lamp does not emit light; then, an exposure gain value corresponding to the red channel signal under light supplementing is determined based on the first red channel image corresponding to the dark red flash image and the second red channel image corresponding to the ambient light image; the exposure gain value is then applied to the green channel and the blue channel corresponding to the dark red flash image, so that both channels are raised by the same multiple to obtain a green-blue channel enhanced image; finally, the first target image determined based on the first red channel image and the green-blue channel enhanced image is tone mapped in combination with the ambient light image to obtain a second target image with the same hue as the ambient light image and a high dynamic range, thereby avoiding color cast, false color and inaccurate color temperature of the acquired image in a dark light scene or a high dynamic range scene.
Fig. 7 shows a flowchart of still another image processing method provided in an embodiment of the present application. The method is applied to an electronic device comprising a camera and a flash. The flash is turned on to emit deep blue light.
Alternatively, when the flash light emits deep blue light, the corresponding internal structure may include a white light source and a deep blue filter, and the white light emitted by the white light source irradiates the deep blue filter, and the transmitted deep blue light is used as light emitted by the flash light. Alternatively, the corresponding internal structure of the flash lamp may include a deep blue diode for emitting deep blue light.
The wavelength range corresponding to the flash lamp when emitting deep blue light is, for example, 450nm to 490nm, but of course, other wavelength ranges may be used, which is not limited in any way in the embodiment of the present application.
As shown in fig. 7, the image processing method provided in the embodiment of the present application may include the following S310 to S360. These steps are described in detail below.
S310, acquiring multi-frame images. The multi-frame image includes an ambient light image and a deep blue flash image.
The dark blue flash image is an image shot by the camera when the flash lamp is started.
Other descriptions may refer to the description of S110, and are not repeated here.
S320, determining a third single-channel image corresponding to each color in the dark blue flash image, and determining a second single-channel image corresponding to each color of the ambient light image.
The third single-channel image is used for representing an image formed by channel signals belonging to the same color in each pixel included in the dark blue flash image.
It will be appreciated that the deep blue flash image is an RGB image, and that each pixel in the deep blue flash image comprises three primary color components, i.e. each pixel comprises a red channel signal, a green channel signal and a blue channel signal.
Thus, the third single-channel images corresponding to the colors in the dark blue flash image include a third red channel image, a third green channel image and a third blue channel image, wherein the third red channel image is used for representing an image formed by the red channel signal in each pixel included in the dark blue flash image, the third green channel image is used for representing an image formed by the green channel signal in each pixel included in the dark blue flash image, and the third blue channel image is used for representing an image formed by the blue channel signal in each pixel included in the dark blue flash image. It will be appreciated that each pixel in the third red channel image corresponds to only the red channel signal at the same location in the dark blue flash image, each pixel in the third green channel image corresponds to only the green channel signal at the same location in the dark blue flash image, and each pixel in the third blue channel image corresponds to only the blue channel signal at the same location in the dark blue flash image. It is also understood that the dark blue flash image is split into three images of different channel colors.
The description in S120 may be referred to for the introduction, and will not be repeated here.
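As an illustrative sketch of the splitting of S320 (the array layout and the RGB channel order are assumptions of this sketch):

    import numpy as np

    def split_single_channels(rgb):
        # Split an (H, W, 3) RGB frame into three single-channel images, one per
        # primary color, mirroring the decomposition of the dark blue flash image.
        red = rgb[..., 0]
        green = rgb[..., 1]
        blue = rgb[..., 2]
        return red, green, blue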
S330, determining an exposure gain value between the third blue channel image and the second blue channel image.
The method for determining the exposure gain value may refer to the description in S130, and will not be described herein.
It will be appreciated that since the light emitted when the flash is turned on is deep blue light, which affects only the blue channel and not the red channel or the green channel, the multiple by which the brightness increases when the flash is turned on can be determined from the third blue channel image and the second blue channel image, and this multiple can be applied to the other color channels, which were not increased, to correct the color and hue of the subsequent image.
And S340, registering the third red channel image and the second red channel image, and registering the third green channel image and the second green channel image.
The registration method may refer to the description in S140, and will not be described herein.
It should be appreciated that since the light emitted when the flash is turned on is deep blue light, which only affects the blue channel and does not affect the red and green channels, the third red channel image and the second red channel image may be directly registered, and the third green channel image and the second green channel image may be registered to improve the quality of subsequent images.
S350, applying an exposure gain value to the registered third red channel image to obtain a red channel enhanced image; and applying the exposure gain value to the registered third green channel image to obtain a green channel enhanced image. And obtaining a third target image according to the third blue channel image, the red channel enhanced image and the green channel enhanced image.
It will be appreciated that applying the exposure gain value to the registered third red channel image means that each red channel signal in the registered third red channel image is multiplied by the exposure gain value, so that it is amplified by the same multiple as the blue channel signal; likewise, applying the exposure gain value to the registered third green channel image means that each green channel signal in the registered third green channel image is multiplied by the exposure gain value, so that it is amplified by the same multiple as the blue channel signal.
Here, by enlarging both the red channel and the green channel by the same multiple as the blue channel, the color temperature of the third target image can be corrected, so that the hue of the third target image is kept consistent with the hue of the ambient light image.
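A minimal sketch of S350 under the same assumptions as above (NumPy arrays, RGB order, values in [0, 1]); the function and parameter names are illustrative only:

    import numpy as np

    def compose_third_target(third_blue, registered_red, registered_green, exposure_gain):
        # Amplify the registered red and green channels by the exposure gain value,
        # i.e. by the same multiple the flash gave the blue channel.
        red_enhanced = np.clip(registered_red * exposure_gain, 0.0, 1.0)
        green_enhanced = np.clip(registered_green * exposure_gain, 0.0, 1.0)
        # Recombine into the third target image: enhanced red/green, original blue.
        return np.dstack([red_enhanced, green_enhanced, third_blue])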
S360, performing tone mapping on the third target image and the ambient light image to determine a fourth target image.
For a specific description, reference may be made to the description in S160 above, and the description is not repeated here.
It should be understood that the fourth target image may be displayed on the interface of the electronic device as the captured image, or may only be stored or transmitted, as specifically required; the embodiments of the present application do not limit this in any way.
It should also be understood that the above process is only an example; steps may be added or removed, and their order may be adjusted as required, which is not limited in any way by the embodiment of the present application.
The embodiment of the application provides an image processing method, which is used for supplementing light by utilizing a flash lamp emitting deep blue light, collecting a deep blue flash image, and collecting an ambient light image when the flash lamp does not emit light; then, determining an exposure gain value corresponding to a blue channel signal when light supplementing exists based on a third blue channel image corresponding to the dark blue flash image and a second blue channel image corresponding to the ambient light image; then the exposure gain value is applied to a red channel and a green channel corresponding to the dark blue flash image, and the two channels are improved by the same multiple to obtain a red channel enhanced image and a green channel enhanced image; and then, a third target image determined based on the third blue channel image, the red channel enhanced image and the green channel enhanced image is combined with the ambient light image to carry out tone mapping processing, so that a fourth target image with the same tone as the ambient light image and a high dynamic range can be obtained, and the problems of color cast, false color and inaccurate color temperature of the acquired image in a dark light scene or a high dynamic range scene can be avoided.
Fig. 8 shows a flowchart of still another image processing method provided in an embodiment of the present application. The method is applied to an electronic device comprising a camera and a flash. The flash is turned on to emit deep blue light.
As shown in fig. 8, the image processing method provided by the embodiment of the present application may include the following S410 to S460. These steps are described in detail below.
S410, acquiring multi-frame images. The multi-frame image includes an ambient light image and a deep blue flash image.
The dark blue flash image is an image shot by the camera when the flash lamp is started.
Other descriptions may refer to the description of S110, and are not repeated here.
S420, determining a third blue channel image and a third red-green channel image corresponding to the dark blue flash image, and determining a second blue channel image and a second red-green channel image corresponding to the ambient light image.
It will be appreciated that the deep blue flash image and the ambient light image are RGB images, each pixel in the deep blue flash image and the ambient light image comprising three primary color components, i.e. each pixel comprising a red channel signal, a green channel signal and a blue channel signal.
Wherein the third blue channel image is used for representing an image formed by the blue channel signal in each pixel included in the dark blue flash image, and the third red-green channel image is used for representing an image formed by the red channel signal and the green channel signal in each pixel included in the dark blue flash image. It will be appreciated that each pixel in the third blue channel image corresponds to only the blue channel signal at the same location in the dark blue flash image, and each pixel in the third red-green channel image corresponds to both the red channel signal and the green channel signal at the same location in the dark blue flash image. It is also understood that the dark blue flash image is split into two images containing different channel information.
The second blue channel image is used to represent an image formed by the blue channel signal in each pixel included in the ambient light image, and the second red-green channel image is used to represent an image formed by the red channel signal and the green channel signal in each pixel included in the ambient light image. It will be appreciated that each pixel in the second blue channel image corresponds to only the blue channel signal at the same location in the ambient light image and each pixel in the second red-green channel image corresponds to both the red and green channel signals at the same location in the ambient light image. It is also understood that the ambient light image is split into two images containing different channel information.
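A minimal sketch of the two-way split of S420, assuming NumPy arrays in RGB order; keeping red and green together in one two-channel array is what reduces the number of images handled by the later steps.

    import numpy as np

    def split_blue_and_red_green(rgb):
        # Split an (H, W, 3) frame into a blue channel image and a two-channel
        # red-green image, so subsequent steps process two arrays instead of three.
        blue = rgb[..., 2]
        red_green = rgb[..., 0:2]  # red and green carried together
        return blue, red_green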
S430, determining an exposure gain value between the third blue channel image and the second blue channel image.
The method for determining the exposure gain value may refer to the description in S130, and will not be described herein.
It will be appreciated that since the light emitted when the flash is turned on is deep blue light, which affects only the blue channel and not the red channel or the green channel, the multiple by which the brightness increases when the flash is turned on can be determined from the third blue channel image and the second blue channel image, and this multiple can be applied to the other color channels, which were not increased, to correct the color of the subsequent image. Here, since the light of the flash lamp has no influence on the red channel and the green channel, when the dark blue flash image and the ambient light image are split, the red channel and the green channel may be combined into a single red-green channel image carrying the information of both channels, thereby reducing the subsequent calculation amount and the power consumption.
S440, registering the third red-green channel image with the second red-green channel image.
The registration method may refer to the description in S140, and will not be described herein.
It should be appreciated that since the light emitted when the flash is turned on is deep blue light, which affects only the blue channel and not the red channel or the green channel, the third red-green channel image and the second red-green channel image may be registered directly to improve the quality of subsequent images.
S450, applying the exposure gain value to the registered third red-green channel image to obtain a red-green channel enhanced image. And obtaining a third target image according to the third blue channel image and the red-green channel enhanced image.
It will be appreciated that applying the exposure gain value to the registered third red-green channel image means that each red channel signal and each green channel signal in the registered third red-green channel image is multiplied by the exposure gain value, so that both are amplified by the same multiple as the blue channel signal.
Here, by enlarging both the red channel and the green channel by the same multiple as the blue channel, the color temperature of the third target image can be corrected, so that the hue of the third target image is kept consistent with the hue of the ambient light image.
S460, performing tone mapping on the third target image and the ambient light image to determine a fourth target image.
The tone mapping method may refer to the description in S160, and will not be described herein.
Here, after the registered third red-green channel image is amplified by the exposure gain value, the hue of the third target image obtained from the third blue channel image and the red-green channel enhanced image can be consistent with the hue of the ambient light image. However, this amplification is a linear change, so the third target image is essentially a linear RGB image: it meets the display requirement of the electronic device but not the viewing characteristics of human eyes. Tone mapping therefore further needs to be performed on the third target image and the ambient light image, so that the fused fourth target image is a nonlinear RGB image, improving the visual experience of the user.
It should be understood that the fourth target image may be displayed on the interface of the electronic device as the captured image, or may only be stored or transmitted, as specifically required; the embodiments of the present application do not limit this in any way.
It should also be understood that the above process is only an example; steps may be added or removed, and their order may be adjusted as required, which is not limited in any way by the embodiment of the present application.
The embodiment of the application provides an image processing method, which is used for supplementing light by utilizing a flash lamp emitting deep blue light, collecting a deep blue flash image, and collecting an ambient light image when the flash lamp does not emit light; then, determining an exposure gain value corresponding to a blue channel signal when light supplementing exists based on a third blue channel image corresponding to the dark blue flash image and a second blue channel image corresponding to the ambient light image; then the exposure gain value is applied to a red channel and a green channel corresponding to the dark blue flash image, and the two channels are improved by the same multiple to obtain a red-green channel enhanced image; and then, carrying out tone mapping processing on the third target image determined based on the third blue channel image and the red-green channel enhanced image and combining the ambient light image to obtain a fourth target image with the same tone as the ambient light image and a high dynamic range, so that the problems of color cast, false color and inaccurate color temperature of the acquired image in a dark light scene or a high dynamic range scene can be avoided.
Based on the above, in a night scene environment with low illuminance, the image quality obtained by direct shooting is very poor. In the prior art, a white flash is usually used for light filling, and the acquired multi-frame images (including the white flash image and the ambient light image) are fused by an existing image processing method. Because the illuminance is low and the white flash supplements white light, the gains of the color channels of the fused image differ, so color cast and color temperature changes occur in partial regions of the fused image, resulting in a very poor user experience.
In contrast, when the dark red flash lamp provided by the embodiment of the application is used for light supplementing and the image processing method provided by the embodiment of the application is used to process the multi-frame images (including the dark red flash image and the ambient light image), the dark red flash image changes only the red channel information relative to the ambient light image. By determining the amplification multiple of the red channel, the other color channels can be amplified by the same multiple, so that the color and the color temperature of the real scene can be well restored, problems such as color cast, false color and inaccurate color temperature are avoided, the quality of the fused target image is improved, and the user experience is improved.
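To make the contrast above concrete, the following end-to-end sketch strings the steps of the dark red variant together (split, gain determination, registration, enhancement, recombination, tone mapping). It is an illustration under stated assumptions only: NumPy arrays in RGB order with values in [0, 1], and caller-supplied register and tone_map functions standing in for the registration of S240/S140 and the tone mapping of S260/S160.

    import numpy as np

    def process_deep_red_pair(flash_rgb, ambient_rgb, register, tone_map):
        # Split both frames into a red channel and a combined green-blue image.
        flash_red, flash_gb = flash_rgb[..., 0], flash_rgb[..., 1:3]
        ambient_red, ambient_gb = ambient_rgb[..., 0], ambient_rgb[..., 1:3]
        # Exposure gain value: ratio of mean red responses with / without flash
        # (the epsilon guard is an added assumption against division by zero).
        gain = float(flash_red.mean()) / max(float(ambient_red.mean()), 1e-6)
        # Register the flash green-blue channels against the ambient ones.
        flash_gb = register(flash_gb, ambient_gb)
        # Amplify green and blue by the same multiple the flash gave the red channel.
        enhanced_gb = np.clip(flash_gb * gain, 0.0, 1.0)
        first_target = np.dstack([flash_red, enhanced_gb[..., 0], enhanced_gb[..., 1]])
        # Tone map the linear result (the fusion with the ambient image, per S260,
        # is omitted from this sketch).
        return tone_map(first_target)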
The image processing method and the related display interface and effect graph provided in the embodiments of the present application are described in detail above in conjunction with fig. 1 to 8; the electronic device, the apparatus, and the chip provided in the embodiments of the present application will be described in detail below with reference to fig. 9 to 12. It should be understood that the electronic device, the apparatus and the chip in the embodiments of the present application may perform the various image processing methods in the embodiments of the present application, that is, the specific working processes of the following various products may refer to the corresponding processes in the embodiments of the foregoing methods.
Fig. 9 shows a schematic structural diagram of an electronic device suitable for use in the present application. The electronic device 100 may be used to implement the methods described in the method embodiments described above.
The electronic device 100 may be a cell phone, a smart screen, a tablet computer, a wearable electronic device, an in-vehicle electronic device, an augmented reality (augmented reality, AR) device, a Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), a projector, etc., and the specific type of the electronic device 100 is not limited in the embodiments of the present application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The configuration shown in fig. 9 does not constitute a specific limitation on the electronic apparatus 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than those shown in FIG. 9, or electronic device 100 may include a combination of some of the components shown in FIG. 9, or electronic device 100 may include sub-components of some of the components shown in FIG. 9. The components shown in fig. 9 may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include at least one of the following processing units: application processors (application processor, AP), modem processors, graphics processors (graphics processing unit, GPU), image signal processors (image signal processor, ISP), controllers, video codecs, digital signal processors (digital signal processor, DSP), baseband processors, neural-Network Processors (NPU). The different processing units may be separate devices or integrated devices.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to reuse the instructions or data, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves the efficiency of the system.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. The ISP can carry out algorithm optimization on noise, brightness and color of the image, and can optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture images or video. The shooting function can be triggered and started by an application program instruction, for example to capture an image of any scene. The camera may include an imaging lens, an optical filter, an image sensor, and the like. Light rays emitted or reflected by an object enter the imaging lens, pass through the optical filter and finally converge on the image sensor. The imaging lens is mainly used for converging and imaging the light emitted or reflected by all objects within the shooting view angle (also called the scene to be shot or the target scene, that is, the scene image the user expects to shoot); the optical filter is mainly used for filtering out redundant light waves in the light (for example light waves other than visible light, such as infrared light); the image sensor is mainly used for performing photoelectric conversion on the received optical signal, converting it into an electrical signal, and inputting the electrical signal into the processor 110 for subsequent processing. The cameras 193 may be located in front of the electronic device 100 or at the back of the electronic device 100, and the specific number and arrangement of the cameras may be set according to requirements, which is not limited in this application.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that the electronic device 100 may play or record video in a variety of encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The internal memory 121 may also store software codes of the image processing method provided in the embodiment of the present application, and when the processor 110 runs the software codes, the process steps of the image processing method are executed, so as to obtain an image with higher definition.
The internal memory 121 may also store photographed images.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music are stored in an external memory card.
Of course, the software code of the image processing method provided in the embodiment of the present application may also be stored in an external memory, and the processor 110 may execute the software code through the external memory interface 120 to execute the flow steps of the image processing method, so as to obtain an image with higher definition. The image captured by the electronic device 100 may also be stored in an external memory.
It should be understood that the user may specify whether the image is stored in the internal memory 121 or the external memory. For example, when the electronic device 100 is currently connected to the external memory, if the electronic device 100 captures 1 frame of image, a prompt message may be popped up to prompt the user whether to store the image in the external memory or the internal memory; of course, other specified manners are possible, and the embodiments of the present application do not limit this in any way; alternatively, the electronic device 100 may automatically store the image in the external memory when detecting that the memory amount of the internal memory 121 is less than the preset amount.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A may be of various types, such as a resistive pressure sensor, an inductive pressure sensor, or a capacitive pressure sensor. The capacitive pressure sensor may be a device comprising at least two parallel plates with conductive material, and when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure based on the change in capacitance. When a touch operation acts on the display screen 194, the electronic apparatus 100 detects the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location, but at different touch operation strengths, may correspond to different operation instructions. For example: executing an instruction for checking the short message when the touch operation with the touch operation intensity smaller than the first pressure threshold acts on the short message application icon; and executing the instruction of newly creating the short message when the touch operation with the touch operation intensity being larger than or equal to the first pressure threshold acts on the short message application icon.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x-axis, y-axis, and z-axis) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B can also be used for scenes such as navigation and motion sensing games.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of a flip cover. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D, and may set features such as automatic unlocking upon flip opening according to the detected open or closed state of the leather case or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically, x-axis, y-axis, and z-axis). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The acceleration sensor 180E may also be used to recognize the gesture of the electronic device 100 as an input parameter for applications such as landscape switching and pedometer.
The distance sensor 180F is used to measure a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, for example, in a shooting scene, the electronic device 100 may range using the distance sensor 180F to achieve fast focus.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, for example, a photodiode. The LED may be an infrared LED. The electronic device 100 emits infrared light outward through the LED. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When the reflected light is detected, the electronic device 100 may determine that an object is present nearby. When no reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect whether the user holds the electronic device 100 close to the ear for talking, so as to automatically extinguish the screen for power saving. The proximity light sensor 180G may also be used for automatic unlocking and automatic screen locking in holster mode or pocket mode.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to perform functions such as unlocking, accessing an application lock, taking a photograph, and receiving an incoming call.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to prevent a low temperature from causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
The touch sensor 180K, also referred to as a touch device. The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a touch screen. The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor 180K may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 and at a different location than the display 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure beating signal. In some embodiments, the bone conduction sensor 180M may also be provided in an earphone to form a bone conduction earphone. The audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power-on key and a volume key. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive a key input signal and implement a function related to the key input signal.
The motor 191 may generate vibration. The motor 191 may be used for incoming call alerting as well as for touch feedback. The motor 191 may generate different vibration feedback effects for touch operations acting on different applications. The motor 191 may also produce different vibration feedback effects for touch operations acting on different areas of the display screen 194. Different application scenarios (e.g., time alert, receipt message, alarm clock, and game) may correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, which may be used to indicate the charging state and battery level changes, or to indicate messages, missed calls, and notifications.
In the embodiment of the present application, the camera 193 may capture multiple frames of images, and the processor 110 performs image processing (which may include tone mapping and the like) on the multiple frames of images to obtain a target image with a better color effect. The processor 110 may then control the display 194 to present the processed target image, that is, an image captured in a scene with low illuminance or in a high dynamic range scene.
The hardware system of the electronic device 100 is described in detail above, and the software system of the electronic device 100 is described below. The software system may employ a layered architecture, an event-driven architecture, a microkernel architecture, a micro-service architecture, or a cloud architecture; the embodiments of the present application take the layered architecture as an example to describe the software system of the electronic device 100.
As shown in fig. 10, the software system using the hierarchical architecture is divided into several layers, each of which has a clear role and division. The layers communicate with each other through a software interface. In some embodiments, the software system may be divided into five layers, from top to bottom, an application layer 210, an application framework layer 220, a hardware abstraction layer 230, a driver layer 240, and a hardware layer 250, respectively.
The application layer 210 may include cameras, gallery, and may also include calendar, phone, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The application framework layer 220 provides an application access interface and programming framework for the applications of the application layer 210.
For example, the application framework layer includes a camera access interface for providing a photographing service of a camera through camera management and a camera device.
Camera management in the application framework layer 220 is used to manage cameras. The camera management may obtain parameters of the camera, for example, determine an operating state of the camera, and the like.
The camera devices in the application framework layer 220 are used to provide a data access interface between the camera devices and camera management.
The hardware abstraction layer 230 is used to abstract the hardware. For example, the hardware abstraction layer may include a camera hardware abstraction layer and other hardware device abstraction layers; the camera hardware abstract layer may include a camera device 1, a camera device 2, and the like; the camera hardware abstraction layer may be coupled to a camera algorithm library, and the camera hardware abstraction layer may invoke algorithms in the camera algorithm library.
The driver layer 240 is used to provide drivers for different hardware devices. For example, the drive layer may include a camera drive; a digital signal processor driver and a graphics processor driver.
The hardware layer 250 may include sensors, image signal processors, digital signal processors, graphics processors, and other hardware devices. The sensor may include the sensor 1, the sensor 2, and the like, and may also include a depth sensor (TOF) and a multispectral sensor, and the like, which are not limited in any way.
The workflow of the software system of the electronic device 100 is illustrated in connection with displaying a photo scene.
When a user performs a click operation on the touch sensor 180K, after the camera APP is awakened by the click operation, each camera device of the camera hardware abstraction layer is invoked through the camera access interface. For example, the camera hardware abstraction layer may send an instruction for calling a certain camera to the camera device driver, and at the same time, the camera algorithm library starts to load the image processing method utilized by the embodiment of the present application.
When a sensor of a hardware layer is called, for example, a sensor 1 in a certain camera is called to acquire an ambient light image; when the flash and sensor of the hardware layer are called, a dark red flash image is acquired. The multi-frame image (comprising the ambient light image and the dark red flash image) is processed by an image signal processor and then returned to a hardware abstraction layer, and the image processing method in the loaded camera algorithm library is used for carrying out registration, tone mapping and other processing to generate a second target image.
And sending the obtained second target image back to the camera application for display and storage through the camera hardware abstraction layer and the camera access interface.
An embodiment of the device of the present application will be described in detail below in conjunction with fig. 11. It should be understood that the apparatus in the embodiments of the present application may perform the methods in the embodiments of the present application, that is, specific working procedures of the following various products may refer to corresponding procedures in the embodiments of the methods.
Fig. 11 is a schematic structural diagram of an image processing apparatus 300 according to an embodiment of the present application. The image processing apparatus 300 includes an acquisition module 310 and a processing module 320.
The processing module 320 is configured to: detect a first operation; turn on the camera in response to the first operation; and determine whether a first preset condition is met.
The obtaining module 310 is configured to collect, when the first preset condition is met, a plurality of images including an ambient light image and a dark red flash image, where the ambient light image is an image captured by a camera when the flash is not turned on, and the dark red flash image is an image captured by the camera when the flash is turned on.
The processing module 320 is configured to obtain a first target image according to the multi-frame image.
Optionally, as an embodiment, the processing module 320 is further configured to:
determining a first red channel image, a first green channel image and a first blue channel image corresponding to the dark red flash image, and a second red channel image corresponding to the ambient light image; determining an exposure gain value according to the first red channel image and the second red channel image; determining a green channel enhanced image according to the first green channel image and the exposure gain value; determining a blue channel enhanced image according to the first blue channel image and the exposure gain value; a first target image is determined from the first red channel image, the green channel enhanced image, and the blue channel enhanced image.
Optionally, as an embodiment, the processing module 320 is further configured to: determining a second green channel image and a second blue channel image corresponding to the ambient light image; registering the first green channel image with the second green channel image, registering the first blue channel image with the second blue channel image; determining a green channel enhanced image according to the registered first green channel image and the exposure gain value; and determining a blue channel enhanced image according to the registered first blue channel image and the exposure gain value.
Optionally, as an embodiment, the processing module 320 is further configured to: determining a first red channel image and a first green-blue channel image corresponding to the dark red flash image and a second red channel image corresponding to the ambient light image; determining an exposure gain value according to the first red channel image and the second red channel image; determining a green-blue channel enhanced image according to the first green-blue channel image and the exposure gain value; a first target image is determined from the first red channel image and the green-blue channel enhancement image.
Optionally, as an embodiment, the processing module 320 is further configured to: determining a second green-blue channel image corresponding to the ambient light image; registering the first green-blue channel image with the second green-blue channel image; and determining the green-blue channel enhanced image according to the registered first green-blue channel image and the exposure gain value.
Optionally, as an embodiment, the processing module 320 is further configured to: the ambient light image and the first target image are tone mapped to determine a second target image.
Optionally, as an embodiment, the processing module 320 is further configured to: determining the average value of all red channel signals in the first red channel image as a first value; determining the average value of all red channel signals in the second red channel image as a second value; the ratio of the first value and the second value is determined as an exposure gain value.
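A one-function sketch of this mean-ratio rule; NumPy and the epsilon guard against division by zero are assumptions added for robustness:

    import numpy as np

    def exposure_gain(first_red, second_red, eps=1e-6):
        # First value: mean of all red channel signals in the first red channel image.
        # Second value: mean of all red channel signals in the second red channel image.
        # The exposure gain value is the ratio of the first value to the second value.
        return float(np.mean(first_red)) / max(float(np.mean(second_red)), eps)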
Optionally, the registration may be performed by any one of methods such as KLT (Kanade-Lucas-Tomasi) tracking, SIFT (scale-invariant feature transform) matching, or an optical flow method.
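As one hedged example of the SIFT option, the sketch below matches SIFT features and warps one image onto the other with a RANSAC homography. OpenCV is an implementation choice of this sketch, not something the embodiment prescribes, and single-channel uint8 images are assumed.

    import cv2
    import numpy as np

    def register_sift(moving, reference):
        # Detect SIFT keypoints and descriptors in both images.
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(moving, None)
        kp2, des2 = sift.detectAndCompute(reference, None)
        # Match descriptors and keep distinctive matches via Lowe's ratio test.
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # Estimate a homography with RANSAC and warp the moving image onto the reference.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = reference.shape[:2]
        return cv2.warpPerspective(moving, H, (w, h))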
The image processing apparatus 300 is embodied in the form of a functional module. The term "module" herein may be implemented in software and/or hardware, and is not specifically limited thereto.
For example, a "module" may be a software program, a hardware circuit, or a combination of both that implements the functionality described above. The hardware circuitry may include application specific integrated circuits (application specific integrated circuit, ASICs), electronic circuits, processors (e.g., shared, proprietary, or group processors, etc.) and memory for executing one or more software or firmware programs, merged logic circuits, and/or other suitable components that support the described functions.
Thus, the elements of the examples described in the embodiments of the present application can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Embodiments of the present application also provide a computer-readable storage medium having computer instructions stored therein; when the computer instructions run on the image processing apparatus 300, the image processing apparatus 300 is caused to perform the image processing method shown above.
The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any usable medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium, or a semiconductor medium (e.g., a solid state disk (SSD)), and the like.
The present embodiments also provide a computer program product comprising computer instructions which, when run on the image processing apparatus 300, enable the image processing apparatus 300 to perform the image processing method shown in the foregoing.
Fig. 12 is a schematic structural diagram of a chip according to an embodiment of the present application. The chip shown in fig. 12 may be a general-purpose processor or a special-purpose processor. The chip includes a processor 401. Wherein the processor 401 is configured to support the image processing apparatus 300 to execute the technical solution described above.
Optionally, the chip further comprises a transceiver 402, where the transceiver 402 is configured to be controlled by the processor 401 and is configured to support the image processing apparatus 300 to perform the foregoing technical solution.
Optionally, the chip shown in fig. 12 may further include: a storage medium 403.
It should be noted that the chip shown in fig. 12 may be implemented using the following circuits or devices: one or more field programmable gate arrays (field programmable gate array, FPGA), programmable logic devices (programmable logic device, PLD), controllers, state machines, gate logic, discrete hardware components, any other suitable circuit or combination of circuits capable of performing the various functions described throughout this application.
The electronic device, the image processing apparatus 300, the computer storage medium, the computer program product, and the chip provided in the embodiments of the present application are all configured to execute the method provided above, so that the advantages achieved by the method provided above can be referred to the advantages corresponding to the method provided above, and are not described herein again.
It should be understood that the foregoing is only intended to assist those skilled in the art in better understanding the embodiments of the present application and is not intended to limit their scope. It will be apparent to those skilled in the art from the foregoing examples that various equivalent modifications or variations can be made: for example, certain steps of the methods described above may be unnecessary in some embodiments, new steps may be added, or any two or more of the above embodiments may be combined. Such modifications, variations, or combinations are also within the scope of the embodiments of the present application.
It should also be understood that the foregoing description of embodiments of the present application focuses on highlighting differences between the various embodiments and that the same or similar elements not mentioned may be referred to each other and are not described in detail herein for brevity.
It should be further understood that the sequence numbers of the above processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and the internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
It should be further understood that, in the embodiments of the present application, the "preset" and "predefined" may be implemented by pre-storing corresponding codes, tables, or other manners that may be used to indicate relevant information in a device (including, for example, an electronic device), and the present application is not limited to a specific implementation manner thereof.
It should also be understood that the manner, condition, class and division of the embodiments in the embodiments of the present application are for convenience of description only and should not be construed as being particularly limited, and the various manners, classes, conditions and features of the embodiments may be combined without contradiction.
It is also to be understood that in the various embodiments of the application, terms and/or descriptions of the various embodiments are consistent and may be referenced to one another in the absence of a particular explanation or logic conflict, and that the features of the various embodiments may be combined to form new embodiments in accordance with their inherent logic relationships.
Finally, it should be noted that: the foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. An image processing method, characterized by being applied to an electronic device including a camera and a flash, the method comprising: detecting a first operation;
responsive to the first operation, turning on the camera;
collecting multiple frames of images, wherein the multiple frames of images comprise an ambient light image and a first image, the ambient light image is an image shot by the camera when the flash lamp is not started, and the first image is an image shot by the camera when the flash lamp is started;
determining a first group of channel images corresponding to the ambient light image and a second group of channel images corresponding to the first image, wherein the first group of channel images and the second group of channel images at least comprise first channel images belonging to the same single channel, and the single channel corresponds to the wavelength range of light emitted when the flash lamp is started;
determining an exposure gain value according to a first channel image in the first set of channel images and a first channel image in the second set of channel images;
and determining a first target image corresponding to the first image according to the exposure gain value.
2. The method of claim 1, wherein determining a first target image corresponding to the first image according to the exposure gain value comprises:
determining an enhanced image corresponding to a second channel image in the second group of channel images according to the exposure gain value, wherein the second channel image is different from the first channel image;
and determining the first target image according to a first channel image of the second group of channel images and the enhanced image.
3. The method of claim 2, wherein the second channel image in the second group of channel images comprises a second single-channel image and a third single-channel image;
and determining an enhanced image corresponding to the second channel image in the second group of channel images according to the exposure gain value comprises:
determining a second single-channel enhanced image corresponding to the second single-channel image according to the exposure gain value;
and determining a third single-channel enhanced image corresponding to the third single-channel image according to the exposure gain value, wherein the enhanced image comprises the second single-channel enhanced image and the third single-channel enhanced image.
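Claims 2 and 3 leave the exact enhancement operator open ("according to the exposure gain value"); one plausible reading, sketched below under that assumption and continuing the names of the sketch after claim 1, is to lift the flash frame's green (second) and blue (third) single-channel images by the reciprocal of the gain and merge them with the flash frame's red channel to form the first target image:

```python
import numpy as np

def enhance_channel(channel: np.ndarray, gain: float) -> np.ndarray:
    """Enhance a single-channel image 'according to the exposure gain
    value'. Dividing by the claim-7 gain (ambient mean / flash mean,
    typically < 1) lifts the dim channel toward the flash frame's red
    exposure level; the exact operator is an assumption, as the claims
    do not fix it."""
    return np.clip(channel / max(gain, 1e-6), 0.0, 255.0)

def first_target_image(second_group: list, gain: float) -> np.ndarray:
    """Claims 2-3: keep the flash frame's red (first channel) image,
    enhance the second and third single-channel images with the same
    gain, and merge the three planes into the first target image."""
    red = second_group[0]
    green_enhanced = enhance_channel(second_group[1], gain)
    blue_enhanced = enhance_channel(second_group[2], gain)
    return np.stack([red, green_enhanced, blue_enhanced],
                    axis=-1).astype(np.uint8)
```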
4. The method of claim 3, wherein the first group of channel images further comprises a third channel image, the third channel image comprising a fourth single-channel image and a fifth single-channel image;
the method further comprising:
registering the second single-channel image with the fourth single-channel image, and registering the third single-channel image with the fifth single-channel image;
wherein determining a second single-channel enhanced image corresponding to the second single-channel image according to the exposure gain value comprises:
determining the second single-channel enhanced image corresponding to the registered second single-channel image according to the exposure gain value;
and wherein determining a third single-channel enhanced image corresponding to the third single-channel image according to the exposure gain value comprises:
determining the third single-channel enhanced image corresponding to the registered third single-channel image according to the exposure gain value.
5. The method of claim 2, wherein the second channel image in the second group of channel images is a second dual-channel image;
and determining an enhanced image corresponding to the second channel image in the second group of channel images according to the exposure gain value comprises:
determining a second dual-channel enhanced image corresponding to the second dual-channel image according to the exposure gain value, wherein the second dual-channel enhanced image is the enhanced image.
6. The method of claim 5, wherein the first group of channel images further comprises a first dual-channel image, the method further comprising:
registering the second dual-channel image with the first dual-channel image;
wherein determining a second dual-channel enhanced image corresponding to the second dual-channel image according to the exposure gain value comprises:
determining the second dual-channel enhanced image corresponding to the registered second dual-channel image according to the exposure gain value.
7. The method of any one of claims 1 to 6, wherein determining an exposure gain value according to a first channel image in the first group of channel images and a first channel image in the second group of channel images comprises:
determining the mean value of all first channel signals of a first channel image in the first group of channel images as a first value;
determining the mean value of all first channel signals of the first channel images in the second group of channel images as a second value;
and determining the ratio of the first value to the second value as the exposure gain value.
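In equation form, using editor's notation rather than the claims' wording, with R^a_i and R^f_i denoting the N pixel values of the first channel images of the ambient and flash frames respectively, the exposure gain value g of claim 7 is:

```latex
g \;=\; \frac{\tfrac{1}{N}\sum_{i=1}^{N} R^{a}_{i}}
             {\tfrac{1}{N}\sum_{i=1}^{N} R^{f}_{i}}
```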
8. The method according to claim 4 or 6, wherein the registration is performed by any one of a KLT (Kanade-Lucas-Tomasi) method, a SIFT (scale-invariant feature transform) method, an optical flow method, or the like.
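As a hedged sketch of the KLT option named in claim 8, using OpenCV's pyramidal Lucas-Kanade tracker; the partial-affine motion model and all parameter values below are the editor's choices, not requirements of the claim:

```python
import cv2
import numpy as np

def register_klt(moving: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Align an 8-bit single-channel image to a reference with KLT
    (Kanade-Lucas-Tomasi) tracking, one registration option of claim 8."""
    # Detect corner features in the reference frame.
    pts_ref = cv2.goodFeaturesToTrack(reference, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7)
    # Track the features into the moving frame (pyramidal Lucas-Kanade).
    pts_mov, status, _err = cv2.calcOpticalFlowPyrLK(reference, moving,
                                                     pts_ref, None)
    ok = status.ravel() == 1
    # Fit a partial affine transform to the surviving correspondences.
    matrix, _inliers = cv2.estimateAffinePartial2D(pts_mov[ok], pts_ref[ok])
    # Warp the moving image onto the reference grid.
    height, width = reference.shape
    return cv2.warpAffine(moving, matrix, (width, height))
```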
9. The method according to any one of claims 1 to 8, further comprising:
and performing tone mapping on the ambient light image and the first target image to determine a second target image.
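Claim 9 does not name a tone mapping algorithm; one plausible stand-in, an editor's assumption since the claim only requires mapping the two images to a second target image, is OpenCV's Mertens exposure fusion:

```python
import cv2
import numpy as np

def second_target_image(ambient: np.ndarray,
                        first_target: np.ndarray) -> np.ndarray:
    """Combine the ambient light image and the first target image into
    a second target image. Mertens exposure fusion stands in here for
    the unspecified tone mapping of claim 9."""
    fused = cv2.createMergeMertens().process([ambient, first_target])
    # process() returns float32 in [0, 1]; convert back to 8-bit.
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```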
10. The method according to any one of claims 1 to 9, wherein the wavelength range of the light emitted when the flash lamp is turned on includes at least [660 nm, 700 nm] or [450 nm, 490 nm].
11. The method according to any one of claims 1 to 10, wherein the multiple frames of images are images acquired of the same dim-light scene or the same HDR scene.
12. An electronic device, the electronic device comprising:
one or more processors and memory;
the memory is coupled to the one or more processors and is configured to store computer program code, the computer program code comprising computer instructions, and the one or more processors invoke the computer instructions to cause the electronic device to perform the image processing method of any one of claims 1 to 11.
13. A chip system, applied to an electronic device, the chip system comprising one or more processors configured to invoke computer instructions to cause the electronic device to perform the image processing method of any one of claims 1 to 11.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to execute the image processing method of any one of claims 1 to 11.
CN202311507855.9A 2022-01-25 2022-01-25 Image processing method and related device Pending CN117710265A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311507855.9A CN117710265A (en) 2022-01-25 2022-01-25 Image processing method and related device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210089124.6A CN115526786B (en) 2022-01-25 2022-01-25 Image processing method and related device
CN202311507855.9A CN117710265A (en) 2022-01-25 2022-01-25 Image processing method and related device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202210089124.6A Division CN115526786B (en) 2022-01-25 2022-01-25 Image processing method and related device

Publications (1)

Publication Number Publication Date
CN117710265A (en) 2024-03-15

Family

ID=84694914

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210089124.6A Active CN115526786B (en) 2022-01-25 2022-01-25 Image processing method and related device
CN202311507855.9A Pending CN117710265A (en) 2022-01-25 2022-01-25 Image processing method and related device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210089124.6A Active CN115526786B (en) 2022-01-25 2022-01-25 Image processing method and related device

Country Status (1)

Country Link
CN (2) CN115526786B (en)

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101575803B1 (en) * 2010-01-15 2015-12-09 삼성전자 주식회사 Method and apparatus of generating high sensitivity image in dark environment
CN107948539B (en) * 2015-04-30 2020-07-17 Oppo广东移动通信有限公司 Flash lamp control method and terminal
US9756256B2 (en) * 2015-05-28 2017-09-05 Intel Corporation Spatially adjustable flash for imaging devices
CN105301868A (en) * 2015-12-03 2016-02-03 上海卓易科技股份有限公司 Shooting method and apparatus for multi-color flash lamp
CN105959559A (en) * 2016-06-08 2016-09-21 维沃移动通信有限公司 Night scene shooting method and mobile terminal
CN107105148B (en) * 2017-06-26 2023-09-26 上海传英信息技术有限公司 Photographing system and photographing method using combined flash lamp module
WO2019183813A1 (en) * 2018-03-27 2019-10-03 华为技术有限公司 Image capture method and device
CN110798627B (en) * 2019-10-12 2021-05-18 深圳酷派技术有限公司 Shooting method, shooting device, storage medium and terminal
CN110958401B (en) * 2019-12-16 2022-08-23 北京迈格威科技有限公司 Super night scene image color correction method and device and electronic equipment
CN111614894B (en) * 2020-04-28 2022-04-01 深圳英飞拓智能技术有限公司 Image acquisition method and device and terminal equipment
CN113824873B (en) * 2021-08-04 2022-11-15 荣耀终端有限公司 Image processing method and related electronic equipment
CN113781358A (en) * 2021-09-26 2021-12-10 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN115526786B (en) 2023-10-20
CN115526786A (en) 2022-12-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination