WO2023137956A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents

Info

Publication number
WO2023137956A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
frequency component
enhanced
low
electronic device
Prior art date
Application number
PCT/CN2022/098717
Other languages
French (fr)
Chinese (zh)
Inventor
范文明 (FAN Wenming)
Original Assignee
上海闻泰信息技术有限公司 (Shanghai Wingtech Information Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海闻泰信息技术有限公司 (Shanghai Wingtech Information Technology Co., Ltd.)
Publication of WO2023137956A1 publication Critical patent/WO2023137956A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20028 Bilateral filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Definitions

  • the present disclosure relates to an image processing method, device, electronic equipment and storage medium.
  • an image processing method, device, electronic device, and storage medium are provided.
  • An image processing method applied to an electronic device, comprising: detecting the shooting scene where the electronic device is located;
  • if it is detected that the electronic device is in a backlight shooting scene, collecting, through the camera, a first image focused on the backlight area, and collecting, through the camera, a second image focused on the non-backlight area;
  • performing enhancement processing on the first image and the second image respectively; and
  • fusing the enhanced first image and the enhanced second image to obtain a third image.
  • the method, before said fusing the enhanced first image and the enhanced second image to obtain the third image, further includes: performing registration processing on the enhanced first image and the enhanced second image, so as to make the spatial position information of the enhanced first image consistent with the spatial position information of the enhanced second image; and said fusing the enhanced first image and the enhanced second image to obtain the third image includes: fusing the registered first image and the registered second image to obtain the third image.
  • the fusing the enhanced first image and the enhanced second image to obtain a third image includes: decomposing the enhanced first image into a first high-frequency component and a first low-frequency component, and decomposing the enhanced second image into a second high-frequency component and a second low-frequency component, wherein the first high-frequency component corresponds to a first high-frequency region in the enhanced first image, the first low-frequency component corresponds to a first low-frequency region in the enhanced first image, the second high-frequency component corresponds to a second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to a second low-frequency region in the enhanced second image; fusing the first high-frequency component and the second high-frequency component into a third high-frequency component, and fusing the first low-frequency component and the second low-frequency component into a third low-frequency component; and fusing the third high-frequency component and the third low-frequency component to obtain the third image.
  • the fusing the first high-frequency component and the second high-frequency component into a third high-frequency component includes: calculating a first modulus value corresponding to the first high-frequency component and a second modulus value corresponding to the second high-frequency component; comparing the first modulus value with the second modulus value, and determining the larger of the two; if the larger modulus value is the first modulus value, using the first high-frequency component as the third high-frequency component; and if the larger modulus value is the second modulus value, using the second high-frequency component as the third high-frequency component.
  • the fusing the first low-frequency component and the second low-frequency component into a third low-frequency component includes: determining an average low-frequency component according to the first low-frequency component and the second low-frequency component, and using the average low-frequency component as the third low-frequency component.
  • the decomposing the enhanced first image into a first high-frequency component and a first low-frequency component, and decomposing the enhanced second image into a second high-frequency component and a second low-frequency component include: performing a non-subsampled contourlet transform (NSCT) or a fast non-subsampled contourlet transform on the enhanced first image to obtain the first high-frequency component and the first low-frequency component; and performing the non-subsampled contourlet transform or the fast non-subsampled contourlet transform on the enhanced second image to obtain the second high-frequency component and the second low-frequency component.
  • the detecting the shooting scene where the electronic device is located includes: acquiring a preview image collected by a camera, and calculating a grayscale value of the preview image; if the grayscale value of the preview image is greater than a preset grayscale threshold, determining that the electronic device is in a backlight shooting scene; if the grayscale value of the preview image is less than or equal to the preset grayscale threshold, determining that the electronic device is not in a backlight shooting scene.
  • performing enhancement processing on the first image and the second image respectively includes: performing first enhancement processing on the first image, and performing second enhancement processing on the second image, wherein the first enhancement processing includes a multi-scale Retinex (MSR) algorithm, and the second enhancement processing includes a homomorphic filtering algorithm.
  • the method further includes: dividing the preview image captured by the camera into multiple image areas according to a preset area size; calculating the grayscale value corresponding to each of the image areas; and comparing the grayscale value corresponding to each of the image areas with an area grayscale threshold to divide the preview image into a backlit area and a non-backlit area, wherein the backlit area refers to an area in the preview image whose grayscale value is greater than the area grayscale threshold, and the non-backlit area refers to an area in the preview image whose grayscale value is less than or equal to the area grayscale threshold.
  • An image processing device comprising:
  • a detection module configured to detect the shooting scene where the electronic device is located;
  • a focus acquisition module configured to collect, through the camera, a first image focused on a backlight area and a second image focused on a non-backlight area if it is detected that the electronic device is in a backlight shooting scene;
  • an enhancement module configured to perform enhancement processing on the first image and the second image respectively; and
  • a fusion module configured to fuse the enhanced first image with the enhanced second image to obtain a third image.
  • the device further includes: an alignment module configured to perform registration processing on the enhanced first image and the enhanced second image, so as to make the spatial position information of the enhanced first image consistent with the spatial position information of the enhanced second image; and further configure the fusion module to fuse the registered first image and the registered second image to obtain a third image.
  • the fusion module is further configured to decompose the enhanced first image into a first high-frequency component and a first low-frequency component, and decompose the enhanced second image into a second high-frequency component and a second low-frequency component, wherein the first high-frequency component corresponds to a first high-frequency region in the enhanced first image, the first low-frequency component corresponds to a first low-frequency region in the enhanced first image, the second high-frequency component corresponds to a second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to a second low-frequency region in the enhanced second image.
  • the fusion module is further configured to calculate a first modulus value corresponding to the first high-frequency component and a second modulus value corresponding to the second high-frequency component; compare the first modulus value with the second modulus value, and determine the larger of the two; if the larger modulus value is the first modulus value, use the first high-frequency component as the third high-frequency component; and if the larger modulus value is the second modulus value, use the second high-frequency component as the third high-frequency component.
  • the fusion module is further configured to determine an average low-frequency component according to the first low-frequency component and the second low-frequency component, and use the average low-frequency component as the third low-frequency component.
  • the fusion module is further configured to perform a non-subsampled contourlet transform or a fast non-subsampled contourlet transform on the enhanced first image to obtain the first high-frequency component and the first low-frequency component, and perform the non-subsampled contourlet transform or the fast non-subsampled contourlet transform on the enhanced second image to obtain the second high-frequency component and the second low-frequency component.
  • the detection module is further configured to acquire a preview image collected by the camera, and calculate a gray value of the preview image; if the gray value of the preview image is greater than a preset gray threshold, then determine that the electronic device is in a backlight shooting scene; if the gray value of the preview image is less than or equal to the preset gray threshold, then determine that the electronic device is not in a backlight shooting scene.
  • the enhancement module is further configured to perform first enhancement processing on the first image, and perform second enhancement processing on the second image, wherein the first enhancement processing includes a multi-scale Retinex algorithm, and the second enhancement processing includes a homomorphic filtering algorithm.
  • the focus acquisition module is further configured to divide the preview image captured by the camera into multiple image areas according to the preset area size; calculate the grayscale value corresponding to each of the image areas; and compare the grayscale value corresponding to each of the image areas with the area grayscale threshold to divide the preview image into a backlight area and a non-backlight area, wherein the backlight area refers to an area in the preview image whose grayscale value is greater than the area grayscale threshold, and the non-backlight area refers to an area in the preview image whose grayscale value is less than or equal to the area grayscale threshold.
  • An electronic device comprising a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to execute the steps of any one of the image processing methods described above.
  • One or more non-volatile storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to execute the steps of any one of the image processing methods described above.
  • FIG. 1 is an application scene diagram of an image processing method provided by one or more embodiments of the present disclosure
  • FIG. 2 is a flowchart of steps of an image processing method provided by one or more embodiments of the present disclosure
  • FIG. 3 is a flowchart of steps of an image processing method provided by one or more embodiments of the present disclosure
  • FIG. 4 is a flow chart of steps for fusing the enhanced first image with the enhanced second image provided by one or more embodiments of the present disclosure
  • FIG. 5 is a structural block diagram of an image processing device in one or more embodiments of the present disclosure.
  • FIG. 6 is a structural block diagram of an electronic device in one or more embodiments of the present disclosure.
  • terms such as “first” and “second” in the specification and claims of the present disclosure are used to distinguish different objects, rather than to describe a specific order of objects.
  • the terms “first image” and “second image” are used to distinguish different images, not to describe a specific sequence of images.
  • words such as “exemplary” or “for example” are used as examples, illustrations, or explanations. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present disclosure shall not be construed as being preferred or advantageous over other embodiments or designs. Rather, the use of words such as “exemplary” or “for example” is intended to present related concepts in a concrete manner. In addition, in the description of the embodiments of the present disclosure, unless otherwise specified, “plurality” means two or more.
  • One solution performs exposure compensation with a flash to compensate the brightness of the subject. Since the flash of an electronic device has limited brightness, brightness compensation is only effective for subjects shot at close range; the farther the subject is from the camera, the worse the result of this solution.
  • Another solution increases the dynamic range of the image through HDR (High Dynamic Range) imaging, reducing over-bright or over-dark areas in the image captured by the camera. Since this solution only selects over-exposed, under-exposed, and normally exposed images for fusion, the dynamic range of the resulting image is limited and cannot fully cover various complex backlight environments. In some complex backlight shooting scenes, the effect of this solution is not ideal.
  • When the electronic device captures images in a backlight shooting scene, the camera of the electronic device is directly or indirectly illuminated by the light source, and the images collected by the camera differ depending on the focus position of the camera.
  • When the focus is in an area disturbed by strong light, that is, when focusing on the backlight area, the image collected by the camera contains a high-brightness area, and this high-brightness area covers up part of the information in the image; when focusing on the non-backlight area, the collected image contains more noise, which likewise degrades the image.
  • the image processing method provided in the present disclosure can be applied to the application environment shown in FIG. 1 , and the image processing method is applied to an image processing system.
  • the image processing system may include an electronic device 10, wherein the electronic device 10 may include, but is not limited to, a mobile phone, a tablet computer, a wearable device, a notebook computer, a PC (personal computer), a video camera, and the like.
  • the operating system of the above-mentioned electronic device 10 may include, but is not limited to, the Android operating system, the iOS operating system, the Symbian operating system, the BlackBerry operating system, the Windows Phone 8 operating system, and so on; the embodiments of the present disclosure are not limited in this respect.
  • After the electronic device 10 detects a shooting operation, it may collect images through a camera, and the camera may be a camera of the electronic device 10.
  • the electronic device 10 may also collect images through cameras of other electronic devices establishing a communication connection with the electronic device 10, which is not limited here.
  • the electronic device 10 may analyze the image collected by the camera to determine that it is currently in a backlight shooting scene. As shown in FIG. 1 , the backlight shooting scene may be a scene where the direction of the camera faces a light source. In this backlight shooting scene, the electronic device 10 uses the camera to shoot, and the captured image may be overexposed in some areas and/or underexposed in some areas.
  • the electronic device 10 may detect the shooting scene where the electronic device 10 is currently located before detecting the shooting operation.
  • the method of detecting the current shooting scene may include, but is not limited to, detection by a light sensor, detection of images collected by the camera after the camera is turned on and before a shooting operation is detected, and the like; this is not limited here.
  • the electronic device 10 may detect the current light intensity through the light sensor, and determine the current shooting scene according to the light intensity. For example, if the light intensity is greater than the intensity threshold, the electronic device 10 determines that the current shooting scene is a backlight shooting scene; if the light intensity is less than or equal to the intensity threshold, the electronic device 10 determines that it is not currently in a backlight shooting scene.
  • When the electronic device 10 detects that it is currently in a backlight shooting scene, it can automatically switch to a processing mode for processing images collected in the backlight shooting scene, or output a prompt message prompting the user to perform a switching operation; after detecting the switching operation, the electronic device 10 switches to the processing mode for processing images collected in the backlight shooting scene.
  • the specific content of the image processing method disclosed in the embodiments of the present disclosure will be described in the following embodiments and is not elaborated here.
  • FIG. 2 is a flow chart of the steps of an image processing method provided by one or more embodiments of the present disclosure.
  • the image processing method can be applied to the above-mentioned electronic device, and may include the following steps:
  • Step 210 detecting the shooting scene where the electronic device is located.
  • After the electronic device collects the preview image through the camera, it can detect the shooting scene where the electronic device is located according to the collected preview image, that is, detect whether the electronic device is in a backlit shooting scene.
  • the detection method may include detecting the gray value of the image pixel, detecting the RGB value of the image pixel, etc., which are not limited here.
  • the electronic device may acquire a preview image collected by the camera, and calculate the grayscale value of the preview image. If the grayscale value is greater than a preset grayscale threshold, it is determined that the electronic device is in a backlight shooting scene. The electronic device may continue to perform steps 220 to 240. If the grayscale value is less than or equal to the preset grayscale threshold, it is determined that the electronic device is not in a backlight shooting scene. The electronic device may perform image enhancement processing and/or filtering processing on the collected images.
  • the image enhancement processing may include contrast enhancement, gamma (γ) correction, histogram equalization, histogram specification, HSV-space-based color image enhancement, and the like.
  • the filtering processing includes mean filtering, median filtering, Gaussian filtering, bilateral filtering, etc., which are not limited in the embodiments of the present disclosure.
  • the electronic device may calculate an average gray value of all pixels in the image collected by the camera, and compare the average gray value of all pixels in the image with a preset gray threshold. If the average grayscale value is greater than the preset grayscale threshold, it is determined that the electronic device is in a backlight shooting scene; if the average grayscale value is less than or equal to the preset grayscale threshold, it is determined that the electronic device is not in a backlight shooting scene.
  • the electronic device may calculate the grayscale value of each pixel in the image collected by the camera, and compare the grayscale value of each pixel in the image with a preset grayscale threshold. Calculate the number of pixels whose grayscale value is greater than the preset grayscale threshold. If the number of pixels is greater than the preset number threshold, it is determined that the electronic device is in a backlight shooting scene. If the number of pixels is less than or equal to the preset number threshold, it is determined that the electronic device is not in a backlight shooting scene.
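The two grayscale-based detection variants above can be sketched as follows; the threshold values are illustrative assumptions, as the patent does not fix them:

```python
import numpy as np

# Hypothetical values for illustration; the patent does not specify them.
GRAY_THRESHOLD = 190           # preset grayscale threshold (0-255)
COUNT_THRESHOLD_RATIO = 0.25   # fraction of pixels standing in for the number threshold

def is_backlit_by_mean(gray: np.ndarray) -> bool:
    """Variant 1: compare the average grayscale value of all pixels."""
    return gray.mean() > GRAY_THRESHOLD

def is_backlit_by_count(gray: np.ndarray) -> bool:
    """Variant 2: count pixels whose grayscale value exceeds the threshold."""
    bright = np.count_nonzero(gray > GRAY_THRESHOLD)
    return bright > COUNT_THRESHOLD_RATIO * gray.size
```

Either predicate would gate the switch into the backlight processing mode; a real implementation would tune both thresholds per device.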
  • Step 220 if it is detected that the electronic device is in a backlit shooting scene, capture a first image focused on the backlit area through the camera, and capture a second image focused on the non-backlit area through the camera.
  • When the electronic device detects that it is in a backlit shooting scene, the first image focused on the backlit area is collected through the camera, and the second image focused on the non-backlit area is collected through the camera. There is no required order between collecting the first image and the second image; the two collections may also be performed simultaneously.
  • the backlight area refers to an area that is affected by light and has a backlight effect
  • the non-backlight area refers to an area that does not have a backlight effect due to the light.
  • The electronic device can divide the preview image into multiple image areas according to a preset area size (for example, each image area includes 64×64 pixels), calculate the grayscale value corresponding to each image area in the preview image, and compare the grayscale value corresponding to each image area with the area grayscale threshold, so as to divide the preview image into a backlight area and a non-backlight area.
  • The non-backlight area refers to the area in the preview image whose grayscale value is less than or equal to the area grayscale threshold, that is, the area where no backlight effect appears under the influence of light.
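The block-wise division into backlit and non-backlit areas might look like this; the 64×64 block size comes from the description, while the area grayscale threshold is an assumed value:

```python
import numpy as np

BLOCK = 64                  # preset area size from the description (64x64 pixels)
AREA_GRAY_THRESHOLD = 180   # illustrative area grayscale threshold, not from the patent

def backlit_mask(gray: np.ndarray) -> np.ndarray:
    """Return a boolean mask with one entry per 64x64 block: True = backlit area."""
    h, w = gray.shape
    rows, cols = h // BLOCK, w // BLOCK
    mask = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            block = gray[r * BLOCK:(r + 1) * BLOCK, c * BLOCK:(c + 1) * BLOCK]
            # A block whose mean grayscale exceeds the threshold is backlit.
            mask[r, c] = block.mean() > AREA_GRAY_THRESHOLD
    return mask
```

The resulting mask would then guide where the camera places the two focus points.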
  • the electronic device collects the first image focused on the backlit area and the second image focused on the non-backlit area through the camera.
  • The camera can automatically collect the first image focused on the backlight area and the second image focused on the non-backlight area, without requiring the user to find the focus area, which simplifies the user's operation and improves the convenience of the image processing method.
  • When the electronic device detects that it is in a backlit shooting scene, it can output a prompt message prompting the user to manually select the focus on the backlit area and the non-backlit area.
  • When a focus selection operation for the backlit area is detected, the camera focuses on the backlit area according to that operation and collects the first image; similarly, when a focus selection operation for the non-backlit area is detected, the camera focuses on the non-backlit area and collects the second image. In this way, the user specifies the areas that need to be focused before the first image and the second image are collected, improving the focusing accuracy of the collected images.
  • Step 230 performing enhancement processing on the first image and the second image respectively.
  • the electronic device may respectively perform enhancement processing on the first image and the second image to improve the image quality of the first image and the second image.
  • First enhancement processing can be performed on the first image, and second enhancement processing can be performed on the second image, wherein the first enhancement processing includes a multi-scale Retinex algorithm (MSR, Multi-Scale Retinex), and the second enhancement processing includes a homomorphic filtering algorithm.
  • The first image is collected when the camera focuses on the backlit area, and focusing on the backlit area affects the clarity and contrast of the first image. Therefore, the electronic device can use the multi-scale Retinex algorithm to enhance the contrast and clarity of the first image.
  • The multi-scale Retinex algorithm determines multiple scales of the first image, performs Gaussian blur on the first image at each of the multiple scales, and obtains multiple blurred images.
  • the enhanced first image is obtained based on the first image and the multiple blurred images, which can improve the image quality of the first image.
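The MSR step described above (Gaussian blur at several scales, then combining the original with the blurred images in the log domain) can be sketched as follows; the scale values are common illustrative choices, not values specified in the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msr(image: np.ndarray, sigmas=(15, 80, 250)) -> np.ndarray:
    """Multi-Scale Retinex: average of single-scale Retinex results.

    The sigmas are illustrative scales often seen in MSR literature,
    not values taken from the patent.
    """
    img = image.astype(np.float64) + 1.0  # offset avoids log(0)
    result = np.zeros_like(img)
    for sigma in sigmas:
        blurred = gaussian_filter(img, sigma)        # Gaussian blur at this scale
        result += np.log(img) - np.log(blurred)      # single-scale Retinex term
    result /= len(sigmas)                            # average over scales
    # Rescale to [0, 255] for display.
    result = (result - result.min()) / (np.ptp(result) + 1e-12) * 255.0
    return result.astype(np.uint8)
```

The output is the enhanced first image with lifted local contrast in the backlit regions.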
  • the second image is collected when the camera focuses on the non-backlit area, and focusing on the non-backlit area will cause more image noise in the second image. Therefore, the electronic device can use a homomorphic filtering algorithm to remove the noise in the second image.
  • the homomorphic filtering algorithm transforms the illumination-reflection model corresponding to the second image, and then passes the transformed result through a frequency domain filter to obtain a filtered result, and inversely transforms the filtered result to obtain an enhanced second image, which can improve the image quality of the second image.
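A minimal homomorphic-filtering sketch following the pipeline just described (log-transform the illumination-reflection model, apply a frequency-domain filter, inverse-transform); the filter shape and all parameter values are illustrative assumptions:

```python
import numpy as np

def homomorphic_filter(image, gamma_l=0.5, gamma_h=1.5, c=1.0, d0=30.0):
    """Homomorphic filtering: log -> FFT -> high-emphasis filter -> inverse FFT -> exp.

    gamma_l/gamma_h attenuate illumination (low frequencies) and boost
    reflectance (high frequencies); all parameters here are illustrative.
    """
    img = image.astype(np.float64) + 1.0
    log_img = np.log(img)                          # ln(i) + ln(r) of the model
    F = np.fft.fftshift(np.fft.fft2(log_img))      # centre the spectrum
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2         # squared distance from DC
    # Gaussian-shaped high-emphasis filter: gamma_l at DC, gamma_h far away.
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * D2 / d0 ** 2)) + gamma_l
    filtered = np.fft.ifft2(np.fft.ifftshift(H * F)).real
    out = np.exp(filtered) - 1.0                   # undo the log transform
    out = (out - out.min()) / (np.ptp(out) + 1e-12) * 255.0
    return out.astype(np.uint8)
```

Applied to the second image, this suppresses uneven illumination while preserving detail.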
  • Step 240 fusing the enhanced first image and the enhanced second image to obtain a third image.
  • the electronic device may fuse the enhanced first image and the enhanced second image to obtain a third image.
  • the third image may be an image finally presented on a display screen for viewing by a user, or may be an image stored in a memory.
  • The electronic device may divide the first image and the second image into image areas according to the number of pixels (for example, each image area includes 64×64 pixels), with a one-to-one correspondence between each image area in the first image and each image area in the second image. For every pair of corresponding image areas in the first image and the second image, the image area with the higher image quality is selected as the image area at the corresponding position in the third image; the same operation is performed on the other image areas, and the third image is obtained after the fusion of all areas is completed.
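A sketch of the block-wise fusion just described. The patent does not name a quality metric, so local variance is used here as an assumed proxy for image quality:

```python
import numpy as np

BLOCK = 64  # 64x64-pixel image areas, as in the description

def fuse_by_block_quality(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    """For each pair of corresponding 64x64 blocks, keep the block with the
    higher quality score. Local variance is an assumed quality proxy; the
    patent leaves the metric unspecified."""
    out = img1.copy()
    h, w = img1.shape[:2]
    for r in range(0, h - h % BLOCK, BLOCK):
        for c in range(0, w - w % BLOCK, BLOCK):
            b1 = img1[r:r + BLOCK, c:c + BLOCK]
            b2 = img2[r:r + BLOCK, c:c + BLOCK]
            if b2.var() > b1.var():          # take the higher-variance block
                out[r:r + BLOCK, c:c + BLOCK] = b2
    return out
```

Any per-block sharpness or contrast measure could replace the variance test.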
  • In the above embodiment, the electronic device detects the current shooting scene; if it is in a backlight shooting scene, the camera captures the first image focused on the backlight area and the second image focused on the non-backlight area. Underexposure or overexposure in the backlight shooting scene causes the first image and the second image to lose part of the effective information; the enhancement processing improves the sharpness and contrast of the first image and the second image, and fusing them improves the quality of the image captured in the backlight shooting scene.
  • FIG. 3 is a flow chart of the steps of the image processing method provided by one or more embodiments of the present disclosure.
  • the image processing method can also be applied to the above-mentioned electronic device, and may include the following steps:
  • Step 310 detecting the shooting scene where the electronic device is located.
  • Step 320 if it is detected that the electronic device is in a backlit shooting scene, capture a first image focused on the backlit area through the camera, and capture a second image focused on the non-backlit area through the camera.
  • Step 330 performing enhancement processing on the first image and the second image respectively.
  • Steps 310-330 are the same as steps 210-230, and will not be repeated here.
  • Step 340 performing registration processing on the enhanced first image and the enhanced second image, so that the spatial position information of the first image is consistent with the spatial position information of the second image.
  • Since the electronic device may move between collecting the first image and the second image, the spatial position information of the first image may differ from that of the second image. Therefore, the electronic device may perform registration processing on the enhanced first image and the enhanced second image, for example, registering key points or image features in the enhanced first image and the enhanced second image, so that the spatial position information of the enhanced first image is consistent with that of the enhanced second image and the subsequent image fusion step can be performed.
  • the electronic device may perform affine transformation registration on the enhanced first image and the enhanced second image, first use feature matching between the enhanced first image and the enhanced second image as data to obtain a predicted affine transformation matrix, and register the enhanced first image with the enhanced second image according to the predicted affine transformation matrix, so that the spatial position information of the enhanced first image and the spatial position information of the enhanced second image are consistent.
  • The electronic device may perform feature matching between the enhanced first image and the enhanced second image, obtain a predicted affine transformation matrix for transforming the enhanced first image into the enhanced second image, and transform the enhanced first image according to the predicted affine transformation matrix, thereby obtaining a first image that is registered with the enhanced second image.
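The affine estimation step can be illustrated with a least-squares fit from matched point pairs; in practice the pairs would come from feature matching (e.g. ORB descriptors), which is omitted here:

```python
import numpy as np

def estimate_affine(src_pts, dst_pts) -> np.ndarray:
    """Least-squares 2x3 affine matrix A mapping src points to dst points.

    src_pts/dst_pts: (N, 2) arrays of matched feature coordinates,
    assumed to come from feature matching between the two enhanced images.
    """
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    # Design matrix [x y 1]; solve X @ W = dst for W (3x2) by least squares.
    X = np.hstack([src, np.ones((len(src), 1))])
    W, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return W.T  # 2x3 affine matrix [[a, b, tx], [c, d, ty]]
```

The resulting matrix would then be applied to warp the enhanced first image onto the enhanced second image before fusion.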
  • Step 350 merging the registered first image and the registered second image to obtain a third image.
  • the electronic device may fuse the registered first image and the registered second image to obtain a third image.
  • The registered first image and the registered second image are obtained by performing registration processing on the enhanced first image and the enhanced second image; therefore, the method of fusing the registered first image and the registered second image in step 350 can be the same as the method of fusing the enhanced first image and the enhanced second image in step 240, and is not repeated here.
  • the electronic device may compute a weighted sum of the pixel values (such as grayscale values or RGB (Red, Green, Blue) values) of each pair of corresponding pixels in the registered first image and the registered second image, and use the resulting target pixel value as the pixel value of the corresponding pixel in the third image; performing this fusion operation on all corresponding pixels in the registered first image and the registered second image yields the third image.
  • the electronic device may also perform registration processing on the first image and the second image, so as to facilitate the fusion of the first image and the second image, and improve the accuracy of image fusion.
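The weighted pixel-wise fusion described above can be sketched as follows; the equal default weights are an illustrative choice, not values given in this disclosure:

```python
import numpy as np

def weighted_fuse(img1, img2, w1=0.5, w2=0.5):
    """Fuse two registered images pixel-by-pixel with a weighted sum.
    Works for grayscale (H, W) or RGB (H, W, 3) arrays."""
    img1 = np.asarray(img1, dtype=float)
    img2 = np.asarray(img2, dtype=float)
    assert img1.shape == img2.shape, "images must be registered to the same size"
    fused = w1 * img1 + w2 * img2
    # Clamp to the valid 8-bit range before converting back.
    return np.clip(fused, 0, 255).astype(np.uint8)
```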
  • FIG. 4 is a flow chart of steps for fusing the enhanced first image with the enhanced second image provided by one or more embodiments of the present disclosure.
  • the step of fusing the enhanced first image with the enhanced second image may include the following steps:
  • Step 410 decomposing the enhanced first image into a first high-frequency component and a first low-frequency component, and decomposing the enhanced second image into a second high-frequency component and a second low-frequency component; wherein, the first high-frequency component corresponds to the first high-frequency region in the enhanced first image, and the first low-frequency component corresponds to the first low-frequency region in the enhanced first image; the second high-frequency component corresponds to the second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to the second low-frequency region in the enhanced second image.
  • the electronic device may decompose the enhanced first image into a first high frequency component and a first low frequency component, and decompose the enhanced second image into a second high frequency component and a second low frequency component.
  • the electronic device may calculate the gray value of each pixel in the enhanced first image, so as to obtain the gray value change speed between each pixel in the enhanced first image and surrounding pixels, and decompose the enhanced first image into a first high frequency component and a first low frequency component according to the gray value change speed, the first high frequency component includes a plurality of images having the same image size as the enhanced first image, and the first low frequency component includes an image having the same image size as the enhanced first image.
  • the first high-frequency component corresponds to the first high-frequency region in the enhanced first image, that is, the multiple images in the first high-frequency component correspond to different first high-frequency regions in the enhanced first image, and the first high-frequency region is an image region in the enhanced first image whose gray value change speed is greater than the first change speed threshold.
  • the first high frequency component may correspond to the edge region of the enhanced first image, because the gray value change speed of the edge region of the image is usually greater than the gray value change speed of the middle region.
  • the first low-frequency component corresponds to the first low-frequency region in the enhanced first image, that is, the image in the first low-frequency component corresponds to the first low-frequency region in the enhanced first image, and the first low-frequency region is an image region in the enhanced first image whose grayscale value change speed is less than or equal to the first change speed threshold.
  • the first low-frequency component may correspond to the middle region of the enhanced first image, because the grayscale value change speed of the middle region of the image is usually smaller than the grayscale value change speed of the edge region.
  • the electronic device may calculate the gray value of each pixel in the enhanced second image, thereby obtaining the gray value change speed between each pixel in the enhanced second image and surrounding pixels, and decompose the enhanced second image into a second high frequency component and a second low frequency component according to the gray value change speed.
  • the second high-frequency component corresponds to the second high-frequency region in the enhanced second image, that is, the multiple images in the second high-frequency component correspond to different second high-frequency regions in the enhanced second image, and the second high-frequency region is an image region in the enhanced second image whose gray value change speed is greater than the second change speed threshold.
  • the second high frequency component may correspond to the edge region of the enhanced second image, because the gray value change speed of the edge region of the image is usually greater than the gray value change speed of the middle region.
  • the second low-frequency component corresponds to the second low-frequency region in the enhanced second image, that is, the image in the second low-frequency component corresponds to the second low-frequency region in the enhanced second image, and the second low-frequency region is an image region whose grayscale value change speed in the enhanced second image is less than or equal to the second change speed threshold.
  • the second low-frequency component may correspond to the middle region of the enhanced second image, because the grayscale value change speed of the middle region of the image is usually smaller than the grayscale value change speed of the edge region.
  • the first change speed threshold and the second change speed threshold may be equal in magnitude or unequal in magnitude.
  • the electronic device may perform an NSCT (nonsubsampled contourlet transform) forward transform or an FNSCT (fast nonsubsampled contourlet transform) forward transform on the enhanced first image, thereby obtaining the first high-frequency component and the first low-frequency component, as well as the first high-frequency transformation coefficient corresponding to the first high-frequency component and the first low-frequency transformation coefficient corresponding to the first low-frequency component.
  • the electronic device converts the enhanced first image into a first spectrogram; the first spectrogram can illustrate the gray value change speed between each pixel in the enhanced first image and the surrounding pixels. According to the first spectrogram, the electronic device performs multiple NSP (nonsubsampled pyramid) decompositions on the enhanced first image, thereby obtaining one or more first bandpass subband images and a first lowpass subband image.
  • the image size of the first bandpass subband image is the same as that of the enhanced first image, and each first bandpass subband image corresponds to a different first high frequency region in the enhanced first image.
  • the image size of the first low-pass sub-band image is the same as that of the enhanced first image, the first low-pass sub-band image corresponds to the first low-frequency region in the enhanced first image, and the first low-pass sub-band image can be used as the first low-frequency component.
  • the electronic device then performs NSDFB (nonsubsampled directional filter bank) decomposition on each first bandpass subband image, and can decompose each first bandpass subband image into a plurality of first multi-directional subband images.
  • the image size of each first multi-directional subband image is the same as the image size of the enhanced first image, and the plurality of first multi-directional subband images can jointly serve as the first high-frequency component.
  • the first low-frequency transform coefficient corresponding to the first low-pass sub-band image is also obtained.
  • performing NSDFB decomposition on the first bandpass sub-band images yields the first multi-directional sub-band images and the first high-frequency transformation coefficient corresponding to them; since multiple first multi-directional sub-band images can be obtained, the first high-frequency transformation coefficient can include multiple values, which correspond to the multiple first multi-directional sub-band images.
  • the electronic device may also perform an NSCT (nonsubsampled contourlet transform) forward transform or an FNSCT (fast nonsubsampled contourlet transform) forward transform on the enhanced second image, thereby obtaining a second high-frequency component and a second low-frequency component, as well as a second high-frequency transformation coefficient corresponding to the second high-frequency component and a second low-frequency transformation coefficient corresponding to the second low-frequency component.
  • the electronic device converts the enhanced second image into a second spectrogram; the second spectrogram can illustrate the gray value change speed between each pixel in the enhanced second image and the surrounding pixels. According to the second spectrogram, the electronic device performs multiple NSP (nonsubsampled pyramid) decompositions on the enhanced second image, thereby obtaining one or more second bandpass subband images and a second lowpass subband image.
  • the image size of the second bandpass subband image is the same as that of the enhanced second image, and each second bandpass subband image corresponds to a different second high frequency region in the enhanced second image.
  • the image size of the second low-pass sub-band image is the same as that of the enhanced second image, the second low-pass sub-band image corresponds to the second low-frequency area in the enhanced second image, and the second low-pass sub-band image can be used as the second low-frequency component.
  • the electronic device then performs NSDFB (Nonsubsampled directional filter bank) decomposition on each second bandpass subband image, and can decompose each second bandpass subband image into a plurality of second multi-directional subband images.
  • the image size of each second multi-directional subband image is the same as the image size of the enhanced second image, and the plurality of second multi-directional subband images can jointly serve as the second high-frequency component.
  • the second low-frequency transformation coefficient corresponding to the second low-pass sub-band image is also obtained.
  • performing NSDFB decomposition on the second bandpass sub-band images yields the second multi-directional sub-band images and the second high-frequency transformation coefficient corresponding to them; since multiple second multi-directional sub-band images can be obtained, the second high-frequency transformation coefficient can include multiple values, which correspond to the multiple second multi-directional sub-band images.
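A full NSCT/NSP/NSDFB decomposition requires a dedicated implementation. As a simplified stand-in that only illustrates the idea of splitting an image into a same-size low-frequency component and a high-frequency residual, a box-filter lowpass can be used (this is not the NSCT transform itself):

```python
import numpy as np

def split_frequencies(img, k=3):
    """Split an image into a low-frequency component (local mean over a
    k x k window) and a high-frequency residual. Both components keep the
    input image size, mirroring the size-preserving property of the
    nonsubsampled decomposition described in the text."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    low = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            low[i, j] = padded[i:i + k, j:j + k].mean()
    high = img - low   # residual holds edges and fine detail
    return low, high
```

Because the split is additive, `low + high` reconstructs the input exactly, which is the property the later fusion-and-inverse step relies on.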
  • Step 420 fusing the first high-frequency component and the second high-frequency component into a third high-frequency component, and fusing the first low-frequency component and the second low-frequency component into a third low-frequency component.
  • the step of fusing the first high-frequency component and the second high-frequency component into a third high-frequency component may include: calculating a first modulus value corresponding to the first high-frequency component and a second modulus value corresponding to the second high-frequency component; comparing the first modulus value with the second modulus value, and determining the largest modulus value among them; if the largest modulus value is the first modulus value, using the first high-frequency component as the third high-frequency component; and if the largest modulus value is the second modulus value, using the second high-frequency component as the third high-frequency component.
  • the first high-frequency component can correspond to a plurality of first modulus values
  • the plurality of first modulus values can correspond to a plurality of images in the first high-frequency component respectively. Since the first high-frequency region in the first image corresponds to the second high-frequency region in the second image, the plurality of first modulus values corresponding to the first high-frequency component corresponds to the plurality of second modulus values corresponding to the second high-frequency component.
  • the electronic device compares each pair of corresponding first and second modulus values, and determines the larger modulus value in each pair. If the larger modulus value is the first modulus value, the image of the first high-frequency component corresponding to that first modulus value is fused into the third high-frequency component; if the larger modulus value is the second modulus value, the image of the second high-frequency component corresponding to that second modulus value is fused into the third high-frequency component.
  • the electronic device can obtain the first high-frequency transformation coefficient and the second high-frequency transformation coefficient by performing the forward transform on the enhanced first image and the enhanced second image. Since the high-frequency transformation coefficients correspond to the multi-directional sub-band images, fusing the first high-frequency transformation coefficient with the second high-frequency transformation coefficient can be equivalent to fusing the first high-frequency component with the second high-frequency component.
  • the electronic device takes the absolute values of the multiple values in the first high-frequency transformation coefficient to obtain multiple first modulus values, and takes the absolute values of the multiple values in the second high-frequency transformation coefficient to obtain multiple second modulus values; there is a one-to-one correspondence between the first and second modulus values. The electronic device compares each pair of corresponding first and second modulus values and determines the larger of the two: if the larger modulus value is the first modulus value, the value of the first high-frequency transformation coefficient corresponding to that first modulus value is used as the value of the third high-frequency transformation coefficient; if the larger modulus value is the second modulus value, the value of the second high-frequency transformation coefficient corresponding to that second modulus value is used as the value of the third high-frequency transformation coefficient. After all first modulus values have been compared with the corresponding second modulus values, the third high-frequency transformation coefficient is obtained, thereby determining the third high-frequency component corresponding to it.
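The maximum-modulus fusion rule described above, applied element-wise to arrays of high-frequency transformation coefficients, can be sketched as:

```python
import numpy as np

def fuse_high(coef1, coef2):
    """Absolute-max fusion rule for high-frequency coefficients:
    at every position, keep the coefficient whose modulus (absolute
    value) is larger; ties keep the first image's coefficient."""
    coef1 = np.asarray(coef1, dtype=float)
    coef2 = np.asarray(coef2, dtype=float)
    return np.where(np.abs(coef1) >= np.abs(coef2), coef1, coef2)
```

Because large-modulus high-frequency coefficients correspond to strong edges, this rule keeps the sharper detail from whichever image has it at each position.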
  • the step of fusing the first low-frequency component and the second low-frequency component into a third low-frequency component may include: determining an average low-frequency component according to the first low-frequency component and the second low-frequency component, and using the average low-frequency component as the third low-frequency component.
  • the electronic device may calculate the average low-frequency component of the first low-frequency component and the second low-frequency component, that is, calculate an average value of data corresponding to the first low-frequency component and data corresponding to the second low-frequency component, thereby obtaining the average low-frequency component, and using the average low-frequency component as the third low-frequency component.
  • the electronic device performs the NSCT forward transform or FNSCT forward transform on the enhanced first image and the enhanced second image to obtain the first low-frequency transform coefficient and the second low-frequency transform coefficient. Because the first low-frequency transform coefficient can correspond to the first low-pass subband image in the first low-frequency component, and the second low-frequency transform coefficient can correspond to the second low-pass sub-band image in the second low-frequency component, fusing the first low-frequency transform coefficient and the second low-frequency transform coefficient can be equivalent to fusing the first low-frequency component and the second low-frequency component; the resulting third low-frequency transform coefficient corresponds to the third low-frequency component.
  • the electronic device may fuse the first low-frequency transformation coefficient representing the first low-frequency component and the second low-frequency transformation coefficient representing the second low-frequency component obtained after NSCT forward transformation or FNSCT forward transformation of the first image and the second image, to obtain a third low-frequency transformation coefficient, and use the third low-frequency transformation coefficient to represent the third low-frequency component.
  • the electronic device calculates an average value of the first low-frequency transformation coefficient and the second low-frequency transformation coefficient, and uses the average value as a third low-frequency transformation coefficient, thereby determining a third low-frequency component corresponding to the third low-frequency transformation coefficient.
  • Step 430 fusing the third high-frequency component and the third low-frequency component into a third image.
  • the electronic device may fuse a third high-frequency component obtained by fusing the first high-frequency component and the second high-frequency component with a third low-frequency component obtained by fusing the first low-frequency component and the second low-frequency component to obtain a third image.
  • NSCT inverse transformation may be performed on the third high-frequency component and the third low-frequency component, and the result obtained is the third image.
  • the electronic device may decompose the enhanced first image into the first high-frequency component and the first low-frequency component, decompose the enhanced second image into the second high-frequency component and the second low-frequency component, fuse the first high-frequency component and the second high-frequency component to obtain the third high-frequency component, fuse the first low-frequency component and the second low-frequency component to obtain the third low-frequency component, and finally fuse the third high-frequency component and the third low-frequency component to obtain the third image. This complements the effective information in the enhanced first image and the enhanced second image, and solves the problem of underexposure or overexposure in backlit scenes.
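Putting the two fusion rules together, the following minimal sketch averages the low-frequency components, keeps the larger-modulus high-frequency coefficients, and reconstructs by a plain sum. The additive reconstruction stands in for the true NSCT inverse transform, which would be applied to real NSCT coefficients:

```python
import numpy as np

def fuse_images(low1, high1, low2, high2):
    """Combine decomposed components into a fused image:
    - low frequencies: element-wise average (step 420, low-frequency rule)
    - high frequencies: keep the coefficient with the larger modulus
    - 'inverse transform' of this additive split: sum of fused parts."""
    low3 = (np.asarray(low1, float) + np.asarray(low2, float)) / 2.0
    h1, h2 = np.asarray(high1, float), np.asarray(high2, float)
    high3 = np.where(np.abs(h1) >= np.abs(h2), h1, h2)
    return low3 + high3
```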
  • Although the steps in the flow charts in FIGS. 2-4 are displayed sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless otherwise specified herein, there is no strict order restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in FIGS. 2-4 may include multiple sub-steps or stages. These sub-steps or stages are not necessarily executed at the same time, but may be executed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with at least a part of other steps, or with sub-steps or stages of other steps.
  • the embodiment of the present disclosure also provides an image processing device.
  • the embodiment of the device corresponds to the embodiment of the method described above.
  • this embodiment of the device does not repeat the details in the embodiment of the method described above one by one, but it should be clear that the device in the embodiment of the present disclosure can correspondingly implement all the content of the method embodiment described above.
  • FIG. 5 is a structural block diagram of an image processing device provided in an embodiment of the present disclosure. As shown in FIG. 5 , the image processing device 500 provided in an embodiment of the present disclosure includes:
  • the detection module 510 is configured to detect the shooting scene where the electronic device is located.
  • the focus collection module 520 is configured to, if it is detected that the electronic device is in a backlight shooting scene, collect the first image focused on the backlight area through the camera and collect the second image focused on the non-backlight area through the camera.
  • the enhancement module 530 is configured to perform enhancement processing on the first image and the second image respectively.
  • the fusion module 540 is configured to fuse the enhanced first image and the enhanced second image to obtain a third image.
  • the image processing apparatus 500 further includes:
  • the alignment module is configured to perform registration processing on the enhanced first image and the enhanced second image, so that the spatial position information of the enhanced first image is consistent with the spatial position information of the enhanced second image.
  • the fusion module 540 is also configured to fuse the registered first image and the registered second image to obtain a third image.
  • the fusion module 540 is further configured to decompose the enhanced first image into a first high-frequency component and a first low-frequency component, and decompose the enhanced second image into a second high-frequency component and a second low-frequency component; wherein, the first high-frequency component corresponds to the first high-frequency region in the enhanced first image, and the first low-frequency component corresponds to the first low-frequency region in the enhanced first image; the second high-frequency component corresponds to the second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to the second low-frequency region in the enhanced second image.
  • the first high-frequency component and the second high-frequency component are fused into a third high-frequency component
  • the first low-frequency component and the second low-frequency component are fused into a third low-frequency component
  • the third high-frequency component and the third low-frequency component are fused to obtain a third image.
  • the fusion module 540 is further configured to calculate a first modulus value corresponding to the first high-frequency component and a second modulus value corresponding to the second high-frequency component; compare the first modulus value with the second modulus value, and determine the largest modulus value among them; if the largest modulus value is the first modulus value, use the first high-frequency component as the third high-frequency component; and if the largest modulus value is the second modulus value, use the second high-frequency component as the third high-frequency component.
  • the fusion module 540 is further configured to determine an average low-frequency component according to the first low-frequency component and the second low-frequency component, and use the average low-frequency component as the third low-frequency component.
  • the fusion module is further configured to perform non-subsampling contourlet transformation or fast non-subsampling contourlet transformation on the enhanced first image to obtain the first high-frequency component and the first low-frequency component; perform the non-subsampling contourlet transformation or the fast non-subsampling contourlet transformation on the enhanced second image to obtain the second high-frequency component and the second low-frequency component.
  • the detection module 510 is also configured to obtain a preview image collected by the camera, and calculate the gray value of the preview image; if the gray value of the preview image is greater than the preset gray threshold, it is determined that the electronic device is in a backlight shooting scene; if the gray value of the preview image is less than or equal to the preset gray threshold, it is determined that the electronic device is not in a backlight shooting scene.
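The backlight-scene check performed by the detection module can be sketched as follows; the threshold value is an illustrative placeholder for the preset gray threshold, which this disclosure does not fix to a specific number:

```python
def mean_gray(pixels):
    """Average grayscale value of a preview image given as a 2-D list."""
    flat = [v for row in pixels for v in row]
    return sum(flat) / len(flat)

def is_backlit(preview, gray_threshold=180):
    """Decide whether the device is in a backlit shooting scene: a preview
    whose mean gray value exceeds the threshold is treated as backlit.
    The threshold 180 is an illustrative value, not from the source."""
    return mean_gray(preview) > gray_threshold
```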
  • the enhancement module 530 is further configured to perform a first enhancement process on the first image, and perform a second enhancement process on the second image; wherein, the first enhancement process includes a multi-scale Retinex enhancement algorithm, and the second enhancement process includes a homomorphic filtering algorithm.
  • the focus acquisition module 520 is also configured to divide the preview image captured by the camera into multiple image areas according to a preset area size; calculate the gray value corresponding to each image area; and compare the gray value corresponding to each image area with an area gray threshold to divide the preview image into a backlight area and a non-backlight area; wherein the backlight area refers to the region in the preview image whose gray value is greater than the area gray threshold, and the non-backlight area refers to the region in the preview image whose gray value is less than or equal to the area gray threshold.
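The region partition described above can be sketched as follows; the block size and the area gray threshold are illustrative placeholders for the preset area size and preset threshold:

```python
def partition_regions(preview, block=2, region_threshold=128):
    """Split a preview image (2-D list of gray values) into block x block
    regions and label each region as backlit (mean gray above the
    threshold) or non-backlit. Block size 2 and threshold 128 are
    illustrative, not values given by the source."""
    h, w = len(preview), len(preview[0])
    labels = {}
    for i in range(0, h, block):
        for j in range(0, w, block):
            vals = [preview[y][x]
                    for y in range(i, min(i + block, h))
                    for x in range(j, min(j + block, w))]
            mean = sum(vals) / len(vals)
            labels[(i, j)] = "backlit" if mean > region_threshold else "non-backlit"
    return labels
```

The resulting labels would let the camera pick a focus point inside a backlit region for the first image and inside a non-backlit region for the second image.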
  • the image processing apparatus provided in the embodiments of the present disclosure can execute the image processing methods provided in the foregoing method embodiments, and its implementation principle and technical effect are similar, and will not be repeated here.
  • Each module in the above-mentioned image processing device may be fully or partially realized by software, hardware or a combination thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the electronic device in the form of hardware, and can also be stored in the memory of the electronic device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
  • an electronic device is provided.
  • the electronic device may be a terminal device, and its internal structure may be as shown in FIG. 6 .
  • the electronic device includes a processor, a memory, a communication interface, a database, a display screen and an input device connected through a system bus.
  • the processor of the electronic device is configured to provide computation and control capabilities.
  • the memory of the electronic device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer readable instructions.
  • the internal memory provides an environment for the execution of the operating system and computer readable instructions in the non-volatile storage medium.
  • the communication interface of the electronic device is configured to communicate with an external terminal in a wired or wireless manner, and the wireless mode can be realized through WiFi, an operator network, near field communication (NFC), or other technologies.
  • when the computer-readable instructions are executed by the processor, the image processing method provided by the above-mentioned embodiments can be implemented.
  • the display screen of the electronic device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic device may be a touch layer covered on the display screen, or a button, a trackball or a touch pad provided on the housing of the electronic device, or an external keyboard, touch pad or mouse.
  • FIG. 6 is only a block diagram of a partial structure related to the disclosed solution, and does not constitute a limitation to the electronic device to which the disclosed solution is applied.
  • the specific electronic device may include more or less components than those shown in the figure, or combine certain components, or have a different component arrangement.
  • the image processing apparatus provided in the present disclosure may be implemented in the form of computer-readable instructions, and the computer-readable instructions may be run on a computer device as shown in FIG. 6 .
  • Various program modules constituting the electronic device can be stored in the memory of the computer device, for example, the detection module 510 and the focus acquisition module 520 shown in FIG. 5 .
  • the computer-readable instructions constituted by the various program modules cause the processor to execute the steps in the image processing methods of the various embodiments of the present disclosure described in this specification.
  • an electronic device including a memory and one or more processors, the memory storing computer-readable instructions; when the computer-readable instructions are executed by the one or more processors, the one or more processors execute the steps of the image processing method described in the above method embodiments.
  • the electronic device provided by the embodiment of the present disclosure can implement the image processing method provided by the above method embodiment, and its implementation principle and technical effect are similar, and will not be repeated here.
  • One or more non-volatile storage media storing computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors are made to perform the image processing steps described in any one of the above.
  • the computer-readable instructions stored on the computer-readable storage medium provided by the embodiments of the present disclosure can implement the image processing method provided by the above-mentioned method embodiments, and its implementation principle and technical effect are similar, and will not be repeated here.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
  • Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration, RAM can be in various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
  • the image processing method provided by the present disclosure can effectively solve the problem of underexposure or overexposure when an electronic device is shooting in a strong light interference or backlight environment, improve the image quality of an image shot in a backlight shooting scene, and has strong industrial applicability.

Abstract

Disclosed in the embodiments of the present disclosure are an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: detecting a shooting scene where the electronic device is located; if detecting that the electronic device is in a backlit shooting scene, collecting, by means of a camera, a first image focused on a backlit area, and collecting, by means of the camera, a second image focused on a non-backlit area; separately performing enhancement processing on the first image and the second image; and fusing an enhanced first image and an enhanced second image so as to obtain a third image. By implementing the embodiments of the present disclosure, the image quality of an image shot in a backlit shooting scene can be improved.

Description

Image processing method and apparatus, electronic device, and storage medium
Cross-Reference to Related Applications
The present disclosure claims priority to Chinese patent application No. 2022100575855, entitled "Image processing method and apparatus, electronic device, and storage medium", filed with the China Patent Office on January 18, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the advancement of science and technology and users' ever-higher expectations of electronic devices, the requirements for the quality of images captured by electronic devices (such as mobile phones) are increasingly high, and the shooting functions of electronic devices are increasingly rich. However, when shooting under strong light interference or in a backlit environment, underexposure or overexposure still occurs, and the quality of the captured images is low.
Summary
(1) Technical problems to be solved
In the prior art, when an electronic device shoots under strong light interference or in a backlit environment, underexposure or overexposure occurs and the captured images are of low quality.
(2) Technical solution
According to various embodiments of the present disclosure, an image processing method and apparatus, an electronic device, and a storage medium are provided.
An image processing method, applied to an electronic device, the method comprising:
detecting a shooting scene in which the electronic device is located;
if it is detected that the electronic device is in a backlit shooting scene, acquiring, through the camera, a first image focused on a backlit area, and acquiring, through the camera, a second image focused on a non-backlit area;
performing enhancement processing on the first image and the second image respectively; and
fusing the enhanced first image and the enhanced second image to obtain a third image.
As an optional implementation of the embodiments of the present disclosure, before the fusing of the enhanced first image and the enhanced second image to obtain the third image, the method further includes: performing registration processing on the enhanced first image and the enhanced second image, so that spatial position information of the enhanced first image is made consistent with spatial position information of the enhanced second image; and the fusing of the enhanced first image and the enhanced second image to obtain the third image includes: fusing the registered first image and the registered second image to obtain the third image.
As an optional implementation of the embodiments of the present disclosure, the fusing of the enhanced first image and the enhanced second image to obtain the third image includes: decomposing the enhanced first image into a first high-frequency component and a first low-frequency component, and decomposing the enhanced second image into a second high-frequency component and a second low-frequency component, wherein the first high-frequency component corresponds to a first high-frequency region in the enhanced first image, the first low-frequency component corresponds to a first low-frequency region in the enhanced first image, the second high-frequency component corresponds to a second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to a second low-frequency region in the enhanced second image; fusing the first high-frequency component and the second high-frequency component into a third high-frequency component, and fusing the first low-frequency component and the second low-frequency component into a third low-frequency component; and fusing the third high-frequency component and the third low-frequency component to obtain the third image.
As an optional implementation of the embodiments of the present disclosure, the fusing of the first high-frequency component and the second high-frequency component into the third high-frequency component includes: calculating a first modulus value corresponding to the first high-frequency component and a second modulus value corresponding to the second high-frequency component; comparing the first modulus value with the second modulus value, and determining the larger of the first modulus value and the second modulus value; if the larger modulus value is the first modulus value, taking the first high-frequency component as the third high-frequency component; and if the larger modulus value is the second modulus value, taking the second high-frequency component as the third high-frequency component.
As an optional implementation of the embodiments of the present disclosure, the fusing of the first low-frequency component and the second low-frequency component into the third low-frequency component includes: determining an average low-frequency component from the first low-frequency component and the second low-frequency component, and taking the average low-frequency component as the third low-frequency component.
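The two fusion rules described above (maximum-modulus selection for high-frequency components, averaging for low-frequency components) can be sketched in a few lines of NumPy. This is an illustrative reading of the disclosure, not part of it; per-pixel coefficient arrays are assumed:

```python
import numpy as np

def fuse_high_frequency(h1: np.ndarray, h2: np.ndarray) -> np.ndarray:
    """Per-pixel max-modulus rule: keep whichever coefficient has the
    larger absolute value (modulus)."""
    return np.where(np.abs(h1) >= np.abs(h2), h1, h2)

def fuse_low_frequency(l1: np.ndarray, l2: np.ndarray) -> np.ndarray:
    """Average rule: the third low-frequency component is the mean of
    the first and second low-frequency components."""
    return (l1 + l2) / 2.0
```

In practice the modulus comparison could also be made region-wise rather than per pixel; the per-pixel form is the simplest consistent reading of the text.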
As an optional implementation of the embodiments of the present disclosure, the decomposing of the enhanced first image into the first high-frequency component and the first low-frequency component and of the enhanced second image into the second high-frequency component and the second low-frequency component includes: performing a non-subsampled contourlet transform (NSCT) or a fast non-subsampled contourlet transform on the enhanced first image to obtain the first high-frequency component and the first low-frequency component; and performing the non-subsampled contourlet transform or the fast non-subsampled contourlet transform on the enhanced second image to obtain the second high-frequency component and the second low-frequency component.
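A full NSCT implementation is directional and multi-scale and well beyond a short sketch; as a simple stand-in that only illustrates the split into low- and high-frequency components, a box-filter low-pass with a residual high-pass can be used (this is an assumption for illustration, not the transform named in the disclosure):

```python
import numpy as np

def decompose(img: np.ndarray, k: int = 5):
    """Split an image into a low-frequency component (local mean over a
    k x k window) and a high-frequency component (residual). The box
    filter stands in for the NSCT's low-pass stage."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    low = np.zeros(img.shape, dtype=float)
    for dy in range(k):           # sum the k*k shifted copies
        for dx in range(k):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k
    high = img - low              # residual carries edges and detail
    return low, high
```

Because the high-frequency component is defined as the residual, `low + high` reconstructs the input exactly, which is the property the later fusion step relies on.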
As an optional implementation of the embodiments of the present disclosure, the detecting of the shooting scene in which the electronic device is located includes: acquiring a preview image collected through a camera, and calculating a gray value of the preview image; if the gray value of the preview image is greater than a preset gray threshold, determining that the electronic device is in a backlit shooting scene; and if the gray value of the preview image is less than or equal to the preset gray threshold, determining that the electronic device is not in a backlit shooting scene.
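The detection step above can be illustrated with a small NumPy sketch; the BT.601 grayscale weights and the threshold value are assumptions chosen for illustration, not values specified by the disclosure:

```python
import numpy as np

BACKLIGHT_GRAY_THRESHOLD = 150  # hypothetical preset gray threshold

def is_backlit_scene(preview_rgb: np.ndarray,
                     threshold: float = BACKLIGHT_GRAY_THRESHOLD) -> bool:
    """Convert the preview image to grayscale (ITU-R BT.601 weights)
    and compare its mean gray value with the preset threshold."""
    r = preview_rgb[..., 0].astype(float)
    g = preview_rgb[..., 1].astype(float)
    b = preview_rgb[..., 2].astype(float)
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return float(gray.mean()) > threshold
```

The single scalar here is the image-wide average; the per-pixel-count variant described later in the detailed description is a drop-in alternative decision rule.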
As an optional implementation of the embodiments of the present disclosure, the performing of enhancement processing on the first image and the second image respectively includes: performing first enhancement processing on the first image, and performing second enhancement processing on the second image, wherein the first enhancement processing includes a multi-scale Retinex enhancement algorithm, and the second enhancement processing includes a homomorphic filtering algorithm.
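The disclosure names the two enhancement algorithms but not their parameters. The sketch below uses one common formulation of each, with the Retinex scales and the homomorphic filter gains chosen as illustrative assumptions:

```python
import numpy as np

def _gaussian_blur_fft(img, sigma):
    """Gaussian low-pass via FFT (circular boundary; fine for a sketch)."""
    h, w = img.shape
    y = np.fft.fftfreq(h)[:, None]
    x = np.fft.fftfreq(w)[None, :]
    # Fourier transform of a Gaussian kernel with std sigma (in pixels)
    kernel_ft = np.exp(-2 * (np.pi ** 2) * (sigma ** 2) * (x ** 2 + y ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel_ft))

def multi_scale_retinex(img, sigmas=(15, 80, 250)):
    """First enhancement: average of single-scale Retinex outputs,
    log(I) - log(Gaussian-blurred I), over several scales."""
    img = img.astype(float) + 1.0  # avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(_gaussian_blur_fft(img, s) + 1.0)
    return out / len(sigmas)

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, c=1.0, d0=0.1):
    """Second enhancement: log -> FFT -> high-emphasis filter -> exp,
    attenuating illumination (low frequencies) and boosting reflectance."""
    log_img = np.log1p(img.astype(float))
    h, w = log_img.shape
    y = np.fft.fftfreq(h)[:, None]
    x = np.fft.fftfreq(w)[None, :]
    d2 = x ** 2 + y ** 2
    filt = (gamma_h - gamma_l) * (1 - np.exp(-c * d2 / (d0 ** 2))) + gamma_l
    filtered = np.real(np.fft.ifft2(np.fft.fft2(log_img) * filt))
    return np.expm1(filtered)
```

Retinex suits the backlit (first) image, where the illumination component must be suppressed; homomorphic filtering suits the darker non-backlit (second) image, where low-frequency illumination is compressed and detail is boosted.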
As an optional implementation of the embodiments of the present disclosure, the method further includes: dividing the preview image collected by the camera into a plurality of image regions according to a preset region size; calculating a gray value corresponding to each of the image regions; and comparing the gray value corresponding to each of the image regions with a region gray threshold, so as to divide the preview image into a backlit area and a non-backlit area, wherein the backlit area refers to an area of the preview image whose gray value is greater than the region gray threshold, and the non-backlit area refers to an area of the preview image whose gray value is less than or equal to the region gray threshold.
An image processing apparatus, comprising:
a detection module configured to detect a shooting scene in which the electronic device is located;
a focus acquisition module configured to, if it is detected that the electronic device is in a backlit shooting scene, acquire, through the camera, a first image focused on a backlit area, and acquire, through the camera, a second image focused on a non-backlit area;
an enhancement module configured to perform enhancement processing on the first image and the second image respectively; and
a fusion module configured to fuse the enhanced first image and the enhanced second image to obtain a third image.
As an optional implementation of the embodiments of the present disclosure, the apparatus further includes an alignment module configured to perform registration processing on the enhanced first image and the enhanced second image, so that spatial position information of the enhanced first image is made consistent with spatial position information of the enhanced second image; and the fusion module is further configured to fuse the registered first image and the registered second image to obtain the third image.
As an optional implementation of the embodiments of the present disclosure, the fusion module is further configured to decompose the enhanced first image into a first high-frequency component and a first low-frequency component and decompose the enhanced second image into a second high-frequency component and a second low-frequency component, wherein the first high-frequency component corresponds to a first high-frequency region in the enhanced first image, the first low-frequency component corresponds to a first low-frequency region in the enhanced first image, the second high-frequency component corresponds to a second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to a second low-frequency region in the enhanced second image; to fuse the first high-frequency component and the second high-frequency component into a third high-frequency component and fuse the first low-frequency component and the second low-frequency component into a third low-frequency component; and to fuse the third high-frequency component and the third low-frequency component to obtain a third image.
As an optional implementation of the embodiments of the present disclosure, the fusion module is further configured to calculate a first modulus value corresponding to the first high-frequency component and a second modulus value corresponding to the second high-frequency component; to compare the first modulus value with the second modulus value and determine the larger of the two; and, if the larger modulus value is the first modulus value, to take the first high-frequency component as the third high-frequency component, or, if the larger modulus value is the second modulus value, to take the second high-frequency component as the third high-frequency component.
As an optional implementation of the embodiments of the present disclosure, the fusion module is further configured to determine an average low-frequency component from the first low-frequency component and the second low-frequency component, and to take the average low-frequency component as the third low-frequency component.
As an optional implementation of the embodiments of the present disclosure, the fusion module is further configured to perform a non-subsampled contourlet transform or a fast non-subsampled contourlet transform on the enhanced first image to obtain the first high-frequency component and the first low-frequency component, and to perform the non-subsampled contourlet transform or the fast non-subsampled contourlet transform on the enhanced second image to obtain the second high-frequency component and the second low-frequency component.
As an optional implementation of the embodiments of the present disclosure, the detection module is further configured to acquire a preview image collected through the camera and calculate a gray value of the preview image; if the gray value of the preview image is greater than a preset gray threshold, to determine that the electronic device is in a backlit shooting scene; and if the gray value of the preview image is less than or equal to the preset gray threshold, to determine that the electronic device is not in a backlit shooting scene.
As an optional implementation of the embodiments of the present disclosure, the enhancement module is further configured to perform first enhancement processing on the first image and second enhancement processing on the second image, wherein the first enhancement processing includes a multi-scale Retinex enhancement algorithm, and the second enhancement processing includes a homomorphic filtering algorithm.
As an optional implementation of the embodiments of the present disclosure, the focus acquisition module is further configured to divide the preview image collected by the camera into a plurality of image regions according to a preset region size; to calculate a gray value corresponding to each of the image regions; and to compare the gray value corresponding to each of the image regions with a region gray threshold, so as to divide the preview image into a backlit area and a non-backlit area, wherein the backlit area refers to an area of the preview image whose gray value is greater than the region gray threshold, and the non-backlit area refers to an area of the preview image whose gray value is less than or equal to the region gray threshold.
An electronic device, including a memory and one or more processors, the memory being configured to store computer-readable instructions; when the computer-readable instructions are executed by the one or more processors, the one or more processors are caused to perform the steps of any one of the image processing methods described above.
One or more non-volatile storage media storing computer-readable instructions, wherein, when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the steps of any one of the image processing methods described above.
Additional features and advantages of the disclosure will be set forth in the description that follows and, in part, will be apparent from the description or may be learned by practice of the disclosure. The objectives and other advantages of the disclosure are realized and attained by the structure particularly pointed out in the description, the claims, and the accompanying drawings. The details of one or more embodiments of the disclosure are set forth in the accompanying drawings and the description below.
In order to make the above objects, features, and advantages of the present disclosure more comprehensible, optional embodiments are described in detail below in conjunction with the accompanying drawings.
Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure.
In order to illustrate the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, those of ordinary skill in the art can also obtain other drawings based on these drawings without creative effort.
FIG. 1 is an application scene diagram of an image processing method provided by one or more embodiments of the present disclosure;
FIG. 2 is a flowchart of the steps of an image processing method provided by one or more embodiments of the present disclosure;
FIG. 3 is a flowchart of the steps of an image processing method provided by one or more embodiments of the present disclosure;
FIG. 4 is a flowchart of the steps of fusing the enhanced first image with the enhanced second image provided by one or more embodiments of the present disclosure;
FIG. 5 is a structural block diagram of an image processing apparatus in one or more embodiments of the present disclosure;
FIG. 6 is a structural block diagram of an electronic device in one or more embodiments of the present disclosure.
Detailed Description
In order to understand the above objects, features, and advantages of the present disclosure more clearly, the solutions of the present disclosure are further described below. It should be noted that, where no conflict arises, the embodiments of the present disclosure and the features in the embodiments may be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure may also be implemented in ways other than those described here; obviously, the embodiments in the description are only some, rather than all, of the embodiments of the present disclosure.
The terms "first" and "second" and the like in the specification and claims of the present disclosure are used to distinguish different objects, rather than to describe a specific order of objects. For example, the first image and the second image are used to distinguish different images, not to describe a specific sequence of images.
In the embodiments of the present disclosure, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present disclosure should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present related concepts in a concrete manner. In addition, in the description of the embodiments of the present disclosure, unless otherwise specified, "a plurality of" means two or more.
In the related art, there are two solutions to the problem of underexposure or overexposure of images captured in backlit shooting scenes. One solution performs exposure compensation with a flash, using the flash to compensate the brightness of the subject; since the flash of an electronic device is not bright enough, brightness compensation is only effective for subjects shot at close range, and the farther the subject is from the camera, the worse the effect of this solution. The other solution increases the dynamic range of the image by means of HDR (High-Dynamic Range) imaging to reduce overly bright or overly dark areas in the images collected by the camera; since this solution only selects overexposed, underexposed, and normally exposed images for fusion, the dynamic range of the resulting image is limited and cannot fully cope with various complex backlight environments, so in some complex backlit shooting scenes the effect of this solution is not ideal.
When an electronic device captures images in a backlit shooting scene, the camera of the electronic device is subject to direct or indirectly reflected light from the light source, and the images collected by the camera differ with the focus position of the camera. When the focus is in an area interfered with by strong light, that is, when focusing on the backlit area, the image collected by the camera contains a high-brightness region that masks part of the information in the image; when the focus is in an area not interfered with by strong light, that is, when focusing on the non-backlit area, the overall brightness of the collected image is dark, which seriously degrades the captured image.
The image processing method provided by the present disclosure can be applied to the application environment shown in FIG. 1, where the image processing method is applied to an image processing system. The image processing system may include an electronic device 10, where the electronic device 10 may include, but is not limited to, a mobile phone, a tablet computer, a wearable device, a notebook computer, a PC (Personal Computer), a video camera, and the like. In addition, the operating system of the electronic device 10 may include, but is not limited to, the Android operating system, the iOS operating system, the Symbian operating system, the BlackBerry operating system, the Windows Phone 8 operating system, and so on, which is not limited by the embodiments of the present disclosure.
After detecting a shooting operation, the electronic device 10 may collect images through a camera, and the camera may be a camera of the electronic device 10. Optionally, the electronic device 10 may also collect images through a camera of another electronic device that has established a communication connection with the electronic device 10, which is not limited here. The electronic device 10 may analyze the image collected by the camera and determine that it is currently in a backlit shooting scene. As shown in FIG. 1, the backlit shooting scene may be a scene in which the camera faces a light source; in this backlit shooting scene, when the electronic device 10 shoots with the camera, the collected image will have some overexposed areas and/or some underexposed areas.
As an optional implementation, the electronic device 10 may detect the shooting scene in which it is currently located before detecting a shooting operation. The method of detecting the current shooting scene may include, but is not limited to, detection through a light sensor, detection of images collected by the camera after the camera is turned on and before a shooting operation is detected, and so on, which is not limited here. The electronic device 10 may detect the current light intensity through the light sensor and determine the current shooting scene according to the light intensity. For example, if the light intensity is greater than an intensity threshold, the electronic device 10 determines that the current shooting scene is a backlit shooting scene; if the light intensity is less than or equal to the intensity threshold, the electronic device 10 determines that it is not currently in a backlit shooting scene.
When the electronic device 10 detects that it is currently in a backlit shooting scene, it may automatically switch to a processing mode for processing images collected in the backlit shooting scene, or it may output a prompt message prompting the user to perform a switching operation corresponding to this processing mode; after detecting the switching operation, the electronic device 10 switches to the processing mode for processing images collected in the backlit shooting scene, where the switching operation may include, but is not limited to, a tap operation, a voice operation, a gesture operation, or the like. The specific content of the image processing method disclosed in the embodiments of the present disclosure is described in the following embodiments and is not explained further here.
Referring to FIG. 2, FIG. 2 is a flowchart of the steps of an image processing method provided by one or more embodiments of the present disclosure. The image processing method can be applied to the above electronic device and may include the following steps:
Step 210: detecting the shooting scene in which the electronic device is located.
In one embodiment, after collecting a preview image through the camera, the electronic device may detect the shooting scene in which the electronic device is located according to the collected preview image, that is, detect whether the electronic device is in a backlit shooting scene. The detection method may include detecting the gray values of image pixels, detecting the RGB values of image pixels, and so on, which is not limited here.
In one embodiment, the electronic device may acquire the preview image collected through the camera and calculate a gray value of the preview image. If the gray value is greater than a preset gray threshold, it is determined that the electronic device is in a backlit shooting scene, and the electronic device may continue to perform steps 220 to 240; if the gray value is less than or equal to the preset gray threshold, it is determined that the electronic device is not in a backlit shooting scene, and the electronic device may perform image enhancement processing and/or filtering processing on the collected image. The image enhancement processing may include contrast stretching, Gamma correction, histogram equalization, histogram specification, HSV-space-based color image enhancement, and the like, and the filtering processing may include mean filtering, median filtering, Gaussian filtering, bilateral filtering, and the like, which are not limited by the embodiments of the present disclosure.
As an optional implementation, the electronic device may calculate the average gray value of all pixels in the image collected by the camera and compare this average gray value with the preset gray threshold. If the average gray value is greater than the preset gray threshold, it is determined that the electronic device is in a backlit shooting scene; if the average gray value is less than or equal to the preset gray threshold, it is determined that the electronic device is not in a backlit shooting scene.
As an optional implementation, the electronic device may calculate the gray value of each pixel in the image collected by the camera and compare the gray value of each pixel with the preset gray threshold. The number of pixels whose gray value is greater than the preset gray threshold is counted; if the number of such pixels is greater than a preset number threshold, it is determined that the electronic device is in a backlit shooting scene, and if the number of such pixels is less than or equal to the preset number threshold, it is determined that the electronic device is not in a backlit shooting scene.
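The pixel-count variant above can be sketched as follows; both threshold values are illustrative assumptions, since the disclosure leaves the preset values to the implementation:

```python
import numpy as np

def is_backlit_by_pixel_count(gray: np.ndarray,
                              gray_threshold: float = 150,
                              count_threshold: int = 1000) -> bool:
    """Count pixels brighter than the preset gray threshold and compare
    that count with a preset number threshold."""
    bright_pixels = int((gray > gray_threshold).sum())
    return bright_pixels > count_threshold
```

Compared with the average-gray check, this rule is sensitive to a concentrated bright region (such as a light source in frame) even when the rest of the image is dark enough to pull the mean below the threshold.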
Step 220: if it is detected that the electronic device is in a backlit shooting scene, capture a first image focused on the backlit area through the camera, and capture a second image focused on the non-backlit area through the camera.
If the electronic device detects that it is in a backlit shooting scene, it captures through the camera a first image focused on the backlit area and a second image focused on the non-backlit area. There is no required order between capturing the first image and the second image; they may also be captured simultaneously. The backlit area refers to a region where lighting produces a backlight effect; the non-backlit area refers to a region where the lighting produces no backlight effect.
In one embodiment, the electronic device may divide the preview image into multiple image regions according to a preset region size (for example, each region containing 64×64 pixels), calculate the grayscale value of each region, and compare each region's grayscale value with a region grayscale threshold, thereby dividing the preview image into a backlit area and a non-backlit area. The backlit area consists of the regions whose grayscale value is greater than the region grayscale threshold, i.e., regions where lighting produces a backlight effect; the non-backlit area consists of the regions whose grayscale value is less than or equal to the region grayscale threshold, i.e., regions without a backlight effect. The electronic device then captures through the camera the first image focused on the backlit area and the second image focused on the non-backlit area.
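The region classification above can be sketched as follows. For brevity the sketch uses 2×2 tiles rather than the 64×64 pixels given as an example, and the region grayscale threshold `128` is a hypothetical placeholder.

```python
def classify_regions(gray, block=2, region_threshold=128):
    """Split the image into block x block tiles; a tile whose mean gray
    value exceeds the threshold is labeled backlit (True)."""
    h, w = len(gray), len(gray[0])
    labels = []
    for y in range(0, h, block):
        row = []
        for x in range(0, w, block):
            tile = [gray[j][i]
                    for j in range(y, min(y + block, h))
                    for i in range(x, min(x + block, w))]
            row.append(sum(tile) / len(tile) > region_threshold)
        labels.append(row)
    return labels
```

The resulting label map marks which tiles belong to the backlit area (focus target of the first image) and which to the non-backlit area (focus target of the second image).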
As an optional implementation, when the electronic device detects that it is in a backlit shooting scene, it may automatically capture through the camera the first image focused on the backlit area and the second image focused on the non-backlit area, so that the user does not need to search for a focus area. This simplifies the user's operation and makes the image processing method more convenient.
As an optional implementation, when the electronic device detects that it is in a backlit shooting scene, it may issue a prompt asking the user to manually select focus on the backlit area and the non-backlit area. When a focus selection operation for the backlit area is detected, the camera focuses on the backlit area and captures the first image according to that operation; when a focus selection operation for the non-backlit area is detected, the camera focuses on the non-backlit area and captures the second image according to that operation. The areas to be focused on can thus be chosen according to the user's needs before the first and second images are captured, improving the focusing accuracy of the captured images.
Step 230: perform enhancement processing on the first image and the second image respectively.
Since the image quality of the first image and the second image captured while the camera focuses on different areas may be insufficient, the electronic device may perform enhancement processing on the first image and the second image respectively to improve their image quality.
In one embodiment, a first enhancement process may be performed on the first image and a second enhancement process on the second image, where the first enhancement process includes a Multi-Scale Retinex (MSR) algorithm and the second enhancement process includes a homomorphic filtering algorithm.
The first image is captured while the camera focuses on the backlit area, which affects the image's sharpness and contrast. The electronic device may therefore use the Multi-Scale Retinex algorithm to enhance the contrast and sharpness of the first image: the algorithm determines multiple scales for the first image, applies Gaussian blur to the first image at each scale to obtain multiple blurred images, and derives the enhanced first image from the first image and the blurred images, thereby improving its image quality.
The second image is captured while the camera focuses on the non-backlit area, which tends to introduce more image noise. The electronic device may therefore use a homomorphic filtering algorithm to remove noise from the second image: the algorithm transforms the illumination-reflectance model corresponding to the second image, passes the transformed result through a frequency-domain filter, and inverse-transforms the filtered result to obtain the enhanced second image, thereby improving its image quality.
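The Multi-Scale Retinex step can be sketched as below. This is a minimal illustration of the MSR idea, not a production implementation: a box blur stands in for the Gaussian surround function to keep the sketch short, and the scale radii `(1, 2, 4)` are hypothetical placeholders.

```python
import math

def box_blur(img, radius):
    """Neighborhood mean, standing in for the Gaussian surround of MSR."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def msr(img, radii=(1, 2, 4)):
    """Multi-Scale Retinex: average log(I) - log(blur_k(I)) over the scales,
    which boosts local contrast relative to the blurred surround."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in radii:
        blurred = box_blur(img, r)
        for y in range(h):
            for x in range(w):
                out[y][x] += (math.log(img[y][x] + 1.0)
                              - math.log(blurred[y][x] + 1.0)) / len(radii)
    return out
```

On a uniform image the blurred surround equals the image itself, so the MSR output is zero everywhere; contrast appears only where a pixel differs from its surround.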
Step 240: fuse the enhanced first image and the enhanced second image to obtain a third image.
The electronic device may fuse the enhanced first image and the enhanced second image to obtain a third image. The third image may be the image finally presented on the display screen for the user to view, or an image saved in memory.
As an optional implementation, the electronic device may divide the first image and the second image into image regions by pixel count (for example, each region containing 64×64 pixels), with a one-to-one correspondence between the regions of the first image and those of the second image. For each pair of corresponding regions, the electronic device selects the region with the higher image quality (for example, higher sharpness, higher contrast, or less noise) and uses it as the region at the corresponding position in the third image. The same operation is performed on the remaining regions, and the third image is obtained once all regions have been fused.
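The block-wise fusion above can be sketched as follows. The disclosure does not specify how "higher image quality" is measured, so this sketch uses local variance as a crude sharpness proxy, which is an assumption; a real implementation could score blocks by contrast or noise instead. Tiny 2×2 tiles are used in place of 64×64 blocks.

```python
def _variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def fuse_by_blocks(a, b, block=2):
    """Per tile, copy into the output whichever source image has the
    higher variance (used here as a stand-in quality score)."""
    h, w = len(a), len(a[0])
    out = [[0] * w for _ in range(h)]
    for y in range(0, h, block):
        for x in range(0, w, block):
            ys = range(y, min(y + block, h))
            xs = range(x, min(x + block, w))
            tile_a = [a[j][i] for j in ys for i in xs]
            tile_b = [b[j][i] for j in ys for i in xs]
            src = a if _variance(tile_a) >= _variance(tile_b) else b
            for j in ys:
                for i in xs:
                    out[j][i] = src[j][i]
    return out
```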
In the embodiments of the present disclosure, the electronic device can detect its current environment. When it detects that it is in a backlit shooting scene, it captures through the camera a first image focused on the backlit area and a second image focused on the non-backlit area. In a backlit scene, underexposure or overexposure causes both images to lose part of their effective information; by enhancing the first and second images and then fusing them, the effective information in the two images complements each other, and the enhancement processing improves their sharpness and contrast, thereby improving the quality of images shot in backlit scenes.
Referring to FIG. 3, which is a flowchart of the steps of an image processing method provided by one or more embodiments of the present disclosure, the method may also be applied to the above electronic device and may include the following steps:
Step 310: detect the shooting scene in which the electronic device is located.
Step 320: if it is detected that the electronic device is in a backlit shooting scene, capture a first image focused on the backlit area through the camera, and capture a second image focused on the non-backlit area through the camera.
Step 330: perform enhancement processing on the first image and the second image respectively.
Steps 310 to 330 are the same as steps 210 to 230 and are not repeated here.
Step 340: perform registration processing on the enhanced first image and the enhanced second image, so that the spatial position information of the first image is consistent with that of the second image.
Because the electronic device may move while capturing the first image and the second image, the spatial position information of the two images may differ. The electronic device may therefore register the enhanced first image with the enhanced second image, for example by matching key points in the two enhanced images or by matching image features in them, so that the spatial position information of the enhanced first image and that of the enhanced second image become consistent and the subsequent image fusion step can proceed.
As an optional implementation, the electronic device may perform affine-transformation registration on the enhanced first image and the enhanced second image: it first obtains an estimated affine transformation matrix from feature matches between the two enhanced images, and then registers them according to that matrix, making their spatial position information consistent. For example, the electronic device may match features between the enhanced first image and the enhanced second image to obtain an estimated affine transformation matrix that maps the enhanced first image onto the enhanced second image, and then transform the enhanced first image according to that matrix, obtaining a first image registered to the enhanced second image.
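The core of the registration step, estimating an affine matrix from feature matches, can be sketched as below. A real pipeline would detect and match many features and fit robustly; here, as an assumption-laden simplification, three already-matched keypoint pairs are given and the 2×3 matrix is solved exactly from the linear system x' = a·x + b·y + c, y' = d·x + e·y + f.

```python
def _solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mv - f * cv for mv, cv in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def estimate_affine(src, dst):
    """Estimate the 2x3 affine matrix mapping src points onto dst points
    from three matched keypoint pairs."""
    A = [[x, y, 1.0] for x, y in src]
    row1 = _solve3(A, [x for x, _ in dst])  # a, b, c
    row2 = _solve3(A, [y for _, y in dst])  # d, e, f
    return [row1, row2]
```

Warping the enhanced first image with this matrix would align its spatial position information with the enhanced second image.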
Step 350: fuse the registered first image and the registered second image to obtain a third image.
The electronic device may fuse the registered first image and the registered second image to obtain a third image. The registered first and second images are obtained by applying registration on the basis of the enhanced first and second images, and the fusion in step 350 may be performed in the same way as the fusion of the enhanced first and second images in step 240, which is not repeated here.
As an optional implementation, since there is a one-to-one correspondence between the pixels of the registered first image and the pixels of the registered second image, the electronic device may compute a weighted sum of the pixel values (for example grayscale or RGB (red, green, blue) values) of each pair of corresponding pixels to obtain a target pixel value: the two corresponding pixel values are multiplied by different weight coefficients and then added, and the resulting target pixel value serves as the pixel value at the corresponding position in the third image. Performing this fusion for all corresponding pixels of the two registered images yields the third image.
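The per-pixel weighted fusion above can be sketched in a few lines; the weight coefficients `0.5`/`0.5` below are hypothetical defaults, since the disclosure only requires that the two pixel values be combined with (possibly different) weights.

```python
def weighted_fuse(a, b, wa=0.5, wb=0.5):
    """Per-pixel weighted sum of two registered, same-sized images."""
    return [[wa * pa + wb * pb for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]
```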
In the embodiments of the present disclosure, the electronic device may also register the first image with the second image to facilitate their fusion and improve the accuracy of image fusion.
Referring to FIG. 4, which is a flowchart of the steps of fusing the enhanced first image with the enhanced second image provided by one or more embodiments of the present disclosure, the fusion may include the following steps:
Step 410: decompose the enhanced first image into a first high-frequency component and a first low-frequency component, and decompose the enhanced second image into a second high-frequency component and a second low-frequency component, where the first high-frequency component corresponds to a first high-frequency region in the enhanced first image, the first low-frequency component corresponds to a first low-frequency region in the enhanced first image, the second high-frequency component corresponds to a second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to a second low-frequency region in the enhanced second image.
The electronic device may decompose the enhanced first image into a first high-frequency component and a first low-frequency component, and decompose the enhanced second image into a second high-frequency component and a second low-frequency component.
Specifically, the electronic device may calculate the grayscale value of each pixel in the enhanced first image to obtain the rate at which grayscale values change between each pixel and its neighbors, and decompose the enhanced first image into the first high-frequency component and the first low-frequency component according to that rate of change. The first high-frequency component includes multiple images of the same size as the enhanced first image; the first low-frequency component includes one image of that size. The first high-frequency component corresponds to the first high-frequency regions of the enhanced first image, i.e., the multiple images in the first high-frequency component correspond to different first high-frequency regions, which are the regions where the grayscale value changes faster than a first change-rate threshold. For example, the first high-frequency component may correspond to the edge regions of the enhanced first image, because grayscale values usually change faster in edge regions than in central regions.
The first low-frequency component corresponds to the first low-frequency region of the enhanced first image, i.e., the image in the first low-frequency component corresponds to the region where the grayscale value changes at a rate less than or equal to the first change-rate threshold. For example, the first low-frequency component may correspond to the central region of the enhanced first image, because grayscale values usually change more slowly there than in edge regions.
Similarly, the electronic device may calculate the grayscale value of each pixel in the enhanced second image to obtain the rate of grayscale change between each pixel and its neighbors, and decompose the enhanced second image into the second high-frequency component and the second low-frequency component according to that rate. The second high-frequency component includes multiple images of the same size as the enhanced second image; the second low-frequency component includes one image of that size. The second high-frequency component corresponds to the second high-frequency regions, i.e., the regions where the grayscale value changes faster than a second change-rate threshold, such as the edge regions of the enhanced second image.
The second low-frequency component corresponds to the second low-frequency region, i.e., the region where the grayscale value changes at a rate less than or equal to the second change-rate threshold, such as the central region of the enhanced second image. The first change-rate threshold and the second change-rate threshold may be equal or unequal in value.
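The high/low split described above can be illustrated with a much simpler single-level decomposition than the NSCT used later: as an assumption for illustration only, a neighborhood mean serves as the low-frequency part and the residual as the high-frequency part, so that the two parts sum back to the original image.

```python
def local_mean(img, radius=1):
    """Neighborhood mean: a simple stand-in for a lowpass filter."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def decompose(img, radius=1):
    """Split an image into a low-frequency part (slowly varying local mean)
    and a high-frequency residual; low + high reconstructs the original."""
    low = local_mean(img, radius)
    high = [[img[y][x] - low[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
    return low, high
```

The residual is large exactly where grayscale values change quickly (edges), matching the intuition behind the high-frequency regions above.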
As an optional implementation, the electronic device may apply a forward Nonsubsampled Contourlet Transform (NSCT) or a forward Fast Nonsubsampled Contourlet Transform (FNSCT) to the enhanced first image to obtain the first high-frequency component and the first low-frequency component, together with a first high-frequency transform coefficient corresponding to the first high-frequency component and a first low-frequency transform coefficient corresponding to the first low-frequency component. The electronic device converts the enhanced first image into a first spectrogram, which describes the rate of grayscale change between each pixel and its neighbors, and performs multiple Nonsubsampled Pyramid (NSP) decompositions on the enhanced first image according to the first spectrogram, obtaining one or more first bandpass subband images and one first lowpass subband image. Each first bandpass subband image has the same size as the enhanced first image and corresponds to a different first high-frequency region.
The first lowpass subband image also has the same size as the enhanced first image, corresponds to the first low-frequency region, and may be taken as the first low-frequency component. The electronic device then applies a Nonsubsampled Directional Filter Bank (NSDFB) decomposition to each first bandpass subband image, decomposing it into multiple first multi-directional subband images of the same size as the enhanced first image; the first multi-directional subband images together serve as the first high-frequency component.
In addition, when the NSP decomposition of the enhanced first image produces the first lowpass subband image, the first low-frequency transform coefficient corresponding to that image is obtained. When the NSDFB decomposition of the first bandpass subband images produces the first multi-directional subband images, the first high-frequency transform coefficient corresponding to them is obtained; since multiple first multi-directional subband images can be obtained, the first high-frequency transform coefficient may include multiple values, each corresponding to one of those images.
The electronic device may likewise apply a forward NSCT or forward FNSCT to the enhanced second image to obtain the second high-frequency component and the second low-frequency component, together with a second high-frequency transform coefficient corresponding to the second high-frequency component and a second low-frequency transform coefficient corresponding to the second low-frequency component. The electronic device converts the enhanced second image into a second spectrogram describing the rate of grayscale change between each pixel and its neighbors, and performs multiple NSP decompositions on the enhanced second image according to the second spectrogram, obtaining one or more second bandpass subband images and one second lowpass subband image. Each second bandpass subband image has the same size as the enhanced second image and corresponds to a different second high-frequency region.
The second lowpass subband image also has the same size as the enhanced second image, corresponds to the second low-frequency region, and may be taken as the second low-frequency component. The electronic device then applies an NSDFB decomposition to each second bandpass subband image, decomposing it into multiple second multi-directional subband images of the same size as the enhanced second image; these together serve as the second high-frequency component.
In addition, when the NSP decomposition of the enhanced second image produces the second lowpass subband image, the second low-frequency transform coefficient corresponding to that image is obtained. When the NSDFB decomposition of the second bandpass subband images produces the second multi-directional subband images, the second high-frequency transform coefficient corresponding to them is obtained; since multiple second multi-directional subband images can be obtained, the second high-frequency transform coefficient may include multiple values, each corresponding to one of those images.
Step 420: fuse the first high-frequency component and the second high-frequency component into a third high-frequency component, and fuse the first low-frequency component and the second low-frequency component into a third low-frequency component.
In one embodiment, fusing the first high-frequency component and the second high-frequency component into the third high-frequency component may include: calculating a first modulus corresponding to the first high-frequency component and a second modulus corresponding to the second high-frequency component; comparing the first modulus with the second modulus and determining the larger of the two; if the larger modulus is the first modulus, taking the first high-frequency component as the third high-frequency component; and if the larger modulus is the second modulus, taking the second high-frequency component as the third high-frequency component.
Since the multiple images in the first high-frequency component correspond to different first high-frequency regions of the enhanced first image, the first high-frequency component corresponds to multiple first moduli, one for each of its images; likewise, since the multiple images in the second high-frequency component correspond to different second high-frequency regions of the enhanced second image, the second high-frequency component corresponds to multiple second moduli, one for each of its images. Because the first high-frequency regions of the first image correspond one-to-one with the second high-frequency regions of the second image, the first moduli correspond one-to-one with the second moduli. The electronic device compares each corresponding pair of first and second moduli and determines the larger of the two: if the larger is the first modulus, the image of the first high-frequency component corresponding to that modulus is fused into the third high-frequency component; if the larger is the second modulus, the image of the second high-frequency component corresponding to that modulus is fused into the third high-frequency component. After all first moduli have been compared with their second moduli, the third high-frequency component is obtained.
As an optional implementation, after applying the forward NSCT or forward FNSCT to the enhanced first image and the enhanced second image, the electronic device obtains the first high-frequency transform coefficient and the second high-frequency transform coefficient. Because the first high-frequency transform coefficient corresponds to the first multi-directional subband images in the first high-frequency component, and the second high-frequency transform coefficient corresponds to the second multi-directional subband images in the second high-frequency component, fusing the two coefficients is equivalent to fusing the first and second high-frequency components, and the resulting third high-frequency transform coefficient corresponds to the third high-frequency component.
The electronic device takes the absolute value of each of the values in the first high-frequency transform coefficient to obtain multiple first moduli, and the absolute value of each of the values in the second high-frequency transform coefficient to obtain multiple second moduli; the first and second moduli correspond one-to-one. For each corresponding pair, the larger modulus is determined: if it is the first modulus, the value of the first high-frequency transform coefficient corresponding to that modulus becomes the value of the third high-frequency transform coefficient; if it is the second modulus, the value of the second high-frequency transform coefficient corresponding to that modulus becomes the value of the third high-frequency transform coefficient. Comparing all first moduli with their second moduli yields the third high-frequency transform coefficient, and hence the corresponding third high-frequency component.
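The max-modulus rule for the high-frequency coefficients reduces to a coefficient-wise selection, sketched here over plain 2-D coefficient maps:

```python
def fuse_high(c1, c2):
    """Keep, per position, whichever high-frequency coefficient has the
    larger absolute value (modulus)."""
    return [[a if abs(a) >= abs(b) else b for a, b in zip(r1, r2)]
            for r1, r2 in zip(c1, c2)]
```

Ties are resolved in favor of the first image's coefficient here; the disclosure does not specify tie-breaking, so that choice is an assumption.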
在一个实施例中,步骤将第一低频分量和第二低频分量融合为第三低频分量,可包括:根据第一低频分量和第二低频分量确定平均低频分量,将平均低频分量作为第三低频分量。In one embodiment, the step of fusing the first low-frequency component and the second low-frequency component into a third low-frequency component may include: determining an average low-frequency component according to the first low-frequency component and the second low-frequency component, and using the average low-frequency component as the third low-frequency component.
其中,第一低频分量的图像区域和第二低频分量的图像区域之间存在对应关系,电子设备可以计算第一低频分量与第二低频分量中的平均低频分量,即计算第一低频分量对应的数据与第二低频分量对应的数据的均值,从而得到平均低频分量,并将平均低频分量作为第三低频分量。Where there is a corresponding relationship between the image area of the first low-frequency component and the image area of the second low-frequency component, the electronic device may calculate the average low-frequency component of the first low-frequency component and the second low-frequency component, that is, calculate an average value of data corresponding to the first low-frequency component and data corresponding to the second low-frequency component, thereby obtaining the average low-frequency component, and using the average low-frequency component as the third low-frequency component.
作为一种可选的实施方式，电子设备对增强处理后的第一图像和增强处理后的第二图像进行NSCT正变换或FNSCT正变换后可以得到第一低频变换系数和第二低频变换系数，因为第一低频变换系数可对应第一低频分量中的第一低通子带图像，及第二低频变换系数可对应第二低频分量中的第二低通子带图像，所以对第一低频变换系数和第二低频变换系数进行融合可以相当于将第一低频分量和第二低频分量进行融合，得到的第三低频变换系数对应第三低频分量。电子设备可以对第一图像和第二图像NSCT正变换或FNSCT正变换后得到的代表第一低频分量的第一低频变换系数和代表第二低频分量的第二低频变换系数进行融合，以得到第三低频变换系数，将第三低频变换系数代表第三低频分量。电子设备计算第一低频变换系数与第二低频变换系数的均值，将该均值作为第三低频变换系数，从而确定第三低频变换系数对应的第三低频分量。As an optional implementation manner, the electronic device may perform NSCT forward transformation or FNSCT forward transformation on the enhanced first image and the enhanced second image to obtain a first low-frequency transformation coefficient and a second low-frequency transformation coefficient. Because the first low-frequency transformation coefficient may correspond to the first low-pass sub-band image in the first low-frequency component, and the second low-frequency transformation coefficient may correspond to the second low-pass sub-band image in the second low-frequency component, fusing the first low-frequency transformation coefficient and the second low-frequency transformation coefficient may be equivalent to fusing the first low-frequency component and the second low-frequency component, and the obtained third low-frequency transformation coefficient corresponds to the third low-frequency component. The electronic device may fuse the first low-frequency transformation coefficient representing the first low-frequency component and the second low-frequency transformation coefficient representing the second low-frequency component, both obtained after the NSCT forward transformation or FNSCT forward transformation of the first image and the second image, to obtain a third low-frequency transformation coefficient, and use the third low-frequency transformation coefficient to represent the third low-frequency component.
The electronic device calculates an average value of the first low-frequency transformation coefficient and the second low-frequency transformation coefficient, and uses the average value as a third low-frequency transformation coefficient, thereby determining a third low-frequency component corresponding to the third low-frequency transformation coefficient.
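The averaging rule above is simple enough to sketch in a few lines; the function name is hypothetical, and NumPy arrays stand in for the low-frequency coefficient sub-bands:

```python
import numpy as np

def fuse_low_frequency(coeffs_a, coeffs_b):
    """Fuse two low-frequency coefficient arrays into the third
    low-frequency coefficient by taking their element-wise mean,
    as described above."""
    a = np.asarray(coeffs_a, dtype=float)
    b = np.asarray(coeffs_b, dtype=float)
    return (a + b) / 2.0
```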
步骤430,将第三高频分量和第三低频分量融合为第三图像。 Step 430, fusing the third high-frequency component and the third low-frequency component into a third image.
电子设备可以将由第一高频分量和第二高频分量融合得到的第三高频分量和由第一低频分量和第二低频分量融合得到的第三低频分量进行融合,得到第三图像。作为一种可选的实施方式,可以分别对第三高频分量和第三低频分量进行NSCT逆变换,得到的结果为第三图像。The electronic device may fuse a third high-frequency component obtained by fusing the first high-frequency component and the second high-frequency component with a third low-frequency component obtained by fusing the first low-frequency component and the second low-frequency component to obtain a third image. As an optional implementation manner, NSCT inverse transformation may be performed on the third high-frequency component and the third low-frequency component respectively, and the obtained result is a third image.
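The whole decompose–fuse–recombine step can be sketched end to end. The sketch below is illustrative only: it substitutes a simple box-blur additive decomposition for the NSCT/FNSCT transform and its inverse described in the patent (so `box_blur`, the kernel size, and the function names are assumptions), but it applies the same fusion rules, max modulus for the high-frequency part and averaging for the low-frequency part, and then recombines the fused components into the third image:

```python
import numpy as np

def box_blur(img, k=3):
    """Box filter used here as a stand-in low-pass filter; the patent
    itself uses NSCT or FNSCT, not a box blur."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_images(img_a, img_b):
    """Decompose both images into low/high components, fuse the high
    parts by max modulus and the low parts by averaging, then invert
    the (additive) decomposition to obtain the fused image."""
    img_a = np.asarray(img_a, dtype=float)
    img_b = np.asarray(img_b, dtype=float)
    low_a, low_b = box_blur(img_a), box_blur(img_b)
    high_a, high_b = img_a - low_a, img_b - low_b
    high_fused = np.where(np.abs(high_a) > np.abs(high_b), high_a, high_b)
    low_fused = (low_a + low_b) / 2.0
    return low_fused + high_fused  # inverse of the additive split
```

In the patent's actual scheme, the recombination step is the NSCT inverse transform applied to the fused coefficients rather than a simple sum.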
在本公开实施例中，电子设备可以将增强处理后的第一图像分解为第一高频分量和第一低频分量，以及将增强处理后的第二图像分解为第二高频分量和第二低频分量，再将第一高频分量与第二高频分量进行融合得到第三高频分量，以及将第一低频分量与第二低频分量进行融合得到第三低频分量，最后将第三高频分量与第三低频分量进行融合得到第三图像，能够将增强处理后的第一图像与增强处理后的第二图像中的有效信息进行互补，从而解决逆光拍摄场景中曝光不足或过曝的问题。In an embodiment of the present disclosure, the electronic device may decompose the enhanced first image into the first high-frequency component and the first low-frequency component, and decompose the enhanced second image into the second high-frequency component and the second low-frequency component, then fuse the first high-frequency component and the second high-frequency component to obtain the third high-frequency component, fuse the first low-frequency component and the second low-frequency component to obtain the third low-frequency component, and finally fuse the third high-frequency component and the third low-frequency component to obtain the third image. In this way, the effective information in the enhanced first image and the enhanced second image can complement each other, thereby solving the problem of underexposure or overexposure in backlit shooting scenes.
应该理解的是,虽然图2-4的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图2-4中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。It should be understood that although the various steps in the flow charts in FIGS. 2-4 are displayed sequentially as indicated by the arrows, these steps are not necessarily executed sequentially in the order indicated by the arrows. Unless otherwise specified herein, there is no strict order restriction on the execution of these steps, and these steps can be executed in other orders. Moreover, at least some of the steps in FIGS. 2-4 may include a plurality of sub-steps or multiple stages. These sub-steps or stages are not necessarily executed at the same time, but may be executed at different times. The execution order of these sub-steps or stages is not necessarily performed sequentially, but may be performed in turn or alternately with at least a part of other steps or sub-steps or stages of other steps.
基于同一发明构思,作为对上述方法的实现,本公开实施例还提供了一种图像处理装置,该装置实施例与前述方法实施例对应,为便于阅读,本装置实施例不再对前述方法实施例中的细节内容进行逐一赘述,但应当明确,本公开实施例中的装置能够对应实现前述方法实施例中的全部内容。Based on the same inventive concept, as an implementation of the above-mentioned method, the embodiment of the present disclosure also provides an image processing device. The embodiment of the device corresponds to the embodiment of the method described above. For the convenience of reading, this embodiment of the device does not repeat the details in the embodiment of the method described above one by one, but it should be clear that the device in the embodiment of the present disclosure can correspondingly implement all the content of the method embodiment described above.
图5为本公开实施例提供的图像处理装置的结构框图,如图5所示,本公开实施例提供的图像处理装置500包括:FIG. 5 is a structural block diagram of an image processing device provided in an embodiment of the present disclosure. As shown in FIG. 5 , the image processing device 500 provided in an embodiment of the present disclosure includes:
检测模块510,将检测模块510配置成检测电子设备所处的拍摄场景的模块。The detection module 510 is configured to detect the shooting scene where the electronic device is located.
对焦采集模块520,将对焦采集模块520配置成用于若检测到电子设备处于逆光拍摄场景,通过摄像头采集对焦于逆光区域的第一图像,以及通过摄像头采集对焦于非逆光区域的第二图像的模块。The focus collection module 520 is configured to configure the focus collection module 520 as a module for collecting the first image focused on the backlight area through the camera and the second image focused on the non-backlight area through the camera if it is detected that the electronic device is in a backlight shooting scene.
增强模块530,将增强模块530配置成分别对第一图像及第二图像进行增强处理的模块。The enhancement module 530, the enhancement module 530 is configured as a module for performing enhancement processing on the first image and the second image respectively.
融合模块540,将融合模块540配置成将增强处理后的第一图像和增强处理后的第二图像进行融合,得到第三图像的模块。The fusion module 540 is a module that configures the fusion module 540 to fuse the enhanced first image and the enhanced second image to obtain a third image.
作为本公开实施例一种可选的实施方式,图像处理装置500还包括:As an optional implementation manner of this embodiment of the present disclosure, the image processing apparatus 500 further includes:
对准模块,将对准模块配置成对增强处理后的第一图像和增强处理后的第二图像进行配准处理,从而使得增强处理后的第一图像的空间位置信息和增强处理后的第二图像的空间位置信息一致化的模块。The alignment module is configured to perform registration processing on the enhanced first image and the enhanced second image, so that the spatial position information of the enhanced first image is consistent with the spatial position information of the enhanced second image.
还将融合模块540配置成将配准处理后的第一图像和配准处理后的第二图像进行融合,得到第三图像的模块。The fusion module 540 is also configured to fuse the registered first image and the registered second image to obtain a third image.
作为本公开实施例一种可选的实施方式，还将融合模块540配置成将增强处理后的第一图像分解为第一高频分量和第一低频分量，将增强处理后的第二图像分解为第二高频分量和第二低频分量；其中，第一高频分量对应增强处理后的第一图像中的第一高频区域，第一低频分量对应增强处理后的第一图像中的第一低频区域；第二高频分量对应增强处理后的第二图像中的第二高频区域，第二低频分量对应增强处理后的第二图像中的第二低频区域；将第一高频分量和第二高频分量融合为第三高频分量，将第一低频分量和第二低频分量融合为第三低频分量；将所述第三高频分量和所述第三低频分量进行融合，以得到第三图像的模块。As an optional implementation of the embodiment of the present disclosure, the fusion module 540 is further configured to decompose the enhanced first image into a first high-frequency component and a first low-frequency component, and decompose the enhanced second image into a second high-frequency component and a second low-frequency component; wherein the first high-frequency component corresponds to the first high-frequency region in the enhanced first image, and the first low-frequency component corresponds to the first low-frequency region in the enhanced first image; the second high-frequency component corresponds to the second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to the second low-frequency region in the enhanced second image; fuse the first high-frequency component and the second high-frequency component into a third high-frequency component, and fuse the first low-frequency component and the second low-frequency component into a third low-frequency component; and fuse the third high-frequency component and the third low-frequency component to obtain a third image.
作为本公开实施例一种可选的实施方式，还将融合模块540配置成计算第一高频分量对应的第一模值和第二高频分量对应的第二模值；将第一模值与第二模值进行比较，并确定第一模值与第二模值中最大的模值，若最大的模值为第一模值，则将第一高频分量作为第三高频分量，若最大的模值为第二模值，则将第二高频分量作为第三高频分量的模块。As an optional implementation of the embodiment of the present disclosure, the fusion module 540 is further configured to calculate a first modulus value corresponding to the first high-frequency component and a second modulus value corresponding to the second high-frequency component; compare the first modulus value with the second modulus value, and determine the largest modulus value among the first modulus value and the second modulus value; if the largest modulus value is the first modulus value, use the first high-frequency component as the third high-frequency component; and if the largest modulus value is the second modulus value, use the second high-frequency component as the third high-frequency component.
作为本公开实施例一种可选的实施方式,还将融合模块540配置成根据第一低频分量和第二低频分量确定平均低频分量,将平均低频分量作为第三低频分量的模块。As an optional implementation manner of the embodiment of the present disclosure, the fusion module 540 is further configured to determine an average low frequency component according to the first low frequency component and the second low frequency component, and use the average low frequency component as a module of the third low frequency component.
作为本公开实施例一种可选的实施方式,还将所述融合模块配置成对增强处理后的第一图像进行非下采样轮廓波变换或快速非下采样轮廓波变换,得到第一高频分量和第一低频分量;对增强处理后的第二图像进行所述非下采样轮廓波变换或所述快速非下采样轮廓波变换,得到第二高频分量和第二低频分量的模块。As an optional implementation manner of the embodiment of the present disclosure, the fusion module is further configured to perform non-subsampling contourlet transformation or fast non-subsampling contourlet transformation on the enhanced first image to obtain the first high-frequency component and the first low-frequency component; perform the non-subsampling contourlet transformation or the fast non-subsampling contourlet transformation on the enhanced second image to obtain the second high-frequency component and the second low-frequency component.
作为本公开实施例一种可选的实施方式,还将检测模块510配置成获取通过摄像头采集的预览图像,计算预览图像的灰度值;若预览图像的灰度值大于预设灰度阈值,则确定电子设备处于逆光拍摄场景;若预览图像的灰度值小于或等于预设灰度阈值,则确定电子设备不处于逆光拍摄场景的模块。As an optional implementation of the embodiment of the present disclosure, the detection module 510 is also configured to obtain a preview image collected by the camera, and calculate the gray value of the preview image; if the gray value of the preview image is greater than the preset gray threshold, it is determined that the electronic device is in a backlight shooting scene; if the gray value of the preview image is less than or equal to the preset gray threshold, it is determined that the electronic device is not in a backlight shooting scene.
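The detection rule above can be sketched as follows, reading "the gray value of the preview image" as its mean gray level (one plausible interpretation; the threshold value 170 is an arbitrary placeholder, since the patent only specifies a "preset gray threshold"):

```python
import numpy as np

def is_backlit_scene(preview_gray, threshold=170.0):
    """Return True when the preview image's mean gray value exceeds the
    preset threshold, i.e. the device is judged to be in a backlit
    shooting scene. Threshold and mean-based reading are illustrative."""
    return float(np.mean(preview_gray)) > threshold
```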
作为本公开实施例一种可选的实施方式,还将增强模块530配置成对第一图像进行第一增强处理,对第二图像进行第二增强处理;其中,第一增强处理包括多尺度视网膜增强算法,第二增强处理包括同态滤波算法的模块。As an optional implementation of the embodiment of the present disclosure, the enhancement module 530 is further configured to perform a first enhancement process on the first image, and perform a second enhancement process on the second image; wherein, the first enhancement process includes a multi-scale retinal enhancement algorithm, and the second enhancement process includes a module of a homomorphic filtering algorithm.
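The multi-scale retinal (retinex) enhancement named above is not detailed in this section. As a rough, non-authoritative sketch, one common formulation averages log(image) − log(smoothed image) over several smoothing scales; here box blurs stand in for the Gaussian surrounds usually used, and the scale sizes are illustrative:

```python
import numpy as np

def box_blur(img, k):
    """Box filter standing in for the Gaussian surround of retinex."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def multi_scale_retinex(img, scales=(3, 5, 9)):
    """Average of log(image) - log(blurred image) over several scales,
    a common multi-scale retinex formulation (sketch, not the patent's
    exact algorithm)."""
    img = np.asarray(img, dtype=float) + 1.0  # avoid log(0)
    return sum(np.log(img) - np.log(box_blur(img, k)) for k in scales) / len(scales)
```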
作为本公开实施例一种可选的实施方式，还将对焦采集模块520配置成根据预设的区域尺寸将摄像头采集的预览图像划分为多个图像区域；计算各个图像区域对应的灰度值；将各个图像区域对应的灰度值与区域灰度阈值进行比较，将预览图像划分为逆光区域和非逆光区域的模块；其中，逆光区域指的是预览图像中灰度值大于区域灰度阈值的区域，非逆光区域指的是预览图像中灰度值小于或等于区域灰度阈值的区域。As an optional implementation of the embodiment of the present disclosure, the focus acquisition module 520 is further configured to divide the preview image captured by the camera into multiple image areas according to a preset area size; calculate the gray value corresponding to each image area; and compare the gray value corresponding to each image area with an area gray threshold to divide the preview image into a backlight area and a non-backlight area; wherein the backlight area refers to the area in the preview image whose gray value is greater than the area gray threshold, and the non-backlight area refers to the area in the preview image whose gray value is less than or equal to the area gray threshold.
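The region partition described above can be sketched as follows; the block size and region gray threshold are placeholders for the patent's "preset area size" and "area gray threshold", and per-region mean gray is one plausible reading of "the gray value corresponding to each image area":

```python
import numpy as np

def split_backlit_regions(preview_gray, block=2, region_threshold=170.0):
    """Divide the preview image into fixed-size blocks, compute each
    block's mean gray value, and mark blocks above the threshold as
    backlit (True) and the rest as non-backlit (False)."""
    preview_gray = np.asarray(preview_gray, dtype=float)
    h, w = preview_gray.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            region = preview_gray[i * block:(i + 1) * block,
                                  j * block:(j + 1) * block]
            mask[i, j] = region.mean() > region_threshold
    return mask
```

The resulting mask then tells the device which areas to treat as the backlit region (for the first image's focus) and which as the non-backlit region (for the second image's focus).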
本公开实施例提供的图像处理装置可以执行上述方法实施例提供的图像处理方法,其实现原理与技术效果类似,此处不再赘述。上述图像处理装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于电子设备中的处理器中,也可以以软件形式存储于电子设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。The image processing apparatus provided in the embodiments of the present disclosure can execute the image processing methods provided in the foregoing method embodiments, and its implementation principle and technical effect are similar, and will not be repeated here. Each module in the above-mentioned image processing device may be fully or partially realized by software, hardware or a combination thereof. The above-mentioned modules can be embedded in or independent of the processor in the electronic device in the form of hardware, and can also be stored in the memory of the electronic device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
在一个实施例中,提供了一种电子设备,该电子设备可以是终端设备,其内部结构图可以如图6所示。该电子设备包括通过系统总线连接的处理器、存储器、通信接口、数据库、显示屏和输入装置。其中,该电子设备的处理器配置成提供计算和控制能力的模块。该电子设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机可读指令。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该电子设备的通信接口配置成与外部的终端进行有线或无线方式的通信模块,无线方式可通过WIFI、运营商网络、近场通信(NFC)或其他技术实现。该计算机可读指令被处理器执行时以实现上述实施例提供的图像处理方法。该电子设备的显示屏可以是液晶显示屏或者电子墨水显示屏,该电子设备的输入装置可以是显示屏上覆盖的触摸层,也可以是电子设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。In one embodiment, an electronic device is provided. The electronic device may be a terminal device, and its internal structure may be as shown in FIG. 6 . The electronic device includes a processor, a memory, a communication interface, a database, a display screen and an input device connected through a system bus. Wherein, the processor of the electronic device is configured as a module providing calculation and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and computer readable instructions. The internal memory provides an environment for the execution of the operating system and computer readable instructions in the non-volatile storage medium. The communication interface of the electronic device is configured as a wired or wireless communication module with an external terminal, and the wireless mode can be realized through WIFI, operator network, near field communication (NFC) or other technologies. When the computer-readable instructions are executed by the processor, the image processing method provided by the above-mentioned embodiments can be implemented. The display screen of the electronic device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic device may be a touch layer covered on the display screen, or a button, a trackball or a touch pad provided on the housing of the electronic device, or an external keyboard, touch pad or mouse.
本领域技术人员可以理解,图6中示出的结构,仅仅是与本公开方案相关的部分结构的框图,并不构成对本公开方案所应用于其上的电子设备的限定,具体的电子设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。Those skilled in the art can understand that the structure shown in FIG. 6 is only a block diagram of a partial structure related to the disclosed solution, and does not constitute a limitation to the electronic device to which the disclosed solution is applied. The specific electronic device may include more or less components than those shown in the figure, or combine certain components, or have a different component arrangement.
在一个实施例中,本公开提供的图像处理装置可以实现为一种计算机可读指令的形式,计算机可读指令可在如图6所示的计算机设备上运行。计算机设备的存储器中可存储组成该电子设备的各个程序模块,比如,图5所示的检测模块510和对焦采集模块520。各个程序模块构成的计算机可读指令使得处理器执行本说明书中描述的本公开各个实施例的图像处理方法中的步骤。In one embodiment, the image processing apparatus provided in the present disclosure may be implemented in the form of computer-readable instructions, and the computer-readable instructions may be run on a computer device as shown in FIG. 6 . Various program modules constituting the electronic device can be stored in the memory of the computer device, for example, the detection module 510 and the focus acquisition module 520 shown in FIG. 5 . The computer-readable instructions constituted by the various program modules cause the processor to execute the steps in the image processing methods of the various embodiments of the present disclosure described in this specification.
在一个实施例中,提供了一种电子设备,包括存储器和一个或多个处理器,将所述存储器配置成存储计算机可读指令的模块;所述计算机可读指令被所述处理器执行时,使得所述一个或多个处理器执行上述方法实施例所述的图像处理方法的步骤。In one embodiment, an electronic device is provided, including a memory and one or more processors, the memory is configured as a module storing computer-readable instructions; when the computer-readable instructions are executed by the processor, the one or more processors execute the steps of the image processing method described in the above method embodiments.
本公开实施例提供的电子设备,可以实现上述方法实施例提供的图像处理方法,其实现原理与技术效果类似,此处不再赘述。The electronic device provided by the embodiment of the present disclosure can implement the image processing method provided by the above method embodiment, and its implementation principle and technical effect are similar, and will not be repeated here.
一个或多个存储有计算机可读指令的非易失性存储介质,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行上述任一项所述的图像处理的步骤。One or more non-volatile storage media storing computer-readable instructions. When the computer-readable instructions are executed by one or more processors, one or more processors are made to perform the image processing steps described in any one of the above.
本公开实施例提供的计算机可读存储介质上存储的计算机可读指令,可以实现上述方法实施例提供的图像处理方法,其实现原理与技术效果类似,此处不再赘述。The computer-readable instructions stored on the computer-readable storage medium provided by the embodiments of the present disclosure can implement the image processing method provided by the above-mentioned method embodiments, and its implementation principle and technical effect are similar, and will not be repeated here.
本领域普通技术人员可以理解实现上述方法实施例中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成的,计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本公开所提供的各实施例中所使用的对存储器、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存或光存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,比如静态随机存取存储器(Static Random Access Memory,SRAM)和动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。Those of ordinary skill in the art can understand that the implementation of all or part of the processes in the above method embodiments can be completed by instructing related hardware through computer-readable instructions. The computer-readable instructions can be stored in a non-volatile computer-readable storage medium. When the computer-readable instructions are executed, they can include the processes of the embodiments of the above-mentioned methods. Wherein, any reference to storage, database or other media used in the various embodiments provided by the present disclosure may include at least one of non-volatile and volatile storage. Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory or optical memory, etc. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory. By way of illustration and not limitation, RAM is available in various forms such as Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM), among others.
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。The technical features of the above embodiments can be combined arbitrarily. To make the description concise, all possible combinations of the technical features in the above embodiments are not described. However, as long as there is no contradiction in the combination of these technical features, they should be considered as within the scope of this specification.
以上实施例仅表达了本公开的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本公开构思的前提下,还可以做出若干变形和改进,这些都属于本公开的保护范围。因此,本公开专利的保护范围应以所附权利要求为准。The above examples only express several implementations of the present disclosure, and the descriptions thereof are more specific and detailed, but should not be construed as limiting the scope of the patent for the invention. It should be noted that those skilled in the art can make several modifications and improvements without departing from the concept of the present disclosure, and these all belong to the protection scope of the present disclosure. Therefore, the scope of protection of the disclosed patent should be based on the appended claims.
工业实用性Industrial Applicability
本公开提供的图像处理方法,可有效解决在强光干扰或是逆光环境下电子设备进行拍摄时曝光不足或过曝的问题,提高在逆光拍摄场景下拍摄的图像的图像质量,具有很强的工业实用性。The image processing method provided by the present disclosure can effectively solve the problem of underexposure or overexposure when an electronic device is shooting in a strong light interference or backlight environment, improve the image quality of an image shot in a backlight shooting scene, and has strong industrial applicability.

Claims (20)

  1. 一种图像处理方法,应用于电子设备,所述方法包括:An image processing method applied to electronic equipment, the method comprising:
    检测所述电子设备所处的拍摄场景;Detecting the shooting scene where the electronic device is located;
    若检测到所述电子设备处于逆光拍摄场景,通过所述摄像头采集对焦于逆光区域的第一图像,以及通过所述摄像头采集对焦于非逆光区域的第二图像;If it is detected that the electronic device is in a backlight shooting scene, the first image focused on the backlight area is collected by the camera, and the second image focused on the non-backlight area is collected by the camera;
    分别对所述第一图像及所述第二图像进行增强处理;performing enhancement processing on the first image and the second image respectively;
    将增强处理后的第一图像和增强处理后的第二图像进行融合,得到第三图像。The enhanced first image and the enhanced second image are fused to obtain a third image.
  2. 根据权利要求1所述的方法,其中,在所述将增强处理后的第一图像和增强处理后的第二图像进行融合,得到第三图像之前,所述方法还包括:The method according to claim 1, wherein, before said fusing the enhanced first image and the enhanced second image to obtain the third image, said method further comprises:
    对增强处理后的第一图像和增强处理后的第二图像进行配准处理,从而使得所述增强处理后的第一图像的空间位置信息和所述增强处理后的第二图像的空间位置信息一致化;performing registration processing on the enhanced first image and the enhanced second image, so that the spatial position information of the enhanced first image and the spatial position information of the enhanced second image are consistent;
    所述将增强处理后的第一图像和增强处理后的第二图像进行融合,得到第三图像,包括:Said merging the enhanced first image and the enhanced second image to obtain a third image includes:
    将配准处理后的第一图像和配准处理后的第二图像进行融合,得到第三图像。The first image after registration processing and the second image after registration processing are fused to obtain a third image.
  3. 根据权利要求1或2所述的方法,其中,所述将增强处理后的第一图像和增强处理后的第二图像进行融合,得到第三图像,包括:The method according to claim 1 or 2, wherein said fusing the enhanced first image and the enhanced second image to obtain a third image comprises:
    将增强处理后的第一图像分解为第一高频分量和第一低频分量,将增强处理后的第二图像分解为第二高频分量和第二低频分量;其中,所述第一高频分量对应所述增强处理后的第一图像中的第一高频区域,所述第一低频分量对应所述增强处理后的第一图像中的第一低频区域;所述第二高频分量对应所述增强处理后的第二图像中的第二高频区域,所述第二低频分量对应所述增强处理后的第二图像中的第二低频区域;decomposing the enhanced first image into a first high-frequency component and a first low-frequency component, and decomposing the enhanced second image into a second high-frequency component and a second low-frequency component; wherein, the first high-frequency component corresponds to a first high-frequency region in the enhanced first image, and the first low-frequency component corresponds to a first low-frequency region in the enhanced first image; the second high-frequency component corresponds to a second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to a second low-frequency region in the enhanced second image;
    将所述第一高频分量和所述第二高频分量融合为第三高频分量,将所述第一低频分量和所述第二低频分量融合为第三低频分量;fusing the first high frequency component and the second high frequency component into a third high frequency component, and fusing the first low frequency component and the second low frequency component into a third low frequency component;
    将所述第三高频分量和所述第三低频分量进行融合,以得到第三图像。Fusing the third high-frequency component and the third low-frequency component to obtain a third image.
  4. 根据权利要求3所述的方法,其中,所述将所述第一高频分量和所述第二高频分量融合为第三高频分量,包括:The method according to claim 3, wherein said fusing said first high frequency component and said second high frequency component into a third high frequency component comprises:
    计算所述第一高频分量对应的第一模值和所述第二高频分量对应的第二模值;calculating a first modulus value corresponding to the first high-frequency component and a second modulus value corresponding to the second high-frequency component;
    将所述第一模值与所述第二模值进行比较，并确定所述第一模值与所述第二模值中最大的模值，若所述最大的模值为所述第一模值，则将所述第一高频分量作为所述第三高频分量，若所述最大的模值为所述第二模值，则将所述第二高频分量作为所述第三高频分量。Comparing the first modulus value with the second modulus value, and determining the largest modulus value among the first modulus value and the second modulus value, if the largest modulus value is the first modulus value, the first high frequency component is used as the third high frequency component, and if the largest modulus value is the second modulus value, the second high frequency component is used as the third high frequency component.
  5. 根据权利要求3所述的方法,其中,所述将所述第一低频分量和所述第二低频分量融合为第三低频分量,包括:The method according to claim 3, wherein said fusing said first low frequency component and said second low frequency component into a third low frequency component comprises:
    根据所述第一低频分量和所述第二低频分量确定平均低频分量,将所述平均低频分量作为第三低频分量。An average low frequency component is determined according to the first low frequency component and the second low frequency component, and the average low frequency component is used as a third low frequency component.
  6. 根据权利要求3所述的方法,其中,所述将增强处理后的第一图像分解为第一高频分量和第一低频分量,将增强处理后的第二图像分解为第二高频分量和第二低频分量,包括:The method according to claim 3, wherein said decomposing the enhanced first image into a first high-frequency component and a first low-frequency component, and decomposing the enhanced second image into a second high-frequency component and a second low-frequency component comprises:
    对增强处理后的第一图像进行非下采样轮廓波变换或快速非下采样轮廓波变换,得到第一高频分量和第一低频分量;performing non-subsampling contourlet transformation or fast non-subsampling contourlet transformation on the enhanced first image to obtain a first high-frequency component and a first low-frequency component;
    对增强处理后的第二图像进行所述非下采样轮廓波变换或所述快速非下采样轮廓波变换,得到第二高频分量和第二低频分量。The non-subsampling contourlet transformation or the fast non-subsampling contourlet transformation is performed on the enhanced second image to obtain a second high frequency component and a second low frequency component.
  7. 根据权利要求1所述的方法,其中,所述检测所述电子设备所处的拍摄场景,包括:The method according to claim 1, wherein the detecting the shooting scene where the electronic device is located comprises:
    获取通过摄像头采集的预览图像,计算所述预览图像的灰度值;Obtain a preview image collected by the camera, and calculate the gray value of the preview image;
    若所述预览图像的灰度值大于预设灰度阈值,则确定所述电子设备处于逆光拍摄场景;If the grayscale value of the preview image is greater than a preset grayscale threshold, it is determined that the electronic device is in a backlight shooting scene;
    若所述预览图像的灰度值小于或等于所述预设灰度阈值,则确定所述电子设备不处于逆光拍摄场景。If the grayscale value of the preview image is less than or equal to the preset grayscale threshold, it is determined that the electronic device is not in a backlight shooting scene.
  8. 根据权利要求1所述的方法,其中,所述分别对所述第一图像及所述第二图像进行增强处理,包括:The method according to claim 1, wherein said respectively performing enhancement processing on said first image and said second image comprises:
    对所述第一图像进行第一增强处理,对所述第二图像进行第二增强处理;其中,所述第一增强处理包括多尺度视网膜增强算法,所述第二增强处理包括同态滤波算法。Performing a first enhancement process on the first image, and performing a second enhancement process on the second image; wherein, the first enhancement process includes a multi-scale retinal enhancement algorithm, and the second enhancement process includes a homomorphic filtering algorithm.
  9. 根据权利要求1所述的方法,其中,所述方法还包括:The method according to claim 1, wherein the method further comprises:
    根据预设的区域尺寸将所述摄像头采集的预览图像划分为多个图像区域;dividing the preview image collected by the camera into multiple image areas according to a preset area size;
    计算各个所述图像区域对应的灰度值;Calculating the gray value corresponding to each of the image regions;
    将各个所述图像区域对应的灰度值与区域灰度阈值进行比较,以将所述预览图像划分为逆光区域和非逆光区域,其中,所述逆光区域指的是所述预览图像中灰度值大于区域灰度阈值的区域,所述非逆光区域指的是所述预览图像中灰度值小于或等于区域灰度阈值的区域。Comparing the grayscale value corresponding to each of the image regions with the grayscale threshold of the region to divide the preview image into a backlit region and a non-backlit region, wherein the backlit region refers to an area in the preview image whose grayscale value is greater than the regional grayscale threshold, and the non-backlit region refers to an area in the preview image whose grayscale value is less than or equal to the regional grayscale threshold.
  10. An image processing apparatus, comprising:
    a detection module configured to detect the shooting scene in which the electronic device is located;
    a focus acquisition module configured to, upon detecting that the electronic device is in a backlit shooting scene, capture through the camera a first image focused on a backlit region and a second image focused on a non-backlit region;
    an enhancement module configured to perform enhancement processing on the first image and the second image respectively; and
    a fusion module configured to fuse the enhanced first image with the enhanced second image to obtain a third image.
  11. The apparatus according to claim 10, wherein the apparatus further comprises:
    an alignment module configured to perform registration processing on the enhanced first image and the enhanced second image, so that the spatial position information of the enhanced first image is consistent with the spatial position information of the enhanced second image;
    wherein the fusion module is further configured to fuse the registered first image with the registered second image to obtain the third image.
  12. The apparatus according to claim 10 or 11, wherein:
    the fusion module is further configured to decompose the enhanced first image into a first high-frequency component and a first low-frequency component, and decompose the enhanced second image into a second high-frequency component and a second low-frequency component, wherein the first high-frequency component corresponds to a first high-frequency region in the enhanced first image, the first low-frequency component corresponds to a first low-frequency region in the enhanced first image, the second high-frequency component corresponds to a second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to a second low-frequency region in the enhanced second image; to fuse the first high-frequency component and the second high-frequency component into a third high-frequency component, and fuse the first low-frequency component and the second low-frequency component into a third low-frequency component; and to fuse the third high-frequency component with the third low-frequency component to obtain the third image.
  13. The apparatus according to claim 12, wherein:
    the fusion module is further configured to calculate a first modulus value corresponding to the first high-frequency component and a second modulus value corresponding to the second high-frequency component; compare the first modulus value with the second modulus value to determine the larger of the two; if the larger modulus value is the first modulus value, use the first high-frequency component as the third high-frequency component; and if the larger modulus value is the second modulus value, use the second high-frequency component as the third high-frequency component.
  14. The apparatus according to claim 12, wherein:
    the fusion module is further configured to determine an average low-frequency component from the first low-frequency component and the second low-frequency component, and to use the average low-frequency component as the third low-frequency component.
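The two fusion rules of claims 13 and 14 reduce to a per-coefficient maximum-modulus selection for the high-frequency components and an average for the low-frequency components. A minimal sketch (function names are mine, not the application's):

```python
import numpy as np

def fuse_high_frequency(h1, h2):
    """Max-modulus rule: at each position, keep the coefficient whose
    absolute value (modulus) is larger (claim 13, applied element-wise)."""
    return np.where(np.abs(h1) >= np.abs(h2), h1, h2)

def fuse_low_frequency(l1, l2):
    """Averaging rule: the fused low-frequency component is the mean of the
    two input low-frequency components (claim 14)."""
    return (l1 + l2) / 2.0
```

Applying the max-modulus rule element-wise (rather than comparing one scalar modulus per image) is one plausible reading of the claim; coarser, per-image selection would also satisfy the wording.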
  15. The apparatus according to claim 12, wherein:
    the fusion module is further configured to perform a non-subsampled contourlet transform or a fast non-subsampled contourlet transform on the enhanced first image to obtain the first high-frequency component and the first low-frequency component, and to perform the non-subsampled contourlet transform or the fast non-subsampled contourlet transform on the enhanced second image to obtain the second high-frequency component and the second low-frequency component.
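A non-subsampled contourlet transform is too involved to reproduce here; as a crude stand-in, any low-pass/residual split illustrates the decomposition structure the claim relies on (low-frequency base plus high-frequency detail, with exact reconstruction). This is an assumption-laden sketch, not the claimed transform:

```python
import numpy as np

def decompose(image, kernel=5):
    """Stand-in for the claimed NSCT decomposition: a box low-pass filter
    yields the low-frequency component, and the residual is the high-frequency
    component. A real implementation would use a (fast) non-subsampled
    contourlet transform with multiple directional subbands."""
    img = image.astype(np.float64)
    pad = kernel // 2
    padded = np.pad(img, pad, mode='edge')
    h, w = img.shape
    low = np.zeros_like(img)
    for dy in range(kernel):           # separable-free box filter for clarity
        for dx in range(kernel):
            low += padded[dy:dy + h, dx:dx + w]
    low /= kernel * kernel
    high = img - low                   # residual detail; high + low == img
    return high, low
```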
  16. The apparatus according to claim 10, wherein:
    the detection module is further configured to obtain a preview image captured by the camera and calculate a gray value of the preview image; if the gray value of the preview image is greater than a preset gray threshold, determine that the electronic device is in a backlit shooting scene; and if the gray value of the preview image is less than or equal to the preset gray threshold, determine that the electronic device is not in a backlit shooting scene.
  17. The apparatus according to claim 10, wherein:
    the enhancement module is further configured to perform first enhancement processing on the first image and second enhancement processing on the second image, wherein the first enhancement processing comprises a multi-scale Retinex enhancement algorithm and the second enhancement processing comprises a homomorphic filtering algorithm.
  18. The apparatus according to claim 10, wherein:
    the focus acquisition module is further configured to divide the preview image captured by the camera into a plurality of image regions according to a preset region size; calculate a gray value corresponding to each of the image regions; and compare the gray value corresponding to each of the image regions with a regional gray threshold, so as to divide the preview image into a backlit region and a non-backlit region, wherein the backlit region refers to a region of the preview image whose gray value is greater than the regional gray threshold, and the non-backlit region refers to a region of the preview image whose gray value is less than or equal to the regional gray threshold.
  19. An electronic device, comprising a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the steps of the image processing method according to any one of claims 1-9.
  20. One or more non-volatile computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the image processing method according to any one of claims 1-9.
PCT/CN2022/098717 2022-01-18 2022-06-14 Image processing method and apparatus, electronic device, and storage medium WO2023137956A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210057585.5 2022-01-18
CN202210057585.5A CN114418914A (en) 2022-01-18 2022-01-18 Image processing method, image processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2023137956A1 true WO2023137956A1 (en) 2023-07-27

Family

ID=81274269

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/098717 WO2023137956A1 (en) 2022-01-18 2022-06-14 Image processing method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN114418914A (en)
WO (1) WO2023137956A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418914A (en) * 2022-01-18 2022-04-29 上海闻泰信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN115100081B (en) * 2022-08-24 2022-11-15 深圳佳弟子科技有限公司 LCD display screen gray scale image enhancement method, device, equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331510A (en) * 2016-10-31 2017-01-11 维沃移动通信有限公司 Backlight photographing method and mobile terminal
CN107241559A (en) * 2017-06-16 2017-10-10 广东欧珀移动通信有限公司 Portrait photographic method, device and picture pick-up device
CN107872616A (en) * 2016-12-19 2018-04-03 珠海市杰理科技股份有限公司 Driving recording method and device
CN107871346A (en) * 2016-12-19 2018-04-03 珠海市杰理科技股份有限公司 Drive recorder
CN108650466A (en) * 2018-05-24 2018-10-12 努比亚技术有限公司 The method and electronic equipment of photo tolerance are promoted when a kind of strong light or reversible-light shooting portrait
CN109064436A (en) * 2018-07-10 2018-12-21 西安天盈光电科技有限公司 Image interfusion method
CN109300096A (en) * 2018-08-07 2019-02-01 北京智脉识别科技有限公司 A kind of multi-focus image fusing method and device
US11211018B1 (en) * 2020-06-25 2021-12-28 Xianyang Caihong Optoelectronics Technology Co., Ltd Grayscale compensation method and apparatus of display device
CN114418914A (en) * 2022-01-18 2022-04-29 上海闻泰信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114418914A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
CN111641778B (en) Shooting method, device and equipment
CN111418201B (en) Shooting method and equipment
JP6803982B2 (en) Optical imaging method and equipment
JP6469678B2 (en) System and method for correcting image artifacts
US8964060B2 (en) Determining an image capture payload burst structure based on a metering image capture sweep
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
CN109671106B (en) Image processing method, device and equipment
US8760537B2 (en) Capturing and rendering high dynamic range images
KR101662846B1 (en) Apparatus and method for generating bokeh in out-of-focus shooting
WO2023137956A1 (en) Image processing method and apparatus, electronic device, and storage medium
WO2014099284A1 (en) Determining exposure times using split paxels
CN108616689B (en) Portrait-based high dynamic range image acquisition method, device and equipment
US9247152B2 (en) Determining image alignment failure
US9087391B2 (en) Determining an image capture payload burst structure
JP7136956B2 (en) Image processing method and device, terminal and storage medium
CN113132695B (en) Lens shading correction method and device and electronic equipment
CN110740266B (en) Image frame selection method and device, storage medium and electronic equipment
Choi et al. A method for fast multi-exposure image fusion
WO2016026072A1 (en) Method, apparatus and computer program product for generation of extended dynamic range color images
WO2023151210A1 (en) Image processing method, electronic device and computer-readable storage medium
CN112785537A (en) Image processing method, device and storage medium
CN113870300A (en) Image processing method and device, electronic equipment and readable storage medium
US20240005521A1 (en) Photographing method and apparatus, medium and chip
WO2023236209A1 (en) Image processing method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22921387

Country of ref document: EP

Kind code of ref document: A1