WO2023137956A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents

Image processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023137956A1
WO2023137956A1 · PCT/CN2022/098717 · CN2022098717W
Authority
WO
WIPO (PCT)
Prior art keywords
image
frequency component
enhanced
low
electronic device
Prior art date
Application number
PCT/CN2022/098717
Other languages
English (en)
Chinese (zh)
Inventor
范文明
Original Assignee
上海闻泰信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海闻泰信息技术有限公司
Publication of WO2023137956A1 publication Critical patent/WO2023137956A1/fr

Links

Images

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 — Image enhancement or restoration
    • G06T 5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 — Special algorithmic details
    • G06T 2207/20024 — Filtering details
    • G06T 2207/20028 — Bilateral filtering
    • G06T 2207/20032 — Median filtering
    • G06T 2207/20212 — Image combination
    • G06T 2207/20221 — Image fusion; Image merging

Definitions

  • The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium.
  • According to embodiments of the present disclosure, an image processing method, an image processing apparatus, an electronic device, and a storage medium are provided.
  • An image processing method, applied to an electronic device, the method comprising:
  • detecting a shooting scene where the electronic device is located; if it is detected that the electronic device is in a backlight shooting scene, collecting a first image focused on a backlight area through a camera, and collecting a second image focused on a non-backlight area through the camera;
  • performing enhancement processing on the first image and the second image respectively; and
  • fusing the enhanced first image and the enhanced second image to obtain a third image.
  • before said fusing the enhanced first image and the enhanced second image to obtain the third image, the method further includes: performing registration processing on the enhanced first image and the enhanced second image, so as to make the spatial position information of the enhanced first image consistent with the spatial position information of the enhanced second image; and said fusing the enhanced first image and the enhanced second image to obtain the third image includes: fusing the registered first image and the registered second image to obtain the third image.
  • said fusing the enhanced first image and the enhanced second image to obtain a third image includes: decomposing the enhanced first image into a first high-frequency component and a first low-frequency component, and decomposing the enhanced second image into a second high-frequency component and a second low-frequency component, wherein the first high-frequency component corresponds to a first high-frequency region in the enhanced first image, the first low-frequency component corresponds to a first low-frequency region in the enhanced first image, the second high-frequency component corresponds to a second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to a second low-frequency region in the enhanced second image; fusing the first high-frequency component and the second high-frequency component into a third high-frequency component, and fusing the first low-frequency component and the second low-frequency component into a third low-frequency component; and fusing the third high-frequency component and the third low-frequency component to obtain the third image.
  • said fusing the first high-frequency component and the second high-frequency component into a third high-frequency component includes: calculating a first modulus value corresponding to the first high-frequency component and a second modulus value corresponding to the second high-frequency component; comparing the first modulus value with the second modulus value, and determining the largest modulus value among the first modulus value and the second modulus value; if the largest modulus value is the first modulus value, using the first high-frequency component as the third high-frequency component; and if the largest modulus value is the second modulus value, using the second high-frequency component as the third high-frequency component.
  • the fusing the first low-frequency component and the second low-frequency component into a third low-frequency component includes: determining an average low-frequency component according to the first low-frequency component and the second low-frequency component, and using the average low-frequency component as the third low-frequency component.
  • said decomposing the enhanced first image into a first high-frequency component and a first low-frequency component, and decomposing the enhanced second image into a second high-frequency component and a second low-frequency component include: performing a non-subsampled contourlet transform (NSCT) or a fast non-subsampled contourlet transform on the enhanced first image to obtain the first high-frequency component and the first low-frequency component; and performing the non-subsampled contourlet transform or the fast non-subsampled contourlet transform on the enhanced second image to obtain the second high-frequency component and the second low-frequency component.
  • the detecting the shooting scene where the electronic device is located includes: acquiring a preview image collected by a camera, and calculating a grayscale value of the preview image; if the grayscale value of the preview image is greater than a preset grayscale threshold, determining that the electronic device is in a backlight shooting scene; if the grayscale value of the preview image is less than or equal to the preset grayscale threshold, determining that the electronic device is not in a backlight shooting scene.
  • performing enhancement processing on the first image and the second image respectively includes: performing first enhancement processing on the first image, and performing second enhancement processing on the second image; wherein, the first enhancement processing includes a multi-scale retinal enhancement algorithm, and the second enhancement processing includes a homomorphic filtering algorithm.
  • the method further includes: dividing the preview image captured by the camera into multiple image areas according to a preset area size; calculating the gray value corresponding to each of the image areas; and comparing the gray value corresponding to each of the image areas with an area gray threshold to divide the preview image into a backlit area and a non-backlit area, wherein the backlit area refers to an area in the preview image whose gray value is greater than the area gray threshold, and the non-backlit area refers to an area in the preview image whose gray value is less than or equal to the area gray threshold.
  • An image processing device comprising:
  • a detection module configured to detect the shooting scene where the electronic device is located
  • a focus acquisition module configured to, if it is detected that the electronic device is in a backlight shooting scene, collect a first image focused on a backlight area through the camera and collect a second image focused on a non-backlight area through the camera;
  • an enhancement module configured to respectively perform enhancement processing on the first image and the second image;
  • a fusion module configured to fuse the enhanced first image with the enhanced second image to obtain a third image.
  • the device further includes: an alignment module configured to perform registration processing on the enhanced first image and the enhanced second image, so as to make the spatial position information of the enhanced first image consistent with the spatial position information of the enhanced second image; and further configure the fusion module to fuse the registered first image and the registered second image to obtain a third image.
  • the fusion module is further configured to decompose the enhanced first image into a first high-frequency component and a first low-frequency component, and decompose the enhanced second image into a second high-frequency component and a second low-frequency component, wherein the first high-frequency component corresponds to a first high-frequency region in the enhanced first image, the first low-frequency component corresponds to a first low-frequency region in the enhanced first image, the second high-frequency component corresponds to a second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to a second low-frequency region in the enhanced second image.
  • the fusion module is further configured to calculate a first modulus value corresponding to the first high-frequency component and a second modulus value corresponding to the second high-frequency component; compare the first modulus value with the second modulus value, and determine the largest modulus value among the first modulus value and the second modulus value; if the largest modulus value is the first modulus value, use the first high-frequency component as the third high-frequency component; and if the largest modulus value is the second modulus value, use the second high-frequency component as the third high-frequency component.
  • the fusion module is further configured to determine an average low-frequency component according to the first low-frequency component and the second low-frequency component, and use the average low-frequency component as the third low-frequency component.
  • the fusion module is further configured to perform non-subsampling contourlet transformation or fast non-subsampling contourlet transformation on the enhanced first image to obtain the first high-frequency component and the first low-frequency component; perform the non-subsampling contourlet transformation or the fast non-subsampling contourlet transformation on the enhanced second image to obtain the second high-frequency component and the second low-frequency component.
  • the detection module is further configured to acquire a preview image collected by the camera, and calculate a gray value of the preview image; if the gray value of the preview image is greater than a preset gray threshold, then determine that the electronic device is in a backlight shooting scene; if the gray value of the preview image is less than or equal to the preset gray threshold, then determine that the electronic device is not in a backlight shooting scene.
  • the enhancement module is further configured to perform a first enhancement process on the first image, and perform a second enhancement process on the second image, wherein the first enhancement process includes a multi-scale Retinex enhancement algorithm, and the second enhancement process includes a homomorphic filtering algorithm.
  • the focus acquisition module is further configured to divide the preview image captured by the camera into multiple image areas according to the preset area size; calculate the gray value corresponding to each of the image areas; and compare the gray value corresponding to each of the image areas with the area gray threshold to divide the preview image into a backlight area and a non-backlight area, wherein the backlight area refers to an area in the preview image whose gray value is greater than the area gray threshold, and the non-backlight area refers to an area in the preview image whose gray value is less than or equal to the area gray threshold.
  • An electronic device comprising a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to execute the steps of the image processing method described in any one of the above.
  • One or more non-volatile storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to execute the steps of any one of the image processing methods described above.
  • FIG. 1 is an application scene diagram of an image processing method provided by one or more embodiments of the present disclosure
  • FIG. 2 is a flowchart of steps of an image processing method provided by one or more embodiments of the present disclosure
  • FIG. 3 is a flowchart of steps of an image processing method provided by one or more embodiments of the present disclosure
  • FIG. 4 is a flow chart of steps for fusing the enhanced first image with the enhanced second image provided by one or more embodiments of the present disclosure
  • FIG. 5 is a structural block diagram of an image processing device in one or more embodiments of the present disclosure.
  • FIG. 6 is a structural block diagram of an electronic device in one or more embodiments of the present disclosure.
  • Terms such as "first" and "second" in the specification and claims of the present disclosure are used to distinguish different objects, not to describe a specific order of objects.
  • The terms "first image" and "second image" are used to distinguish different images, not to describe a specific sequence of images.
  • Words such as "exemplary" or "for example" are used as examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present disclosure shall not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present related concepts in a concrete manner. In addition, in the description of the embodiments of the present disclosure, unless otherwise specified, "plurality" means two or more.
  • One solution performs exposure compensation and subject brightness compensation through a flash. Since the brightness of the flash of an electronic device is limited, brightness compensation can only be performed on a subject shot at close range; the farther the subject is from the camera, the worse the effect of this solution.
  • Another solution increases the dynamic range of the image through HDR (High Dynamic Range) imaging, reducing the over-bright or over-dark areas in the image captured by the camera. Since this solution only selects over-exposed, under-exposed, and normally exposed images for fusion, the dynamic range of the resulting image is limited and cannot fully accommodate various complex backlight environments; in some complex backlight shooting scenes, the effect of this solution is not ideal.
  • When the electronic device captures images in a backlight shooting scene, the camera of the electronic device is directly or indirectly illuminated by the light source, and the images collected by the camera differ depending on the focus position of the camera.
  • When the focus is in an area disturbed by strong light, that is, when focusing on the backlight area, the image collected by the camera has a high-brightness area, and this high-brightness area covers up part of the information in the image; when focusing on the non-backlight area, the backlight area is prone to overexposure, which likewise affects the image effect.
  • the image processing method provided in the present disclosure can be applied to the application environment shown in FIG. 1 , and the image processing method is applied to an image processing system.
  • The image processing system may include an electronic device 10, where the electronic device 10 may include, but is not limited to, a mobile phone, a tablet computer, a wearable device, a notebook computer, a PC (personal computer), a video camera, and the like.
  • The operating system of the above-mentioned electronic device 10 may include, but is not limited to, the Android operating system, the iOS operating system, the Symbian operating system, the BlackBerry operating system, the Windows Phone 8 operating system, etc.; the embodiments of the present disclosure are not limited in this respect.
  • the electronic device 10 After the electronic device 10 detects the shooting operation, it may collect images through a camera, and the camera may be a camera of the electronic device 10 .
  • the electronic device 10 may also collect images through cameras of other electronic devices establishing a communication connection with the electronic device 10, which is not limited here.
  • the electronic device 10 may analyze the image collected by the camera to determine that it is currently in a backlight shooting scene. As shown in FIG. 1 , the backlight shooting scene may be a scene where the direction of the camera faces a light source. In this backlight shooting scene, the electronic device 10 uses the camera to shoot, and the captured image may be overexposed in some areas and/or underexposed in some areas.
  • the electronic device 10 may detect the shooting scene where the electronic device 10 is currently located before detecting the shooting operation.
  • the method of detecting the current shooting scene may include but not limited to detection by a light sensor, detection of images collected by the camera after the camera is turned on and before a shooting operation is detected, etc., which is not limited here.
  • the electronic device 10 may detect the current light intensity through the light sensor, and determine the current shooting scene according to the light intensity. For example, if the light intensity is greater than the intensity threshold, the electronic device 10 determines that the current shooting scene is a backlight shooting scene; if the light intensity is less than or equal to the intensity threshold, the electronic device 10 determines that it is not currently in a backlight shooting scene.
  • When the electronic device 10 detects that it is currently in a backlight shooting scene, it can automatically switch to a processing mode for processing images collected in the backlight shooting scene, or output a prompt message prompting the user to perform a switching operation; after detecting the switching operation, the electronic device 10 switches to the processing mode for processing images collected in the backlight shooting scene.
  • The specific content of the image processing method disclosed in the embodiments of the present disclosure is described in the following embodiments and is not elaborated here.
  • FIG. 2 is a flow chart of the steps of an image processing method provided by one or more embodiments of the present disclosure.
  • the image processing method can be applied to the above-mentioned electronic device, and may include the following steps:
  • Step 210 detecting the shooting scene where the electronic device is located.
  • After the electronic device collects a preview image through the camera, it can detect the shooting scene where the electronic device is located according to the collected preview image, that is, detect whether the electronic device is in a backlit shooting scene.
  • the detection method may include detecting the gray value of the image pixel, detecting the RGB value of the image pixel, etc., which are not limited here.
  • the electronic device may acquire a preview image collected by the camera, and calculate the grayscale value of the preview image. If the grayscale value is greater than a preset grayscale threshold, it is determined that the electronic device is in a backlight shooting scene. The electronic device may continue to perform steps 220 to 240. If the grayscale value is less than or equal to the preset grayscale threshold, it is determined that the electronic device is not in a backlight shooting scene. The electronic device may perform image enhancement processing and/or filtering processing on the collected images.
  • The image enhancement processing may include contrast enhancement, gamma (γ) correction, histogram equalization, histogram specification, HSV-space-based color image enhancement methods, etc.
  • the filtering processing includes mean filtering, median filtering, Gaussian filtering, bilateral filtering, etc., which are not limited in the embodiments of the present disclosure.
  • the electronic device may calculate an average gray value of all pixels in the image collected by the camera, and compare the average gray value of all pixels in the image with a preset gray threshold. If the average grayscale value is greater than the preset grayscale threshold, it is determined that the electronic device is in a backlight shooting scene; if the average grayscale value is less than or equal to the preset grayscale threshold, it is determined that the electronic device is not in a backlight shooting scene.
  • the electronic device may calculate the grayscale value of each pixel in the image collected by the camera, and compare the grayscale value of each pixel in the image with a preset grayscale threshold. Calculate the number of pixels whose grayscale value is greater than the preset grayscale threshold. If the number of pixels is greater than the preset number threshold, it is determined that the electronic device is in a backlight shooting scene. If the number of pixels is less than or equal to the preset number threshold, it is determined that the electronic device is not in a backlight shooting scene.
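The two detection variants above can be sketched in Python as follows. The function names and the numeric thresholds are illustrative assumptions, not values fixed by the disclosure, and a tiny nested-list "image" of gray values stands in for a real camera preview.

```python
# Sketch of the two backlight-scene detection variants described above.
# Thresholds and names are illustrative, not taken from the disclosure.

def is_backlit_by_average(gray_image, gray_threshold=180):
    """Variant 1: compare the mean gray value of all pixels to a threshold."""
    pixels = [p for row in gray_image for p in row]
    return sum(pixels) / len(pixels) > gray_threshold

def is_backlit_by_count(gray_image, gray_threshold=200, count_threshold=8):
    """Variant 2: count the pixels brighter than the gray threshold and
    compare that count to a preset number threshold."""
    bright = sum(1 for row in gray_image for p in row if p > gray_threshold)
    return bright > count_threshold

# Toy 4x4 "preview image" with a large bright (backlit) region.
preview = [
    [250, 250, 250, 250],
    [250, 250, 250, 250],
    [240, 240, 30, 30],
    [240, 240, 30, 30],
]
```

Both variants classify this toy preview as a backlight scene: the mean gray value is 192.5 (> 180), and 12 pixels exceed 200 (> 8).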
  • Step 220 if it is detected that the electronic device is in a backlit shooting scene, capture a first image focused on the backlit area through the camera, and capture a second image focused on the non-backlit area through the camera.
  • When the electronic device detects that it is in a backlit shooting scene, the first image focused on the backlit area is collected through the camera, and the second image focused on the non-backlit area is collected through the camera. There is no required order between collecting the first image and collecting the second image; the two collections can also be performed simultaneously.
  • the backlight area refers to an area that is affected by light and has a backlight effect
  • the non-backlight area refers to an area that does not have a backlight effect due to the light.
  • The electronic device can divide the preview image into multiple image areas according to a preset area size (for example, each image area includes 64 × 64 pixels), calculate the gray value corresponding to each image area in the preview image, and compare the gray value corresponding to each image area with the area gray threshold, so as to divide the preview image into a backlight area and a non-backlight area.
  • The backlight area refers to the area in the preview image whose gray value is greater than the area gray threshold, that is, the area where a backlight effect appears due to the influence of light; the non-backlight area refers to the area in the preview image whose gray value is less than or equal to the area gray threshold, that is, the area where the backlight effect does not appear due to the influence of light.
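The area-wise division just described can be sketched as follows. The 2 × 2 area size and the threshold of 128 are illustrative stand-ins for the preset area size (e.g. 64 × 64 pixels) and the area gray threshold; all names are hypothetical.

```python
def split_into_areas(gray_image, area_size=2):
    """Divide the image into area_size x area_size blocks and return the
    mean gray value of each block, keyed by its top-left coordinate."""
    h, w = len(gray_image), len(gray_image[0])
    means = {}
    for top in range(0, h, area_size):
        for left in range(0, w, area_size):
            vals = [gray_image[y][x]
                    for y in range(top, min(top + area_size, h))
                    for x in range(left, min(left + area_size, w))]
            means[(top, left)] = sum(vals) / len(vals)
    return means

def label_areas(means, area_gray_threshold=128):
    """An area is backlit if its mean gray value exceeds the area threshold."""
    return {pos: ('backlit' if m > area_gray_threshold else 'non-backlit')
            for pos, m in means.items()}

preview = [
    [250, 250, 250, 250],
    [250, 250, 250, 250],
    [240, 240, 30, 30],
    [240, 240, 30, 30],
]
labels = label_areas(split_into_areas(preview))
```

On this toy preview, three of the four 2 × 2 areas are labelled backlit and the dark bottom-right area is labelled non-backlit.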
  • the electronic device collects the first image focused on the backlit area and the second image focused on the non-backlit area through the camera.
  • The camera can automatically collect the first image focused on the backlight area and the second image focused on the non-backlight area, without requiring the user to find the focus area, which simplifies the user's operation and improves the convenience of the image processing method.
  • When the electronic device detects that it is in a backlit shooting scene, it can send a prompt message to prompt the user to manually select focus on the backlit area and the non-backlit area.
  • When a focus selection operation for the backlit area is detected, the camera focuses on the backlit area according to that operation and collects the first image; when a focus selection operation for the non-backlit area is detected, the camera focuses on the non-backlit area and collects the second image. In this way, the user can specify the areas that need to be focused before the first image and the second image are collected, improving the accuracy of focus in the collected images.
  • Step 230 performing enhancement processing on the first image and the second image respectively.
  • the electronic device may respectively perform enhancement processing on the first image and the second image to improve the image quality of the first image and the second image.
  • A first enhancement process can be performed on the first image, and a second enhancement process can be performed on the second image, wherein the first enhancement process includes a multi-scale Retinex (MSR) enhancement algorithm, and the second enhancement process includes a homomorphic filtering algorithm.
  • The first image is collected when the camera focuses on the backlit area, and focusing on the backlit area will affect the clarity and contrast of the first image. Therefore, the electronic device can use the multi-scale Retinex enhancement algorithm to enhance the contrast and clarity of the first image.
  • The multi-scale Retinex enhancement algorithm determines multiple scales for the first image, performs Gaussian blur on the first image at each of the multiple scales, and obtains multiple blurred images.
  • the enhanced first image is obtained based on the first image and the multiple blurred images, which can improve the image quality of the first image.
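A minimal sketch of the multi-scale Retinex idea on a 1-D intensity signal, assuming a box blur as a crude stand-in for the Gaussian blur at each scale; the chosen scales, the +1 offsets that keep the logarithms defined, and the equal per-scale weights are illustrative choices, not parameters fixed by the disclosure.

```python
import math

def box_blur(signal, radius):
    """Crude stand-in for Gaussian blur at one scale (the radius)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def msr(signal, radii=(1, 2, 4)):
    """Multi-scale Retinex on a 1-D intensity signal: the equally weighted
    average of log(I) - log(blur_k(I)) over several scales k."""
    n = len(signal)
    result = [0.0] * n
    for r in radii:
        blurred = box_blur(signal, r)
        for i in range(n):
            result[i] += (math.log(signal[i] + 1.0)
                          - math.log(blurred[i] + 1.0)) / len(radii)
    return result
```

A flat signal yields an all-zero Retinex output (nothing deviates from its local mean), while pixels brighter than their surroundings map to positive values and darker ones to negative values — the reflectance-like detail the enhancement keeps.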
  • the second image is collected when the camera focuses on the non-backlit area, and focusing on the non-backlit area will cause more image noise in the second image. Therefore, the electronic device can use a homomorphic filtering algorithm to remove the noise in the second image.
  • the homomorphic filtering algorithm transforms the illumination-reflection model corresponding to the second image, and then passes the transformed result through a frequency domain filter to obtain a filtered result, and inversely transforms the filtered result to obtain an enhanced second image, which can improve the image quality of the second image.
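The homomorphic-filtering pipeline (logarithmic transform → frequency-domain filter → inverse transform → exponential) can be sketched in 1-D as follows. The naive DFT, the simple two-level gain filter, and all parameter values are illustrative assumptions; a real implementation would operate on 2-D images with an FFT.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for a sketch)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def homomorphic(signal, gamma_l=0.5, gamma_h=1.5, cutoff=2):
    """1-D homomorphic filter: log -> DFT -> attenuate low frequencies
    (illumination) and boost high frequencies (reflectance) -> IDFT -> exp."""
    logged = [math.log(s + 1.0) for s in signal]
    spectrum = dft(logged)
    n = len(spectrum)
    for k in range(n):
        freq = min(k, n - k)  # symmetric frequency index
        spectrum[k] *= gamma_l if freq < cutoff else gamma_h
    filtered = idft(spectrum)
    return [math.exp(v) - 1.0 for v in filtered]
```

On a flat signal all energy sits at frequency zero, so the filter simply attenuates the overall illumination level by the low-frequency gain, which is the behaviour the log-domain model predicts.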
  • Step 240 fusing the enhanced first image and the enhanced second image to obtain a third image.
  • the electronic device may fuse the enhanced first image and the enhanced second image to obtain a third image.
  • the third image may be an image finally presented on a display screen for viewing by a user, or may be an image stored in a memory.
  • The electronic device may divide the first image and the second image into image areas according to the number of pixels; for example, each image area includes 64 × 64 pixels, and there is a one-to-one correspondence between each image area in the first image and each image area in the second image. For every two corresponding image areas of the first image and the second image, the image area with the higher image quality of the two is selected as the image area at the corresponding position in the third image; the same operation is performed on the other image areas, and the third image is obtained after the fusion of all areas is completed.
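The per-area selection step can be sketched as follows. The disclosure does not fix how "image quality" is measured, so local variance is used here as an assumed proxy for detail and contrast; all names are hypothetical.

```python
def variance(area):
    """Population variance of the gray values in one image area."""
    vals = [p for row in area for p in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def pick_better_area(area_a, area_b):
    """Of two corresponding areas, keep the one with the higher quality.
    Variance is an assumed stand-in for the (unspecified) quality metric."""
    return area_a if variance(area_a) >= variance(area_b) else area_b
```

A flat, washed-out area loses to an area with visible detail regardless of argument order.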
  • the electronic device can detect the current environment.
  • the camera captures the first image focused on the backlight area and the second image focused on the non-backlight area.
  • underexposure or overexposure in the backlight shooting scene will cause the first image and the second image to lose part of the effective information.
  • Enhancement processing improves the sharpness and contrast of the first image and the second image, thereby improving the image quality of the image captured in the backlight shooting scene.
  • FIG. 3 is a flow chart of the steps of the image processing method provided by one or more embodiments of the present disclosure.
  • the image processing method can also be applied to the above-mentioned electronic device, and may include the following steps:
  • Step 310 detecting the shooting scene where the electronic device is located.
  • Step 320 if it is detected that the electronic device is in a backlit shooting scene, capture a first image focused on the backlit area through the camera, and capture a second image focused on the non-backlit area through the camera.
  • Step 330 performing enhancement processing on the first image and the second image respectively.
  • Steps 310-330 are the same as steps 210-230, and will not be repeated here.
  • Step 340 performing registration processing on the enhanced first image and the enhanced second image, so that the spatial position information of the first image is consistent with the spatial position information of the second image.
  • the electronic device may move when collecting the first image and the second image, there is a difference between the spatial position information of the first image and the spatial position information of the second image. Therefore, the electronic device may perform registration processing on the enhanced first image and the enhanced second image, for example, register key points in the enhanced first image and the enhanced second image, register image features in the enhanced first image and the enhanced second image, etc., so that the spatial position information of the enhanced first image is consistent with the spatial position information of the enhanced second image, so that the subsequent image fusion step can be performed.
  • the electronic device may perform affine transformation registration on the enhanced first image and the enhanced second image, first use feature matching between the enhanced first image and the enhanced second image as data to obtain a predicted affine transformation matrix, and register the enhanced first image with the enhanced second image according to the predicted affine transformation matrix, so that the spatial position information of the enhanced first image and the spatial position information of the enhanced second image are consistent.
  • The electronic device may perform feature matching between the enhanced first image and the enhanced second image, obtain a predicted affine transformation matrix for transforming the enhanced first image into the enhanced second image, and transform the enhanced first image according to the predicted affine transformation matrix, thereby obtaining a first image that is registered with the enhanced second image.
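Applying a predicted affine transformation matrix can be sketched as follows; estimating the matrix from feature matches is out of scope for this toy nearest-neighbour warp, and the function name and matrix layout are illustrative assumptions.

```python
def apply_affine(image, matrix, fill=0):
    """Warp a grayscale image by inverse-mapping each output pixel through
    the 2x3 affine matrix [[a, b, tx], [c, d, ty]] with nearest-neighbour
    sampling. Out-of-bounds samples take the fill value."""
    h, w = len(image), len(image[0])
    (a, b, tx), (c, d, ty) = matrix
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx = round(a * x + b * y + tx)  # source column
            sy = round(c * x + d * y + ty)  # source row
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = image[sy][sx]
    return out
```

The identity matrix leaves the image unchanged; a pure translation shifts it, with the uncovered column filled by the fill value.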
  • Step 350 merging the registered first image and the registered second image to obtain a third image.
  • the electronic device may fuse the registered first image and the registered second image to obtain a third image.
  • The registered first image and the registered second image are obtained by performing registration processing on the enhanced first image and the enhanced second image. The method of fusing the registered first image and the registered second image in step 350 can be the same as the method of fusing the enhanced first image and the enhanced second image in step 240, and will not be repeated here.
  • the electronic device may perform a weighted addition of the pixel values (such as grayscale values or RGB (red, green, blue) values) of each pair of corresponding pixel points in the registered first image and the registered second image, use the resulting target pixel value as the pixel value of the corresponding pixel point in the third image, and perform this fusion operation on all corresponding pixel points in the registered first image and the registered second image to obtain the third image.
  • the electronic device may also perform registration processing on the first image and the second image, so as to facilitate the fusion of the first image and the second image, and improve the accuracy of image fusion.
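The weighted pixel-wise fusion described above can be sketched as follows; the equal weights and the sample pixel values are illustrative assumptions:

```python
import numpy as np

def fuse_weighted(img_a, img_b, w_a=0.5):
    """Per-pixel weighted average of two registered images of equal size."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    assert a.shape == b.shape, "images must be registered to the same size"
    return w_a * a + (1.0 - w_a) * b

first = np.array([[100, 200], [50, 250]], dtype=float)
second = np.array([[60, 180], [90, 210]], dtype=float)
third = fuse_weighted(first, second, w_a=0.5)
```

For RGB images the same expression applies channel-wise, since the arrays broadcast over the last axis.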
  • FIG. 4 is a flow chart of steps for fusing the enhanced first image with the enhanced second image provided by one or more embodiments of the present disclosure.
  • the step of fusing the enhanced first image with the enhanced second image may include the following steps:
  • Step 410 decomposing the enhanced first image into a first high-frequency component and a first low-frequency component, and decomposing the enhanced second image into a second high-frequency component and a second low-frequency component; wherein, the first high-frequency component corresponds to the first high-frequency region in the enhanced first image, and the first low-frequency component corresponds to the first low-frequency region in the enhanced first image; the second high-frequency component corresponds to the second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to the second low-frequency region in the enhanced second image.
  • the electronic device may decompose the enhanced first image into a first high frequency component and a first low frequency component, and decompose the enhanced second image into a second high frequency component and a second low frequency component.
  • the electronic device may calculate the gray value of each pixel in the enhanced first image, so as to obtain the gray value change speed between each pixel in the enhanced first image and surrounding pixels, and decompose the enhanced first image into a first high frequency component and a first low frequency component according to the gray value change speed, the first high frequency component includes a plurality of images having the same image size as the enhanced first image, and the first low frequency component includes an image having the same image size as the enhanced first image.
  • the first high-frequency component corresponds to the first high-frequency region in the enhanced first image, that is, the multiple images in the first high-frequency component correspond to different first high-frequency regions in the enhanced first image, and the first high-frequency region is an image region in the enhanced first image whose gray value change speed is greater than the first change speed threshold.
  • the first high frequency component may correspond to the edge region of the enhanced first image, because the gray value change speed of the edge region of the image is usually greater than the gray value change speed of the middle region.
  • the first low-frequency component corresponds to the first low-frequency region in the enhanced first image, that is, the image in the first low-frequency component corresponds to the first low-frequency region in the enhanced first image, and the first low-frequency region is an image region in the enhanced first image whose grayscale value change speed is less than or equal to the first change speed threshold.
  • the first low-frequency component may correspond to the middle region of the enhanced first image, because the grayscale value change speed of the middle region of the image is usually smaller than the grayscale value change speed of the edge region.
  • the electronic device may calculate the gray value of each pixel in the enhanced second image, thereby obtaining the gray value change speed between each pixel in the enhanced second image and surrounding pixels, and decompose the enhanced second image into a second high frequency component and a second low frequency component according to the gray value change speed.
  • the second high-frequency component corresponds to the second high-frequency region in the enhanced second image, that is, the multiple images in the second high-frequency component correspond to different second high-frequency regions in the enhanced second image, and the second high-frequency region is an image region in the enhanced second image whose gray value change speed is greater than the second change speed threshold.
  • the second high frequency component may correspond to the edge region of the enhanced second image, because the gray value change speed of the edge region of the image is usually greater than the gray value change speed of the middle region.
  • the second low-frequency component corresponds to the second low-frequency region in the enhanced second image, that is, the image in the second low-frequency component corresponds to the second low-frequency region in the enhanced second image, and the second low-frequency region is an image region whose grayscale value change speed in the enhanced second image is less than or equal to the second change speed threshold.
  • the second low-frequency component may correspond to the middle region of the enhanced second image, because the grayscale value change speed of the middle region of the image is usually smaller than the grayscale value change speed of the edge region.
  • the first change speed threshold and the second change speed threshold may be equal in magnitude or unequal in magnitude.
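The decomposition above hinges on the gray value change speed between a pixel and its surrounding pixels. A minimal sketch of that idea, using the maximum absolute difference to the four axis-aligned neighbours as the change speed (an illustrative choice; the patent does not fix a specific measure):

```python
import numpy as np

def split_by_change_speed(gray, threshold):
    """Label a pixel high-frequency (True) when the local grayscale change
    speed (max absolute difference to its 4-neighbours) exceeds threshold."""
    g = np.asarray(gray, dtype=float)
    speed = np.zeros_like(g)
    # absolute differences to the four axis-aligned neighbours
    speed[:-1, :] = np.maximum(speed[:-1, :], np.abs(g[:-1, :] - g[1:, :]))
    speed[1:, :]  = np.maximum(speed[1:, :],  np.abs(g[1:, :] - g[:-1, :]))
    speed[:, :-1] = np.maximum(speed[:, :-1], np.abs(g[:, :-1] - g[:, 1:]))
    speed[:, 1:]  = np.maximum(speed[:, 1:],  np.abs(g[:, 1:] - g[:, :-1]))
    return speed > threshold

img = np.array([[10, 10, 10],
                [10, 10, 200],
                [10, 10, 10]], dtype=float)
mask = split_by_change_speed(img, threshold=50)
```

The bright outlier and its immediate neighbours are marked high-frequency; the flat remainder falls into the low-frequency region.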
  • the electronic device may perform NSCT (nonsubsampled contourlet transform) forward transform or FNSCT (fast nonsubsampled contourlet transform) forward transform on the enhanced first image, thereby obtaining the first high-frequency component and the first low-frequency component, as well as the first high-frequency transformation coefficient corresponding to the first high-frequency component and the first low-frequency transformation coefficient corresponding to the first low-frequency component.
  • the electronic device converts the enhanced first image into a first spectrogram; the first spectrogram illustrates the gray value change speed between each pixel in the enhanced first image and its surrounding pixels. The electronic device then performs multiple NSP (nonsubsampled pyramid filter) decompositions on the enhanced first image according to the first spectrogram, thereby obtaining one or more first bandpass subband images and a first lowpass subband image.
  • the image size of the first bandpass subband image is the same as that of the enhanced first image, and each first bandpass subband image corresponds to a different first high frequency region in the enhanced first image.
  • the image size of the first low-pass sub-band image is the same as that of the enhanced first image, the first low-pass sub-band image corresponds to the first low-frequency region in the enhanced first image, and the first low-pass sub-band image can be used as the first low-frequency component.
  • the electronic device then performs NSDFB (Nonsubsampled directional filter bank, non-subsampled directional filter bank) decomposition on each first bandpass subband image, and can decompose each first bandpass subband image into a plurality of first multi-directional subband images.
  • the image size of the first multi-directional subband image is the same as the image size of the enhanced first image, and the plurality of first multi-directional subband images can be jointly used as the first high-frequency component.
  • after the NSP decomposition, the first low-frequency transform coefficient corresponding to the first low-pass subband image is also obtained.
  • performing NSDFB decomposition on the first bandpass subband images yields the first multi-directional subband images together with the first high-frequency transformation coefficient corresponding to them; since multiple first multi-directional subband images can be obtained, the first high-frequency transformation coefficient can include multiple values, each value corresponding to one first multi-directional subband image.
  • the electronic device may also perform NSCT (nonsubsampled contourlet transform) forward transform or FNSCT (fast nonsubsampled contourlet transform) forward transform on the enhanced second image, thereby obtaining a second high-frequency component and a second low-frequency component, as well as a second high-frequency transformation coefficient corresponding to the second high-frequency component and a second low-frequency transformation coefficient corresponding to the second low-frequency component.
  • the electronic device converts the enhanced second image into a second spectrogram; the second spectrogram illustrates the gray value change speed between each pixel in the enhanced second image and its surrounding pixels. The electronic device then performs multiple NSP (nonsubsampled pyramid filter) decompositions on the enhanced second image according to the second spectrogram, thereby obtaining one or more second bandpass subband images and a second lowpass subband image.
  • the image size of the second bandpass subband image is the same as that of the enhanced second image, and each second bandpass subband image corresponds to a different second high frequency region in the enhanced second image.
  • the image size of the second low-pass sub-band image is the same as that of the enhanced second image, the second low-pass sub-band image corresponds to the second low-frequency area in the enhanced second image, and the second low-pass sub-band image can be used as the second low-frequency component.
  • the electronic device then performs NSDFB (Nonsubsampled directional filter bank) decomposition on each second bandpass subband image, and can decompose each second bandpass subband image into a plurality of second multi-directional subband images.
  • the image size of the second multi-directional subband image is the same as the image size of the enhanced second image, and the plurality of second multi-directional subband images can be jointly used as the second high-frequency component.
  • after the NSP decomposition, the second low-frequency transformation coefficient corresponding to the second low-pass subband image is also obtained.
  • performing NSDFB decomposition on the second bandpass subband images yields the second multi-directional subband images together with the second high-frequency transformation coefficient corresponding to them; since multiple second multi-directional subband images can be obtained, the second high-frequency transformation coefficient can include multiple values, each value corresponding to one second multi-directional subband image.
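NSCT itself requires dedicated non-subsampled filter banks; as a simplified stand-in that illustrates the same high/low split with perfect reconstruction, an image can be separated into a low-pass component and a high-frequency residual. The box filter here is an illustrative substitute for the NSP/NSDFB filters, not the patent's transform:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean filter with edge replication (a crude low-pass)."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img):
    """Split an image into a low-frequency component and a high-frequency residual."""
    img = np.asarray(img, dtype=float)
    low = box_blur(img)
    high = img - low
    return high, low

img = np.arange(25, dtype=float).reshape(5, 5)
high, low = decompose(img)
```

Because the residual is defined as `img - low`, adding the two components back recovers the original exactly, mirroring the invertibility of the NSCT forward/inverse pair.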
  • Step 420 fusing the first high-frequency component and the second high-frequency component into a third high-frequency component, and fusing the first low-frequency component and the second low-frequency component into a third low-frequency component.
  • the step of fusing the first high-frequency component and the second high-frequency component into a third high-frequency component may include: calculating a first modulus value corresponding to the first high-frequency component and a second modulus value corresponding to the second high-frequency component; comparing the first modulus value with the second modulus value, and determining the largest modulus value among the first modulus value and the second modulus value; if the largest modulus value is the first modulus value, using the first high-frequency component as the third high-frequency component; and if the largest modulus value is the second modulus value, using the second high-frequency component as the third high-frequency component.
  • the first high-frequency component can correspond to a plurality of first modulus values, which respectively correspond to the multiple images in the first high-frequency component. Since the first high-frequency region in the first image corresponds to the second high-frequency region in the second image, the plurality of first modulus values corresponding to the first high-frequency component correspond one-to-one to the plurality of second modulus values corresponding to the second high-frequency component.
  • the electronic device compares each pair of corresponding first and second modulus values, and determines the largest modulus value within each pair. If the largest modulus value is the first modulus value, the image of the first high-frequency component corresponding to the first modulus value is fused into the third high-frequency component; if the largest modulus value is the second modulus value, the image of the second high-frequency component corresponding to the second modulus value is fused into the third high-frequency component.
  • the electronic device can obtain the first high-frequency transformation coefficient and the second high-frequency transformation coefficient by performing the forward transform on the enhanced first image and the enhanced second image. Because these coefficients correspond to the multi-directional subband images that form the high-frequency components, fusing the first high-frequency transformation coefficient and the second high-frequency transformation coefficient is equivalent to fusing the first high-frequency component and the second high-frequency component.
  • the electronic device respectively takes absolute values of the multiple values in the first high-frequency transformation coefficient to obtain multiple first modulus values, and takes absolute values of the multiple values in the second high-frequency transformation coefficient to obtain multiple second modulus values; there is a one-to-one correspondence between the first and second modulus values. The electronic device compares each pair of corresponding first and second modulus values and determines the largest modulus value in the pair. If the largest modulus value is the first modulus value, the value of the first high-frequency transformation coefficient corresponding to that first modulus value is used as the value of the third high-frequency transformation coefficient; if the largest modulus value is the second modulus value, the value of the second high-frequency transformation coefficient corresponding to that second modulus value is used as the value of the third high-frequency transformation coefficient. After all first modulus values have been compared with the second modulus values, the third high-frequency transformation coefficient is obtained, thereby determining the third high-frequency component corresponding to the third high-frequency transformation coefficient.
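The maximum-modulus selection rule for the high-frequency coefficients amounts to: for each pair of corresponding coefficient values, keep the one with the larger absolute value. A sketch with illustrative coefficient arrays:

```python
import numpy as np

def fuse_high(c1, c2):
    """Per coefficient, keep the value with the larger modulus (absolute value)."""
    c1 = np.asarray(c1, dtype=float)
    c2 = np.asarray(c2, dtype=float)
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

a = np.array([ 3.0, -7.0, 0.5])   # first high-frequency transformation coefficient
b = np.array([-4.0,  2.0, 0.6])   # second high-frequency transformation coefficient
fused = fuse_high(a, b)
```

Note that the selected coefficient keeps its sign: the rule compares moduli but copies the signed value, which preserves edge polarity in the reconstruction.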
  • the step of fusing the first low-frequency component and the second low-frequency component into a third low-frequency component may include: determining an average low-frequency component according to the first low-frequency component and the second low-frequency component, and using the average low-frequency component as the third low-frequency component.
  • the electronic device may calculate the average low-frequency component of the first low-frequency component and the second low-frequency component, that is, calculate an average value of data corresponding to the first low-frequency component and data corresponding to the second low-frequency component, thereby obtaining the average low-frequency component, and using the average low-frequency component as the third low-frequency component.
  • the electronic device performs NSCT forward transform or FNSCT forward transform on the enhanced first image and the enhanced second image to obtain the first low-frequency transform coefficient and the second low-frequency transform coefficient. Because the first low-frequency transform coefficient corresponds to the first low-pass subband image in the first low-frequency component, and the second low-frequency transform coefficient corresponds to the second low-pass subband image in the second low-frequency component, fusing the first low-frequency transform coefficient and the second low-frequency transform coefficient is equivalent to fusing the first low-frequency component and the second low-frequency component; the resulting third low-frequency transform coefficient corresponds to the third low-frequency component.
  • the electronic device may fuse the first low-frequency transformation coefficient representing the first low-frequency component and the second low-frequency transformation coefficient representing the second low-frequency component obtained after NSCT forward transformation or FNSCT forward transformation of the first image and the second image, to obtain a third low-frequency transformation coefficient, and use the third low-frequency transformation coefficient to represent the third low-frequency component.
  • the electronic device calculates an average value of the first low-frequency transformation coefficient and the second low-frequency transformation coefficient, and uses the average value as a third low-frequency transformation coefficient, thereby determining a third low-frequency component corresponding to the third low-frequency transformation coefficient.
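The low-frequency fusion rule is a plain element-wise average of the two low-frequency transformation coefficients, sketched below with illustrative values:

```python
import numpy as np

def fuse_low(l1, l2):
    """Element-wise average of the two low-frequency coefficients/components."""
    return (np.asarray(l1, dtype=float) + np.asarray(l2, dtype=float)) / 2.0

low = fuse_low([[10.0, 20.0]], [[30.0, 40.0]])
```

Averaging blends the overall brightness of the two exposures, which is what compensates the under- and over-exposed regions of a backlit pair.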
  • Step 430 fusing the third high-frequency component and the third low-frequency component into a third image.
  • the electronic device may fuse a third high-frequency component obtained by fusing the first high-frequency component and the second high-frequency component with a third low-frequency component obtained by fusing the first low-frequency component and the second low-frequency component to obtain a third image.
  • NSCT inverse transformation may be performed on the third high-frequency component and the third low-frequency component together, and the obtained result is the third image.
  • the electronic device may decompose the enhanced first image into the first high-frequency component and the first low-frequency component, and decompose the enhanced second image into the second high-frequency component and the second low-frequency component, then fuse the first high-frequency component and the second high-frequency component to obtain the third high-frequency component, and fuse the first low-frequency component and the second low-frequency component to obtain the third low-frequency component, and finally fuse the third high-frequency component and the third low-frequency component to obtain the third image. This complements the effective information in the enhanced first image and the enhanced second image, thereby solving the problem of underexposure or overexposure in backlit scenes.
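Putting the pieces together, the fusion pipeline of steps 410-430 can be sketched end to end. The box-blur decomposition below is a simplified, perfectly invertible stand-in for the NSCT forward/inverse transforms; the fusion rules (maximum modulus for high frequency, mean for low frequency) follow the text:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean filter with edge replication (a crude low-pass)."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(img1, img2):
    """Decompose both images, fuse high by max modulus and low by mean,
    then reconstruct by adding the fused components back together."""
    img1 = np.asarray(img1, dtype=float)
    img2 = np.asarray(img2, dtype=float)
    low1, low2 = box_blur(img1), box_blur(img2)
    high1, high2 = img1 - low1, img2 - low2
    high = np.where(np.abs(high1) >= np.abs(high2), high1, high2)  # max modulus
    low = (low1 + low2) / 2.0                                      # average
    return low + high  # inverse of the additive decomposition

a = np.full((4, 4), 50.0)    # stands in for the image focused on the backlight area
b = np.full((4, 4), 150.0)   # stands in for the image focused on the non-backlight area
result = fuse(a, b)
```

On these flat test images the high-frequency residuals are zero, so the result is simply the averaged low-frequency component, which illustrates how the low-frequency rule balances the two exposures.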
  • steps in the flow charts in FIGS. 2-4 are displayed sequentially as indicated by the arrows, these steps are not necessarily executed sequentially in the order indicated by the arrows. Unless otherwise specified herein, there is no strict order restriction on the execution of these steps, and these steps can be executed in other orders. Moreover, at least some of the steps in FIGS. 2-4 may include a plurality of sub-steps or multiple stages. These sub-steps or stages are not necessarily executed at the same time, but may be executed at different times. The execution order of these sub-steps or stages is not necessarily performed sequentially, but may be performed in turn or alternately with at least a part of other steps or sub-steps or stages of other steps.
  • the embodiment of the present disclosure also provides an image processing device.
  • the embodiment of the device corresponds to the embodiment of the method described above.
  • this embodiment of the device does not repeat the details in the embodiment of the method described above one by one, but it should be clear that the device in the embodiment of the present disclosure can correspondingly implement all the content of the method embodiment described above.
  • FIG. 5 is a structural block diagram of an image processing device provided in an embodiment of the present disclosure. As shown in FIG. 5 , the image processing device 500 provided in an embodiment of the present disclosure includes:
  • the detection module 510 is configured to detect the shooting scene where the electronic device is located.
  • the focus collection module 520 is configured to collect, through the camera, the first image focused on the backlight area and the second image focused on the non-backlight area if it is detected that the electronic device is in a backlight shooting scene.
  • the enhancement module 530 is configured to perform enhancement processing on the first image and the second image respectively.
  • the fusion module 540 is configured to fuse the enhanced first image and the enhanced second image to obtain a third image.
  • the image processing apparatus 500 further includes:
  • the alignment module is configured to perform registration processing on the enhanced first image and the enhanced second image, so that the spatial position information of the enhanced first image is consistent with the spatial position information of the enhanced second image.
  • the fusion module 540 is also configured to fuse the registered first image and the registered second image to obtain a third image.
  • the fusion module 540 is further configured to decompose the enhanced first image into a first high-frequency component and a first low-frequency component, and decompose the enhanced second image into a second high-frequency component and a second low-frequency component; wherein, the first high-frequency component corresponds to the first high-frequency region in the enhanced first image, and the first low-frequency component corresponds to the first low-frequency region in the enhanced first image; the second high-frequency component corresponds to the second high-frequency region in the enhanced second image, and the second low-frequency component corresponds to the second low-frequency region in the enhanced second image.
  • the first high-frequency component and the second high-frequency component are fused into a third high-frequency component
  • the first low-frequency component and the second low-frequency component are fused into a third low-frequency component
  • the third high-frequency component and the third low-frequency component are fused to obtain the third image.
  • the fusion module 540 is further configured to calculate a first modulus value corresponding to the first high-frequency component and a second modulus value corresponding to the second high-frequency component; compare the first modulus value with the second modulus value, and determine the largest modulus value among the first modulus value and the second modulus value; if the largest modulus value is the first modulus value, then use the first high-frequency component as the third high-frequency component; and if the largest modulus value is the second modulus value, then use the second high-frequency component as the third high-frequency component.
  • the fusion module 540 is further configured to determine an average low-frequency component according to the first low-frequency component and the second low-frequency component, and use the average low-frequency component as the third low-frequency component.
  • the fusion module is further configured to perform non-subsampling contourlet transformation or fast non-subsampling contourlet transformation on the enhanced first image to obtain the first high-frequency component and the first low-frequency component; perform the non-subsampling contourlet transformation or the fast non-subsampling contourlet transformation on the enhanced second image to obtain the second high-frequency component and the second low-frequency component.
  • the detection module 510 is also configured to obtain a preview image collected by the camera, and calculate the gray value of the preview image; if the gray value of the preview image is greater than the preset gray threshold, it is determined that the electronic device is in a backlight shooting scene; if the gray value of the preview image is less than or equal to the preset gray threshold, it is determined that the electronic device is not in a backlight shooting scene.
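The scene detection described above can be sketched as a threshold on the preview's gray value; using the mean gray value as "the gray value of the preview image" and the threshold of 180 are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def is_backlit(preview_gray, threshold=180.0):
    """Decide the backlight shooting scene by the preview's mean gray value."""
    return float(np.mean(preview_gray)) > threshold

bright = np.full((8, 8), 220.0)   # strongly lit preview, above the threshold
normal = np.full((8, 8), 120.0)   # ordinary preview, below the threshold
```

A production detector would typically also look at the gray-value histogram (a bright peak next to a dark peak), since a uniformly bright scene is not necessarily backlit.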
  • the enhancement module 530 is further configured to perform a first enhancement process on the first image, and perform a second enhancement process on the second image; wherein, the first enhancement process includes a multi-scale retinex enhancement algorithm, and the second enhancement process includes a homomorphic filtering algorithm.
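The first enhancement process is a multi-scale retinex (MSR). Its core is averaging log(I) − log(lowpass(I)) over several surround scales; the sketch below substitutes a box blur for the Gaussian surround used in the retinex literature, so it is illustrative rather than the patent's exact algorithm:

```python
import numpy as np

def box_blur(img, k):
    """Simple k x k mean filter with edge replication (a crude low-pass)."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def msr(img, scales=(3, 5, 7)):
    """Multi-scale retinex: average of log(I) - log(lowpass_k(I)) over scales."""
    eps = 1e-6  # avoid log(0)
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    for k in scales:
        out += np.log(img + eps) - np.log(box_blur(img, k) + eps)
    return out / len(scales)

flat = np.full((9, 9), 128.0)   # flat illumination: MSR output should be ~0
enhanced = msr(flat)
```

On a flat image the illumination estimate equals the image, so the MSR response vanishes; real inputs yield a response concentrated on reflectance detail, which is why MSR suits the under-exposed backlit region.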
  • the focus acquisition module 520 is also configured to divide the preview image captured by the camera into multiple image areas according to the preset area size; calculate the gray value corresponding to each image area; compare the gray value corresponding to each image area with the area gray threshold, and divide the preview image into a backlight area and a non-backlight area; wherein, the backlight area refers to the area in the preview image whose gray value is greater than the area gray threshold, and the non-backlight area refers to the area in the preview image whose gray value is less than or equal to the area gray threshold.
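Dividing the preview image into areas of a preset size and labelling each as backlit or non-backlit by its gray value can be sketched as follows; the 2×2 block size, the mean as the per-area gray value, and the threshold of 128 are illustrative:

```python
import numpy as np

def split_regions(preview, block=2, region_threshold=128.0):
    """Divide the preview into block x block areas and label each
    backlit (True) when its mean gray value exceeds the threshold."""
    g = np.asarray(preview, dtype=float)
    h, w = g.shape
    labels = np.zeros((h // block, w // block), dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            labels[i // block, j // block] = \
                g[i:i + block, j:j + block].mean() > region_threshold
    return labels

preview = np.array([[200, 200,  50,  50],
                    [200, 200,  50,  50],
                    [ 60,  60, 210, 210],
                    [ 60,  60, 210, 210]], dtype=float)
labels = split_regions(preview)
```

The resulting label map then tells the focus acquisition module where to place focus for the backlit shot and where for the non-backlit shot.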
  • the image processing apparatus provided in the embodiments of the present disclosure can execute the image processing methods provided in the foregoing method embodiments, and its implementation principle and technical effect are similar, and will not be repeated here.
  • Each module in the above-mentioned image processing device may be fully or partially realized by software, hardware or a combination thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the electronic device in the form of hardware, and can also be stored in the memory of the electronic device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
  • an electronic device is provided.
  • the electronic device may be a terminal device, and its internal structure may be as shown in FIG. 6 .
  • the electronic device includes a processor, a memory, a communication interface, a database, a display screen and an input device connected through a system bus.
  • the processor of the electronic device is configured to provide calculation and control capabilities.
  • the memory of the electronic device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer readable instructions.
  • the internal memory provides an environment for the execution of the operating system and computer readable instructions in the non-volatile storage medium.
  • the communication interface of the electronic device is configured to communicate with an external terminal in a wired or wireless manner, and the wireless mode can be realized through WiFi, an operator network, near field communication (NFC) or other technologies.
  • when the computer-readable instructions are executed by the processor, the image processing method provided by the above-mentioned embodiments can be implemented.
  • the display screen of the electronic device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic device may be a touch layer covered on the display screen, or a button, a trackball or a touch pad provided on the housing of the electronic device, or an external keyboard, touch pad or mouse.
  • FIG. 6 is only a block diagram of a partial structure related to the disclosed solution, and does not constitute a limitation to the electronic device to which the disclosed solution is applied.
  • the specific electronic device may include more or less components than those shown in the figure, or combine certain components, or have a different component arrangement.
  • the image processing apparatus provided in the present disclosure may be implemented in the form of computer-readable instructions, and the computer-readable instructions may be run on a computer device as shown in FIG. 6 .
  • Various program modules constituting the electronic device can be stored in the memory of the computer device, for example, the detection module 510 and the focus acquisition module 520 shown in FIG. 5 .
  • the computer-readable instructions constituted by the various program modules cause the processor to execute the steps in the image processing methods of the various embodiments of the present disclosure described in this specification.
  • an electronic device is provided, including a memory and one or more processors, the memory storing computer-readable instructions; when the computer-readable instructions are executed by the one or more processors, the one or more processors execute the steps of the image processing method described in the above method embodiments.
  • the electronic device provided by the embodiment of the present disclosure can implement the image processing method provided by the above method embodiment, and its implementation principle and technical effect are similar, and will not be repeated here.
  • One or more non-volatile storage media storing computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the steps of the image processing method described in any one of the above.
  • the computer-readable instructions stored on the computer-readable storage medium provided by the embodiments of the present disclosure can implement the image processing method provided by the above-mentioned method embodiments, and its implementation principle and technical effect are similar, and will not be repeated here.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
  • Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory. By way of illustration, RAM may take various forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM).
  • the image processing method provided by the present disclosure can effectively solve the problem of underexposure or overexposure when an electronic device is shooting in a strong light interference or backlight environment, improve the image quality of an image shot in a backlight shooting scene, and has strong industrial applicability.

Abstract

The embodiments of the present disclosure provide an image processing method and apparatus, an electronic device and a storage medium. The method comprises: detecting the shooting scene in which the electronic device is located; if it is detected that the electronic device is in a backlit shooting scene, collecting, by means of a camera, a first image focused on a backlit area and a second image focused on a non-backlit area; performing enhancement processing on the first image and the second image separately; and fusing the enhanced first image and the enhanced second image to obtain a third image. By implementing the embodiments of the present disclosure, the image quality of an image captured in a backlit shooting scene can be improved.
PCT/CN2022/098717 2022-01-18 2022-06-14 Image processing method and apparatus, electronic device and storage medium WO2023137956A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210057585.5 2022-01-18
CN202210057585.5A CN114418914A (zh) 2022-01-18 2022-01-18 Image processing method and apparatus, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2023137956A1 true WO2023137956A1 (fr) 2023-07-27

Family

ID=81274269

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/098717 WO2023137956A1 (fr) 2022-01-18 2022-06-14 Image processing method and apparatus, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN114418914A (fr)
WO (1) WO2023137956A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418914A (zh) * 2022-01-18 2022-04-29 上海闻泰信息技术有限公司 Image processing method and apparatus, electronic device and storage medium
CN115100081B (zh) * 2022-08-24 2022-11-15 深圳佳弟子科技有限公司 Grayscale image enhancement method, apparatus, device and storage medium for LCD display screens

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331510A (zh) * 2016-10-31 2017-01-11 维沃移动通信有限公司 Backlit photographing method and mobile terminal
CN107241559A (zh) * 2017-06-16 2017-10-10 广东欧珀移动通信有限公司 Portrait photographing method and apparatus, and camera device
CN107871346A (zh) * 2016-12-19 2018-04-03 珠海市杰理科技股份有限公司 Driving recorder
CN107872616A (zh) * 2016-12-19 2018-04-03 珠海市杰理科技股份有限公司 Driving recording method and apparatus
CN108650466A (zh) * 2018-05-24 2018-10-12 努比亚技术有限公司 Method and electronic device for improving photo latitude when photographing portraits in strong light or backlight
CN109064436A (zh) * 2018-07-10 2018-12-21 西安天盈光电科技有限公司 Image fusion method
CN109300096A (zh) * 2018-08-07 2019-02-01 北京智脉识别科技有限公司 Multi-focus image fusion method and apparatus
US11211018B1 (en) * 2020-06-25 2021-12-28 Xianyang Caihong Optoelectronics Technology Co., Ltd Grayscale compensation method and apparatus of display device
CN114418914A (zh) * 2022-01-18 2022-04-29 上海闻泰信息技术有限公司 Image processing method and apparatus, electronic device and storage medium


Also Published As

Publication number Publication date
CN114418914A (zh) 2022-04-29

Similar Documents

Publication Publication Date Title
CN111028189B (zh) Image processing method and apparatus, storage medium and electronic device
CN111641778B (zh) Photographing method, apparatus and device
CN111418201B (zh) Photographing method and device
JP6803982B2 (ja) Optical imaging method and apparatus
CN108322646B (zh) Image processing method and apparatus, storage medium and electronic device
CN109671106B (zh) Image processing method, apparatus and device
JP6469678B2 (ja) System and method for correcting image artifacts
US8964060B2 (en) Determining an image capture payload burst structure based on a metering image capture sweep
US8760537B2 (en) Capturing and rendering high dynamic range images
KR101662846B1 (ko) Apparatus and method for generating a bokeh effect in out-of-focus photographing
WO2023137956A1 (fr) Image processing method and apparatus, electronic device and storage medium
CN108616689B (zh) Portrait-based high dynamic range image acquisition method, apparatus and device
US9247152B2 (en) Determining image alignment failure
US9087391B2 (en) Determining an image capture payload burst structure
JP7136956B2 (ja) Image processing method and apparatus, terminal, and storage medium
CN113132695B (zh) Lens shading correction method and apparatus, and electronic device
CN110740266B (zh) Image frame selection method and apparatus, storage medium and electronic device
Choi et al. A method for fast multi-exposure image fusion
WO2016026072A1 (fr) Method, apparatus and computer program product for generating extended dynamic range color images
WO2023151210A1 (fr) Image processing method, electronic device and computer-readable storage medium
CN112785537A (zh) Image processing method and apparatus, and storage medium
CN113870300A (zh) Image processing method and apparatus, electronic device and readable storage medium
CN112541868B (zh) Image processing method and apparatus, computer device and storage medium
US20240005521A1 (en) Photographing method and apparatus, medium and chip
WO2023236209A1 (fr) Image processing method and apparatus, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22921387

Country of ref document: EP

Kind code of ref document: A1