WO2023272506A1 - Image processing method and apparatus, movable platform, and storage medium - Google Patents

Image processing method and apparatus, movable platform, and storage medium

Info

Publication number
WO2023272506A1
WO2023272506A1 (application PCT/CN2021/103219)
Authority
WO
WIPO (PCT)
Prior art keywords
image
image processing
illuminance
generate
target
Application number
PCT/CN2021/103219
Other languages
English (en)
French (fr)
Inventor
Li Hengjie (李恒杰)
Zhao Wenjun (赵文军)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority application: PCT/CN2021/103219
Publication: WO2023272506A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/90: Dynamic range modification of images or parts thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Definitions

  • the present application relates to the field of flight technology, and in particular to an image processing method, an image processing device, a movable platform, and a computer-readable storage medium.
  • in low-brightness environments, such as rainy days or night scenes, captured images generally have low brightness, are prone to noise, and have poor quality. At present, images are generally processed using temporal or global information, for example by synthesizing a high-dynamic-range image from multiple frames with different exposure values, or by correcting brightness with the brightness histogram of the entire image. The complexity of these algorithms is high, which reduces image processing efficiency and results in poor real-time performance.
  • Embodiments of the present application provide an image processing method, an image processing device, a movable platform, and a computer-readable storage medium.
  • the image processing method of the embodiments of the present application includes: acquiring a target image of at least a partial region of a captured image; decomposing the target image to generate an illuminance image and a reflection image; processing the illuminance image to generate a plurality of first intermediate images with different illuminances; and compositing the reflection image and the first intermediate images to obtain a composite image.
  • the image processing device of the embodiments of the present application includes a processor and a memory. The memory is used to store instructions, and the processor invokes the instructions stored in the memory to perform the following operations: acquiring a target image of at least a partial region of a captured image; decomposing the target image to generate an illuminance image and a reflection image; processing the illuminance image to generate a plurality of first intermediate images with different illuminances; and compositing the reflection image and the first intermediate images to obtain a composite image.
  • the movable platform of the embodiments of the present application includes an image processing device. The image processing device includes a processor and a memory. The memory is used to store instructions, and the processor invokes the instructions stored in the memory to perform the following operations: acquiring a target image of at least a partial region of a captured image; decomposing the target image to generate an illuminance image and a reflection image; processing the illuminance image to generate a plurality of first intermediate images with different illuminances; and compositing the reflection image and the first intermediate images to obtain a composite image.
  • the computer-readable storage medium of the embodiments of the present application includes instructions that, when run on a computer, cause the computer to implement the image processing method. The image processing method includes: acquiring a target image of at least a partial region of a captured image; decomposing the target image to generate an illuminance image and a reflection image; processing the illuminance image to generate a plurality of first intermediate images with different illuminances; and compositing the reflection image and the first intermediate images to obtain a composite image.
  • in the image processing method, image processing device, movable platform, and computer-readable storage medium of the embodiments of the present application, a target image of at least a partial region of the captured image is acquired and decomposed into an illuminance image containing the brightness of the image and a reflection image containing the scene details. By processing the illuminance image, a plurality of first intermediate images with different illuminances are obtained and composited with the reflection image, yielding an image with enhanced brightness and no loss of detail. Moreover, because only a part of the captured image may be processed, there is no need to wait for the image data of the entire captured image to be output before processing the whole image; processing can begin as soon as the image data of the target image is output. The algorithm complexity is low, which helps improve image processing efficiency and gives the processing high real-time performance.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 2 is a schematic plan view of a movable platform provided by an embodiment of the present application.
  • FIG. 3 to FIG. 7 are schematic diagrams of the principle of the image processing method provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 9 is a mapping relationship diagram of an image processing method provided by an embodiment of the present application.
  • FIG. 10 to FIG. 15 are schematic flowcharts of the image processing method provided by an embodiment of the present application.
  • FIG. 16 is a schematic block diagram of a computer provided by an embodiment of the present application.
  • in the description of the present application, "a plurality of" means two or more, unless otherwise specifically defined.
  • the terms "mounted", "connected", and "coupled" should be interpreted in a broad sense; for example, a connection may be a mechanical connection or an electrical connection, or mutual communication; it may be a direct connection or an indirect connection through an intermediary; and it may be internal communication between two elements or an interaction between two elements.
  • at present, under low-illuminance conditions such as rainy days or at night, captured images tend to be dark, have low contrast and a narrow gray-level range, are prone to color distortion, and often contain a large amount of noise. These problems severely affect the subjective quality of the image and the accuracy of computations that later use it. It is therefore necessary to enhance the image to improve its visual effect and convert it into a form better suited to human observation or computer vision systems. For low-brightness images, common enhancement algorithms include the Naturalness Preserved Enhancement Algorithm (NPE), Low-light Image Enhancement via Illumination Map Estimation (LIME), and some algorithms combined with a Camera Response Model. When enhancing low-light images, these algorithms require global or temporal information of the image, which results in high algorithm complexity and poor real-time performance.
  • referring to FIG. 1 and FIG. 2, to this end, an embodiment of the present application provides an image processing method, which includes:
  • 011: obtaining a target image of at least a partial region of a captured image;
  • 012: decomposing the target image to generate an illuminance image and a reflection image;
  • 013: processing the illuminance image to generate a plurality of first intermediate images with different illuminances; and
  • 014: compositing the reflection image and the first intermediate images to obtain a composite image.
  • the embodiment of the present application also provides a movable platform 100. The movable platform 100 includes an image processing device 10, that is, the image processing device 10 is disposed on the movable platform 100. The image processing device 10 includes a processor 11 and a memory 12. The memory 12 is used to store instructions, and the processor 11 invokes the instructions stored in the memory 12 to perform the following operations: acquiring a target image of at least a partial region of the captured image; decomposing the target image to generate an illuminance image and a reflection image; processing the illuminance image to generate a plurality of first intermediate images with different illuminances; and compositing the reflection image and the first intermediate images to obtain a composite image. That is to say, steps 011 to 014 may be implemented by the processor 11.
  • specifically, the movable platform 100 includes a drone, an unmanned vehicle, or a ground remote-controlled robot. For example, when the movable platform 100 is a drone or a ground remote-controlled robot, the image processing device 10 may be disposed on the body of the drone or robot, or on its remote controller; when the movable platform 100 is an unmanned vehicle, the image processing device 10 may be disposed on the body of the vehicle. The present application is described by taking a drone as an example of the movable platform 100.
  • the image processing device 10 is connected to the photographing device 20 to receive the captured images taken by the photographing device 20.
  • when the image sensor of the photographing device 20 receives light from the scene and is exposed, the image sensor can output image data line by line until all lines have been output, at which point a captured image is generated.
  • the image processing device 10 can generate the target image after the image sensor outputs a predetermined number of lines of image data (such as 2, 3, 5, or 9 lines); the target image may be a part of the captured image, or the target image may be the entire captured image. Please refer to FIG. 3.
  • preferably, when the target image P2 is a part of the captured image P1, subsequent image processing does not need to wait until the image sensor has output all lines of image data; as the image sensor outputs each portion of the image data, the image processing of that portion is completed. After the image sensor outputs the image data of all lines, the image processing of all lines is also completed at essentially the same time, which helps improve image processing efficiency.
  • then, the image processing device 10 processes the target image.
  • of course, the image processing device 10 may first detect the current ambient light brightness from the target image. When the ambient light brightness is relatively high (for example, greater than a predetermined ambient brightness), the image contains little noise; in this case the target image may be output directly without processing, which still ensures high image quality.
  • when the ambient light brightness is low, subsequent image processing is performed to improve image quality in the low-brightness environment.
  • in other embodiments, the image processing mode can be selected manually by the user. For example, when the user selects the night scene mode, the image processing device 10 performs image processing on all target images; when the user selects the day mode, the image processing device 10 does not process the target images and outputs them directly.
  • the target image may be a single-channel image, such as single-channel image data in the Bayer domain, or a multi-channel image, such as three-channel image data in the RGB domain. It can be understood that the three RGB channels (R, G, and B) and the single Bayer-domain channel Y have a functional relationship, such as Y = 0.299×R + 0.587×G + 0.114×B, so processing of the single Bayer-domain channel Y can be converted through this relationship into processing of the three RGB channels. The present application is described by taking the processing of single-channel image data in the Bayer domain as an example.
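As a hedged illustration of the channel relationship quoted above, the sketch below derives a single luma channel from three-channel RGB data using the stated weights; the array shapes and normalized value range are assumptions for the sketch, not details from the patent:

```python
import numpy as np

def rgb_to_luma(rgb: np.ndarray) -> np.ndarray:
    """Collapse an H x W x 3 RGB image to a single luma channel
    using Y = 0.299*R + 0.587*G + 0.114*B, as quoted in the text."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights
```

The inverse direction, distributing an adjustment of Y back onto the R, G, and B channels, follows from the same linear relationship.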
  • after the image processing device 10 acquires the target image, it may decompose the target image based on the Retinex algorithm, so that the target image is decomposed into an illuminance image and a reflection image.
  • Retinex theory shows that, under different lighting conditions, the human eye produces a nearly consistent color perception; this property is called color constancy. This constancy is the result of joint processing by the retina and the cerebral cortex.
  • referring to FIG. 4, according to Retinex theory, the target image P2 is composed of an illuminance image P3 and a reflection image P4; therefore, the target image P2 can be decomposed into the illuminance image P3 and the reflection image P4.
  • the illuminance image P3 represents the illumination component of the ambient light in the shooting scene corresponding to the target image P2; it determines the dynamic range of the image and represents the brightness information of the shooting scene.
  • the reflection image P4 represents the reflection component of the shooting scene and determines the edge detail information of the shooting scene.
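The multiplicative model of FIG. 4 can be illustrated numerically; the pixel values below are invented purely for illustration:

```python
import numpy as np

# Multiplicative Retinex model: each pixel of the target image P2 is the
# product of an illuminance value (ambient lighting, image P3) and a
# reflectance value (scene detail, image P4).
illuminance = np.array([[0.2, 0.2],
                        [0.8, 0.8]])   # dark upper half, bright lower half
reflection = np.array([[0.9, 0.1],
                       [0.9, 0.1]])    # an edge/detail pattern
target = illuminance * reflection       # P2 = P3 * P4, element-wise
```

Decomposition runs this identity in reverse: estimate P3 from P2 (for example by low-pass filtering) and recover P4 from the two.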
  • referring to FIG. 5, the image processing device 10 then processes the illuminance image P3 to generate a plurality of first intermediate images with different illuminances (taking the five first intermediate images P31 to P35 with different illuminances shown in FIG. 5 as an example).
  • to achieve brightness enhancement, after the illuminance image P3 is obtained, it is adjusted to generate a plurality of first intermediate images with different illuminances. For example, a first intermediate image with higher illuminance is obtained by increasing the pixel values of the illuminance image P3, and a first intermediate image with lower illuminance is obtained by decreasing the pixel values of the illuminance image P3, thereby generating a plurality of first intermediate images with different illuminances.
  • referring to FIG. 6, finally, the processor 11 composites the first intermediate images P31 to P35 with the reflection image P4 to generate a composite image P6 (hereinafter referred to as the composite image P6). Specifically, each first intermediate image is combined with the reflection image to generate one second intermediate image P5, producing second intermediate images P51 to P55 with different illuminances.
  • for example, the first intermediate image P31 and the reflection image P4 are composited into the second intermediate image P51, the first intermediate image P32 and the reflection image P4 into the second intermediate image P52, the first intermediate image P33 and the reflection image P4 into the second intermediate image P53, the first intermediate image P34 and the reflection image P4 into the second intermediate image P54, and the first intermediate image P35 and the reflection image P4 into the second intermediate image P55.
  • referring to FIG. 7, the second intermediate images P5 with different illuminances are then composited to generate a composite image P6 with enhanced illuminance, which increases the brightness of the image while retaining the detail information of the reflection image P4, thereby improving image quality.
  • in the image processing method, image processing device 10, and movable platform 100 of the embodiments of the present application, a target image of at least a partial region of the captured image is acquired and decomposed into an illuminance image containing the image brightness and a reflection image containing the scene details. By processing the illuminance image, a plurality of first intermediate images with different illuminances are obtained and composited with the reflection image, yielding an image with enhanced brightness and no loss of detail. Because only a part of the captured image may be processed, there is no need to wait for the image data of the entire captured image to be output before processing the whole image; processing can begin as soon as the image data of the target image is output. The algorithm complexity is low, which helps improve image processing efficiency and gives the processing high real-time performance.
  • referring to FIG. 2 and FIG. 8, in some embodiments, the image processing method further includes:
  • 015: adjusting the pixel values of the target image based on a preset mapping relationship.
  • in some embodiments, the processor 11 invokes the instructions to perform the following operation: adjusting the pixel values of the target image based on a preset mapping relationship. That is to say, step 015 may be implemented by the processor 11.
  • specifically, before decomposing the target image, the processor 11 may adjust the pixel values of the target image based on a preset mapping relationship. The mapping relationship is shown in FIG. 9: the abscissa is the pixel value of the target image, normalized to the interval [0, 1] to facilitate subsequent calculation, and the ordinate is the adjusted pixel value, also a normalized value.
  • for example, a target pixel value of 0.1 corresponds to an adjusted pixel value of 0.32, an adjustment ratio of 3.2; a target pixel value of 0.2 corresponds to an adjusted pixel value of 0.45, an adjustment ratio of 2.25. It can be seen that the lower the pixel value of the target image, the larger the adjustment ratio; that is, the adjustment ratio is negatively correlated with the pixel value of the target image.
  • after acquiring the target image, the processor 11 first finds, for each pixel value of the target image, the corresponding mapped pixel value in FIG. 9 and replaces the original pixel value with the mapped one (that is, adjusts the pixel value to the mapped value according to the adjustment ratio). Because lower pixel values receive larger adjustment ratios, the low-brightness pixels that are prone to noise (for example, pixel values less than 0.4) receive greater brightness enhancement, which helps improve the quality of the target image.
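The exact curve of FIG. 9 is not reproduced in the text, but the two sample points quoted above (0.1 maps to 0.32, 0.2 maps to 0.45) are closely matched by a square-root curve, so a hedged sketch of such a mapping might be:

```python
import numpy as np

def adjust_pixels(target: np.ndarray) -> np.ndarray:
    """Brighten dark pixels more than bright ones. The sqrt curve is an
    assumption standing in for the preset mapping of FIG. 9; it gives
    0.1 -> ~0.32 and 0.2 -> ~0.45, matching the examples in the text,
    and the gain (adjusted / original) shrinks as the pixel value grows."""
    return np.sqrt(np.clip(target, 0.0, 1.0))

print(round(float(adjust_pixels(np.array(0.1))), 2))  # 0.32
print(round(float(adjust_pixels(np.array(0.2))), 2))  # 0.45
```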
  • referring to FIG. 2 and FIG. 10, in some embodiments, step 011 includes:
  • 0111: determining the region of interest in the captured image; and
  • 0112: obtaining at least a part of the region of interest as the target image.
  • in some embodiments, the processor 11 invokes the instructions to perform the following operations: determining the region of interest according to a user input; or detecting a low-brightness region of the captured image whose brightness is less than a predetermined brightness as the region of interest. That is to say, step 0111 and step 0112 may be implemented by the processor 11.
  • specifically, the captured image has a region of interest. The region of interest may be a region determined by the user's selection operation in the preview image, indicating that the user is more interested in the content of that region; therefore, the region of interest of the captured image can be determined according to the user input, and image processing can then be performed on the region of interest.
  • the captured image may include high-brightness regions (image regions whose brightness is greater than or equal to a predetermined brightness) and low-brightness regions (image regions whose brightness is less than the predetermined brightness). For high-brightness regions, the probability of problems such as noise is low because the brightness is high, and the image quality is good.
  • therefore, a low-brightness region can be determined as the region of interest by detecting the low-brightness regions of the captured image.
  • for example, the captured image is scanned with a detection frame of a preset size, and an image region covered by a detection frame whose average pixel value is smaller than a predetermined brightness threshold is determined as the region of interest.
  • the image processing device 10 can obtain at least a part of the region of interest as the target image for image processing. Since image processing is performed only on the region of interest, the amount of image processing is reduced while ensuring that the processed captured image meets the user's needs.
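The detection-frame scan described above can be sketched as follows; the frame size, stride, and brightness threshold are illustrative assumptions, not values from the patent:

```python
import numpy as np

def find_low_brightness_regions(img, frame=4, threshold=0.4):
    """Slide a frame x frame detection window over a normalized image and
    return the top-left corners of windows whose mean pixel value falls
    below the threshold, i.e. candidate regions of interest."""
    h, w = img.shape
    hits = []
    for y in range(0, h - frame + 1, frame):
        for x in range(0, w - frame + 1, frame):
            if img[y:y + frame, x:x + frame].mean() < threshold:
                hits.append((y, x))
    return hits
```

A non-overlapping stride equal to the frame size is used here for simplicity; a denser stride would localize the low-brightness region more precisely at higher cost.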
  • referring to FIG. 2 and FIG. 11, in some embodiments, step 012 includes:
  • 0121: filtering the target image to generate the illuminance image; and
  • 0122: calculating the reflection image according to a preset first function, the illuminance image, and the target image.
  • the processor 11 invokes instructions to implement the following operations: filter the target image to generate an illuminance image; and calculate a reflection image according to a preset first function, the illuminance image and the target image. That is to say, step 0121 and step 0122 may be implemented by the processor 11 .
  • specifically, the processor 11 can filter the target image to generate the illuminance image. Because the illuminance image and the reflection image contain different information, the image data of the illuminance image can be separated out by setting specific filtering parameters.
  • the filtering algorithm may be a preset bilateral filtering algorithm applied to the target image; bilateral filtering preserves edge information well and has a good noise-reduction effect. Through low-pass filtering, the low-frequency signals of the target image are extracted to generate the illuminance image.
  • during filtering, the image data of the output predetermined rows is scanned with a polling frame of a predetermined size (such as 3*3 or 5*5 pixels). Therefore, the number of predetermined rows should not be too small; for example, the number of predetermined rows should be greater than or equal to the number of rows corresponding to the predetermined size, so that the polling frame has enough pixels for filtering.
  • the size of the target image may also be greater than or equal to the size of the polling frame (that is, the size of the target image can be determined according to the size of the polling frame), so that the polling frame can filter the target image; the filtering of the entire target image is then completed by moving the polling frame, thereby generating the illuminance image.
  • after the illuminance image is generated, the reflection image can be calculated according to the target image, the illuminance image, and the preset first function.
  • in this way, the target image is decomposed into an illuminance image and a reflection image.
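Putting the two sub-steps together, a minimal sketch of the decomposition might look like the following. A crude box-mean low-pass filter stands in for the bilateral filter, and the first function is assumed to be a pointwise division R = S / L, one common choice in Retinex implementations; the patent does not spell out its first function here:

```python
import numpy as np

def box_lowpass(img: np.ndarray, radius: int = 2) -> np.ndarray:
    """Crude low-pass filter: mean over a (2r+1)^2 window via edge padding.
    Stands in for the bilateral filter the text describes."""
    padded = np.pad(img, radius, mode="edge")
    h, w = img.shape
    acc = np.zeros_like(img, dtype=float)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            acc += padded[dy:dy + h, dx:dx + w]
    return acc / (2 * radius + 1) ** 2

def retinex_decompose(target, radius=2, eps=1e-6):
    """Split a normalized single-channel image into an illuminance estimate
    (low-frequency lighting) and a reflection image, following the
    multiplicative model target = illuminance * reflection."""
    illuminance = box_lowpass(target, radius)
    reflection = target / (illuminance + eps)
    return illuminance, reflection
```

The small epsilon guards against division by zero in fully dark windows; any production implementation would also respect the row-by-row polling-frame constraints described above.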
  • in some embodiments, the image processing method further includes:
  • 016: increasing the pixel values of the reflection image based on a preset ratio.
  • in some embodiments, the processor 11 invokes the instructions to perform the following operation: increasing the pixel values of the reflection image based on a preset ratio. That is to say, step 016 may be executed by the processor 11.
  • specifically, the reflection image contains the edge details, texture, color, and other information of the target image. Therefore, the reflection image can be enhanced to improve the quality of this information, enhancing the detail and improving the color saturation.
  • the processor 11 can increase the reflection image as a whole by a preset ratio. Since the reflection image does not contain brightness information, there is no need to treat low-brightness pixel values differently; instead, each pixel value of the reflection image is increased based on the preset ratio.
  • step 013 includes:
  • 0131 Generate a plurality of first intermediate images with different illuminances according to a plurality of different preset exposure values and a second function.
  • the processor 11 invokes instructions to implement the following operations: generate a plurality of first intermediate images with different illuminances according to a plurality of different preset exposure values and the second function. That is to say, step 0131 may be executed by the processor 11 .
  • specifically, when the image sensor outputs image data, it does so based on a preset exposure value; by setting different exposure values, the image sensor can output images with different degrees of exposure. Therefore, by substituting different preset exposure values into the second function, the image data that the image sensor would output under those exposure values can be simulated, thereby generating a plurality of first intermediate images with different illuminances.
  • by substituting different exposure values ev into the second function, the first intermediate images I_k corresponding to the different ev can be obtained. For example, if five different ev values are set, respectively 0.1, 0.2, 0.3, 0.4, and 0.5, five first intermediate images can be generated (the first intermediate images P31 to P35 shown in FIG. 5).
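The patent's second function is not reproduced in this text, so the sketch below uses a simple gamma-style response I_k = L**ev purely as a placeholder: for normalized illuminance values, a smaller exponent lifts dark pixels more, yielding a brighter intermediate image, which matches the qualitative behavior described above:

```python
import numpy as np

def simulate_exposures(illuminance, evs=(0.1, 0.2, 0.3, 0.4, 0.5)):
    """Generate first intermediate images of different illuminances from one
    illuminance image by simulating different exposure values. The mapping
    I_k = L**ev is an assumption, not the patent's second function; the ev
    values are the five examples given in the text."""
    L = np.clip(illuminance, 1e-6, 1.0)
    return [L ** ev for ev in evs]
```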
  • in some embodiments, step 014 includes:
  • 0141: compositing the reflection image and the first intermediate images based on a preset third function to generate a plurality of second intermediate images; and
  • 0142: compositing the plurality of second intermediate images based on a preset fourth function to generate the composite image.
  • in some embodiments, the processor 11 invokes the instructions to perform the following operations: compositing the reflection image and the first intermediate images based on a preset third function to generate a plurality of second intermediate images; and compositing the plurality of second intermediate images based on a preset fourth function to generate the composite image. That is to say, step 0141 and step 0142 may be executed by the processor 11.
  • the second intermediate images contain the information of both the enhanced reflection image and the illuminance image. Compared with the target image output by the image sensor, their brightness, details, texture, and color have all been enhanced, and the image quality is better.
  • the processor 11 can perform a mathematical transformation according to the first function relating the target image P2, the illuminance image P3, and the reflection image P4, to obtain the second intermediate image generated from the reflection image P4 and each first intermediate image.
  • the fourth function is L' = Σ_k (w_k × L_k), where L' is the composite image (that is, the composite image P6), L_k is the k-th second intermediate image (the value of L_k is a normalized pixel value in the interval [0, 1]), and w_k is its weight.
  • the weight of each second intermediate image can be determined as follows: the illuminances of the plurality of second intermediate images are arranged from small to large to obtain L_1, L_2, L_3, L_4, and L_5. For the lower-illuminance second intermediate images L_1 and L_2, the weight is positively related to the pixel value: the larger the pixel value, the higher the weight. For the higher-illuminance second intermediate images L_3, L_4, and L_5, the opposite holds: the lower the pixel value, the greater the weight, so as to better enhance the dark parts of the composite image.
  • with a weight of the form w_k ∝ 1 - L_k for the higher-illuminance images, greater weight is assigned to the dark parts of each such second intermediate image, so that the degree of enhancement of the combined dark parts is further increased.
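The weighted blend can be sketched as a small exposure-fusion routine. The split of the stack into "low" and "high" illuminance halves and the per-pixel weight normalization are assumptions standing in for the patent's exact weight formula:

```python
import numpy as np

def fuse(second_intermediates):
    """Blend second intermediate images, sorted from low to high illuminance:
    in the low-illuminance images, brighter pixels get larger weight; in the
    high-illuminance images, darker pixels get larger weight (w ~ 1 - L).
    Weights are normalized per pixel before the weighted sum L' = sum(w*L)."""
    stack = np.stack(second_intermediates)           # shape (k, h, w)
    k = stack.shape[0]
    weights = np.where(
        (np.arange(k) < k // 2)[:, None, None],
        stack,          # low-illuminance half: weight grows with pixel value
        1.0 - stack,    # high-illuminance half: weight grows as pixel darkens
    )
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    return (weights * stack).sum(axis=0)
```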
  • in some embodiments, the image processing method further includes:
  • 017: obtaining pixels in the composite image whose enhancement ratio is greater than a predetermined ratio; and
  • 018: adjusting the enhancement ratio of those pixels to the predetermined ratio.
  • in some embodiments, the processor 11 invokes the instructions to perform the following operations: obtaining pixels in the composite image whose enhancement ratio is greater than a predetermined ratio; and adjusting the enhancement ratio of those pixels to the predetermined ratio. That is to say, step 017 and step 018 may be implemented by the processor 11.
  • specifically, the processor 11 may also obtain the ratio of each pixel value of the composite image to the corresponding pixel value of the target image (i.e., the enhancement ratio), and then determine the pixels whose enhancement ratio is greater than a predetermined ratio; the predetermined ratio may be 10, 15, 20, etc. Because a pixel whose enhancement ratio is greater than the predetermined ratio has been enhanced too much, it can be determined to be an over-enhanced pixel.
  • the enhancement ratio of an over-enhanced pixel can be adjusted to the predetermined ratio, or below it, so that every pixel of the composite image is enhanced to an appropriate degree and no pixel is distorted by over-enhancement, thereby improving the quality of the composite image.
  • according to the adjusted enhancement ratio, the final output image can be calculated as O = M × L', where M is the enhancement ratio and L' is the composite image. When the target image is a three-channel RGB image, the output image includes the channels O_R, O_G, and O_B, obtained by applying the enhancement ratio to the R, G, and B channels respectively.
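The over-enhancement clamp of steps 017 and 018 can be sketched as follows; the maximum ratio of 10 is one of the example values quoted above, and computing the ratio against the original target image is an assumption consistent with the definition of the enhancement ratio:

```python
import numpy as np

def clamp_enhancement(target, composite, max_ratio=10.0, eps=1e-6):
    """Limit per-pixel enhancement: compute the ratio between the composite
    image and the original target image, clamp ratios above the predetermined
    maximum, and rebuild the output from the clamped ratio (O = M * value)."""
    ratio = composite / (target + eps)       # per-pixel enhancement ratio M
    ratio = np.minimum(ratio, max_ratio)     # steps 017/018: cap over-enhanced pixels
    return ratio * target
```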
  • referring to FIG. 16, the embodiment of the present application also provides a computer-readable storage medium 300 containing instructions 302. When the instructions 302 are run on a computer 400, the computer 400 executes the image processing method of any of the above-mentioned embodiments.
  • for example, when the instructions are executed by the computer 400, the computer 400 is caused to perform the following steps:
  • 011: obtaining a target image of at least a partial region of the captured image;
  • 012: decomposing the target image to generate an illuminance image and a reflection image;
  • 013: processing the illuminance image to generate a plurality of first intermediate images with different illuminances; and
  • 014: compositing the reflection image and the first intermediate images to obtain a composite image.
  • the schematic diagrams corresponding to the various embodiments depict a time sequence of executed actions, which is only an exemplary description; the order of the actions may be changed as needed. Where there is no contradiction, embodiments may be combined or split to suit different application scenarios, and details are not repeated here.


Abstract

An image processing method, comprising: (011) acquiring a target image of at least a partial region of a captured image; (012) decomposing the target image to generate an illuminance image and a reflection image; (013) processing the illuminance image to generate a plurality of first intermediate images with different illuminances; and (014) compositing the reflection image and the first intermediate images to obtain a composite image.

Description

图像处理方法及装置、可移动平台及存储介质 技术领域
本申请涉及飞行技术领域,特别涉及一种图像处理方法、图像处理装置、可移动平台和计算机可读存储介质。
背景技术
在低亮环境下,如阴雨天或夜间场景等,拍摄的图像亮度一般也较低,容易出现噪声,图像质量较差。目前,一般会利用时域信息或全局信息来对图像进行处理,如通过多帧曝光值不同的图像进行高动态范围图像合成,或者通过整个图像的亮度直方图进行亮度校正等,算法复杂度均较高,影响了图像处理效率,导致图像处理的实时性较低。
发明内容
本申请的实施例提供一种图像处理方法、图像处理装置、可移动平台和计算机可读存储介质。
本申请实施例的图像处理方法包括获取采集图像中至少部分区域的目标图像;分解所述目标图像,以生成照度图像和反射图像;处理所述照度图像,以生成多张不同照度的第一中间图像;及合成所述反射图像和所述第一中间图像以得到合成后的图像。
本申请实施例的图像处理装置包括处理器和存储器,所述存储器用于存储指令,所述处理器调用所述存储器存储的所述指令用于实现以下操作:获取采集图像中至少部分区域的目标图像;分解所述目标图像,以生成照度图像和反射图像;处理所述照度图像,以生成多张不同照度的第一中间图像;及合成所述反射图像和所述第一中间图像以得到合成后的图像。
本申请实施例的可移动平台包括图像处理装置,所述图像处理装置包括处理器和存储器,所述存储器用于存储指令,所述处理器调用所述存储器存储的所述指令用于实现以下操作:获取采集图像中至少部分区域的目标图像;分解所述目标图像,以生成照度图像和反射图像;处理所述照度图像,以生成多张不同照度的第一中间图像;及合成所述反射图像和所述第一中间图像以得到合成后的图像。
本申请实施例的计算机可读存储介质包括指令,当所述指令在计算机上运行时,使得所述计算机实现图像处理方法。所述图像处理方法包括获取采集图像中至少部分区域的目标图像;分解所述目标图像,以生成照度图像和反射图像;处理所述照度图像,以生成多张不同照度的第一中间图像;及合成所述反射图像和所述第一中间图像以得到合成后的图像。
本申请实施例的图像处理方法、图像处理装置、可移动平台和计算机可读存储介质中,通过获取采集图像中至少部分区域的目标图像,并通过图像分解,生成包含图像亮度的照度图和包含场景细节的反射图像,通过对照度图像的处理,得到不同照度的多张第一中间图像并与反射图像合成,从而得到不损失细节且亮度增强后的图像,且由于可仅对为采集图像的一部分进行处理,无需等待整张采集图像的图像数据输出再对整张图像进行处理,只需目标图像的图像数据输出即可进行处理,算法复杂度较低,从而有利于提高图像处理 效率,图像处理的实时性较高。
附图说明
图1是本申请实施例提供的图像处理方法的流程示意图。
图2是本申请实施例提供的可移动平台的平面示意图。
图3至图7是本申请实施例提供的图像处理方法的原理示意图。
图8是本申请实施例提供的图像处理方法的流程示意图。
图9是本申请实施例提供的图像处理方法的映射关系图。
图10至图15是本申请实施例提供的图像处理方法的流程示意图。
图16是本申请实施例提供的计算机的模块示意图。
具体实施方式
下面详细描述本申请的实施方式,实施方式的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施方式是示例性的,仅用于解释本申请,而不能理解为对本申请的限制。
在本申请的描述中,“多个”的含义是两个或两个以上,除非另有明确具体的限定。在本申请的描述中,需要说明的是,除非另有明确的规定和限定,术语“安装”、“相连”、“连接”应做广义理解,例如,可以是机械连接,也可以是电连接或可以相互通信;可以是直接相连,也可以通过中间媒介间接相连,可以是两个元件内部的连通或两个元件的相互作用关系。对于本领域的普通技术人员而言,可以根据具体情况理解上述术语在本申请中的具体含义。
目前,在阴雨天和晚上等低照度条件下,采集的图像亮度偏暗、对比度较低、图像灰度范围比较窄、容易产生颜色失真、并且常常含有大量的噪声。这些问题严重影响了图像的主观效果,同时影响后续使用该图像做计算的输出精度,降低了图像的应用价值。因此需要对图像做增强处理,改善其视觉效果,将其转换为更适合人眼观察或者计算机视觉系统处理的形式。针对低亮度图像,目前的常见增强算法如自然度保持增强算法(Naturalness Preserved Enhancement Algorithm,NPE)、基于光照贴图的低照度图像增强算法(Low-light Image Enhancement via Illumination Map Estimation,LIME)以及与相机响应模型(Camera Response Model)结合的一些算法。上述算法针对低光照图像增强时,需要图像的全局信息或者时域信息,导致算法复杂度较高,实时性较差。
请参阅图1和图2,为此,本申请实施例提供一种图像处理方法,图像处理方法包括:
011:获取采集图像中至少部分区域的目标图像;
012:分解目标图像,以生成照度图像和反射图像;
013:处理照度图像,以生成多张不同照度的第一中间图像;及
014:合成反射图像和第一中间图像以得到合成后的图像。
本申请实施例还提供一种可移动平台100,可移动平台100包括图像处理装置10,即图像处理装置10设置在可移动平台100上,图像处理装置10包括处理器11和存储器12,存储器12用于存储指令,处理器11调用存储器12存储的指令用于实现以下操作:获取采集图像中至少部分区域的目标图像;分解目标图像,以生成照度图像和反射图像;处理照 度图像,以生成多张不同照度的第一中间图像;及合成反射图像和第一中间图像以得到合成后的图像。也即是说,步骤011至步骤014可以由处理器11实现。
具体地,可移动平台100包括无人机、无人驾驶车辆、或地面遥控机器人。例如,可移动平台100为无人机或地面遥控机器人时,图像处理装置10可设置在无人机或地面遥控机器人的机身或者无人机或地面遥控机器人的遥控器上;可移动平台100为无人驾驶车辆时,图像处理装置10可设置在无人驾驶车辆的机身上。本申请以可移动平台100为无人机为例进行说明。
图像处理装置10与拍摄装置20连接,以接收拍摄装置20采集的采集图像。在拍摄装置20的图像传感器接收场景的光线进行曝光时,图像传感器可进行逐行输出,直至将所有行的图像数据输出后,可生成采集图像。
图像处理装置10在图像传感器输出预定行(如2行、3行、5行、9行等)的图像数据后,即可生成目标图像,目标图像可为采集图像的一部分或者目标图像即为采集图像。请参阅图3,优选地,目标图像P2为采集图像P1的一部分时,后续图像处理无需等待图像传感器的所有行图像数据均输出完成后才进行,图像传感器输出部分图像数据,就完成该部分图像数据的图像处理,在图像传感器输出所有行的图像数据后,所有行的图像数据也基本同时完成图像处理,从而有利于提升图像处理效率。
然后,图像处理装置10对目标图像进行处理,当然,图像处理装置10可首先通过目标图像检测当前的环境光亮度,在环境光亮度较高(如大于预定环境光亮度)时,则图像噪声较少,此时可不对目标图像进行处理,而直接输出目标图像,也可保证较高图像质量。而在环境光亮度较低时,再进行后续图像处理,以提升低亮环境下的图像质量。
在其他实施方式中,可由用户手动选择图像处理模式,如用户选择夜景模式时,图像处理装置10会对所有目标图像进行图像处理,而在用户选择白天模式时,图像处理装置10不对目标图像进行图像处理,而直接输出目标图像。
目标图像可以是单通道图像,如拜耳(Bayer)域的单通道图像数据,或者多通道图像,如RGB域的三通道图像数据。可以理解,RGB的三通道(如R、G、B三通道)和拜耳域的单通道Y存在函数关系,如Y=0.299R+0.587*G+0.114*B,对拜耳域的单通道Y的处理,可通过该函数关系转换为对RGB的三通道(如R、G、B三通道)的处理,本申请以对拜尔域的单通道图像数据的处理为例进行说明。
图像处理装置10在获取到目标图像后,可基于Retinex算法对目标图像进行分解,以使得目标图像分解为照度图像和反射图像。
其中,Retinex理论表明,在不同的光照条件下,人眼可以产生近乎一致的色彩感知,这种性质被称为颜色恒常性。这种恒常性是经过视网膜和大脑皮层共同处理后的结果。
请参阅图4,根据Retinex理论,目标图像P2由照度图像P3和反射图像P4组成,因此,可将目标图像P2分解为照度图像P3和反射图像P4,照度图像P3表示目标图像P2对应的拍摄场景中的环境光的照射分量,可确定图像的动态范围,表示拍摄场景的亮度信息;反射图像P4表示该拍摄场景的反射分量,可确定拍摄场景的边缘细节信息。
Referring to FIG. 5, the image processing apparatus 10 then processes the illuminance image P3 to generate a plurality of first intermediate images of different illuminance (taking the five first intermediate images P31 to P35 of different illuminance shown in FIG. 5 as an example). To achieve brightness enhancement, after the illuminance image P3 is obtained, it is adjusted to generate a plurality of first intermediate images of different illuminance. For example, increasing the pixel values of the illuminance image P3 yields a first intermediate image of higher illuminance, and decreasing the pixel values of the illuminance image P3 yields a first intermediate image of lower illuminance, thereby generating a plurality of first intermediate images of different illuminance.
Referring to FIG. 6, finally, the processor 11 synthesizes the first intermediate images P31 to P35 with the reflection image P4 to generate a synthesized image P6 (hereinafter, the synthesized image P6). Specifically, each first intermediate image combined with the reflection image yields one second intermediate image P5, producing second intermediate images P51 to P55 of different illuminance. For example, the first intermediate image P31 and the reflection image P4 are synthesized into the second intermediate image P51, P32 and P4 into P52, P33 and P4 into P53, P34 and P4 into P54, and P35 and P4 into P55.
Referring to FIG. 7, the second intermediate images P5 of different illuminance are then fused to generate an illuminance-enhanced synthesized image P6, which increases the brightness of the image while retaining the detail information carried by the reflection image P4, thereby improving image quality.
In the image processing method, image processing apparatus 10, and movable platform 100 of the embodiments of the present application, a target image of at least a partial region of a captured image is obtained and, through image decomposition, split into an illuminance image containing the image brightness and a reflection image containing the scene details. By processing the illuminance image, a plurality of first intermediate images of different illuminance are obtained and synthesized with the reflection image, yielding a brightness-enhanced image without loss of detail. Since only a part of the captured image may be processed, there is no need to wait for the image data of the whole captured image to be output before processing the whole image; processing can begin as soon as the image data of the target image has been output. The algorithmic complexity is low, which helps improve image processing efficiency and gives the processing high real-time performance.
Referring to FIGS. 2 and 8, in some embodiments, the method further includes:
015: adjusting the pixel values of the target image based on a preset mapping relationship.
In some embodiments, the processor 11 invokes the instructions to perform the following operation: adjusting the pixel values of the target image based on a preset mapping relationship. That is, step 015 can be implemented by the processor 11.
Specifically, before decomposing the target image, the processor 11 may adjust its pixel values based on a preset mapping relationship, shown in FIG. 9. The horizontal axis is the pixel value of the target image, normalized to the interval [0, 1] to simplify subsequent computation; the vertical axis is the adjusted pixel value, likewise normalized. For example, a target pixel value of 0.1 maps to an adjusted value of 0.32, an adjustment ratio of 3.2, and a target pixel value of 0.2 maps to 0.45, an adjustment ratio of 2.25. As can be seen, the lower the pixel value of the target image, the larger the adjustment ratio, i.e., the adjustment ratio is negatively correlated with the pixel value.
After obtaining the target image, the processor 11 first looks up, for each pixel value, the corresponding mapped pixel value in FIG. 9 and replaces the original pixel value with it (i.e., adjusts the pixel value of the target image to the mapped value according to the adjustment ratio), thereby adjusting every pixel value of the target image. Since the lower the pixel value, the larger the adjustment ratio, low-brightness pixels (e.g., pixel values below 0.4), which are prone to noise, receive a greater brightness boost, which helps improve the quality of the target image.
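The mapping in FIG. 9 is given only graphically. The sample points quoted above (0.1 to about 0.32, 0.2 to about 0.45) happen to lie close to a square-root curve, so a sketch under that assumption:

```python
def adjust_pixel(v: float, gamma: float = 0.5) -> float:
    """Brighten one normalized pixel value with a power-law curve.

    A square-root curve (gamma = 0.5) is an assumption, not the actual
    FIG. 9 mapping; it merely reproduces the quoted sample points to
    within rounding (0.1 -> ~0.32, 0.2 -> ~0.45).
    """
    if not 0.0 <= v <= 1.0:
        raise ValueError("pixel value must be normalized to [0, 1]")
    return v ** gamma

# The adjustment ratio adjust_pixel(v) / v shrinks as v grows,
# i.e. it is negatively correlated with the pixel value, as described.
```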
Referring to FIGS. 2 and 10, in some embodiments, step 011 includes:
0111: determining a region of interest in the captured image; and
0112: obtaining at least a partial region of the region of interest as the target image.
In some embodiments, the processor 11 invokes the instructions to perform the following operations: determining the region of interest according to user input; or detecting, in the captured image, a low-brightness region whose brightness is less than a predetermined brightness as the region of interest. That is, steps 0111 and 0112 can be implemented by the processor 11.
Specifically, the captured image has a region of interest, which may be a region determined by the user's selection operation on a preview image, indicating that the user is more interested in the content of that region. The region of interest of the captured image can therefore be determined from user input, and image processing can be performed on the region of interest. The captured image may include high-brightness regions (image regions with brightness greater than or equal to a predetermined brightness) and low-brightness regions (image regions with brightness less than the predetermined brightness). High-brightness regions, being bright, are less likely to suffer from problems such as noise and have good image quality; therefore, low-brightness regions of the captured image can be detected and determined to be the regions of interest. For example, the captured image can be scanned with a detection box of a preset size, and the image regions covered by detection boxes whose mean pixel value is below a predetermined brightness threshold are determined to be regions of interest.
The image processing apparatus 10 may obtain at least a partial region of the region of interest as the target image for image processing. Since processing is performed only on the region of interest, the processing load is reduced while the processed captured image still meets the user's needs.
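The detection-box scan described above can be sketched as follows; the box size, stride, and brightness threshold are illustrative values, not taken from the source:

```python
def low_brightness_rois(image, box=2, threshold=0.3):
    """Scan a normalized grayscale image (nested lists) with a square
    detection box and return the top-left corners of boxes whose mean
    pixel value is below the brightness threshold.

    Boxes are tiled without overlap (stride == box), an assumption made
    for simplicity.
    """
    h, w = len(image), len(image[0])
    rois = []
    for y in range(0, h - box + 1, box):
        for x in range(0, w - box + 1, box):
            total = sum(image[y + dy][x + dx]
                        for dy in range(box) for dx in range(box))
            if total / (box * box) < threshold:  # mean below threshold
                rois.append((y, x))
    return rois
```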
Referring to FIGS. 2 and 11, in some embodiments, step 012 includes:
0121: filtering the target image to generate the illuminance image; and
0122: computing the reflection image from a preset first function, the illuminance image, and the target image.
In some embodiments, the processor 11 invokes the instructions to perform the following operations: filtering the target image to generate the illuminance image; and computing the reflection image from the preset first function, the illuminance image, and the target image. That is, steps 0121 and 0122 can be implemented by the processor 11.
Specifically, to decompose the target image, the processor 11 can filter the target image to generate the illuminance image. Because the illuminance image and the reflection image carry different information, specific filter parameters can be set to extract the image data belonging to the illuminance image, thereby obtaining the illuminance image. In one embodiment, the target image may be filtered with a preset bilateral filtering algorithm; bilateral filtering has the advantages of preserving edge information well and suppressing noise effectively. Through low-pass filtering, the low-frequency signal of the target image can be extracted, generating the illuminance image.
During filtering, a polling box of a predetermined size (e.g., 3*3 or 5*5 pixels) traverses the image data of the output rows. It can be understood that, since the polling box is, say, 3*3 pixels, the number of output rows must not be too small in order to guarantee that the polling box is fully covered with image data; for example, the number of output rows should be greater than or equal to the number of rows corresponding to the predetermined size, ensuring that the polling box has enough pixels for filtering.
Since a predetermined number of rows must be output to guarantee that filtering proceeds normally, the size of the target image should also be greater than or equal to the size of the polling box (i.e., the size of the target image can be determined from the size of the polling box), so that the polling box can filter the target image; the polling box is then moved to complete filtering of the entire target image, generating the illuminance image.
The reflection image can then be computed from the target image, the illuminance image, and the preset first function. It can be understood that, based on Retinex theory, the target image can be decomposed into the illuminance image and the reflection image; the relationship between the target image, the illuminance image, and the reflection image can be expressed by the first function, which may be logR = logL − logI, where R is the reflection image, L is the target image, and I is the illuminance image. Given any two of the target image, the illuminance image, and the reflection image, the third can be computed via the first function. In this way, the target image can be decomposed into an illuminance image and a reflection image.
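The decomposition above can be sketched as a small bilateral low-pass filter followed by the first function, applied per pixel. The kernel radius and the two sigma parameters are illustrative assumptions; the source specifies only a "polling box" of a predetermined size:

```python
import math

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=0.1):
    """Edge-preserving low-pass filter producing the illuminance image
    from a normalized grayscale image stored as nested lists.
    The (2*radius+1)-pixel kernel plays the role of the polling box."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at the border
                    xx = min(max(x + dx, 0), w - 1)
                    ws = math.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    wr = math.exp(-((img[yy][xx] - img[y][x]) ** 2)
                                  / (2 * sigma_r ** 2))
                    num += ws * wr * img[yy][xx]
                    den += ws * wr
            out[y][x] = num / den
    return out

def reflection_image(target, illuminance, eps=1e-6):
    """First function, logR = logL - logI, applied per pixel; eps avoids
    log(0) and is an implementation assumption."""
    return [[math.log(l + eps) - math.log(i + eps)
             for l, i in zip(row_l, row_i)]
            for row_l, row_i in zip(target, illuminance)]
```

On a perfectly flat image the filter returns the image itself and the reflection image is zero everywhere, which matches the intuition that such an image carries illumination but no edge detail.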
Referring to FIGS. 2 and 12, in some embodiments, the image processing method further includes:
016: increasing the pixel values of the reflection image by a preset ratio.
In some embodiments, the processor 11 invokes the instructions to perform the following operation: increasing the pixel values of the reflection image by a preset ratio. That is, step 016 can be executed by the processor 11.
Specifically, the reflection image contains the edge details, texture, color, and other information of the target image; the reflection image can therefore be enhanced to improve the quality of this information, strengthening detail and boosting color saturation. The processor 11 can boost the reflection image as a whole by a preset ratio. Since the reflection image contains no brightness information, there is no need to distinguish low-brightness pixel values; instead, increasing every pixel value of the reflection image by the preset ratio enhances the reflection image, which can be implemented with the formula R' = R*(1+F), where R' is the enhanced reflection image and F is the preset ratio. F may be 1.1, 1.2, 1.4, 1.5, etc., and can be determined according to the desired level of edge detail, texture, and color, or according to the required image type; for example, when shooting a bokeh image, in which the background and edges are blurred, F can be set smaller to improve the bokeh effect.
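A minimal sketch of the uniform reflection enhancement R' = R*(1+F) described above:

```python
def enhance_reflection(reflection, f=1.2):
    """Boost every pixel of the reflection image by the preset ratio F,
    with no low-brightness distinction needed: R' = R * (1 + F).
    f = 1.2 is one of the example values quoted in the text."""
    return [[r * (1 + f) for r in row] for row in reflection]
```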
Referring to FIGS. 2 and 13, in some embodiments, step 013 includes:
0131: generating a plurality of first intermediate images of different illuminance according to a plurality of preset different exposure values and a second function.
In some embodiments, the processor 11 invokes the instructions to perform the following operation: generating a plurality of first intermediate images of different illuminance according to a plurality of preset different exposure values and a second function. That is, step 0131 can be executed by the processor 11.
Specifically, when the image sensor outputs image data, it does so at a preconfigured exposure value; by setting different exposure values, the image sensor can output images of different exposure levels. Therefore, after obtaining the illuminance image, the processor 11 can set different exposure values to simulate the image data the sensor would output at those exposure values, generating a plurality of first intermediate images of different illuminance. Concretely, the processor 11 generates the plurality of first intermediate images of different illuminance from the second function, the plurality of preset different exposure values, and the illuminance image, where the second function is I_k = (1+ev)*(I + ev*(1−I)), I_k being the first intermediate image, I the illuminance image, and ev the exposure value; setting different ev values yields the I_k corresponding to each ev. For example, with five different ev values of 0.1, 0.2, 0.3, 0.4, and 0.5, five first intermediate images can be generated (the first intermediate images P31 to P35 shown in FIG. 5).
Moreover, from the term ev*(1−I) it can be seen that the lower a pixel value in the illuminance image P3 (i.e., the smaller I, where I is the normalized pixel value in [0, 1]), the larger ev*(1−I) becomes, so the dark areas of the illuminance image are enhanced more strongly, improving the detail and visibility of the dark parts of the illuminance image P3.
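The second function can be written per pixel; note that the result can exceed 1.0 for bright pixels, and how the source handles that case is not stated:

```python
def second_function(i: float, ev: float) -> float:
    """Simulated exposure for one normalized illuminance pixel:
    I_k = (1 + ev) * (I + ev * (1 - I)).
    The ev * (1 - I) term gives dark pixels (small I) a proportionally
    larger boost, as the text observes."""
    return (1 + ev) * (i + ev * (1 - i))

# Five exposures, the example ev values from the text:
evs = (0.1, 0.2, 0.3, 0.4, 0.5)
intermediates_for_dark_pixel = [second_function(0.1, ev) for ev in evs]
```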
Referring to FIGS. 2 and 14, in some embodiments, step 014 includes:
0141: synthesizing the reflection image and the first intermediate images based on a preset third function to generate a plurality of second intermediate images; and
0142: synthesizing the plurality of second intermediate images based on a preset fourth function to generate the synthesized image.
In some embodiments, the processor 11 invokes the instructions to perform the following operations: synthesizing the reflection image and the first intermediate images based on the preset third function to generate a plurality of second intermediate images; and synthesizing the plurality of second intermediate images based on the preset fourth function to generate the synthesized image. That is, steps 0141 and 0142 can be executed by the processor 11.
Specifically, referring again to FIGS. 6 and 7, after the reflection image P4 and the first intermediate images P31 to P35 of different illuminance are obtained, the reflection image P4 must be recombined with each first intermediate image to obtain the second intermediate images P51 to P55; each second intermediate image contains the information of the enhanced reflection image and of an illuminance image. Compared with the target image output by the image sensor, brightness as well as detail, texture, and color are all enhanced, giving good image quality.
Referring again to FIG. 3, the processor 11 can first apply a mathematical transformation to the first function relating the target image P2, the illuminance image P3, and the reflection image P4 to obtain the third function, which generates a corresponding second intermediate image P5 from the reflection image P4 and each first intermediate image: L_k = e^R * I_k, where L_k is the second intermediate image, R is the reflection image, and I_k is the first intermediate image. In this way, from the third function, the reflection image P4, and the first intermediate images P31 to P35, the second intermediate images P51 to P55 can be generated.
Then, based on the preset fourth function, the plurality of second intermediate images (e.g., the second intermediate images P51 to P55) are fused to obtain the enhanced synthesized image P6, where the fourth function is
Figure PCTCN2021103219-appb-000001
where L' is the synthesized image (i.e., the synthesized image P6), L_k is a second intermediate image (the values of L_k are normalized pixel values in the interval [0, 1]), and w_k is a weight.
The weight of each second intermediate image can be determined according to the following formula:
Figure PCTCN2021103219-appb-000002
where the second intermediate images are arranged by illuminance from low to high to obtain L_1, L_2, L_3, L_4, and L_5. As can be seen, the weights of the lower-illuminance second intermediate images L_1 and L_2 are correlated with their pixel values: the larger the pixel value, the higher the weight. For the higher-illuminance second intermediate images L_3, L_4, and L_5 the opposite holds: the lower the pixel value, the larger the weight, which better enhances the dark parts of the synthesized image. In other embodiments, w_k = 1 − L_k, which assigns a larger weight to the dark parts of every second intermediate image, further increasing the enhancement of the dark parts after fusion.
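The exact piecewise weight formula appears only as a figure in the source, so this per-pixel sketch uses the explicitly stated alternative w_k = 1 − L_k; the normalized weighted sum is an assumption about the fourth function:

```python
import math

def fuse_pixel(r: float, i_ks) -> float:
    """Fuse one pixel across the simulated exposures.

    Third function: L_k = e^R * I_k (clamped to [0, 1], an assumption).
    Weights: w_k = 1 - L_k, the alternative given in the text.
    Fourth function assumed as L' = sum(w_k * L_k) / sum(w_k).
    """
    l_ks = [min(math.exp(r) * i_k, 1.0) for i_k in i_ks]
    w_ks = [1.0 - l_k for l_k in l_ks]
    if sum(w_ks) == 0:  # every exposure saturated at 1.0
        return 1.0
    return sum(w * l for w, l in zip(w_ks, l_ks)) / sum(w_ks)
```

With w_k = 1 − L_k, darker versions of a pixel dominate the average, which is what pulls detail out of the shadows after the brightness boost.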
Referring to FIGS. 2 and 15, in some embodiments, the image processing method further includes:
017: obtaining the pixels in the synthesized image whose enhancement ratio is greater than a predetermined ratio; and
018: adjusting the enhancement ratio of those pixels to the predetermined ratio.
In some embodiments, the processor 11 invokes the instructions to perform the following operations: obtaining the pixels in the synthesized image whose enhancement ratio is greater than the predetermined ratio; and adjusting the enhancement ratio of those pixels to the predetermined ratio. That is, steps 017 and 018 can be implemented by the processor 11.
Specifically, to prevent over-enhancement and the resulting distortion, the processor 11 can also obtain the ratio between corresponding pixel values of the synthesized image and the target image (i.e., the enhancement ratio), and then determine the pixels whose enhancement ratio is greater than a predetermined ratio, which may be 10, 15, 20, etc. Since pixels whose enhancement ratio exceeds the predetermined ratio are enhanced excessively, they can be determined to be over-enhanced pixels; their enhancement ratio can therefore be adjusted down to the predetermined ratio, or even lower, ensuring that every pixel of the synthesized image is enhanced to an appropriate degree and that no pixel is distorted by over-enhancement, thereby improving the quality of the synthesized image.
The final output image can then be computed from the target image and the enhancement ratio corresponding to each pixel of the target image, e.g., O = M*L', where O is the output image, M is the enhancement ratio, and L' is the synthesized image. In other embodiments, for a three-channel target image, the output image comprises OR, OG, and OB. The single-channel value of the target image (i.e., L') can first be converted to three-channel values, e.g., via L' = 0.299*R + 0.587*G + 0.114*B, and then the channel images computed separately as OR = M*R, OG = M*G, and OB = M*B, so that a three-channel output image is produced.
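The per-pixel ratio cap described above can be sketched as follows; the predetermined ratio 10 is one of the example values quoted in the text, and the epsilon guard against division by zero is an implementation assumption:

```python
def clamp_enhancement(target: float, synthesized: float,
                      max_ratio: float = 10.0) -> float:
    """Limit the enhancement ratio (synthesized / target) of one pixel
    to the predetermined ratio, returning the corrected output value."""
    eps = 1e-6  # guard against dividing by a zero-valued target pixel
    base = max(target, eps)
    ratio = synthesized / base
    return min(ratio, max_ratio) * base
```

A pixel within the limit passes through unchanged; an over-enhanced pixel is pulled back to exactly max_ratio times its original value.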
Referring to FIG. 16, embodiments of the present application further provide a computer-readable storage medium 300 containing instructions; when the instructions 302 run on a computer 400, the computer 400 executes the image processing method of any of the above embodiments.
For example, referring to FIG. 2, when the instructions are executed by the computer 400, the computer 400 performs the following steps:
011: obtaining a target image of at least a partial region of a captured image;
012: decomposing the target image to generate an illuminance image and a reflection image;
013: processing the illuminance image to generate a plurality of first intermediate images of different illuminance; and
014: synthesizing the reflection image and the first intermediate images to obtain a synthesized image.
As another example, referring to FIG. 8, when the instructions are executed by the computer 400, the computer 400 performs the following step:
015: adjusting the pixel values of the target image based on a preset mapping relationship.
It can be understood that, where the figures corresponding to the embodiments contain an order of execution, that order is merely illustrative; the order of the operations may vary as required, and, where not contradictory, the embodiments may be combined or split into one or more embodiments to suit different application scenarios, which is not elaborated here.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "illustrative embodiment", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, such illustrative references do not necessarily refer to the same embodiment or example, and the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially concurrent manner or in the reverse order, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in the flowchart or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including the processor 11, or another system that can fetch instructions from an instruction execution system, apparatus, or device and execute them).
Although embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and are not to be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (35)

  1. An image processing method, comprising:
    obtaining a target image of at least a partial region of a captured image;
    decomposing the target image to generate an illuminance image and a reflection image;
    processing the illuminance image to generate a plurality of first intermediate images of different illuminance; and
    synthesizing the reflection image and the first intermediate images to obtain a synthesized image.
  2. The image processing method according to claim 1, further comprising:
    adjusting the pixel values of the target image based on a preset mapping relationship.
  3. The image processing method according to claim 2, wherein adjusting the pixel values of the target image based on the preset mapping relationship comprises:
    determining, based on the preset mapping relationship, an adjustment ratio corresponding to the pixel value; and
    adjusting the pixel value according to the adjustment ratio, the adjustment ratio being negatively correlated with the pixel value.
  4. The image processing method according to claim 1, wherein decomposing the target image to generate the illuminance image and the reflection image comprises:
    filtering the target image to generate the illuminance image; and
    computing the reflection image from a preset first function, the illuminance image, and the target image.
  5. The image processing method according to claim 4, wherein filtering the target image to generate the illuminance image comprises:
    filtering the target image based on a preset bilateral filtering algorithm to generate the illuminance image, the size of the target image being determined according to the size of the polling box of the bilateral filtering algorithm.
  6. The image processing method according to claim 4, wherein the preset first function comprises: logR = logL − logI, where R is the reflection image, L is the target image, and I is the illuminance image.
  7. The image processing method according to claim 1, further comprising:
    increasing the pixel values of the reflection image by a preset ratio.
  8. The image processing method according to claim 1, wherein processing the illuminance image to generate the plurality of first intermediate images of different illuminance comprises:
    generating the plurality of first intermediate images of different illuminance according to a plurality of preset different exposure values and a second function.
  9. The image processing method according to claim 8, wherein the second function comprises:
    I_k = (1+ev)*(I+ev*(1−I)), where I_k is the first intermediate image, I is the illuminance image, and ev is the exposure value.
  10. The image processing method according to claim 1, wherein synthesizing the reflection image and the first intermediate images to generate the synthesized image comprises:
    synthesizing the reflection image and the first intermediate images based on a preset third function to generate a plurality of second intermediate images; and
    synthesizing the plurality of second intermediate images based on a preset fourth function to generate the synthesized image.
  11. The image processing method according to claim 10, wherein the third function comprises: L_k = e^R * I_k, where L_k is the second intermediate image, R is the reflection image, and I_k is the first intermediate image.
  12. The image processing method according to claim 10, wherein the fourth function comprises:
    Figure PCTCN2021103219-appb-100001
    where L' is the synthesized image, L_k is the second intermediate image, and w_k is the weight.
  13. The image processing method according to claim 1, further comprising:
    obtaining the pixels in the synthesized image whose enhancement ratio is greater than a predetermined ratio; and
    adjusting the enhancement ratio of the pixels to the predetermined ratio.
  14. The image processing method according to claim 1, wherein the target image comprises single-channel image data or three-channel image data.
  15. The image processing method according to claim 1, wherein obtaining the target image of at least a partial region of the captured image comprises:
    determining a region of interest in the captured image; and
    obtaining at least a partial region of the region of interest as the target image.
  16. The image processing method according to claim 15, wherein determining the region of interest in the captured image comprises:
    determining the region of interest according to user input; or
    detecting, in the captured image, a low-brightness region whose brightness is less than a predetermined brightness as the region of interest.
  17. An image processing apparatus, comprising a processor and a memory, the memory storing instructions, and the processor invoking the instructions stored in the memory to perform the following operations:
    obtaining a target image of at least a partial region of a captured image; decomposing the target image to generate an illuminance image and a reflection image; processing the illuminance image to generate a plurality of first intermediate images of different illuminance; and synthesizing the reflection image and the first intermediate images to obtain a synthesized image.
  18. The image processing apparatus according to claim 17, wherein the processor invokes the instructions to perform the following operation: adjusting the pixel values of the target image based on a preset mapping relationship.
  19. The image processing apparatus according to claim 18, wherein the processor invokes the instructions to perform the following operations: determining, based on the preset mapping relationship, an adjustment ratio corresponding to the pixel value; and adjusting the pixel value according to the adjustment ratio, the adjustment ratio being negatively correlated with the pixel value.
  20. The image processing apparatus according to claim 17, wherein the processor invokes the instructions to perform the following operations: filtering the target image to generate the illuminance image; and computing the reflection image from a preset first function, the illuminance image, and the target image.
  21. The image processing apparatus according to claim 20, wherein the processor invokes the instructions to perform the following operation: filtering the target image based on a preset bilateral filtering algorithm to generate the illuminance image, the size of the target image being determined according to the size of the polling box of the bilateral filtering algorithm.
  22. The image processing apparatus according to claim 20, wherein the preset first function comprises: logR = logL − logI, where R is the reflection image, L is the target image, and I is the illuminance image.
  23. The image processing apparatus according to claim 17, wherein the processor invokes the instructions to perform the following operation: increasing the pixel values of the reflection image by a preset ratio.
  24. The image processing apparatus according to claim 17, wherein the processor invokes the instructions to perform the following operation: generating a plurality of first intermediate images of different illuminance according to a plurality of preset different exposure values and a second function.
  25. The image processing apparatus according to claim 24, wherein the second function comprises: I_k = (1+ev)*(I+ev*(1−I)), where I_k is the first intermediate image, I is the illuminance image, and ev is the exposure value.
  26. The image processing apparatus according to claim 17, wherein the processor invokes the instructions to perform the following operations: synthesizing the reflection image and the first intermediate images based on a preset third function to generate a plurality of second intermediate images; and synthesizing the plurality of second intermediate images based on a preset fourth function to generate the synthesized image.
  27. The image processing apparatus according to claim 26, wherein the third function comprises: L_k = e^R * I_k, where L_k is the second intermediate image, R is the reflection image, and I_k is the first intermediate image.
  28. The image processing apparatus according to claim 26, wherein the fourth function comprises:
    Figure PCTCN2021103219-appb-100002
    where L' is the synthesized image, L_k is the second intermediate image, and w_k is the weight.
  29. The image processing apparatus according to claim 17, wherein the processor invokes the instructions to perform the following operations: obtaining the pixels in the synthesized image whose enhancement ratio is greater than a predetermined ratio; and adjusting the enhancement ratio of the pixels to the predetermined ratio.
  30. The image processing apparatus according to claim 17, wherein the target image comprises single-channel image data or three-channel image data.
  31. The image processing apparatus according to claim 17, wherein the processor invokes the instructions to perform the following operations: determining a region of interest in the captured image; and obtaining at least a partial region of the region of interest as the target image.
  32. The image processing apparatus according to claim 31, wherein the processor invokes the instructions to perform the following operations: determining the region of interest according to user input; or detecting, in the captured image, a low-brightness region whose brightness is less than a predetermined brightness as the region of interest.
  33. A movable platform, comprising the image processing apparatus according to any one of claims 17 to 32.
  34. The movable platform according to claim 33, wherein the movable platform comprises an unmanned aerial vehicle, an unmanned vehicle, or a ground-based remote-controlled robot.
  35. A computer-readable storage medium, comprising instructions that, when run on a computer, cause the computer to implement the image processing method according to any one of claims 1 to 16.
PCT/CN2021/103219 2021-06-29 2021-06-29 Image processing method and apparatus, movable platform, and storage medium WO2023272506A1 (zh)



