WO2020113407A1 - Image processing method and device, unmanned aerial vehicle, image processing system, and storage medium - Google Patents

Image processing method and device, unmanned aerial vehicle, image processing system, and storage medium Download PDF

Info

Publication number
WO2020113407A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
band image
band
fused
visible light
Prior art date
Application number
PCT/CN2018/119113
Other languages
English (en)
Chinese (zh)
Inventor
翁超
鄢蕾
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201880065224.7A priority Critical patent/CN111247558A/zh
Priority to PCT/CN2018/119113 priority patent/WO2020113407A1/fr
Publication of WO2020113407A1 publication Critical patent/WO2020113407A1/fr

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10032 - Satellite or aerial image; Remote sensing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10048 - Infrared image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Definitions

  • the present invention relates to the field of image processing technology, and in particular, to an image processing method, device, drone, system, and storage medium.
  • drones have become a popular research topic, and are widely used in plant protection, aerial photography, forest fire monitoring and other fields, bringing many conveniences to people's lives and work.
  • An image captured with a single type of lens carries only one kind of information.
  • For example, when an infrared shooting lens is used to photograph a subject, it can detect the subject's infrared radiation, which reflects the subject's temperature information well; however, an infrared lens is not sensitive to brightness changes in the shooting scene, its imaging resolution is low, and the captured image cannot reflect the subject's detailed feature information.
  • When a visible light shooting lens is used to shoot a subject, a higher-resolution image is obtained that reflects the subject's detailed feature information, but a visible light lens cannot obtain the subject's infrared radiation information, so the resulting image cannot reflect the subject's temperature information.
  • How to obtain images of higher quality and richer information has therefore become a research hotspot.
  • Embodiments of the present invention provide an image processing method, device, unmanned aerial vehicle, system, and storage medium, which can acquire higher-quality images.
  • an embodiment of the present invention provides an image processing method.
  • the method includes: acquiring a first band image and a second band image; registering the first band image and the second band image; and directly fusing the registered first band image and the registered second band image to obtain a target image.
  • an embodiment of the present invention provides an image processing device, including a memory and a processor:
  • the memory is used to store program instructions
  • the processor executes the program instructions stored in the memory. When the program instructions are executed, the processor is used to perform the following steps:
  • acquiring a first band image and a second band image; registering the first band image and the second band image; and directly fusing the registered first band image and the registered second band image to obtain a target image.
  • an embodiment of the present invention provides a drone, including:
  • a fuselage; a power system installed on the fuselage, used to provide flight power; and
  • a processor, used to acquire the first band image and the second band image, register the first band image and the second band image, and directly fuse the registered first band image and the registered second band image to obtain a target image.
  • an embodiment of the present invention provides a drone system.
  • the system includes: an intelligent terminal, an image capturing device, and a drone;
  • the intelligent terminal is used to send flight control instructions, and the flight control instructions are used to instruct the drone to fly according to the determined flight trajectory;
  • the drone is used to respond to the flight control instruction, control the drone to fly according to the flight trajectory, and control the image shooting device mounted on the drone to shoot;
  • the image capturing device is configured to acquire a first band image through an infrared shooting module included in the image capturing device, and acquire a second band image through a visible light shooting module included in the image capturing device; register the first band image and the second band image; and directly fuse the registered first band image and the registered second band image to obtain a target image.
  • an embodiment of the present invention provides a computer storage medium that stores computer program instructions, which when executed are used to implement the image processing method described in the first aspect above.
  • the target image is obtained by directly fusing the registered first-band image and the registered second-band image; no other processing is performed.
  • the fusion scheme is simple, which saves the time required for image fusion, thereby improving the efficiency of image fusion.
  • the target image includes the information of the first band image and the information of the second band image, and more information can be obtained from the target image, which improves the quality of the captured image.
  • FIG. 1 is a schematic structural diagram of a drone system provided by an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of another image processing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of obtaining a gradient field of an image to be fused according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of obtaining a gradient field of an image to be fused provided by an embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of a method for calculating color values of pixels in an image to be merged according to an embodiment of the present invention
  • FIG. 7 is a schematic structural diagram of an image processing device according to an embodiment of the present invention.
  • An embodiment of the present invention proposes an image processing method.
  • the image processing method can be applied to a drone system.
  • An image capturing device is mounted on the drone in the drone system.
  • In this image processing method, after the first band image and the second band image captured by the image capturing device are registered, they are directly fused into the target image with no other processing, which saves the time required for image fusion.
  • Because the target image includes both the information of the first band image and the information of the second band image, more information can be obtained from the target image, and the quality of the captured image is improved.
  • the embodiments of the present invention can be applied to fields such as military defense, remote sensing, environmental protection, traffic monitoring, or disaster detection. These fields mainly rely on drone aerial photography to obtain environmental images, which are then analyzed and processed to obtain the corresponding data. For example, in the field of environmental protection, a drone photographs a certain area to obtain environmental images of that area; if the area is where a river is located, analyzing the environmental images yields data on the river's water quality, from which it can be judged whether the river is polluted.
  • the unmanned aerial system includes: an intelligent terminal 101, an unmanned aerial vehicle 102, and an image capturing device 103.
  • the smart terminal 101 may be a control terminal of the drone, specifically one or more of a remote control, a smart phone, a tablet computer, a laptop computer, a ground station, and a wearable device (such as a watch or wristband).
  • the unmanned aerial vehicle 102 may be a rotor-type unmanned aerial vehicle, such as a four-rotor unmanned aerial vehicle, a six-rotor unmanned aerial vehicle, an eight-rotor unmanned aerial vehicle, or a fixed-wing unmanned aerial vehicle.
  • the UAV 102 includes a power system, which is used to provide flight power for the UAV.
  • the power system may include one or more of a propeller, a motor, and an electronic speed controller (ESC).
  • the image capturing device 103 is used to capture an image when a shooting instruction is received.
  • the image capturing device is configured on the drone 102.
  • the drone 102 may further include a gimbal,
  • the image capturing device 103 is mounted on the drone 102 via a gimbal.
  • the gimbal is a multi-axis transmission and stabilization system.
  • the gimbal motor compensates the shooting angle of the image shooting device by adjusting the rotation angle of the rotating shaft, and prevents or reduces shake of the image shooting device by setting an appropriate buffer mechanism.
  • the image shooting device 103 includes at least an infrared shooting module 1031 and a visible light shooting module 1032, wherein the infrared shooting module 1031 and the visible light shooting module 1032 have different shooting advantages.
  • the infrared shooting module 1031 can detect the infrared radiation information of the subject, and the captured image can better reflect the temperature information of the subject; the visible light shooting module 1032 can capture a higher resolution image, which can reflect the shooting Detailed feature information of the object.
  • the smart terminal 101 may also be configured with an interactive device for realizing human-computer interaction.
  • the interactive device may be one or more of a touch screen, a keyboard, keys, a joystick, and a dial wheel.
  • a user interface may be provided on the interactive device. During the flight of the drone, the user may set the shooting position through the user interface: the user may enter shooting position information on the user interface, or perform a touch operation (such as a click operation or a sliding operation) on the displayed flight trajectory of the drone to set the shooting position.
  • the smart terminal 101 sets one shooting position for each such touch operation.
  • after detecting the shooting position information input by the user, the smart terminal 101 sends the shooting position information to the image shooting device 103, and when the drone 102 flies to the shooting position, the image capturing device 103 photographs the subject at that position.
  • when the drone 102 flies to the shooting position, before shooting the subject, it can also detect whether the infrared shooting module 1031 and the visible light shooting module 1032 included in the image shooting device 103 are positionally registered: if they are, the infrared shooting module 1031 and the visible light shooting module 1032 shoot the subject at the shooting position; if they are not, the shooting operation may be skipped and prompt information may be output to indicate that the infrared shooting module 1031 and the visible light shooting module 1032 need to be registered.
  • the infrared shooting module 1031 shoots the subject in the shooting position to obtain the first band image
  • the visible light module 1032 shoots the subject at the shooting position to obtain the second band image.
  • the image shooting device 103 may perform registration processing on the acquired first band image and second band image, and directly fuse the registered first band image and second band image to obtain a target image.
  • the registration process mentioned here refers to processing of the acquired first band image and second band image, such as rotation and cropping, whereas the positional registration described above refers to adjusting the physical structure of the infrared shooting module 1031 and the visible light shooting module 1032.
  • the image capturing device 103 may also send the first band image and the second band image to the smart terminal 101 or the drone 102, and the smart terminal 101 or the drone 102 performs the above fusion operation to obtain the target image.
  • the target image includes both the information of the first band image and the information of the second band image, so more information can be obtained from the target image, which improves the information diversity of the captured images.
  • the registered first-band image and second-band image are fused directly, without any other processing, which saves image fusion time and improves image fusion efficiency.
  • FIG. 2 shows an image processing method provided by an embodiment of the present invention.
  • the image processing method may be applied to the above-mentioned drone system, and specifically to an image capturing device; that is, the method may be executed by the image capturing device.
  • the image processing method shown in FIG. 2 may include:
  • Step S201 Acquire a first band image and a second band image.
  • the first band image and the second band image are obtained by two different shooting modules photographing the same subject; that is, the two images contain the same image elements, but the information about those elements that each image reflects is different. For example, the first band image focuses on the temperature information of the subject, while the second band image focuses on reflecting the detailed feature information of the subject.
  • the first band image and the second band image may be obtained by the image capturing device photographing the subject, or they may be sent to the image capturing device by another device.
  • the first band image and the second band image may be captured by a camera capable of capturing multiple band signals.
  • the image capture device includes an infrared capture module and a visible light capture module, the first band image may be an infrared image captured by the infrared capture module, and the second band image may be the visible light Visible light image captured by the shooting module.
  • the infrared shooting module can capture infrared signals with wavelengths from 10⁻³ m down to 7.8 × 10⁻⁷ m and can detect the infrared radiation information of the subject, so the first band image reflects the temperature information of the subject well; the visible light shooting module can capture visible light signals with wavelengths of (78 to 38) × 10⁻⁶ cm (about 780 to 380 nm) and can capture higher-resolution images, so the second band image reflects the detailed feature information of the subject.
  • the image capturing device may perform alignment processing on the first band image and the second band image based on the feature information of the first band image and the feature information of the second band image.
  • one way of performing alignment processing on the first band image and the second band image is: acquiring the feature information of the first band image and the feature information of the second band image; determining a first offset of the feature information of the first band image relative to the feature information of the second band image; and adjusting the first band image according to the first offset.
  • specifically, the image capturing device acquires the feature information of both images, compares them, and determines the first offset of the first band image's feature information relative to the second band image's feature information; this offset mainly refers to the position offset of the feature points.
  • the first band image is then adjusted according to the first offset to obtain an adjusted first intermediate image; for example, the first band image is stretched or contracted horizontally or vertically according to the first offset, so that the adjusted first band image and the second band image are aligned. Further, after the adjusted first band image and the second band image are registered, they are fused directly to obtain the target image.
  • alternatively, alignment processing may be performed by: acquiring the feature information of the first band image and the feature information of the second band image; determining a second offset of the feature information of the second band image relative to the feature information of the first band image; and adjusting the second band image according to the second offset.
  • specifically, the image capturing device acquires the feature information of both images, compares them, and determines the second offset of the second band image's feature information relative to the first band image's feature information; this offset also mainly refers to the position offset of the feature points.
  • the second band image is then adjusted according to the second offset to obtain a second intermediate image; for example, the second band image is stretched or contracted horizontally or vertically according to the second offset, so that the first band image and the adjusted second band image are aligned. Further, after the first band image and the adjusted second band image are registered, they are fused directly to obtain the target image. A sketch of this offset-based alignment follows.
  • Step S202 Register the first band image and the second band image.
  • the first band image and the second band image are taken by the infrared camera module and the visible light camera module respectively; differences in the modules' positions and/or shooting parameters lead to differences between the first band image and the second band image, such as different sizes and different resolutions. Therefore, to ensure the accuracy of image fusion, the first band image and the second band image need to be registered before fusion.
  • registering the first band image and the second band image includes: registering the first band image and the second band image based on the calibration parameters of the infrared shooting module and the calibration parameters of the visible light shooting module.
  • the calibration parameters include internal parameters, external parameters, and distortion parameters of the camera module.
  • the internal parameters are parameters related to the characteristics of the camera module itself, such as its focal length and pixel size; the external parameters are the camera module's parameters in the world coordinate system, such as its position and rotation direction.
  • the calibration parameters are calibrated for the infrared shooting module and the visible light shooting module before the two modules shoot.
  • the method of calibrating the parameters of the infrared shooting module and the visible light shooting module may include: acquiring a sample image for parameter calibration; having the infrared shooting module and the visible light shooting module photograph the sample image to obtain an infrared image and a visible light image; and analyzing the infrared image and the visible light image, and, when the registration rule is satisfied between them, calculating the parameters of the infrared shooting module and the visible light shooting module based on the infrared image and the visible light image and taking those parameters as the respective calibration parameters.
  • if the registration rule is not satisfied, the shooting parameters of the infrared shooting module and the visible light shooting module can be adjusted and the sample image photographed again, until the infrared image and the visible light image satisfy the registration rule.
  • the registration rule may mean that the infrared image and the visible light image have the same resolution, and the same shooting object has the same position in the infrared image and the visible light image.
  • the above is only one method of calibrating the parameters of the infrared camera module and the visible light camera module provided by an embodiment of the present invention; the image capturing device may also set the calibration parameters of the infrared camera module and the visible light camera module by other methods.
  • the image shooting device may store the calibration parameters of the infrared shooting module and the visible light shooting module, so that the two sets of calibration parameters can subsequently be used to register the first band image and the second band image. A calibration sketch follows.
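  • As a concrete illustration, the following is a minimal sketch of per-module parameter calibration from checkerboard sample images using OpenCV's standard routine; the patent does not prescribe this particular procedure, and in practice an infrared module would need a target visible in the thermal band.

        import cv2
        import numpy as np

        def calibrate_module(sample_images, board_size=(9, 6)):
            # Returns the internal parameter matrix K and distortion coefficients.
            objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
            objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)

            obj_pts, img_pts = [], []
            for img in sample_images:
                gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                found, corners = cv2.findChessboardCorners(gray, board_size)
                if found:
                    obj_pts.append(objp)
                    img_pts.append(corners)

            # rvecs/tvecs are the external parameters (rotation and position).
            ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
                obj_pts, img_pts, gray.shape[::-1], None, None)
            return K, dist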
  • step S202 may be: acquiring the calibration parameters of the infrared camera module and the calibration parameters of the visible light camera module; adjusting the first band image according to the calibration parameters of the infrared camera module, and/or adjusting the second band image according to the calibration parameters of the visible light shooting module; wherein the adjustment operation includes one or more of the following: rotation, zoom, translation, and crop.
  • adjusting the first band image according to the calibration parameters of the infrared camera module may include: acquiring the internal parameter matrix and distortion coefficients included in those calibration parameters, calculating a rotation vector and a translation vector of the first band image from the internal parameter matrix and the distortion coefficients, and using the rotation vector and the translation vector to rotate or translate the first band image.
  • the adjustment of the second band image according to the calibration parameters of the visible light shooting module uses the same method. A sketch of this adjustment follows.
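  • A minimal sketch of such an adjustment operation is given below; the 2x3 matrix M stands in for the rotation/zoom/translation derived from the two modules' calibration parameters, and deriving it is outside this sketch.

        import cv2

        def register_band_image(img, K, dist, M, out_size):
            # K, dist: internal parameter matrix and distortion coefficients
            #          of the module that captured img.
            # M:       2x3 affine matrix combining rotation, zoom, and translation.
            # out_size: (width, height) to which the result is cropped.
            undistorted = cv2.undistort(img, K, dist)
            return cv2.warpAffine(undistorted, M, out_size)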
  • the first band image and the second band image are registered so that the registered first band image and second band image have the same resolution and the same subject occupies the same position in both, ensuring that the subsequent fusion of the first band image and the second band image yields a high-quality fused image.
  • the infrared shooting module and the visible light shooting module can be physically registered before the infrared shooting module and the visible light shooting module shoot.
  • Step S203 Directly fuse the registered first band image and the registered second band image to obtain a target image.
  • the first-band image after registration and the second-band image after registration are directly fused without any other processing.
  • the fusion scheme is simple, saves image fusion time, and makes the fused target image include both infrared effects and visible light effects.
  • a Poisson fusion algorithm may be used to directly fuse the registered first band image and the registered second band image to obtain the target image.
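  • Where OpenCV is available, its built-in Poisson blending can prototype this step; a minimal sketch follows, assuming 3-channel images of equal size (the full-frame mask shrunk by a 1-pixel border is an illustrative choice, not taken from the patent).

        import cv2
        import numpy as np

        def poisson_fuse(band1, band2):
            h, w = band2.shape[:2]
            mask = np.zeros((h, w), np.uint8)
            mask[1:-1, 1:-1] = 255   # leave a 1-pixel rim as the boundary condition
            center = (w // 2, h // 2)
            return cv2.seamlessClone(band1, band2, mask, center, cv2.NORMAL_CLONE)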
  • the registered first band image and second band image may also be fused through a fusion method based on a weighted average, a fusion algorithm based on taking the larger absolute value, and the like (a sketch of the weighted-average option follows).
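  • For comparison, the weighted-average alternative reduces to one operation per pixel; alpha below is a design choice, not a value given by the embodiment.

        import numpy as np

        def weighted_average_fuse(band1, band2, alpha=0.5):
            fused = (alpha * band1.astype(np.float64)
                     + (1 - alpha) * band2.astype(np.float64))
            return np.clip(fused, 0, 255).astype(np.uint8)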
  • directly fusing the registered first-band image and the registered second-band image to obtain a target image includes: superimposing the registered first-band image and the registered second-band image to obtain an image to be fused; obtaining the color value of each pixel in the image to be fused; and rendering the image to be fused based on the color value of each pixel, the rendered image being determined as the target image.
  • to obtain the color value of each pixel in the image to be fused, the general steps are to calculate the divergence value of each pixel of the image to be fused, and then calculate the color value of each pixel from the divergence values and the coefficient matrix of the image to be fused.
  • because the color value of each pixel is derived from feature information of the image to be fused, into which the feature information of the first band image and the second band image has been merged, rendering the image with these color values yields a fused image that includes both the information of the first band image and the information of the second band image.
  • in summary, the acquired first band image and second band image are registered, and the registered first band image and second band image are then fused directly to obtain the target image.
  • the target image is obtained by directly fusing the registered first band image and second band image; no other processing is performed.
  • the fusion scheme is simple and saves the time required for image fusion, thereby improving image fusion efficiency.
  • the target image includes the information of the first band image and the information of the second band image, and more information can be obtained from the target image, which improves the quality of the captured image.
  • FIG. 3 is a schematic flowchart of another image processing method according to an embodiment of the present invention.
  • the image processing method may be applied to the drone system shown in FIG. 1.
  • the drone system includes an image capture device; the image capture device includes an infrared capture module and a visible light capture module; the image captured by the infrared capture module is the first band image (an infrared image), and the image captured by the visible light capture module is the second band image (a visible light image).
  • the method may include the following steps:
  • Step S301 Register the infrared shooting module with the visible light shooting module based on the position of the infrared shooting module and the position of the visible light shooting module.
  • before the infrared shooting module and the visible light shooting module shoot, they can be registered in terms of physical structure.
  • registering the infrared camera module and the visible light module in physical structure includes registering them based on the position of the infrared camera module and the position of the visible light camera module.
  • the criterion for determining that the infrared camera module and the visible light camera module are physically registered is that they satisfy the central horizontal distribution condition and that their position difference is less than a preset position difference. Keeping the position difference below the preset value ensures that the field of view (FOV) of the infrared camera module can cover the FOV of the visible light camera module, and that the two FOVs do not interfere with each other.
  • registering the infrared camera module with the visible light camera module based on their positions includes: calculating the position difference between the position of the infrared shooting module relative to the image shooting device and the position of the visible light shooting module relative to the image shooting device; and, if the position difference is greater than or equal to the preset position difference, triggering an adjustment of the position of the infrared shooting module or of the visible light shooting module so that the position difference becomes smaller than the preset position difference.
  • it further includes: determining whether the central horizontal distribution condition is satisfied between the position of the infrared shooting module and the position of the visible light shooting module; if not, triggering an adjustment of the position of the infrared shooting module or of the visible light shooting module so that the central horizontal distribution condition is satisfied between the infrared shooting module and the visible light shooting module.
  • in other words, registering the infrared camera module and the visible light camera module means detecting whether the central horizontal distribution condition is met between them on the image camera device, and/or whether their relative position difference on the image camera device is less than or equal to the preset position difference.
  • when it is detected that the central horizontal distribution condition is not satisfied, and/or that the relative position difference is greater than the preset position difference, the infrared camera module and the visible light camera module are not structurally registered, and the infrared camera module and/or the visible light camera module need to be adjusted.
  • at this time, a prompt message may be output; the prompt message may include an adjustment method for the infrared camera module and/or the visible light camera module, for example, "move the infrared camera module 5 mm to the left".
  • the prompt information is used to prompt the user to adjust the infrared camera module and/or the visible light camera module, so that the infrared camera module and the visible light camera module can be registered.
  • alternatively, the image camera device may itself adjust the position of the infrared camera module and/or the visible light camera module so that the two modules become registered.
  • when it is detected that the central horizontal distribution condition is satisfied between the infrared camera module and the visible light camera module on the image camera device, and/or that their relative position difference is less than or equal to the preset position difference, the infrared shooting module and the visible light shooting module are structurally registered. At this time, the device can receive a shooting instruction sent by the smart terminal or by the user: the infrared shooting module is triggered to shoot, obtaining the first band image, and the visible light shooting module is triggered to shoot, obtaining the second band image. A sketch of this registration check follows.
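  • A minimal sketch of the step S301 check is given below; the coordinate convention, threshold, and helper name are illustrative assumptions.

        def check_physical_registration(ir_pos, vis_pos, max_diff):
            # ir_pos / vis_pos: (x, y) positions of the two modules on the device.
            dx = abs(ir_pos[0] - vis_pos[0])
            dy = abs(ir_pos[1] - vis_pos[1])
            level = (dy == 0)                  # central horizontal distribution
            close = max(dx, dy) < max_diff     # position difference below preset value
            if not (level and close):
                print("Not registered: adjust the infrared or visible module position")
            return level and close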
  • Step S302 Acquire the first band image and the second band image.
  • Step S303 Register the first band image and the second band image based on the calibration parameters of the infrared shooting module and the calibration parameters of the visible light shooting module.
  • step S302 and the step S303 have been described in detail in the embodiment shown in FIG. 2 and will not be repeated here.
  • Step S304 Perform overlay processing on the registered first band image and the registered second band image to obtain an image to be fused.
  • Step S305 Obtain the color value of each pixel in the image to be fused.
  • Step S306 Render the image to be fused based on the color value of each pixel in the image to be fused, and determine the rendered image to be fused as the target image.
  • steps S304-S306 are processes of performing fusion processing on the first band image and the second band image using a Poisson fusion algorithm.
  • the main idea of the Poisson fusion algorithm is to reconstruct the image pixels in the composite area by interpolation based on the gradient information of the source image and the boundary information of the target image.
  • the source image may refer to any one of the first band image and the second band image
  • the target image refers to the other one of the first band image and the second band image.
  • reconstructing the image pixels of the synthesized area can be understood as recalculating the color value of each pixel in the image to be fused.
  • step S305 obtaining the color value of each pixel in the image to be fused includes: obtaining a gradient field of the image to be fused; calculating the image to be fused based on the gradient field of the image to be fused The divergence value of each pixel; based on the divergence value of each pixel in the image to be fused and the color value calculation rule, determine the color value of each pixel in the image to be fused.
  • much image processing, such as image enhancement, image fusion, and image edge detection and segmentation, is done in the gradient domain of the image, and the Poisson fusion algorithm is no exception.
  • the gradient field of the image to be fused must first be obtained.
  • the method of acquiring the gradient field of the image to be fused may be determined based on the gradient field of the first band image after registration and the gradient field of the second band image after registration.
  • the step of acquiring the gradient field of the image to be fused includes steps S41-S43 shown in FIG. 4:
  • the image capturing device can obtain the first intermediate gradient field and the second intermediate gradient field by a differential method.
  • the above-mentioned method for acquiring the gradient field of the image to be fused is mainly applied when the sizes of the first-band image after registration and the second-band image after registration are different.
  • the masking process is to obtain the first gradient field and the second gradient field of the same size, so that the first gradient field and the second gradient field can be directly superimposed to obtain the gradient field of the image to be fused.
  • FIG. 5 is a schematic diagram of obtaining a gradient field to be fused according to an embodiment of the present invention.
  • in FIG. 5, 501 is the first intermediate gradient field, obtained by performing gradient processing on the registered first-band image, and 502 is the second intermediate gradient field, obtained by performing gradient processing on the registered second-band image; as can be seen, 501 and 502 differ in size.
  • mask processing is performed on both: for 502, the region 5020 by which 502 differs from 501 (the padding needed to reach 501's size) is filled with 0 and 502's own region is filled with 1; for 501, the region 5010, which has the same size as 502, is filled with 0 and the remaining part of 501 is filled with 1.
  • a portion filled with 1 indicates that the original gradient field is retained, and a portion filled with 0 indicates that the gradient field is to be replaced.
  • the masked 501 and the masked 502 are then superimposed directly to obtain the gradient field of the image to be fused, shown as 503; since the masked 501 and the masked 502 have the same size, 503 can also be regarded as the masked 502's gradient field covering the 0-filled region of the masked 501.
  • alternatively, the gradient field of the image to be fused may be obtained by using the first intermediate gradient field or the second intermediate gradient field as the gradient field of the image to be fused. A sketch of the gradient-field construction follows.
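  • The following is a minimal sketch of this gradient-field construction, assuming single-channel registered inputs that have already been padded to a common size; the names (ir, vis) and the forward/backward difference scheme are illustrative choices.

        import numpy as np

        def gradient_field(img):
            # Forward-difference gradient (the "differential method").
            img = img.astype(np.float64)
            gx = np.zeros_like(img)
            gy = np.zeros_like(img)
            gx[:, :-1] = img[:, 1:] - img[:, :-1]
            gy[:-1, :] = img[1:, :] - img[:-1, :]
            return gx, gy

        def fused_gradient_field(ir, vis, mask):
            # mask is 1 where the second (visible) gradients are kept, as in
            # region 502 of FIG. 5, and 0 where the first (infrared) ones are.
            ir_gx, ir_gy = gradient_field(ir)
            vis_gx, vis_gy = gradient_field(vis)
            gx = mask * vis_gx + (1 - mask) * ir_gx   # direct superposition
            gy = mask * vis_gy + (1 - mask) * ir_gy
            return gx, gy

        def divergence(gx, gy):
            # Backward-difference divergence; feeds the color value calculation.
            div = np.zeros_like(gx)
            div[:, 1:] += gx[:, 1:] - gx[:, :-1]
            div[:, 0] += gx[:, 0]
            div[1:, :] += gy[1:, :] - gy[:-1, :]
            div[0, :] += gy[0, :]
            return div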
  • the image capturing device may then calculate the divergence value of each pixel in the image to be fused based on the gradient field: the gradient of each pixel is determined from the gradient field, and taking the derivative of the gradient again yields the divergence value of each pixel.
  • further, the image capturing device may determine the color value of each pixel in the image to be fused based on the divergence value of each pixel and the color value calculation rule.
  • the color value calculation rule refers to a rule for calculating the color values of pixels; it may be a calculation formula or another rule. For Poisson fusion it can be written as a sparse linear system A x = b, where A is the coefficient matrix of the image to be fused, b is built from the divergence values of its pixels, and x is the vector of unknown color values; x can be calculated once A, b, and the other constraints are known.
  • the method for calculating the color value of each pixel in the image to be fused, based on the divergence value of each pixel in the image to be fused and the color value calculation rule, includes steps S61-S63 shown in FIG. 6:
  • Step S61 Determine fusion constraints
  • Step S62 Obtain the coefficient matrix of the image to be fused
  • Step S63 Substitute the divergence value of each pixel in the image to be fused and the coefficient matrix of the image to be fused into the color value calculation rule, combining the fusion constraint conditions, to calculate the color value of each pixel in the image to be fused.
  • the fusion constraint condition in the embodiment of the present invention refers to the color value of each pixel around the image to be fused.
  • the color value of each pixel around the image to be fused may be determined according to the color value of each pixel at the periphery of the first band image after registration, or may be based on the periphery of the second band image after registration The color value of each pixel at is determined.
  • the method of determining the coefficient matrix of the image to be fused may be: listing the Poisson equation of each pixel of the image to be fused according to its divergence value, and constructing the coefficient matrix of the image to be fused from these equations. A sketch of steps S61-S63 follows.
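  • The following is a minimal sketch of steps S61-S63 for a single-channel image; it fixes the border of the image to be fused to the registered second-band image's colors as the fusion constraint (one of the two options described above) and solves the resulting sparse system A x = b.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def solve_color_values(div, boundary):
            # div:      (H, W) divergence of the fused gradient field (step S305).
            # boundary: (H, W) image supplying the fixed border colors
            #           (the fusion constraint condition, step S61).
            H, W = div.shape
            interior = np.zeros((H, W), dtype=bool)
            interior[1:-1, 1:-1] = True
            idx = -np.ones((H, W), dtype=np.int64)
            idx[interior] = np.arange(interior.sum())

            n = interior.sum()
            A = sp.lil_matrix((n, n))              # coefficient matrix (step S62)
            b = -div[interior].astype(np.float64)  # 4*f_p - sum(f_q) = -div_p

            ys, xs = np.nonzero(interior)
            for k, (y, x) in enumerate(zip(ys, xs)):
                A[k, k] = 4.0
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if interior[ny, nx]:
                        A[k, idx[ny, nx]] = -1.0
                    else:
                        # Known boundary color moves to the right-hand side.
                        b[k] += boundary[ny, nx]

            x = spla.spsolve(A.tocsr(), b)         # step S63
            out = boundary.astype(np.float64).copy()
            out[interior] = x
            return np.clip(out, 0, 255).astype(np.uint8)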
  • in the embodiment of the present invention, the infrared photographing module and the visible light photographing module are first physically registered, and the first band image and the second band image are then acquired through the physically registered modules.
  • further, the first band image and the second band image are registered by an algorithm, and the registered first band image and second band image are fused directly to obtain the target image.
  • the target image is obtained by directly fusing the registered first band image and second band image, and no other processing is performed.
  • the fusion scheme is simple, saving the time required for image fusion, thereby improving the efficiency of image fusion.
  • the target image includes the information of the first band image and the information of the second band image, and more information can be obtained from the target image, which improves the quality of the captured image.
  • FIG. 7 is a schematic structural diagram of an image processing device according to an embodiment of the present invention.
  • the image processing device may include a processor 701 and a memory 702; the processor 701 is connected to the memory 702 through a bus 703, and the memory 702 is used to store program instructions.
  • the memory 702 may include volatile memory (volatile memory), such as random-access memory (RAM); the memory 702 may also include non-volatile memory (non-volatile memory), such as flash memory (flash memory), solid-state drive (SSD), etc.; the memory 702 may also include a combination of the aforementioned types of memory.
  • the processor 701 may be a central processing unit (Central Processing Unit, CPU).
  • the processor 701 may further include a hardware chip.
  • the above-mentioned hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (programmable logic device, PLD), or the like.
  • the PLD may be a field-programmable gate array (field-programmable gate array, FPGA), a general-purpose array logic (generic array logic, GAL), or the like.
  • the processor 701 may also be a combination of the above structures.
  • the memory 702 is used to store a computer program including program instructions, and the processor 701 is used to execute the program instructions stored in the memory 702 to implement the steps of the method corresponding to the embodiment shown in FIG. 2.
  • in one embodiment, the processor 701 is configured to call the program instructions to perform: acquiring the first band image and the second band image; registering the first band image and the second band image; and directly fusing the registered first band image and the registered second band image to obtain the target image.
  • when directly fusing the registered first-band image and the registered second-band image, the processor 701 performs the following operations: superimposing the registered first-band image and the registered second-band image to obtain an image to be fused; obtaining the color value of each pixel in the image to be fused; rendering the image to be fused based on the color value of each pixel; and determining the rendered image as the target image.
  • when acquiring the color value of each pixel in the image to be fused, the processor 701 performs the following operations: acquiring the gradient field of the image to be fused; calculating the divergence value of each pixel in the image to be fused based on the gradient field; and determining the color value of each pixel based on the divergence values and the color value calculation rule.
  • when acquiring the gradient field of the image to be fused, the processor 701 performs the following operations: performing gradient processing on the registered first band image to obtain a first intermediate gradient field, and performing gradient processing on the registered second band image to obtain a second intermediate gradient field; performing mask processing on the first intermediate gradient field to obtain a first gradient field, and performing mask processing on the second intermediate gradient field to obtain a second gradient field; and superimposing the first gradient field and the second gradient field to obtain the gradient field of the image to be fused.
  • when calculating the color value of each pixel in the image to be fused based on the divergence values and the color value calculation rule, the processor 701 performs the following operations: determining the fusion constraint conditions; obtaining the coefficient matrix of the image to be fused; and substituting the divergence value of each pixel and the coefficient matrix into the color value calculation rule, combining the fusion constraint conditions, to calculate the color value of each pixel in the image to be fused.
  • the first band image is an infrared image
  • the second band image is a visible light image
  • the infrared image is acquired by an infrared shooting module provided on the image capturing device, and the visible light image is acquired by a visible light shooting module provided on the image capturing device.
  • when registering the first band image and the second band image, the processor 701 performs the following operation: registering the first band image and the second band image based on the calibration parameters of the infrared camera module and the calibration parameters of the visible light camera module.
  • when registering the first band image and the second band image based on the calibration parameters, the processor 701 performs the following operations: obtaining the calibration parameters of the infrared shooting module and the calibration parameters of the visible light shooting module; adjusting the first band image according to the calibration parameters of the infrared shooting module, and/or adjusting the second band image according to the calibration parameters of the visible light shooting module; wherein the adjustment operation includes one or more of the following: rotation, scaling, translation, and cropping.
  • when invoking the program instructions, the processor 701 is also configured to perform: registering the infrared shooting module and the visible light shooting module based on the position of the infrared shooting module and the position of the visible light shooting module.
  • when registering the two modules based on their positions, the processor 701 performs the following operations: calculating the position difference between the position of the infrared shooting module relative to the image shooting device and the position of the visible light shooting module relative to the image shooting device; and, if the position difference is greater than or equal to a preset position difference, triggering an adjustment of the position of the infrared shooting module or of the visible light shooting module so that the position difference becomes smaller than the preset position difference.
  • when invoking the program instructions, the processor 701 is also configured to perform: determining whether the central horizontal distribution condition is satisfied between the position of the infrared shooting module and the position of the visible light shooting module; if not, triggering an adjustment of the position of the infrared shooting module or of the visible light shooting module so that the central horizontal distribution condition is satisfied between the two modules.
  • when invoking the program instructions, the processor 701 is also configured to perform: aligning the first band image and the second band image based on the feature information of the first band image and the feature information of the second band image.
  • when performing this alignment processing, the processor 701 performs the following operations: acquiring the feature information of the first band image and the feature information of the second band image; determining the first offset of the feature information of the first band image relative to the feature information of the second band image; and adjusting the first band image according to the first offset. The processor 701 thereby aligns the first band image and the second band image based on their feature information.
  • An embodiment of the present invention provides a drone including: a fuselage; a power system provided on the fuselage for providing flight power; an image capturing device installed on the fuselage; and a processor.
  • the processor is used to: acquire the first band image and the second band image; register the first band image and the second band image; and directly fuse the registered first band image and the registered second band image to obtain the target image.
  • An embodiment of the present invention also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, it implements the image processing method of the embodiments corresponding to FIG. 2 or FIG. 3 of the present invention, and can also implement the image processing device of the embodiment corresponding to FIG. 7, which will not be repeated here.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM) or a random access memory (Random Access Memory, RAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Disclosed are an image processing method and device, an unmanned aerial vehicle, an image processing system, and a storage medium. The method comprises the steps of: acquiring a first band image and a second band image (S201); registering the first band image and the second band image (S202); and directly performing fusion processing on the registered first band image and the registered second band image to obtain a target image (S203). The method involves a simple fusion operation and can acquire higher-quality images.
PCT/CN2018/119113 2018-12-04 2018-12-04 Image processing method and device, unmanned aerial vehicle, image processing system, and storage medium WO2020113407A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880065224.7A 2018-12-04 2018-12-04 Image processing method, device, unmanned aerial vehicle, system, and storage medium
PCT/CN2018/119113 2018-12-04 2018-12-04 Image processing method and device, unmanned aerial vehicle, image processing system, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/119113 2018-12-04 2018-12-04 Image processing method and device, unmanned aerial vehicle, image processing system, and storage medium

Publications (1)

Publication Number Publication Date
WO2020113407A1 true WO2020113407A1 (fr) 2020-06-11

Family

ID=70879051

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/119113 2018-12-04 2018-12-04 Image processing method and device, unmanned aerial vehicle, image processing system, and storage medium

Country Status (2)

Country Link
CN (1) CN111247558A (fr)
WO (1) WO2020113407A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021253173A1 (fr) * 2020-06-15 2021-12-23 深圳市大疆创新科技有限公司 Image processing method and apparatus, and inspection system
CN112163483A (zh) * 2020-09-16 2021-01-01 浙江大学 Target quantity detection system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2381417A1 (fr) * 2010-04-23 2011-10-26 Flir Systems AB Infrared resolution and contrast enhancement with fusion
CN108182698A (zh) * 2017-12-18 2018-06-19 凯迈(洛阳)测控有限公司 Fusion method for airborne electro-optical infrared images and visible light images
CN108230281A (zh) * 2016-12-30 2018-06-29 北京市商汤科技开发有限公司 Remote sensing image processing method and apparatus, and electronic device
CN108510447A (zh) * 2017-02-28 2018-09-07 深圳市朗驰欣创科技股份有限公司 Image fusion method and device
CN108886577A (zh) * 2017-10-30 2018-11-23 深圳市大疆创新科技有限公司 Shooting control method, shooting device and unmanned aerial vehicle

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982518A (zh) * 2012-11-06 2013-03-20 扬州万方电子技术有限责任公司 Fusion method and device for infrared and visible light dynamic images
US9996913B2 (en) * 2014-04-07 2018-06-12 Bae Systems Information And Electronic Systems Integration Inc. Contrast based image fusion
CN106960428A (zh) * 2016-01-12 2017-07-18 浙江大立科技股份有限公司 Visible light and infrared dual-band image fusion and enhancement method
CN107230199A (zh) * 2017-06-23 2017-10-03 歌尔科技有限公司 Image processing method and apparatus, and augmented reality device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2381417A1 (fr) * 2010-04-23 2011-10-26 Flir Systems AB Infrared resolution and contrast enhancement with fusion
CN108230281A (zh) * 2016-12-30 2018-06-29 北京市商汤科技开发有限公司 Remote sensing image processing method and apparatus, and electronic device
CN108510447A (zh) * 2017-02-28 2018-09-07 深圳市朗驰欣创科技股份有限公司 Image fusion method and device
CN108886577A (zh) * 2017-10-30 2018-11-23 深圳市大疆创新科技有限公司 Shooting control method, shooting device and unmanned aerial vehicle
CN108182698A (zh) * 2017-12-18 2018-06-19 凯迈(洛阳)测控有限公司 Fusion method for airborne electro-optical infrared images and visible light images

Also Published As

Publication number Publication date
CN111247558A (zh) 2020-06-05

Similar Documents

Publication Publication Date Title
WO2020113408A1 (fr) Procédé et dispositif de traitement d'image, véhicule aérien sans pilote, système et support de stockage
US11455709B2 (en) Methods and systems for image processing
US9686539B1 (en) Camera pair calibration using non-standard calibration objects
CN112311965B (zh) 虚拟拍摄方法、装置、系统及存储介质
WO2021184302A1 (fr) Procédé et appareil de traitement d'image, dispositif d'imagerie, porteur mobile et support de stockage
KR20140090775A (ko) 어안 렌즈를 사용하여 얻은 왜곡영상에 대한 보정방법 및 이를 구현하기 위한 영상 디스플레이 시스템
US20220182582A1 (en) Image processing method and apparatus, device and storage medium
JP2019012881A (ja) 撮像制御装置及びその制御方法
CN114972023A (zh) 图像拼接处理方法、装置、设备及计算机存储介质
WO2020113407A1 (fr) Procédé et dispositif de traitement d'image, aéronef sans pilote, système de traitement d'image et support de stockage
US20190236764A1 (en) Voronoi Cropping of Images for Post Field Generation
WO2021168804A1 (fr) Procédé de traitement d'image, appareil de traitement d'image et programme de traitement d'image
KR20210066366A (ko) 영상 복원 방법 및 장치
CN108737743B (zh) 基于图像拼接的视频拼接装置及视频拼接方法
CN110650288B (zh) 对焦控制方法和装置、电子设备、计算机可读存储介质
CN113159229B (zh) 图像融合方法、电子设备及相关产品
US11734877B2 (en) Method and device for restoring image obtained from array camera
US20200349689A1 (en) Image processing method and device, unmanned aerial vehicle, system and storage medium
WO2021056538A1 (fr) Procédé et dispositif de traitement d'image
WO2020232672A1 (fr) Procédé et appareil de recadrage d'image, et appareil photographique
US20200288066A1 (en) Delivery of notifications for feedback over visual quality of images
WO2019227438A1 (fr) Procédé et dispositif de traitement d'image, aéronef, système et support de stockage
US11636708B2 (en) Face detection in spherical images
KR20180111991A (ko) 화상 처리 장치, 화상 처리 방법 및 화상 처리 시스템
CN109118460B (zh) 一种分光偏振光谱信息同步处理方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18942426

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18942426

Country of ref document: EP

Kind code of ref document: A1