WO2020113404A1 - Target image acquisition method, photographing device, and drone - Google Patents

Target image acquisition method, photographing device, and drone

Info

Publication number
WO2020113404A1
WO2020113404A1 PCT/CN2018/119078 CN2018119078W WO2020113404A1 WO 2020113404 A1 WO2020113404 A1 WO 2020113404A1 CN 2018119078 W CN2018119078 W CN 2018119078W WO 2020113404 A1 WO2020113404 A1 WO 2020113404A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
visible light
pixel
light image
coefficient
Prior art date
Application number
PCT/CN2018/119078
Other languages
English (en)
French (fr)
Inventor
翁超
鄢蕾
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201880069889.5A (CN111433810A)
Priority to PCT/CN2018/119078 (WO2020113404A1)
Publication of WO2020113404A1
Priority to US16/928,874 (US11328188B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C AEROPLANES; HELICOPTERS
    • B64C 39/00 Aircraft not otherwise provided for
    • B64C 39/02 Aircraft not otherwise provided for characterised by special use
    • B64C 39/024 Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64D EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D 47/00 Equipment not otherwise provided for
    • B64D 47/08 Arrangements of cameras
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U 20/00 Constructional aspects of UAVs
    • B64U 20/80 Arrangement of on-board electronics, e.g. avionics systems or wiring
    • B64U 20/87 Mounting of imaging devices, e.g. mounting of gimbals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/251 Fusion techniques of input or preprocessed data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T 3/147 Transformations for image registration using affine transformations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/803 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 Constructional details
    • H04N 23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U 2101/00 UAVs specially adapted for particular uses or applications
    • B64U 2101/30 UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20208 High dynamic range [HDR] image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Definitions

  • the present invention relates to the field of image processing, and in particular, to a target image acquisition method, a shooting device, and a drone.
  • Visible light images can provide rich target texture information, while infrared images can provide target temperature information.
  • the visible light image and the infrared image are fused to obtain a fused image that displays both target texture information and target temperature information. This method is widely used in remote sensing, target monitoring (such as power line inspection), and other fields.
  • in the prior art, spatial high-pass filtering is used to extract pixel data representing the high spatial frequency content of the visible light image. This method places high requirements on the pixel data, and the processing is complicated.
  • the invention provides a target image acquisition method, a shooting device and a drone.
  • a method for acquiring a target image including:
  • a photographing device, comprising:
  • a visible light image acquisition module and an infrared image acquisition module; and
  • a controller electrically coupled to the visible light image acquisition module and the infrared image acquisition module;
  • the visible light image acquisition module and the infrared image acquisition module capture a visible light image and an infrared image of the target at the same time;
  • the controller is configured to obtain the visible light image and the infrared image captured for the target at the same time by the visible light image acquisition module and the infrared image acquisition module, perform weighted fusion on the visible light image and the infrared image to obtain a fused image, and obtain the target image according to the fused image.
  • a drone, comprising:
  • a frame;
  • a shooting device mounted on the frame, where the shooting device includes a visible light image acquisition module and an infrared image acquisition module;
  • a controller which is electrically coupled to the visible light image acquisition module and the infrared image acquisition module;
  • the visible light image acquisition module and the infrared image acquisition module capture a visible light image and an infrared image of the target at the same time;
  • the controller is configured to obtain the visible light image and the infrared image captured for the target at the same time by the visible light image acquisition module and the infrared image acquisition module, perform weighted fusion on the visible light image and the infrared image to obtain a fused image, and obtain the target image according to the fused image.
  • the visible light image and the infrared image of the target are weighted and fused, so that the target image retains both the visible light details and the temperature information of the target, and the weighted fusion process does not require complex processing of the visible light image and/or the infrared image.
  • This method has the advantages of a simple solution and an obvious fusion effect.
  • FIG. 1 is a flowchart of a method for acquiring a target image in an embodiment of the present invention;
  • FIG. 2 is a specific flowchart of the method for acquiring a target image in the embodiment shown in FIG. 1;
  • FIG. 3 is another specific flowchart of the method for acquiring a target image in the embodiment shown in FIG. 1;
  • FIG. 4 is a structural block diagram of a shooting device in an embodiment of the present invention;
  • FIG. 5 is a schematic structural view of a drone in an embodiment of the present invention;
  • FIG. 6 is a structural block diagram of the drone in the embodiment shown in FIG. 5.
  • FIG. 1 is a method flowchart of a target image acquisition method in an embodiment of the present invention.
  • a method for acquiring a target image provided by an embodiment of the present invention may include the following steps:
  • Step S101: Obtain a visible light image and an infrared image captured by the shooting device for the target at the same time;
  • the photographing device of this embodiment includes a visible light image acquisition module and an infrared image acquisition module, wherein the visible light image acquisition module is used to acquire the visible light image of the target, and the infrared image acquisition module is used to acquire the infrared image of the target.
  • the shooting device can control the visible light image acquisition module and the infrared image acquisition module to simultaneously shoot to obtain the visible light image and the infrared image of the same target at the same time.
  • In the application scenario of power line inspection, the target is the power line; in an application scenario of fire detection, the target may be a forest or a mountain.
  • the visible light image acquisition module and the infrared image acquisition module have different positions relative to the target, so relative rotation, scaling, translation, and the like may occur between the visible light image and the infrared image, resulting in the inability to directly fuse the visible light image and the infrared image.
  • therefore, the visible light image and/or the infrared image need to be registered so that the visible light image and the infrared image have the same resolution and the same target occupies the same position in both images; after registration, the visible light image and the infrared image are fused.
  • the visible light image and/or the infrared image are adjusted according to preset calibration parameters.
  • the preset calibration parameters are determined by the type of the shooting device.
  • the preset calibration parameters may include at least one of rotation parameters, zoom parameters, translation parameters, and cropping parameters.
  • in this embodiment, the optical axes of the visible light image acquisition module and the infrared image acquisition module are set coaxially, so the visible light image acquired by the visible light image acquisition module and the infrared image acquired by the infrared image acquisition module are not shifted relative to each other, and no translation processing of the visible light image and the infrared image is needed.
  • the focal lengths of the visible light image acquisition module and the infrared image acquisition module are the same, so the size (width and height) of the same target in the visible light image and in the infrared image is approximately equal; that is, the resolutions of the visible light image and the infrared image are approximately equal, and there is no need to zoom the visible light image or the infrared image.
  • the preset calibration parameters include rotation parameters.
  • the visible light image and/or the infrared image are rotated according to the rotation parameters, so that the corresponding shooting angles of the visible light image and the infrared image are approximately the same.
  • for example, according to the rotation parameter, one of the visible light image and the infrared image is rotated so that the shooting angles corresponding to the visible light image and the infrared image are approximately the same.
  • the visible light image and the infrared image are simultaneously rotated according to the rotation parameters, so that the corresponding shooting angles of the visible light image and the infrared image are approximately the same.
  • here, "the shooting angles corresponding to the visible light image and the infrared image are substantially the same" means that there is no deviation between the shooting angles of the visible light image and the infrared image, or that the deviation between the two shooting angles is within the allowable deviation range.
  • the preset calibration parameters include scaling parameters.
  • the visible light image and/or the infrared image are zoomed according to the zoom parameters, so that the size of the same target in the visible light image and the infrared image is approximately equal (that is, the resolution of the visible light image is approximately equal to the resolution of the infrared image).
  • for example, according to the zoom parameters, one of the visible light image and the infrared image is zoomed so that the resolutions of the visible light image and the infrared image are approximately the same.
  • the visible light image and the infrared image are simultaneously zoomed, so that the resolutions of the visible light image and the infrared image are approximately the same.
  • here, "the resolutions of the visible light image and the infrared image are substantially equal" means that there is no deviation between the resolutions of the two images, or that the deviation between them is within the allowable deviation range.
  • the preset calibration parameters include translation parameters.
  • adjusting the visible light image and/or the infrared image according to the preset calibration parameters specifically means performing translation processing on the visible light image and/or the infrared image according to the translation parameters, so that the positions of the same target in the visible light image and the infrared image substantially coincide.
  • one of the visible light image and the infrared image is subjected to translation processing so that the positions of the same target in the visible light image and the infrared image substantially overlap.
  • the visible light image and the infrared image are simultaneously translated, so that the positions of the same target in the visible light image and the infrared image substantially coincide.
  • here, "the positions of the same target in the visible light image and the infrared image substantially coincide" means that the position of the target in the visible light image exactly coincides with the position of the target in the infrared image, or that the deviation between the two positions is within the allowable deviation range.
  • the preset calibration parameters include cropping parameters. Adjusting the visible light image and/or the infrared image according to the preset calibration parameters specifically means cropping the visible light image and/or the infrared image according to the cropping parameters, so that the visible light image and the infrared image retain approximately the same target area.
  • for example, according to the cropping parameters, one of the visible light image and the infrared image is cropped so that the visible light image and the infrared image retain approximately the same target area.
  • the visible light image and the infrared image are simultaneously cropped, so that the visible light image and the infrared image retain approximately the same target area.
  • here, "the visible light image and the infrared image retain approximately the same target area" means that the target area of the visible light image and the target area of the infrared image are completely the same, or that the deviation between the two target areas is within the allowable deviation range.
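The calibration adjustments described above (rotation, zooming, translation, and cropping by preset parameters) can be sketched as a single affine warp. This is an illustrative sketch only, not the patent's implementation: the function name, parameter names, and the nearest-neighbour inverse warp are assumptions.

```python
import numpy as np

def apply_calibration(image, angle_deg=0.0, scale=1.0, shift=(0, 0), crop=None):
    """Rotate, scale, translate, and (optionally) crop an image so that the
    visible light frame and the infrared frame can be fused pixel by pixel."""
    h, w = image.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    theta = np.deg2rad(angle_deg)
    a = scale * np.cos(theta)
    b = scale * np.sin(theta)
    # 2x3 affine matrix: rotation/scaling about the image centre, then translation.
    m = np.array([[a, -b, (1 - a) * cx + b * cy + shift[0]],
                  [b,  a, -b * cx + (1 - a) * cy + shift[1]]])
    # Inverse-map every destination pixel to its source (nearest neighbour),
    # which keeps the sketch free of external dependencies such as OpenCV.
    inv = np.linalg.inv(np.vstack([m, [0.0, 0.0, 1.0]]))
    ys, xs = np.mgrid[0:h, 0:w]
    src = inv @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(image)
    out[ys.ravel()[ok], xs.ravel()[ok]] = image[sy[ok], sx[ok]]
    if crop is not None:          # crop = (y0, y1, x0, x1)
        y0, y1, x0, x1 = crop
        out = out[y0:y1, x0:x1]
    return out
```

In practice only the parameters needed for the camera type would be non-default, e.g. a coaxial, equal-focal-length setup would use only `angle_deg` and `crop`.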
  • Step S102: Perform weighted fusion on the visible light image and the infrared image to obtain a fused image;
  • the pixel value of each pixel in the visible light image and the pixel value of the pixel in the infrared image are fused.
  • the first coefficient is used to characterize the fusion weight of each pixel in the visible light image
  • the second coefficient is used to characterize the fusion weight of each pixel in the infrared image.
  • the fused image obtained by fusion contains both the visible light information (i.e., texture information) of the target and the temperature information of the target. Compared with an independent visible light image or infrared image, the fused image contains richer information and meets the needs of specific fields. For example, in the field of power line inspection, the target is the power line.
  • a visible light image of the power line makes it easy to identify the appearance of the power line, which helps to find externally damaged lines and to learn the specific location of the damage.
  • however, damage inside the power line cannot be recognized from the visible light image, while an infrared image of the power line can easily reveal the internal temperature of the line to determine whether there is internal damage.
  • a damaged line heats abnormally (for example, its temperature is higher than that of a normally working line, or a damaged line does not heat and its temperature is lower than that of a normally working line), but the infrared image of the power line is less interpretable, so it takes more time and effort to learn the specific location of the damaged line.
  • the fused image of this embodiment contains both the visible light information of the power line and its temperature information, so it is easier to determine which lines are internally or externally damaged, and much easier to learn the specific location of the damage.
  • the pixel value of each pixel in the fused image is the sum of the product of the first coefficient and the pixel value of that pixel in the visible light image and the product of the second coefficient and the pixel value of that pixel in the infrared image; that is:
  • pixel value of each pixel in the fused image = first coefficient of the pixel × pixel value of the pixel in the visible light image + second coefficient of the pixel × pixel value of the pixel in the infrared image.
  • this simple weighted fusion method realizes the fusion of the visible light image and the infrared image, and the fusion process does not require more complicated steps such as target detection and image processing.
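The per-pixel weighted fusion rule above can be sketched in a few lines of NumPy. The function name and the scalar-or-array handling of the weights are illustrative assumptions; the formula itself is the one stated in the text.

```python
import numpy as np

def weighted_fusion(visible, infrared, w1=0.5, w2=0.5):
    """Fuse a registered visible light image and infrared image pixel by pixel.

    w1 (first coefficient) and w2 (second coefficient) may be scalars, giving
    every pixel the same weight, or arrays of the images' shape, giving each
    pixel its own weight."""
    visible = np.asarray(visible, dtype=np.float64)
    infrared = np.asarray(infrared, dtype=np.float64)
    if visible.shape != infrared.shape:
        raise ValueError("images must be registered to the same resolution first")
    # fused(i, j) = w1(i, j) * visible(i, j) + w2(i, j) * infrared(i, j)
    return w1 * visible + w2 * infrared
```

With the embodiment's w1 = w2 = 0.5, the fused image keeps visible light information and temperature information in equal proportion.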
  • the first coefficient and the second coefficient can be set according to the fusion requirements. For example, in some examples, the sum of the first coefficient of each pixel in the visible light image and the second coefficient of the pixel in the infrared image is 1; this setting reduces the loss of pixel information, so the information of each pixel in the fused image is not diluted.
  • in other examples, the sum of the first coefficient of some pixels in the visible light image and the second coefficient of the corresponding pixels in the infrared image is 1, while for the other pixels the sum of the first coefficient in the visible light image and the second coefficient in the infrared image is not 1.
  • in this way, the non-target area in the visible light image and the non-target area in the infrared image can be diluted. For example, in the field of power line inspection, more attention is paid to the visible light information of subjects with a higher temperature, and less attention is paid to the visible light information of subjects with a lower temperature.
  • therefore, for a pixel corresponding to a subject with a lower temperature, both the first coefficient in the visible light image and the second coefficient in the infrared image can be set smaller, so that the sum of the two is less than 1.
  • both the first coefficient and the second coefficient are preset coefficients.
  • the first coefficient of each pixel in the visible light image is 0.5
  • the second coefficient of the pixel in the infrared image is also 0.5, ensuring that the proportion of the visible light information and the temperature information retained in the fused image is equal.
  • the first coefficient of each pixel in the visible light image and the second coefficient of the pixel in the infrared image can also be set to other sizes; for example, the first coefficient of each pixel in the visible light image is 0.6 and the second coefficient of the pixel in the infrared image is 0.4, or the first coefficient is 0.4 and the second coefficient is 0.6, and so on.
  • the first coefficient of each pixel in the visible light image and the second coefficient of the pixel in the infrared image can be set according to the target area and the non-target area (such as the background area) in the visible light image and the infrared image.
  • when setting the first coefficient of each pixel in the visible light image and the second coefficient of the pixel in the infrared image according to the target area and the non-target area, first determine the first position area of the target in the visible light image and the second position area of the target in the infrared image; then set the second coefficient of each pixel in the second position area to be greater than the first coefficient of the corresponding pixel in the first position area, so that the target area in the fused image better reflects the temperature information of the target; or set the first coefficient of each pixel in the first position area to be greater than the second coefficient of the corresponding pixel in the second position area, so that the target area in the fused image better reflects the visible light information of the target.
  • the first position area of the target in the visible light image and the second position area of the target in the infrared image can be determined by existing methods, such as manually framing or clicking on the target in the visible light image and in the infrared image, or by using an existing target detection algorithm to detect the target in the two images.
  • for example, in power line inspection, the second coefficient of each pixel in the second position area can be set larger than the first coefficient of the corresponding pixel in the first position area, so that the fused image better reflects the internal temperature of the power line and internal damage can be determined.
  • for pixels outside the first position area, the first coefficient in the visible light image can be set equal to the second coefficient of the corresponding pixel in the infrared image, to reduce the difficulty of fusion.
  • the size of the first coefficient of each pixel in the area other than the first position area in the visible light image and the size of the second coefficient of the pixel in the infrared image may also be set to be unequal.
  • for example, the first coefficient of each pixel outside the first position area in the visible light image can be set smaller than the first coefficient of the pixels in the first position area, and the second coefficient of each pixel outside the second position area in the infrared image can be set smaller than the second coefficient of the pixels in the second position area.
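The region-dependent coefficients described above can be sketched as per-pixel weight maps. The rectangular target box and the concrete weight values (0.3/0.7 inside the target area, 0.5/0.5 outside) are illustrative assumptions, not values from the patent.

```python
import numpy as np

def region_weights(shape, target_box, w1_in=0.3, w2_in=0.7, w_out=0.5):
    """Build per-pixel coefficient maps for weighted fusion.

    target_box = (y0, y1, x0, x1) is the target's position area, obtained e.g.
    by manual framing or an existing target detection algorithm."""
    w1 = np.full(shape, w_out)   # first coefficient (visible light image)
    w2 = np.full(shape, w_out)   # second coefficient (infrared image)
    y0, y1, x0, x1 = target_box
    w1[y0:y1, x0:x1] = w1_in     # de-emphasise texture inside the target area
    w2[y0:y1, x0:x1] = w2_in     # emphasise temperature inside the target area
    return w1, w2
```

The resulting maps can be passed directly as the per-pixel weights of the weighted fusion; with these defaults, every pixel's first and second coefficients sum to 1, so no pixel information is diluted.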
  • optionally, the first coefficients of all pixels in the visible light image have the same size, and the second coefficients of all pixels in the infrared image are also equal in size; setting the coefficients this way reduces the complexity of the weighted fusion.
  • the first coefficient size of each pixel in the visible light image can be set to be larger than the second coefficient size of each pixel in the infrared image.
  • for example, the first coefficient of each pixel in the visible light image is 0.7 and the second coefficient of each pixel in the infrared image is 0.3; for another example, the first coefficient is 0.8 and the second coefficient is 0.2, and so on.
  • the first coefficient size of each pixel in the visible light image can be set to be smaller than the second coefficient size of each pixel in the infrared image.
  • for example, the first coefficient of each pixel in the visible light image is 0.4 and the second coefficient of each pixel in the infrared image is 0.6; for another example, the first coefficient is 0.3 and the second coefficient is 0.7, and so on. Further optionally, the first coefficient of each pixel in the visible light image is equal to the second coefficient of the pixel in the infrared image; in this embodiment, the first coefficient of each pixel in the visible light image is 0.5, and the second coefficient of each pixel in the infrared image is also 0.5.
  • the size of the first coefficient of each pixel in the visible light image is at least partially unequal.
  • the first coefficient of pixels in the target area in the visible light image may be set to be greater than the first coefficients of pixels in the non-target area in the visible light image, thereby diluting the visible light information in the non-target area.
  • the first coefficients of each pixel in the target area in the visible light image can be set to the same size, and the first coefficients of each pixel in the non-target area in the visible light image can also be set to the same size. It can be understood that in other embodiments, the first coefficients of the pixels in the target area in the visible light image may also be set to different sizes, and the first coefficients of the pixels in the non-target area in the visible light image may also be set to different sizes.
  • the size of the second coefficient of each pixel in the infrared image is at least partially unequal.
  • for example, the second coefficient of the pixels in the target area in the infrared image can be set larger than the second coefficient of the pixels in the non-target area in the infrared image, thereby diluting the temperature information of the non-target area.
  • optionally, the second coefficients of the pixels in the target area in the infrared image can be set to the same size, and the second coefficients of the pixels in the non-target area in the infrared image can also be set to the same size.
  • it can be understood that in other embodiments, the second coefficients of the pixels in the target area in the infrared image may also be set to different sizes, and the second coefficients of the pixels in the non-target area in the infrared image may also be set to different sizes.
  • Step S103: Obtain a target image according to the fused image.
  • in some embodiments, after the fused image is obtained through step S102, there is no need to further process the fused image; the fused image obtained in step S102 is directly used as the target image.
  • in other embodiments, the fused image obtained through step S102 needs to be further processed to enhance its details and improve its display effect.
  • as shown in FIG. 3, when obtaining the target image based on the fused image, enhancement processing is performed on the fused image, and the enhanced fused image is used as the target image.
  • the method of enhancing the fusion image can be selected according to actual needs, such as increasing the contrast of the fusion image, performing noise reduction processing on the fusion image, and so on.
  • enhancement of the fusion image is achieved by increasing the contrast of the fusion image.
  • the pixel value of the pixel is adjusted according to a preset contrast adjustment model.
  • the preset contrast tone model is used to characterize the relationship between the pixel value of each pixel in the fusion image in the target image and the pixel value of each pixel in the fusion image.
  • the preset contrast adjustment model is: g(i, j) = m * f(i, j) + n
  • (i, j) is the coordinate of each pixel in the fused image
  • f(i, j) is the pixel value of each pixel in the fused image before the contrast is increased
  • g(i, j) is the pixel value of each pixel in the fused image after the contrast is increased
  • m and n are adjustment coefficients, and the values of m and n can be set as required.
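A minimal sketch of this linear contrast adjustment, assuming 8-bit pixel values stored in a NumPy array. The rounding and the clipping to [0, 255] are added practical details, not part of the model g(i, j) = m * f(i, j) + n itself, and the example values of m and n are arbitrary.

```python
import numpy as np

def adjust_contrast(fused, m=1.2, n=-10.0):
    """Apply the linear model g(i, j) = m * f(i, j) + n to every pixel.

    m > 1 stretches the contrast and n shifts the brightness; the result
    is rounded and clipped back to the valid 8-bit range.
    """
    g = np.rint(m * fused.astype(np.float64) + n)
    return np.clip(g, 0, 255).astype(np.uint8)

f = np.array([[50, 100], [150, 250]], dtype=np.uint8)
g = adjust_contrast(f)
# 1.2*50 - 10 = 50, 1.2*100 - 10 = 110, 1.2*150 - 10 = 170,
# 1.2*250 - 10 = 290, which clips to 255
```

Mid-range pixel differences grow (110 - 50 = 60 versus the original 100 - 50 = 50), which is the stretched-contrast effect the enhancement step aims for.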
  • the visible light image and the infrared image of the target are weighted and fused, so that the target image retains both the visible-light detail and the temperature information of the target, and the weighted fusion process requires no complex image processing of the visible light image and/or the infrared image.
  • this method has the advantages of a simple solution and a clearly visible fusion effect.
  • the photographing device 200 may include a visible light image acquisition module 210, an infrared image acquisition module 220, and a first controller 230, where the first controller 230 is electrically coupled to the visible light image acquisition module 210 and the infrared image acquisition module 220, respectively.
  • the visible light image acquisition module 210 and the infrared image acquisition module 220 capture a visible light image and an infrared image of the target at the same moment.
  • the first controller 230 is used to acquire the visible light image and the infrared image captured of the target by the visible light image acquisition module 210 and the infrared image acquisition module 220 at the same moment, perform weighted fusion on the visible light image and the infrared image to obtain a fused image, and obtain the target image from the fused image.
  • the first controller 230 is used to perform the operation of the method for acquiring the target image shown in FIGS. 1 to 3.
  • for details, refer to the description of the method for acquiring the target image in the foregoing embodiments; the details are not repeated here.
  • the first controller 230 in this embodiment may be a central processing unit (CPU).
  • the first controller 230 may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
  • an embodiment of the present invention also provides a drone. The drone may include a rack 100, a photographing device 200 mounted on the rack 100, and a second controller 300, where the photographing device 200 may include a visible light image acquisition module and an infrared image acquisition module, and the second controller 300 is electrically coupled to the visible light image acquisition module and the infrared image acquisition module, respectively.
  • the visible light image acquisition module and the infrared image acquisition module capture a visible light image and an infrared image of the target at the same moment.
  • the second controller 300 is used to acquire the visible light image and the infrared image captured of the target at the same moment, perform weighted fusion on the visible light image and the infrared image to obtain a fused image, and obtain the target image from the fused image.
  • the second controller 300 is used to perform the operation of the method for acquiring the target image as shown in FIGS. 1 to 3.
  • for details, refer to the description of the method for acquiring the target image in the foregoing embodiments; the details are not repeated here.
  • the second controller 300 may be a flight controller, the first controller 230 of the photographing device 200, or a combination of the flight controller and the first controller 230.
  • the second controller 300 of this embodiment may be a central processing unit (CPU).
  • the second controller 300 may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
  • the rack 100 may include a fuselage and landing gear connected to both sides of the bottom of the fuselage. Further, the rack 100 may also include arms connected to both sides of the fuselage.
  • the photographing device 200 is mounted on the body.
  • the photographing device 200 of this embodiment is mounted on the body through a gimbal 400, which can be a two-axis gimbal or a three-axis gimbal.
  • the drone of this embodiment may be a fixed-wing drone or a multi-rotor drone.
  • the visible light image acquisition module may be a visible light image sensor
  • the infrared image acquisition module may be an infrared imaging sensor
  • the photographing device 200 further includes a housing, and the visible light image sensor and the infrared image sensor are disposed on the housing.
  • the visible light image acquisition module may be a standalone visible light camera
  • the infrared image acquisition module may be an infrared thermal imager
  • the visible light camera and the infrared thermal imager are mounted via a mounting bracket.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Mechanical Engineering (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

A method for acquiring a target image, a photographing device, and an unmanned aerial vehicle. The method includes: acquiring a visible light image and an infrared image captured of a target by a photographing device at the same moment (S101); performing weighted fusion on the visible light image and the infrared image to obtain a fused image (S102); and obtaining a target image from the fused image (S103). By performing weighted fusion on the visible light image and the infrared image of the target, the target image retains both the visible-light detail and the temperature information of the target; the weighted fusion process requires no complex image processing of the visible light image and/or the infrared image, so the approach is simple and yields a clearly visible fusion effect.

Description

Method for Acquiring a Target Image, Photographing Device, and Unmanned Aerial Vehicle — Technical Field
The present invention relates to the field of image processing, and in particular to a method for acquiring a target image, a photographing device, and an unmanned aerial vehicle.
Background
A visible light image captures rich texture information about a target, while an infrared image captures the target's temperature information. In the related art, to display both texture information and temperature information in a single image, a visible light image and an infrared image are fused to obtain a fused image that shows both. This approach is widely used in fields such as remote sensing and target monitoring (for example, power-line inspection).
When fusing a visible light image and an infrared image, the related art superimposes the high spatial frequency content of the visible light image onto the infrared image, or superimposes the infrared image onto the high spatial frequency content of the visible light image, thereby inserting the contrast of the visible light image into the infrared image that displays temperature variation, so as to combine the advantages of both image types without losing sharpness or interpretability in the superimposed image. Specifically, high-pass spatial filtering is used to extract from the visible light image the pixel data representing its high spatial frequency content; this places high demands on the pixel data, and the processing is fairly complicated.
Summary of the Invention
The present invention provides a method for acquiring a target image, a photographing device, and an unmanned aerial vehicle.
Specifically, the present invention is implemented through the following technical solutions:
According to a first aspect of the present invention, a method for acquiring a target image is provided, the method including:
acquiring a visible light image and an infrared image captured of a target by a photographing device at the same moment;
performing weighted fusion on the visible light image and the infrared image to obtain a fused image; and
obtaining a target image from the fused image.
According to a second aspect of the present invention, a photographing device is provided, the device including:
a visible light image acquisition module;
an infrared image acquisition module; and
a controller electrically coupled to the visible light image acquisition module and the infrared image acquisition module, respectively;
wherein the visible light image acquisition module and the infrared image acquisition module capture a visible light image and an infrared image of a target at the same moment, and
the controller is used to acquire the visible light image and the infrared image captured of the target by the visible light image acquisition module and the infrared image acquisition module at the same moment, perform weighted fusion on the visible light image and the infrared image to obtain a fused image, and obtain a target image from the fused image.
According to a third aspect of the present invention, an unmanned aerial vehicle is provided, the unmanned aerial vehicle including:
a rack;
a photographing device mounted on the rack, the photographing device including a visible light image acquisition module and an infrared image acquisition module; and
a controller electrically coupled to the visible light image acquisition module and the infrared image acquisition module, respectively;
wherein the visible light image acquisition module and the infrared image acquisition module capture a visible light image and an infrared image of a target at the same moment, and
the controller is used to acquire the visible light image and the infrared image captured of the target by the visible light image acquisition module and the infrared image acquisition module at the same moment, perform weighted fusion on the visible light image and the infrared image to obtain a fused image, and obtain a target image from the fused image.
As can be seen from the technical solutions provided by the embodiments of the present invention above, by performing weighted fusion on the visible light image and the infrared image of the target, the target image retains both the visible-light detail and the temperature information of the target. The weighted fusion process requires no complex image processing of the visible light image and/or the infrared image, so the approach is simple and yields a clearly visible fusion effect.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flowchart of a method for acquiring a target image in an embodiment of the present invention;
FIG. 2 is a detailed flowchart of the method for acquiring a target image in the embodiment shown in FIG. 1;
FIG. 3 is another detailed flowchart of the method for acquiring a target image in the embodiment shown in FIG. 1;
FIG. 4 is a structural block diagram of a photographing device in an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an unmanned aerial vehicle in an embodiment of the present invention;
FIG. 6 is a structural block diagram of the unmanned aerial vehicle in the embodiment shown in FIG. 5.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The method for acquiring a target image, the photographing device, and the unmanned aerial vehicle of the present invention are described in detail below with reference to the drawings. In the absence of conflict, the features of the following embodiments and implementations may be combined with one another.
FIG. 1 is a flowchart of a method for acquiring a target image in an embodiment of the present invention. Referring to FIG. 1, the method for acquiring a target image provided by the embodiment of the present invention may include the following steps:
Step S101: acquire a visible light image and an infrared image captured of a target by a photographing device at the same moment.
The photographing device of this embodiment includes a visible light image acquisition module and an infrared image acquisition module, where the visible light image acquisition module is used to acquire a visible light image of the target and the infrared image acquisition module is used to acquire an infrared image of the target. The photographing device can control the two modules to shoot simultaneously, so as to obtain a visible light image and an infrared image of the same target at the same moment. In a power-line inspection scenario, the target is a power line; in a fire-detection scenario, the target may be a forest, a mountain, or the like.
In the photographing device, the visible light image acquisition module and the infrared image acquisition module are usually at different positions relative to the target, so relative rotation, scaling, translation, and the like may occur between the visible light image and the infrared image, making it impossible to fuse them directly. In this embodiment, referring to FIG. 2, after step S101 and before step S102, the visible light image and/or the infrared image need to be registered so that the two images have the same resolution and the same target occupies the same position in both, after which the two images are fused. Specifically, after step S101 and before step S102, the visible light image and/or the infrared image are adjusted according to preset calibration parameters.
The preset calibration parameters are determined by the type of the photographing device. Optionally, the preset calibration parameters may include at least one of a rotation parameter, a scaling parameter, a translation parameter, and a cropping parameter. For example, in some embodiments, the optical axes of the visible light image acquisition module and the infrared image acquisition module are coaxial, so no relative translation occurs between the visible light image and the infrared image, and no translation processing is required. In other embodiments, the two modules have the same focal length, so the size (width and height) of the same target is approximately equal in the two images — that is, the resolutions of the visible light image and the infrared image are approximately equal — and no scaling processing is required.
In some examples, the preset calibration parameters include a rotation parameter. When adjusting the visible light image and/or the infrared image according to the preset calibration parameters, rotation processing is performed on the visible light image and/or the infrared image according to the rotation parameter so that the shooting angles corresponding to the two images are approximately the same. Optionally, one of the two images is rotated, or both are rotated, to achieve this. It can be understood that, in the embodiments of the present invention, the shooting angles being approximately the same means either that there is no deviation between the shooting angle of the visible light image and that of the infrared image, or that the deviation between them is within an allowable range.
In some examples, the preset calibration parameters include a scaling parameter. When adjusting the visible light image and/or the infrared image according to the preset calibration parameters, scaling processing is performed on the visible light image and/or the infrared image according to the scaling parameter so that the size of the same target in the two images (that is, their resolutions) is approximately equal. Optionally, one of the two images is scaled, or both are scaled, to achieve this. It can be understood that the resolutions being approximately equal means either that there is no deviation between the resolution of the visible light image and that of the infrared image, or that the deviation between them is within an allowable range.
In some examples, the preset calibration parameters include a translation parameter. When adjusting the visible light image and/or the infrared image according to the preset calibration parameters, translation processing is performed on the visible light image and/or the infrared image according to the translation parameter so that the position of the same target approximately coincides in the two images. Optionally, one of the two images is translated, or both are translated, to achieve this. It can be understood that the positions approximately coinciding means either that the target's position in the visible light image coincides exactly with its position in the infrared image, or that the deviation between the two positions is within an allowable range.
In some examples, the preset calibration parameters include a cropping parameter. When adjusting the visible light image and/or the infrared image according to the preset calibration parameters, cropping processing is performed on the visible light image and/or the infrared image according to the cropping parameter so that the two images retain approximately the same target region. Optionally, one of the two images is cropped, or both are cropped, to achieve this. It can be understood that retaining approximately the same target region means either that the target region of the visible light image is exactly the same as that of the infrared image, or that the deviation between the two target regions is within an allowable range.
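The registration step described above — adjusting an image with preset scaling, translation, and cropping parameters so that it lines up with its counterpart — can be sketched in plain NumPy. This is a simplified illustration under stated assumptions: nearest-neighbour resampling, a cyclic shift standing in for true translation, hypothetical calibration values, and rotation omitted for brevity; a real device would use its factory-calibrated parameters and proper interpolation.

```python
import numpy as np

def apply_calibration(img, scale=1.0, shift=(0, 0), crop=None):
    """Adjust a single-channel image using preset calibration parameters.

    scale: resampling factor (the scaling parameter);
    shift: (dy, dx) offset in pixels (the translation parameter);
    crop:  (top, bottom, left, right) bounds to keep (the cropping parameter).
    """
    h, w = img.shape[:2]
    # scaling: map each output coordinate back to a source pixel (nearest neighbour)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    ys = np.minimum((np.arange(new_h) / scale).astype(int), h - 1)
    xs = np.minimum((np.arange(new_w) / scale).astype(int), w - 1)
    out = img[ys][:, xs]
    # translation: cyclic shift by the calibration offset (a real pipeline
    # would pad the vacated pixels instead of wrapping them around)
    out = np.roll(out, shift, axis=(0, 1))
    # cropping: keep only the shared target region
    if crop is not None:
        top, bottom, left, right = crop
        out = out[top:bottom, left:right]
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
# upscale the 4x4 image by 2x, then crop back to a 4x4 window
aligned = apply_calibration(img, scale=2.0, shift=(0, 0), crop=(0, 4, 0, 4))
```

After such an adjustment the two images share the same resolution and target position, which is the precondition for the per-pixel fusion of step S102.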
Step S102: perform weighted fusion on the visible light image and the infrared image to obtain a fused image.
When performing weighted fusion on the visible light image and the infrared image, specifically, the pixel value of each pixel in the visible light image and the pixel value of that pixel in the infrared image are fused according to a first coefficient and a second coefficient, where the first coefficient characterizes the fusion weight of each pixel in the visible light image and the second coefficient characterizes the fusion weight of each pixel in the infrared image. The fused image contains both the visible-light (texture) information and the temperature information of the target; compared with a standalone visible light image or infrared image, the fused image is richer in information and better suited to domain-specific needs. For example, in power-line inspection the target is a power line: from the visible light image it is relatively easy to identify the external condition of the line, which helps find externally damaged segments and their exact locations, but a line whose exterior is intact while its interior is damaged cannot be identified this way. From the infrared image, the internal temperature of the line is easy to read, so internal damage can be judged (when the line is operating, a damaged segment heats abnormally and runs hotter than a normally operating segment, or a destroyed segment does not heat at all and runs colder), but the infrared image is poorly interpretable, and locating the damaged segment from it is time-consuming and laborious. The fused image of this embodiment contains both the visible-light information and the temperature information of the line, making it easier to identify both externally and internally damaged segments and to determine their exact locations.
In this embodiment, the pixel value of each pixel in the fused image is the sum of the product of the first coefficient and that pixel's value in the visible light image and the product of the second coefficient and that pixel's value in the infrared image; that is, each pixel's value in the fused image = (that pixel's first coefficient × its value in the visible light image) + (that pixel's second coefficient × its value in the infrared image). This simple weighting achieves the fusion of the visible light image and the infrared image without relatively complex steps such as target detection or image processing.
The first and second coefficients can be set according to fusion requirements. For example, in some examples, the sum of the first coefficient of each pixel in the visible light image and the second coefficient of that pixel in the infrared image is 1; this setting reduces pixel loss, so the information of each pixel in the fused image is not diluted.
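The per-pixel rule above, with each pair of coefficients summing to 1, can be sketched as follows (a minimal illustration assuming single-channel images stored as NumPy arrays; the function name and the sample pixel values are hypothetical):

```python
import numpy as np

def weighted_fusion(visible, infrared, w1=0.5):
    """Per-pixel weighted fusion with coefficients that sum to 1.

    Each fused pixel is w1 * visible + (1 - w1) * infrared, so the fused
    value always lies between the two source values and no pixel's
    information is diluted.
    """
    return w1 * visible + (1.0 - w1) * infrared

vis = np.array([[10.0, 200.0]])
ir = np.array([[90.0, 100.0]])
fused = weighted_fusion(vis, ir)  # equal 0.5/0.5 weights
# -> [[50.0, 150.0]]
```

With w1 = 0.5 the fused image keeps visible-light and temperature information in equal proportion; raising w1 favours visible-light detail, lowering it favours temperature information.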
In other examples, the sum of the first coefficient of some pixels in the visible light image and the second coefficient of those pixels in the infrared image is 1, while for other pixels the sum is not 1. For example, since the focus of the fusion is the fusion result for the target region of the visible light image and the target region of the infrared image, the non-target regions of both images can be de-emphasized. In power-line inspection, for instance, more attention is paid to the visible-light information of hot objects and less to that of cooler objects, so for a cooler object both the first coefficient of its pixels in the visible light image and the second coefficient of those pixels in the infrared image can be set small, with their sum less than 1.
Further, in some embodiments, the first coefficient and the second coefficient are both preset. For example, the first coefficient of each pixel in the visible light image is 0.5 and the second coefficient of that pixel in the infrared image is also 0.5, ensuring that the fused image retains visible-light information and temperature information in equal proportion. It can be understood that other values can also be used, for example a first coefficient of 0.6 with a second coefficient of 0.4, or a first coefficient of 0.4 with a second coefficient of 0.6, and so on.
In other embodiments, before fusing the pixel values of the two images according to the first and second coefficients, the first and second coefficients are determined from the visible light image and the infrared image. For example, the first coefficient of each pixel in the visible light image and the second coefficient of that pixel in the infrared image can be set according to the target region and the non-target region (such as the background region) in the two images.
When setting the coefficients according to the target and non-target regions of the two images, first the first position region of the target in the visible light image is determined, and the second position region of the target in the infrared image is determined. Then the second coefficient of each pixel in the second position region is set larger than the first coefficient of the corresponding pixel in the first position region, so that the target region of the fused image reflects more of the target's temperature information; or, the first coefficient of the corresponding pixels in the first position region is set larger than the second coefficient of the pixels in the second position region, so that the target region of the fused image reflects more of the target's visible-light information.
The first position region of the target in the visible light image and the second position region of the target in the infrared image can be determined with existing algorithms; for example, the target can be framed or clicked manually in each image, and an existing target-detection algorithm can then determine the two position regions. For example, in power-line inspection, to learn the internal condition of the line, the second coefficient of each pixel in the second position region can be set larger than the first coefficient of the corresponding pixel in the first position region, so that the fused image reflects more of the line's internal temperature and internal damage can be judged.
Further, in some examples, the first coefficient of each pixel in the regions of the visible light image other than the first position region (that is, the non-target region) is equal to the second coefficient of that pixel in the infrared image, to reduce the difficulty of fusion. Of course, these coefficients can also be set unequal. In addition, when the information of the non-target region of the fused image needs to be de-emphasized, the first coefficient of each pixel outside the first position region of the visible light image can be set smaller than the first coefficient of the pixels inside the first position region, and the second coefficient of each pixel outside the second position region of the infrared image can be set smaller than the second coefficient of the pixels inside the second position region.
In addition, in some embodiments, the first coefficients of the pixels in the visible light image are all equal. Optionally, the second coefficients of the pixels in the infrared image are also all equal; setting all first coefficients of the visible light image to the same value and all second coefficients of the infrared image to the same value reduces the complexity of the weighted fusion. When more of the target's visible-light information is needed, the first coefficient of each pixel in the visible light image can be set larger than the second coefficient of each pixel in the infrared image, for example a first coefficient of 0.7 with a second coefficient of 0.3, or 0.8 with 0.2, and so on. When more of the target's temperature information is needed, the first coefficient can be set smaller than the second coefficient, for example 0.4 with 0.6, or 0.3 with 0.7, and so on. Further optionally, the first coefficient of each pixel in the visible light image is equal to the second coefficient of that pixel in the infrared image; in this embodiment, both are 0.5.
In other embodiments, the first coefficients of the pixels in the visible light image are at least partially unequal. For example, the first coefficient of the pixels in the target region of the visible light image can be set larger than that of the pixels in the non-target region, thereby de-emphasizing the visible-light information of the non-target region. Further, to reduce the difficulty of the weighted fusion, the first coefficients of the pixels in the target region of the visible light image can be set to the same value, and the first coefficients of the pixels in the non-target region can also be set to the same value. It can be understood that in other embodiments these coefficients can also be set to different values.
Still further, the second coefficients of the pixels in the infrared image are at least partially unequal. For example, the second coefficient of the pixels in the target region of the infrared image can be set larger than that of the pixels in the non-target region, thereby de-emphasizing the temperature information of the non-target region. Further, to reduce the difficulty of the weighted fusion, the second coefficients of the pixels in the target region of the infrared image can be set to the same value, and the second coefficients of the pixels in the non-target region can also be set to the same value. It can be understood that in other embodiments these coefficients can also be set to different values.
Step S103: obtain a target image from the fused image.
Specifically, in some embodiments, after the fused image is obtained in step S102, no further processing of the fused image is needed; the fused image obtained in step S102 is used directly as the target image.
In other embodiments, the fused image obtained in step S102 is processed to enhance its details and improve its display quality. In this embodiment, referring to FIG. 3, when obtaining the target image from the fused image, enhancement processing is performed on the fused image, and the enhanced fused image is used as the target image.
The enhancement method can be chosen according to actual needs, for example increasing the contrast of the fused image, performing noise reduction on the fused image, and so on. In this embodiment, enhancement is achieved by increasing the contrast of the fused image. Specifically, when increasing the contrast, the pixel value of each pixel in the fused image is adjusted according to a preset contrast adjustment model, where the preset contrast adjustment model characterizes the relationship between each pixel's value in the target image and its value in the fused image. Optionally, the preset contrast adjustment model is:
g(i, j) = m * f(i, j) + n     (1)
In formula (1), (i, j) is the coordinate of each pixel in the fused image, f(i, j) is the pixel value of each pixel in the fused image before the contrast is increased, g(i, j) is the pixel value of each pixel in the fused image after the contrast is increased, and m and n are adjustment coefficients whose values can be set as required.
With the method for acquiring a target image of the embodiments of the present invention, the visible light image and the infrared image of the target are weighted and fused, so that the target image retains both the visible-light detail and the temperature information of the target. The weighted fusion process requires no complex image processing of the visible light image and/or the infrared image, so the approach is simple and yields a clearly visible fusion effect.
Referring to FIG. 4, an embodiment of the present invention further provides a photographing device. The photographing device 200 may include a visible light image acquisition module 210, an infrared image acquisition module 220, and a first controller 230, where the first controller 230 is electrically coupled to the visible light image acquisition module 210 and the infrared image acquisition module 220, respectively.
In this embodiment, the visible light image acquisition module 210 and the infrared image acquisition module 220 capture a visible light image and an infrared image of the target at the same moment; the first controller 230 is used to acquire the visible light image and the infrared image captured of the target at the same moment, perform weighted fusion on them to obtain a fused image, and obtain the target image from the fused image.
In this embodiment, the first controller 230 is used to perform the operations of the method for acquiring a target image shown in FIGS. 1 to 3; for details, refer to the description of the method in the foregoing embodiments, which is not repeated here.
The first controller 230 of this embodiment may be a central processing unit (CPU). The first controller 230 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
With reference to FIGS. 5 and 6, an embodiment of the present invention further provides an unmanned aerial vehicle. The unmanned aerial vehicle may include a rack 100, a photographing device 200 mounted on the rack 100, and a second controller 300, where the photographing device 200 may include a visible light image acquisition module and an infrared image acquisition module, and the second controller 300 is electrically coupled to the visible light image acquisition module and the infrared image acquisition module, respectively.
In this embodiment, the visible light image acquisition module and the infrared image acquisition module capture a visible light image and an infrared image of the target at the same moment; the second controller 300 is used to acquire the visible light image and the infrared image captured of the target at the same moment, perform weighted fusion on them to obtain a fused image, and obtain the target image from the fused image.
In this embodiment, the second controller 300 is used to perform the operations of the method for acquiring a target image shown in FIGS. 1 to 3; for details, refer to the description of the method in the foregoing embodiments, which is not repeated here.
The second controller 300 may be a flight controller, the first controller 230 of the photographing device 200, or a combination of the flight controller and the first controller 230.
In addition, the second controller 300 of this embodiment may be a central processing unit (CPU). The second controller 300 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The rack 100 may include a fuselage and landing gear connected to both sides of the bottom of the fuselage. Further, the rack 100 may also include arms connected to both sides of the fuselage. Optionally, the photographing device 200 is mounted on the fuselage. To stabilize the photographing device 200, referring to FIG. 5, the photographing device 200 of this embodiment is mounted on the fuselage through a gimbal 400, which may be a two-axis gimbal or a three-axis gimbal.
In addition, the unmanned aerial vehicle of this embodiment may be a fixed-wing unmanned aerial vehicle or a multi-rotor unmanned aerial vehicle.
It should also be noted that, in one embodiment, the visible light image acquisition module may be a visible light image sensor, the infrared image acquisition module may be an infrared imaging sensor, and the photographing device 200 further includes a housing on which the visible light image sensor and the infrared image sensor are disposed.
In another embodiment, the visible light image acquisition module is a standalone visible light camera, the infrared image acquisition module is an infrared thermal imager, and the visible light camera and the infrared thermal imager are mounted via a mounting bracket.
It should be noted that herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. The terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The method for acquiring a target image, the photographing device, and the unmanned aerial vehicle provided by the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application based on the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (54)

  1. A method for acquiring a target image, characterized in that the method comprises:
    acquiring a visible light image and an infrared image captured of a target by a photographing device at the same moment;
    performing weighted fusion on the visible light image and the infrared image to obtain a fused image; and
    obtaining a target image from the fused image.
  2. The method according to claim 1, characterized in that performing weighted fusion on the visible light image and the infrared image comprises:
    fusing, according to a first coefficient and a second coefficient, the pixel value of each pixel in the visible light image and the pixel value of that pixel in the infrared image;
    wherein the first coefficient characterizes the fusion weight of each pixel in the visible light image, and the second coefficient characterizes the fusion weight of each pixel in the infrared image.
  3. The method according to claim 2, characterized in that the sum of the first coefficient of each pixel in the visible light image and the second coefficient of that pixel in the infrared image is 1.
  4. The method according to claim 2 or 3, characterized in that the pixel value of each pixel in the fused image is the sum of the product of the first coefficient and that pixel's value in the visible light image and the product of the second coefficient and that pixel's value in the infrared image.
  5. The method according to claim 2 or 3, characterized in that the first coefficients of the pixels in the visible light image are equal.
  6. The method according to claim 5, characterized in that the first coefficient of each pixel in the visible light image is equal to the second coefficient of that pixel in the infrared image.
  7. The method according to claim 2 or 3, characterized in that the first coefficients of the pixels in the visible light image are at least partially unequal.
  8. The method according to claim 2 or 7, characterized in that, before fusing the pixel value of each pixel in the visible light image and the pixel value of that pixel in the infrared image according to the first coefficient and the second coefficient, the method further comprises:
    determining the first coefficient and the second coefficient from the visible light image and the infrared image.
  9. The method according to claim 8, characterized in that determining the first coefficient and the second coefficient from the visible light image and the infrared image comprises:
    determining a first position region of the target in the visible light image;
    determining a second position region of the target in the infrared image; and
    setting the second coefficient of each pixel in the second position region to be larger than the first coefficient of the corresponding pixel in the first position region.
  10. The method according to claim 9, characterized in that the first coefficient of each pixel in the regions of the visible light image other than the first position region is equal to the second coefficient of that pixel in the infrared image.
  11. The method according to claim 1, characterized in that obtaining the target image from the fused image comprises:
    performing enhancement processing on the fused image; and
    using the enhanced fused image as the target image.
  12. The method according to claim 11, characterized in that performing enhancement processing on the fused image comprises:
    increasing the contrast of the fused image.
  13. The method according to claim 12, characterized in that increasing the contrast of the fused image comprises:
    for each pixel in the fused image, adjusting the pixel value of the pixel according to a preset contrast adjustment model;
    wherein the preset contrast adjustment model characterizes the relationship between the pixel value of each pixel of the fused image in the target image and the pixel value of each pixel in the fused image.
  14. The method according to claim 1, characterized in that, after acquiring the visible light image and the infrared image captured of the target by the photographing device at the same moment and before performing weighted fusion on the visible light image and the infrared image to obtain the fused image, the method further comprises:
    adjusting the visible light image and/or the infrared image according to preset calibration parameters.
  15. The method according to claim 14, characterized in that the preset calibration parameters comprise a rotation parameter;
    adjusting the visible light image and/or the infrared image according to the preset calibration parameters comprises:
    performing rotation processing on the visible light image and/or the infrared image according to the rotation parameter, so that the shooting angles corresponding to the visible light image and the infrared image are approximately the same.
  16. The method according to claim 14, characterized in that the preset calibration parameters comprise a scaling parameter;
    adjusting the visible light image and/or the infrared image according to the preset calibration parameters comprises:
    performing scaling processing on the visible light image and/or the infrared image according to the scaling parameter, so that the size of the same target in the visible light image and the infrared image is approximately equal.
  17. The method according to claim 14, characterized in that the preset calibration parameters comprise a translation parameter;
    adjusting the visible light image and/or the infrared image according to the preset calibration parameters comprises:
    performing translation processing on the visible light image and/or the infrared image according to the translation parameter, so that the position of the same target in the visible light image and the infrared image approximately coincides.
  18. The method according to claim 14, characterized in that the preset calibration parameters comprise a cropping parameter;
    adjusting the visible light image and/or the infrared image according to the preset calibration parameters comprises:
    performing cropping processing on the visible light image and/or the infrared image according to the cropping parameter, so that the visible light image and the infrared image retain approximately the same target region.
  19. A photographing device, characterized in that the device comprises:
    a visible light image acquisition module;
    an infrared image acquisition module; and
    a controller electrically coupled to the visible light image acquisition module and the infrared image acquisition module, respectively;
    wherein the visible light image acquisition module and the infrared image acquisition module capture a visible light image and an infrared image of a target at the same moment, and
    the controller is used to acquire the visible light image and the infrared image captured of the target by the visible light image acquisition module and the infrared image acquisition module at the same moment, perform weighted fusion on the visible light image and the infrared image to obtain a fused image, and obtain a target image from the fused image.
  20. The photographing device according to claim 19, characterized in that, when performing weighted fusion on the visible light image and the infrared image, the controller is specifically used to:
    fuse, according to a first coefficient and a second coefficient, the pixel value of each pixel in the visible light image and the pixel value of that pixel in the infrared image;
    wherein the first coefficient characterizes the fusion weight of each pixel in the visible light image, and the second coefficient characterizes the fusion weight of each pixel in the infrared image.
  21. The photographing device according to claim 20, characterized in that the sum of the first coefficient of each pixel in the visible light image and the second coefficient of that pixel in the infrared image is 1.
  22. The photographing device according to claim 20 or 21, characterized in that the pixel value of each pixel in the fused image is the sum of the product of the first coefficient and that pixel's value in the visible light image and the product of the second coefficient and that pixel's value in the infrared image.
  23. The photographing device according to claim 20 or 21, characterized in that the first coefficients of the pixels in the visible light image are equal.
  24. The photographing device according to claim 23, characterized in that the first coefficient of each pixel in the visible light image is equal to the second coefficient of that pixel in the infrared image.
  25. The photographing device according to claim 20 or 21, characterized in that the first coefficients of the pixels in the visible light image are at least partially unequal.
  26. The photographing device according to claim 20 or 25, characterized in that, before fusing the pixel value of each pixel in the visible light image and the pixel value of that pixel in the infrared image according to the first coefficient and the second coefficient, the controller is further used to:
    determine the first coefficient and the second coefficient from the visible light image and the infrared image.
  27. The photographing device according to claim 26, characterized in that, when determining the first coefficient and the second coefficient from the visible light image and the infrared image, the controller is specifically used to:
    determine a first position region of the target in the visible light image;
    determine a second position region of the target in the infrared image; and
    set the second coefficient of each pixel in the second position region to be larger than the first coefficient of the corresponding pixel in the first position region.
  28. The photographing device according to claim 27, characterized in that the first coefficient of each pixel in the regions of the visible light image other than the first position region is equal to the second coefficient of that pixel in the infrared image.
  29. The photographing device according to claim 19, characterized in that, when obtaining the target image from the fused image, the controller is specifically used to:
    perform enhancement processing on the fused image; and
    use the enhanced fused image as the target image.
  30. The photographing device according to claim 29, characterized in that, when performing enhancement processing on the fused image, the controller is specifically used to:
    increase the contrast of the fused image.
  31. The photographing device according to claim 30, characterized in that, when increasing the contrast of the fused image, the controller is specifically used to:
    for each pixel in the fused image, adjust the pixel value of the pixel according to a preset contrast adjustment model;
    wherein the preset contrast adjustment model characterizes the relationship between the pixel value of each pixel of the fused image in the target image and the pixel value of each pixel in the fused image.
  32. The photographing device according to claim 19, characterized in that, after acquiring the visible light image and the infrared image captured of the target by the photographing device at the same moment and before performing weighted fusion on them to obtain the fused image, the controller is further used to:
    adjust the visible light image and/or the infrared image according to preset calibration parameters.
  33. The photographing device according to claim 32, characterized in that the preset calibration parameters comprise a rotation parameter;
    when adjusting the visible light image and/or the infrared image according to the preset calibration parameters, the controller is specifically used to:
    perform rotation processing on the visible light image and/or the infrared image according to the rotation parameter, so that the shooting angles corresponding to the visible light image and the infrared image are approximately the same.
  34. The photographing device according to claim 32, characterized in that the preset calibration parameters comprise a scaling parameter;
    when adjusting the visible light image and/or the infrared image according to the preset calibration parameters, the controller is specifically used to:
    perform scaling processing on the visible light image and/or the infrared image according to the scaling parameter, so that the size of the same target in the visible light image and the infrared image is approximately equal.
  35. The photographing device according to claim 32, characterized in that the preset calibration parameters comprise a translation parameter;
    when adjusting the visible light image and/or the infrared image according to the preset calibration parameters, the controller is specifically used to:
    perform translation processing on the visible light image and/or the infrared image according to the translation parameter, so that the position of the same target in the visible light image and the infrared image approximately coincides.
  36. The photographing device according to claim 32, characterized in that the preset calibration parameters comprise a cropping parameter;
    when adjusting the visible light image and/or the infrared image according to the preset calibration parameters, the controller is specifically used to:
    perform cropping processing on the visible light image and/or the infrared image according to the cropping parameter, so that the visible light image and the infrared image retain approximately the same target region.
  37. An unmanned aerial vehicle, characterized in that the unmanned aerial vehicle comprises:
    a rack;
    a photographing device mounted on the rack, the photographing device including a visible light image acquisition module and an infrared image acquisition module; and
    a controller electrically coupled to the visible light image acquisition module and the infrared image acquisition module, respectively;
    wherein the visible light image acquisition module and the infrared image acquisition module capture a visible light image and an infrared image of a target at the same moment, and
    the controller is used to acquire the visible light image and the infrared image captured of the target by the visible light image acquisition module and the infrared image acquisition module at the same moment, perform weighted fusion on the visible light image and the infrared image to obtain a fused image, and obtain a target image from the fused image.
  38. The unmanned aerial vehicle according to claim 37, characterized in that performing weighted fusion on the visible light image and the infrared image comprises:
    fusing, according to a first coefficient and a second coefficient, the pixel value of each pixel in the visible light image and the pixel value of that pixel in the infrared image;
    wherein the first coefficient characterizes the fusion weight of each pixel in the visible light image, and the second coefficient characterizes the fusion weight of each pixel in the infrared image.
  39. The unmanned aerial vehicle according to claim 38, characterized in that the sum of the first coefficient of each pixel in the visible light image and the second coefficient of that pixel in the infrared image is 1.
  40. The unmanned aerial vehicle according to claim 38 or 39, characterized in that the pixel value of each pixel in the fused image is the sum of the product of the first coefficient and that pixel's value in the visible light image and the product of the second coefficient and that pixel's value in the infrared image.
  41. The unmanned aerial vehicle according to claim 38 or 39, characterized in that the first coefficients of the pixels in the visible light image are equal.
  42. The unmanned aerial vehicle according to claim 41, characterized in that the first coefficient of each pixel in the visible light image is equal to the second coefficient of that pixel in the infrared image.
  43. The unmanned aerial vehicle according to claim 38 or 39, characterized in that the first coefficients of the pixels in the visible light image are at least partially unequal.
  44. The unmanned aerial vehicle according to claim 38 or 43, characterized in that, before fusing the pixel value of each pixel in the visible light image and the pixel value of that pixel in the infrared image according to the first coefficient and the second coefficient, the controller is further used to:
    determine the first coefficient and the second coefficient from the visible light image and the infrared image.
  45. The unmanned aerial vehicle according to claim 44, characterized in that, when determining the first coefficient and the second coefficient from the visible light image and the infrared image, the controller is specifically used to:
    determine a first position region of the target in the visible light image;
    determine a second position region of the target in the infrared image; and
    set the second coefficient of each pixel in the second position region to be larger than the first coefficient of the corresponding pixel in the first position region.
  46. The unmanned aerial vehicle according to claim 45, characterized in that the first coefficient of each pixel in the regions of the visible light image other than the first position region is equal to the second coefficient of that pixel in the infrared image.
  47. The unmanned aerial vehicle according to claim 37, characterized in that, when obtaining the target image from the fused image, the controller is specifically used to:
    perform enhancement processing on the fused image; and
    use the enhanced fused image as the target image.
  48. The unmanned aerial vehicle according to claim 47, characterized in that, when performing enhancement processing on the fused image, the controller is specifically used to:
    increase the contrast of the fused image.
  49. The unmanned aerial vehicle according to claim 48, characterized in that, when increasing the contrast of the fused image, the controller is specifically used to:
    for each pixel in the fused image, adjust the pixel value of the pixel according to a preset contrast adjustment model;
    wherein the preset contrast adjustment model characterizes the relationship between the pixel value of each pixel of the fused image in the target image and the pixel value of each pixel in the fused image.
  50. The unmanned aerial vehicle according to claim 37, characterized in that, after acquiring the visible light image and the infrared image captured of the target by the photographing device at the same moment and before performing weighted fusion on them to obtain the fused image, the controller is further used to:
    adjust the visible light image and/or the infrared image according to preset calibration parameters.
  51. The unmanned aerial vehicle according to claim 50, characterized in that the preset calibration parameters comprise a rotation parameter;
    when adjusting the visible light image and/or the infrared image according to the preset calibration parameters, the controller is specifically used to:
    perform rotation processing on the visible light image and/or the infrared image according to the rotation parameter, so that the shooting angles corresponding to the visible light image and the infrared image are approximately the same.
  52. The unmanned aerial vehicle according to claim 50, characterized in that the preset calibration parameters comprise a scaling parameter;
    when adjusting the visible light image and/or the infrared image according to the preset calibration parameters, the controller is specifically used to:
    perform scaling processing on the visible light image and/or the infrared image according to the scaling parameter, so that the sizes of the visible light image and the infrared image are approximately equal.
  53. The unmanned aerial vehicle according to claim 50, characterized in that the preset calibration parameters comprise a translation parameter;
    when adjusting the visible light image and/or the infrared image according to the preset calibration parameters, the controller is specifically used to:
    perform translation processing on the visible light image and/or the infrared image according to the translation parameter, so that the position of the same target in the visible light image and the infrared image approximately coincides.
  54. The unmanned aerial vehicle according to claim 50, characterized in that the preset calibration parameters comprise a cropping parameter;
    when adjusting the visible light image and/or the infrared image according to the preset calibration parameters, the controller is specifically used to:
    perform cropping processing on the visible light image and/or the infrared image according to the cropping parameter, so that the visible light image and the infrared image retain approximately the same target region.
PCT/CN2018/119078 2018-12-04 2018-12-04 目标图像的获取方法、拍摄装置和无人机 WO2020113404A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201880069889.5A CN111433810A (zh) 2018-12-04 2018-12-04 目标图像的获取方法、拍摄装置和无人机
PCT/CN2018/119078 WO2020113404A1 (zh) 2018-12-04 2018-12-04 目标图像的获取方法、拍摄装置和无人机
US16/928,874 US11328188B2 (en) 2018-12-04 2020-07-14 Target-image acquisition method, photographing device, and unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/119078 WO2020113404A1 (zh) 2018-12-04 2018-12-04 目标图像的获取方法、拍摄装置和无人机

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/928,874 Continuation US11328188B2 (en) 2018-12-04 2020-07-14 Target-image acquisition method, photographing device, and unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
WO2020113404A1 true WO2020113404A1 (zh) 2020-06-11

Family

ID=70974837

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/119078 WO2020113404A1 (zh) 2018-12-04 2018-12-04 目标图像的获取方法、拍摄装置和无人机

Country Status (3)

Country Link
US (1) US11328188B2 (zh)
CN (1) CN111433810A (zh)
WO (1) WO2020113404A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111915683A (zh) * 2020-07-27 2020-11-10 湖南大学 图像位置标定方法、智能设备及存储介质
WO2021253173A1 (zh) * 2020-06-15 2021-12-23 深圳市大疆创新科技有限公司 图像处理方法、装置及巡检系统
CN114087731A (zh) * 2021-11-10 2022-02-25 珠海格力电器股份有限公司 基于红外识别模块与空调感温模块耦合自适应调节系统
US20220130139A1 (en) * 2022-01-05 2022-04-28 Baidu Usa Llc Image processing method and apparatus, electronic device and storage medium
CN116437198A (zh) * 2021-12-29 2023-07-14 荣耀终端有限公司 图像处理方法与电子设备

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11689822B2 (en) * 2020-09-04 2023-06-27 Altek Semiconductor Corp. Dual sensor imaging system and privacy protection imaging method thereof
CN112016523B (zh) * 2020-09-25 2023-08-29 北京百度网讯科技有限公司 跨模态人脸识别的方法、装置、设备和存储介质
CN112907493A (zh) * 2020-12-01 2021-06-04 航天时代飞鸿技术有限公司 无人机蜂群协同侦察下的多源战场图像快速镶嵌融合算法
CN112614164A (zh) * 2020-12-30 2021-04-06 杭州海康微影传感科技有限公司 一种图像融合方法、装置、图像处理设备及双目系统
CN112884692B (zh) * 2021-03-15 2023-06-23 中国电子科技集团公司第十一研究所 分布式机载协同侦察光电系统及无人机系统
CN113034371B (zh) * 2021-05-27 2021-08-17 四川轻化工大学 一种基于特征嵌入的红外与可见光图像融合方法
CN114581563A (zh) * 2022-03-09 2022-06-03 武汉高德智感科技有限公司 一种图像融合的方法、装置、终端及存储介质
CN117726958B (zh) * 2024-02-07 2024-05-10 国网湖北省电力有限公司 配电线路无人机巡检图像目标检测及隐患智能识别方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069768A (zh) * 2015-08-05 2015-11-18 武汉高德红外股份有限公司 一种可见光图像与红外图像融合处理系统及融合方法
CN107292860A (zh) * 2017-07-26 2017-10-24 武汉鸿瑞达信息技术有限公司 一种图像处理的方法及装置
CN107478340A (zh) * 2017-07-25 2017-12-15 许继集团有限公司 一种换流阀监测方法及系统
CN108419062A (zh) * 2017-02-10 2018-08-17 杭州海康威视数字技术股份有限公司 图像融合设备和图像融合方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140231578A1 (en) * 2012-06-19 2014-08-21 Bae Systems Information And Electronic Systems Integration Inc. Stabilized uav platform with fused ir and visible imagery
KR101858646B1 (ko) * 2012-12-14 2018-05-17 한화에어로스페이스 주식회사 영상 융합 장치 및 방법
CN106780392B (zh) * 2016-12-27 2020-10-02 浙江大华技术股份有限公司 一种图像融合方法及装置
CN108364003A (zh) * 2018-04-28 2018-08-03 国网河南省电力公司郑州供电公司 基于无人机可见光及红外图像融合的电力巡检方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069768A (zh) * 2015-08-05 2015-11-18 武汉高德红外股份有限公司 一种可见光图像与红外图像融合处理系统及融合方法
CN108419062A (zh) * 2017-02-10 2018-08-17 杭州海康威视数字技术股份有限公司 图像融合设备和图像融合方法
CN107478340A (zh) * 2017-07-25 2017-12-15 许继集团有限公司 一种换流阀监测方法及系统
CN107292860A (zh) * 2017-07-26 2017-10-24 武汉鸿瑞达信息技术有限公司 一种图像处理的方法及装置

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021253173A1 (zh) * 2020-06-15 2021-12-23 深圳市大疆创新科技有限公司 图像处理方法、装置及巡检系统
CN111915683A (zh) * 2020-07-27 2020-11-10 湖南大学 图像位置标定方法、智能设备及存储介质
CN114087731A (zh) * 2021-11-10 2022-02-25 珠海格力电器股份有限公司 基于红外识别模块与空调感温模块耦合自适应调节系统
CN114087731B (zh) * 2021-11-10 2022-10-21 珠海格力电器股份有限公司 基于红外识别模块与空调感温模块耦合自适应调节系统
CN116437198A (zh) * 2021-12-29 2023-07-14 荣耀终端有限公司 图像处理方法与电子设备
CN116437198B (zh) * 2021-12-29 2024-04-16 荣耀终端有限公司 图像处理方法与电子设备
US20220130139A1 (en) * 2022-01-05 2022-04-28 Baidu Usa Llc Image processing method and apparatus, electronic device and storage medium
US11756288B2 (en) * 2022-01-05 2023-09-12 Baidu Usa Llc Image processing method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
US20200342275A1 (en) 2020-10-29
CN111433810A (zh) 2020-07-17
US11328188B2 (en) 2022-05-10

Similar Documents

Publication Publication Date Title
WO2020113404A1 (zh) 目标图像的获取方法、拍摄装置和无人机
US11671703B2 (en) System and apparatus for co-registration and correlation between multi-modal imagery and method for same
US20210105403A1 (en) Method for processing image, image processing apparatus, multi-camera photographing apparatus, and aerial vehicle
WO2020113408A1 (zh) 一种图像处理方法、设备、无人机、系统及存储介质
CN107292860B (zh) 一种图像处理的方法及装置
CN105354851A (zh) 对距离自适应的红外与可见光视频融合方法及融合系统
WO2019041276A1 (zh) 一种图像处理方法、无人机及系统
WO2019100219A1 (zh) 输出影像生成方法、设备及无人机
CN103679166B (zh) 一种快速获取设备中鱼眼镜头中心偏移量的方法及系统
CN110322485B (zh) 一种异构多相机成像系统的快速图像配准方法
WO2021184302A1 (zh) 图像处理方法、装置、成像设备、可移动载体及存储介质
WO2022007840A1 (zh) 图像融合方法及装置
WO2021037288A2 (zh) 一种双光相机成像校准的方法、装置和双光相机
WO2020107320A1 (zh) 相机标定方法、装置、设备及存储介质
WO2021037286A1 (zh) 一种图像处理方法、装置、设备及存储介质
WO2022100668A1 (zh) 温度测量方法、装置、系统、存储介质及程序产品
KR101233948B1 (ko) 회전 대칭형의 광각 렌즈를 이용하여 디지털 팬·틸트 영상을 얻는 방법 및 그 영상 시스템
WO2020113407A1 (zh) 一种图像处理方法、设备、无人机、系统及存储介质
EP3877951A1 (en) Automatic co-registration of thermal and visible image pairs
CN107274447B (zh) 深度图像获取装置和深度图像获取方法
CN112102419B (zh) 双光成像设备标定方法及系统、图像配准方法
CN115222785A (zh) 一种基于双目标定的红外与可见光图像配准方法
CN105939445B (zh) 一种基于双目摄像机的透雾摄像方法
US20170322400A1 (en) Omnidirectional catadioptric lens structure
CN109873923A (zh) 基于相机圆顶的调制传递函数的图像降噪

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18942518

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18942518

Country of ref document: EP

Kind code of ref document: A1