WO2018076732A1 - Method and apparatus for fusing an infrared image and a visible light image - Google Patents
Method and apparatus for fusing an infrared image and a visible light image
- Publication number
- WO2018076732A1 · PCT/CN2017/089508 · CN2017089508W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- visible light
- infrared
- infrared image
- luminance component
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 60
- 238000006243 chemical reaction Methods 0.000 claims abstract description 28
- 238000004040 coloring Methods 0.000 claims abstract description 7
- 230000004927 fusion Effects 0.000 claims description 43
- 238000012360 testing method Methods 0.000 claims description 42
- 230000009466 transformation Effects 0.000 claims description 32
- 238000001914 filtration Methods 0.000 claims description 11
- 239000011159 matrix material Substances 0.000 claims description 5
- 235000019646 color tone Nutrition 0.000 abstract 2
- 238000007500 overflow downdraw method Methods 0.000 description 9
- 230000008569 process Effects 0.000 description 8
- 230000005855 radiation Effects 0.000 description 7
- 238000013519 translation Methods 0.000 description 7
- 238000010586 diagram Methods 0.000 description 6
- 238000001514 detection method Methods 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 4
- 238000012423 maintenance Methods 0.000 description 4
- 238000013507 mapping Methods 0.000 description 4
- 238000007781 pre-processing Methods 0.000 description 4
- 238000012545 processing Methods 0.000 description 4
- 230000008439 repair process Effects 0.000 description 4
- 230000008901 benefit Effects 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 3
- 238000003384 imaging method Methods 0.000 description 3
- 238000012544 monitoring process Methods 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 2
- 230000004456 color vision Effects 0.000 description 2
- 238000001816 cooling Methods 0.000 description 2
- 238000007499 fusion processing Methods 0.000 description 2
- 239000000463 material Substances 0.000 description 2
- 238000013021 overheating Methods 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 239000003086 colorant Substances 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 230000001934 delay Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000002708 enhancing effect Effects 0.000 description 1
- 238000003331 infrared imaging Methods 0.000 description 1
- 230000004807 localization Effects 0.000 description 1
- 230000008054 signal transmission Effects 0.000 description 1
- 238000001931 thermography Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
Definitions
- the present disclosure relates to the field of image processing technologies, and in particular, to a method and apparatus for infrared image and visible light image fusion.
- the infrared camera differs from a general visible light camera: it captures the infrared radiation of the target and records the target's own infrared radiation information. Although the infrared camera has good detection performance for hot targets, it is not sensitive to brightness changes in the background and its imaging resolution is low, which is not conducive to interpretation by the human eye.
- the visible light camera is sensitive only to the reflection of the target scene, is unrelated to the thermal contrast of the scene, and has high resolution, providing detail information about the scene in which the target is located.
- in general, because the infrared radiation characteristics of target and background differ, the infrared image can provide complete target information, but the background information it contains is blurred; conversely, the visible light image can provide more comprehensive background information, but the target information is not obvious.
- for example, in a scenario where an infrared camera is used to check equipment, the infrared temperature display is used to determine whether the device is operating normally: the image provided by the infrared camera is an infrared temperature map of the running device, from which the user can judge whether overheating exists.
- the background information of the infrared image provided by the existing infrared camera is relatively blurred and of low resolution, so the user cannot accurately determine the specific failing part of the device.
- the present application discloses a method and an apparatus for fusing an infrared image and a visible light image, which improve the ability to recognize hot targets and thereby enhance maintenance personnel's efficiency in locating fault targets.
- a method for fusing an infrared image and a visible light image includes: performing color space conversion on the visible light image to obtain its luminance component; extracting edge information from the luminance component of the visible light image and fusing it with the luminance component of the infrared image to obtain a fused luminance component; pseudo-coloring the infrared image to obtain a pseudo color infrared image; performing color space conversion on the pseudo color infrared image to obtain its hue component and saturation component; and inversely converting the fused luminance component, the hue component, and the saturation component in color space to obtain a pseudo color fused image.
- the method further includes: separately acquiring the visible light image and the infrared image of the same viewing angle.
- the method further includes: performing image registration on the infrared image and the visible light image.
- performing image registration on the infrared image and the visible light image includes: selecting one of the infrared image and the visible light image as a reference image, the other being the image to be registered; acquiring registration parameters of the image to be registered; and implementing image registration of the infrared image and the visible light image according to the registration parameters of the image to be registered.
- in one embodiment, the reference image is the infrared image and the image to be registered is the visible light image.
- acquiring the registration parameters of the image to be registered includes fitting affine transformation parameters against the target distance, as detailed below.
- extracting the edge information of the visible light image includes: filtering the luminance component of the visible light image to extract the edge information.
- the luminance component of the visible light image is filtered using a filter matrix.
- fusing the edge information with the luminance component of the infrared image includes: performing equal-weight fusion of the edge information and the luminance component of the infrared image.
- an apparatus for merging an infrared image and a visible light image includes:
- a first color space conversion module configured to convert the visible light image from the RGB color space to the YUV color space to obtain its luminance component;
- a fused luminance component acquisition module configured to extract edge information from the luminance component of the visible light image and fuse the edge information with the luminance component of the infrared image to obtain a fused luminance component;
- a pseudo color infrared image obtaining module configured to pseudo-color the infrared image to obtain a pseudo color infrared image;
- a second color space conversion module configured to convert the pseudo color infrared image from the RGB color space to the YUV color space to obtain its hue component and saturation component;
- a color space inverse conversion module configured to convert the fused luminance component, the hue component, and the saturation component from the YUV color space to the RGB color space to obtain a pseudo color fused image.
- the apparatus further includes: an image acquisition module configured to separately acquire the visible light image and the infrared image of the same viewing angle.
- the apparatus further includes: an image registration module configured to perform image registration on the infrared image and the visible light image.
- the apparatus further includes: a mode selection module configured to select any one of a plurality of preset output modes.
- At least one of the foregoing technical solutions has the following beneficial effects: in the method and apparatus for fusing an infrared image and a visible light image according to the present disclosure, the fused image carries the edge information of the visible light image and the temperature information of the infrared image, enhancing the physical details of the fused image while maintaining the color perception of the original pseudo color infrared image, consistent with the user's habits of observing thermal infrared images.
- FIG. 1 illustrates a flow chart of a method of infrared image and visible light image fusion according to an example embodiment of the present disclosure.
- FIG. 2 illustrates a schematic diagram of a method of infrared image and visible light image fusion according to an example embodiment of the present disclosure.
- FIG. 3 illustrates a flow chart of an image registration implementation method in accordance with an example embodiment of the present disclosure.
- FIG. 4 illustrates a block diagram of an apparatus structure for infrared image and visible light image fusion according to an exemplary embodiment of the present disclosure.
- FIG. 5 illustrates a block diagram of an apparatus structure for infrared image and visible light image fusion according to another example embodiment of the present disclosure.
- FIG. 1 illustrates a flow chart of a method of infrared image and visible light image fusion according to an example embodiment of the present disclosure.
- in step S110, the visible light image is subjected to color space conversion to obtain its luminance component.
- in an embodiment, the visible light image may be converted from the RGB color space to the YUV color space.
- YUV, also known as YCrCb, is a color encoding method adopted by European television systems (as part of PAL). YUV is mainly used to optimize the transmission of color video signals, making them backward compatible with older black-and-white televisions; compared with RGB transmission, it occupies far less bandwidth (RGB requires three independent video signals to be transmitted simultaneously).
- Y indicates luminance (Luminance or Luma), i.e., the grayscale value; U and V indicate chrominance (Chrominance or Chroma), which describes the color and saturation of the image and specifies the color of a pixel.
- luminance is created from the RGB input signals by superimposing specific portions of the RGB signal.
- chrominance defines two aspects of color, hue and saturation, represented by Cr and Cb respectively: Cr reflects the difference between the red portion of the RGB input signal and the luminance value of the RGB signal, and Cb reflects the difference between the blue portion of the RGB input signal and the luminance value of the RGB signal.
- the method may further include separately acquiring the visible light image and the infrared image of the same viewing angle.
- the infrared image and the visible light image captured from the same viewing angle may be acquired first.
- the infrared image and the visible light image may be acquired by a device such as an infrared thermal imager capable of simultaneously capturing both; in an assembled thermal imager the visible light and infrared detectors and lenses are fixed, so each unit only needs to be calibrated once before leaving the factory.
- the method of the present invention can be applied to the fusion of any infrared image and visible light image of the same viewing angle.
- the infrared image and the visible light image may be acquired by an infrared camera and a visible light camera, respectively.
- the lenses of the infrared camera and the visible light camera must be mounted at the same position, and the optical axes of the lenses are in the same direction and parallel for acquiring infrared images and visible light images taken at the same angle of view.
- the infrared image and the visible light image of the same scene captured from different viewing angles may also be acquired; however, compared with the embodiment that acquires both from the same viewing angle, this approach requires viewing-angle matching before image registration, and the computation is more complicated.
- a high-resolution visible light sensor can be used for capturing the visible light image, for example a sensor of 1.3 megapixels or higher; the infrared image can be captured with an uncooled or cooled infrared camera.
- the method for obtaining the infrared image and the visible light image in this step may be the same as in the prior art, and will not be described herein.
- the method may further include performing image registration on the infrared image and the visible light image.
- the infrared image and the visible light image of the same viewing angle may be acquired by mounting the infrared camera and the lens of the visible light camera in parallel, but the field of view of the infrared image and the visible light image still cannot match.
- the position of a target on the infrared image differs greatly from its position on the visible light image, so the prior art cannot effectively use the infrared and visible light images to identify the same target simultaneously. Because the targets are not co-located, the target recognition and localization methods of existing infrared monitoring systems mostly rely on a single image (infrared or visible light).
- when the ambient color, or the color around the target, is very close to the color of the monitored target, the visible light image recognition algorithm cannot effectively identify the target; when the ambient temperature, or the temperature around the target, is very close to the temperature of the monitored target, the infrared image recognition algorithm cannot effectively identify it. Single-image recognition may therefore often fail to recognize the target, reducing the target recognition rate and potentially causing faults or defects to be missed during detection, with significant economic losses. Using infrared and visible light complementarily to identify and locate monitored targets can greatly improve the accuracy of target recognition.
- the range of viewing angles may be registered according to the resolution of the infrared image and the visible light image such that the infrared image and the image scene area of the visible light image are the same.
- the performing image registration on the infrared image and the visible light image includes: selecting one of the infrared image and the visible light image as a reference image, wherein the other is to be registered Obtaining a registration parameter of the image to be registered; and performing image registration of the infrared image and the visible light image according to a registration parameter of the image to be registered.
- the reference image is the infrared image, and the image to be registered is the visible light image.
- the reference image may also be selected as the visible light image, and the image to be registered is the infrared image.
- the acquiring of the registration parameters of the image to be registered includes: selecting a test target and a preset number of feature points on the test target; acquiring a preset number of test infrared images and test visible light images of the test target at different target distances, together with the coordinates of the feature points on the test infrared images and the test visible light images; obtaining the preset number of affine transformation parameters according to the affine transformation formula and the coordinates of the feature points; fitting the preset number of affine transformation parameters to obtain the relationship between the affine transformation parameters and the target distance; and acquiring the registration parameters of the image to be registered according to the distance between the image to be registered and the measured target together with that relationship.
- the preset number is greater than or equal to 2.
- An example of image registration can be found in the embodiment shown in FIG. 3 below.
- the infrared image and the visible light image may be preprocessed separately in the image registration process, which may further improve the accuracy of image registration.
- the step of preprocessing the infrared image and the visible light image may include image denoising, image enhancement, and image transformation: in the denoising step, the infrared image and the visible light image are spatially filtered;
- in the enhancement step, the denoised infrared image and visible light image are histogram-enhanced; in the transformation step, the enhanced infrared image and visible light image undergo wavelet transformation and the like.
- the present disclosure does not limit this, and any existing image preprocessing method can be applied to the embodiments of the present invention.
- in step S120, edge information of the luminance component of the visible light image is extracted, and the edge information is fused with the luminance component of the infrared image to obtain a fused luminance component.
- extracting the edge information of the visible light image includes: filtering the luminance component of the visible light image to extract the edge information of the visible light image.
- the filtering of the luminance component of the visible light image may be performed using a 5*5 filter matrix.
- the luminance component of the visible light image may instead be filtered sequentially with a 3*3 Gaussian filter and a 3*3 Laplacian filter; in this embodiment, to reduce computation, the 5*5 filter matrix is applied directly in a single pass.
- fusing the edge information with the luminance component of the infrared image includes: performing equal-weight fusion of the edge information and the luminance component of the infrared image.
- although the edge information of the visible light image and the luminance component of the infrared image are fused with equal weights above, in other embodiments the following formula may be used for fusion: F = ω1A + ω2B, with ω1 + ω2 = 1,
- where A and B represent the edge information of the visible light image and the luminance component of the infrared image, respectively,
- and ω1 and ω2 represent the weight values of A and B, their sum being 1, while F represents the fused luminance component; the equal-weight fusion above corresponds to ω1 = ω2 = 0.5.
- the weight values can be adjusted according to actual needs, as long as they always sum to 1.
- in step S130, the infrared image is pseudo-colored to obtain a pseudo color infrared image.
- Infrared imaging technology is a radiation-information detection technology used to convert the temperature distribution of an object's surface into an image visible to the human eye.
- the image is an infrared image that reflects the infrared radiation capability of the object's surface and visually characterizes and displays the infrared radiation temperature field of the measured surface. Since the infrared image is a black-and-white grayscale image with a small dynamic range of gray values, it is difficult for the human eye to obtain detailed radiation information about the target from the grayscale information.
- the human eye can distinguish only twenty-odd gray levels, but it can distinguish tens of millions of colors.
- industrial thermal imaging cameras generally use a color table to map the grayscale image into a color image, i.e., a pseudo color infrared image, enhancing the contrast between different gray levels so that interpreters can read the image more accurately.
- in step S140, the pseudo color infrared image is subjected to color space conversion to obtain its hue component and saturation component.
- the color space conversion refers to converting the pseudo color infrared image from the RGB color space to the YUV color space.
- in step S150, the fused luminance component, the hue component, and the saturation component are inversely converted in color space to obtain a pseudo color fused image.
- the inverse color space conversion refers to converting the fused luminance component, the hue component, and the saturation component from the YUV color space to the RGB color space.
- it may further be determined whether the temperature represented by the pseudo color fused image exceeds a temperature threshold; when the threshold is exceeded, an alarm message may be sent to the user so that timely measures can be taken.
- the threshold can be set based on the operating limit temperatures of the different components in the equipment.
- the infrared and visible light image fusion method provided by this embodiment fuses the infrared image with a high-resolution visible light image, so that the fused image both reflects the approximate temperature information of the target and reveals the target's detail information.
- using image fusion to improve the spatial resolution of the infrared image not only improves the interpretation accuracy and efficiency of image interpreters but also aids their interpretation of the image, overcoming both the inability to accurately recognize the internal temperature distribution of the device in an all-visible-light image and the inability to accurately distinguish the type of object in an all-infrared image.
- by fusing the two images, the user can have components exceeding a specified temperature limit displayed precisely, helping users better identify and report suspect parts and enabling maintenance personnel to complete repairs promptly.
- the method is simple to implement and can be completed in a hardware description language; it is also fast and can run in real time.
- FIG. 2 illustrates a schematic diagram of a method of infrared image and visible light image fusion according to an example embodiment of the present disclosure.
- the visible light image VIS and the infrared image IR of the object to be measured are first input.
- the visible light image VIS is converted into a YUV color space, and the luminance component Yvis, the hue component Uvis, and the saturation component Vvis of the visible light image VIS are obtained.
- because the infrared image IR captured by the infrared camera is a grayscale image, the pseudo color infrared image IR is obtained by pseudo-coloring the infrared image IR; the pseudo color infrared image IR is then converted into the YUV color space to obtain its luminance component Yir, hue component Uir, and saturation component Vir.
- the infrared image gray value and the pre-configured pseudo color lookup table can be used to pseudo-color the infrared image IR to generate the target pseudo color infrared image IR.
- the method may include: reading a gray value of each pixel of the infrared image; and mapping the pixels of the same gray value to the same color by using a color defined in the pseudo color lookup table, thereby generating a pseudo color infrared image. Then, the pseudo color infrared image IR and the visible light image VIS of the target to be measured are converted into an RGB color space to a YUV color space, and a YUV color space representation of the pseudo color infrared image IR and the visible light image VIS is obtained.
- pseudo color palettes are provided in the infrared camera for the user to select.
- the pseudo color lookup table covers gray values from 0 to 255, each gray value corresponding to three color values R/G/B.
- for each pixel of the grayscale image, the three corresponding RGB color values are found in the pseudo color lookup table according to the pixel's gray value, forming the pseudo color image.
- color space conversion is performed on the pseudo color infrared image IR and the color visible light image VIS: according to the empirical YUV/RGB color space conversion formulas, both images may be converted from the RGB color space to the YUV color space, yielding three-channel (Y/U/V) grayscale images.
- R, G, and B represent the red, green, and blue channels of the visible light image or the infrared image, respectively.
- the luminance component Yvis of the visible light image VIS described above is subjected to filtering processing to extract edge information.
- the luminance component Yvis of the visible light image VIS may be filtered using a 5*5 LoG filter operator to obtain a grayscale image containing only edge information.
- the luminance component Yvis of the visible light image VIS is subjected to matrix filtering to obtain a luminance component Yvis' of the visible light image VIS.
- the luminance component Yvis' of the visible light image VIS is weighted and fused with the luminance component of the infrared image IR to obtain a fused luminance component Yblend.
- a pseudo color fused image is obtained by converting the fused luminance component Yblend, the hue component Uir of the pseudo color infrared image IR, and its saturation component Vir into the RGB space.
- the image fusion method adopted in this embodiment differs from the conventional superposition fusion of the visible light image and the infrared image, and from picture-in-picture partial-replacement fusion:
- the method does not retain the color information or grayscale information of the visible light image, but only its edge contour information, which is superimposed on the infrared image to form the fused image.
- FIG. 3 illustrates a flow chart of an image registration implementation method in accordance with an example embodiment of the present disclosure.
- the image registration implementation method includes the following two steps.
- in step S210, the detectors are calibrated prior to image registration.
- in step S220, the detectors calibrated in step S210 are applied to the image registration implementation method of the embodiment of the present invention.
- the step S210 may include the following steps.
- in step S211, the detectors and lenses are mounted.
- the lens of the visible light detector is adjusted so that its optical axis is parallel to that of the infrared detector, with no rotation angle between the visible light and infrared detectors, which lie in the same parallel plane.
- assume the distance between the test target and the detector is 1-10 meters during calibration. It should be noted that the specific target distance values chosen (here 1 meter, 2 meters, up to 10 meters), the number of groups (here 10 groups), and whether the groups are equally or unequally spaced (here equally spaced, differing by 1 m between groups) can all be set as required and do not limit the invention.
- in step S212, n is initially set to 1, where n is a positive integer between 1 and 10.
- in step S213, the distance between the detector and the test target is set to n meters.
- in step S214, feature points on the test target are selected, and their corresponding coordinates on the test visible light image and the test infrared image captured by the detectors are recorded.
- a high-temperature object with a distinctive shape may be selected as the test target. Ensure that within the selected range of target distances (for example, within 1-10 meters) the object is clearly imaged simultaneously on the test infrared image and the test visible light image, with at least two pairs of clear feature points; a point on the test infrared image and the corresponding point on the test visible light image are called a pair. Two feature points are selected on each test infrared image, corresponding to two feature points on the test visible light image.
- two pairs of feature points are taken as an example, but the present disclosure is not limited thereto; in other embodiments, more pairs of feature points may be selected.
- the two pairs of feature points are manually selected as two pairs of registration control points, and their corresponding coordinates on the test infrared image and the test visible light image are recorded.
- in step S215, according to the affine transformation formula below and the corresponding coordinates of the two pairs of registration control points on the test visible light image and the test infrared image, the affine transformation parameters k1, k2, t1, t2 corresponding to target distance L are calculated.
- the test infrared image is taken as the reference image and the test visible light image as the image to be registered in the example below.
- in other embodiments, the test visible light image may serve as the reference image and the test infrared image as the image to be registered; since the field of view of a general infrared image is small, the infrared image is used as the reference and the matching area is sought in the visible light image. If a visible light image with a large field of view were used as the reference image, parts of the final fused image might have no corresponding infrared image.
- the affine transformation formula is: x' = k1·x + t1, y' = k2·y + t2.
- during calibration, (x', y') are the coordinates of a registration control point in the image to be registered,
- and (x, y) are the corresponding coordinates of that control point in the reference image.
- in application, (x', y') are the coordinates of a pixel in the visible light image before the transformation,
- and (x, y) are the coordinates to which that pixel maps after the transformation;
- k1 and k2 are the scaling coefficients in the x and y directions, respectively, and t1 and t2 are the translation coefficients in the x and y directions, respectively.
- in step S216, it is judged whether n is less than 10: when n is not less than 10 (n = 10), the process proceeds to step S218; when n is less than 10, it proceeds to step S217.
- in step S217, n is incremented by 1, and the process jumps back to step S213 to cyclically compute the mapping between the next set of affine transformation parameters and the target distance.
- in step S218, the mappings between the 10 sets of affine transformation parameters obtained above and the target distance are fitted, and the relationship between the target distance L and the affine transformation parameters is obtained and saved.
- through the 10 iterations above, the mappings between the 10 sets of affine transformation parameters k1, k2, t1, t2 and the target distance L (1 m, 2 m, ..., 10 m, respectively) are obtained,
- and fitting then yields the k1-L, k2-L, t1-L, t2-L curves, i.e., the relationships between the target distance L and the affine transformation parameters, which are saved.
- the fitting can be performed as a quadratic polynomial fit using Excel, MATLAB, or the like; this is a mature technique and is not described further here.
- in application, the target distance L between the measured target and the detector can be obtained by a ranging module (for example, a laser ranging module), and the values of the affine transformation parameters k1, k2, t1, t2 corresponding to that distance are calculated from the saved k1-L, k2-L, t1-L, t2-L relationships.
- the step S220 may include the following steps.
- in step S221, the distance L between the measured target and the detector is measured.
- in step S222, the affine transformation parameters k1, k2, t1, t2 of the image to be registered of the measured target are calculated.
- in step S223, the translation amounts and scaling amounts of the image to be registered are calculated.
- the affine transformation parameter k1 obtained in step S222 can be used as the scaling amount of the image to be registered in the x direction, and k2 as the scaling amount in the y direction;
- the affine transformation parameter t1 is used as the translation amount of the image to be registered in the x direction,
- and the affine transformation parameter t2 as the translation amount of the image to be registered in the y direction.
- in step S224, the image is registered.
- according to the translation and scaling amounts above, the visible light image is translated vertically and horizontally and enlarged or reduced, achieving automatic registration.
- the embodiment has the following beneficial effects: on the one hand, in the image fusion of the infrared image and the visible light image, registration is first achieved from the image registration parameters, greatly reducing the computation of the numerical transformation and increasing computation speed, which speeds up image fusion and meets the real-time requirements of image processing; on the other hand, in this fusion method the infrared detector and the visible light detector are fixed side by side on the same turntable with both optical axes kept parallel to the imaging coordinate axes, so the elevation and azimuth angles of the two images are consistent, and during image registration only the scaling and translation amounts need to be adjusted, reducing the difficulty of image registration.
- embodiments of the present invention also provide an apparatus for merging infrared images and visible light images.
- the devices below are all designed to implement the steps of the foregoing method, but the present invention is not limited to the following embodiments: any device that can implement the above method falls within the scope of the present invention. In the following description, content identical to the foregoing method is omitted to save space.
- FIG. 4 illustrates a block diagram of an apparatus structure for infrared image and visible light image fusion according to an exemplary embodiment of the present disclosure.
- the apparatus 100 for fusing the infrared image and the visible light image includes: a first color space conversion module 110, a fused luminance component acquisition module 120, a pseudo color infrared image obtaining module 130, a second color space conversion module 140, and a color space inverse conversion module 150.
- the first color space conversion module 110 is configured to convert the visible light image from the RGB color space to the YUV color space to obtain its luminance component.
- the fused luminance component acquisition module 120 is configured to extract edge information of the visible light image luminance component, and fuse the edge information and the luminance component of the infrared image to obtain a fused luminance component.
- the pseudo color infrared image obtaining module 130 is configured to perform pseudo coloring on the infrared image to obtain a pseudo color infrared image.
- the second color space conversion module 140 is configured to convert the pseudo color infrared image from the RGB color space to the YUV color space to obtain its hue component and saturation component.
- the color space inverse conversion module 150 is configured to inversely convert the fused luminance component, the hue component, and the saturation component in color space to obtain a pseudo color fused image.
- the apparatus 100 may further include: an image acquisition module, configured to separately collect the visible light image and the infrared image of the same viewing angle.
- the apparatus 100 may further include: an image registration module for performing image registration on the infrared image and the visible light image.
- the apparatus 100 may further include: a mode selection module, configured to select any one of a plurality of preset output modes.
- FIG. 5 illustrates a block diagram of an apparatus structure for infrared image and visible light image fusion according to another example embodiment of the present disclosure.
- the apparatus 200 for fusing the infrared image and the visible light image may include: an infrared camera 201 (infrared detector) and a visible light camera 202 (visible light detector); and an image acquisition module 203, which collects
- the same-viewing-angle infrared image and visible light image captured by the infrared camera 201 and the visible light camera 202;
- an image preprocessing module 204, which performs preprocessing of the visible light image and the infrared image such as denoising, dead-pixel removal, non-uniformity correction, and pseudo-color rendering of the infrared image;
- a laser ranging module 205, configured to measure the distance between the target (test target and/or measured target) and the infrared detector and the visible light detector; and an image registration module 206, where the machine (infrared detector, visible light detector) is calibrated before registration, the calibration process being shown in FIG. 3.
- for a fixed-focus lens, the registration control points can be manually selected and calibration is needed only once; calibration yields the translation parameters and scaling parameters (their signs indicating the translation and zoom directions), which are used, for example, to process the visible light image to obtain a pixel-matched visible light image;
- the mode selection module 207 can include a first mode, a second mode, and a third mode, where the first mode outputs only the infrared image to the display module 209 for display, and the second mode outputs only the visible light image to the display module 209 for display.
- the third mode outputs only the pseudo color fused image; when the third mode is selected, the mode selection module 207 is also connected to the image fusion module 208, which generates the pseudo color fused image and outputs it to the display module 209 for display, the fusion method being the image fusion method of the above embodiments of the present invention.
- the mode selection module 207 may further include more or fewer modes, for example: a fourth mode, in which the regions of the infrared image and the visible light image above a certain temperature are registered and fused and the resulting fused image is output to the display module for display;
- a fifth mode, in which the infrared image replaces part of the visible light region to form a picture-in-picture fused image output to the display module for display;
- or a sixth mode of full-field-of-view registration fusion, in which the fused image obtained by pixel-wise superposition with an adjustable weight ratio is output to the display module for display, and so on.
- compared with the fused images obtained in the fourth to sixth modes, whose physical details are still not rich, so that the location of a device fault point can only be roughly judged from the unfused visible light region, and whose fused portion carries both the color information of the visible light
- and the color information of the infrared image, easily confusing the observation and not conforming to the user's viewing habits,
- the final pseudo color fused image obtained by the fusion method of the embodiments of the present invention includes both the physical detail information of the visible light image and the temperature information of the infrared image.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
A method and apparatus for fusing an infrared image and a visible light image. The method includes: performing color space conversion on the visible light image to obtain its luminance component (S110); extracting edge information from the luminance component of the visible light image and fusing the edge information with the luminance component of the infrared image to obtain a fused luminance component (S120); pseudo-coloring the infrared image to obtain a pseudo-color infrared image (S130); performing color space conversion on the pseudo-color infrared image to obtain its hue component and saturation component (S140); and performing inverse color space conversion on the fused luminance component, the hue component, and the saturation component to obtain a pseudo-color fused image (S150). A fused image obtained by this method retains both the detail information of the visible light image and the temperature information of the infrared image.
Description
The present disclosure relates to the field of image processing technologies, and in particular to a method and apparatus for fusing an infrared image and a visible light image.
An infrared camera differs from a general visible light camera: it captures the infrared radiation of the target and records the target's own infrared radiation information. Although an infrared camera has good detection performance for hot targets, it is insensitive to brightness changes in the background and its imaging resolution is low, which is not conducive to interpretation by the human eye. A visible light camera, by contrast, is sensitive only to the reflection of the target scene and is unrelated to the scene's thermal contrast; it has high resolution and can provide detail information about the scene in which the target is located.
In general, because the infrared radiation characteristics of target and background differ, an infrared image can provide complete target information, but the background information it contains is blurred; conversely, a visible light image can provide fairly comprehensive background information, but the target information is not obvious. For example, in a scenario where an infrared camera is used to check whether equipment is operating normally, the judgment is made from the infrared temperature display: the image provided by the infrared camera is an infrared temperature map of the running equipment, from which the user can judge whether overheating exists. However, because the temperature of the background environment is evenly distributed, the background contrast in the image provided by the infrared camera is weak and the imaging resolution is low; the user cannot accurately identify the specific overheated part, that is, cannot tell which component of the equipment has failed, which delays repair. It may also happen that the user cannot determine from the infrared image alone what kind of equipment is shown, likewise delaying the normal repair work of maintenance personnel.
In the course of realizing the present invention, at least the following problem was found in the prior art: the background information of the infrared image provided by existing infrared cameras is blurred and of low resolution, and cannot help the user accurately determine the specific failing part of the equipment.
Therefore, a new method and apparatus for fusing an infrared image and a visible light image are needed.
The above information disclosed in this Background section is only for enhancing the understanding of the background of the present disclosure, and therefore it may contain information that does not constitute prior art already known to a person of ordinary skill in the art.
SUMMARY OF THE INVENTION
The present application discloses a method and apparatus for fusing an infrared image and a visible light image, which improve the ability to recognize hot targets and thereby enhance maintenance personnel's efficiency in locating fault targets.
Other features and advantages of the present disclosure will become apparent from the following detailed description, or will in part be learned through practice of the present disclosure.
According to one aspect of the present disclosure, a method for fusing an infrared image and a visible light image is provided, including:
performing color space conversion on the visible light image to obtain its luminance component;
extracting edge information from the luminance component of the visible light image, and fusing the edge information with the luminance component of the infrared image to obtain a fused luminance component;
pseudo-coloring the infrared image to obtain a pseudo-color infrared image;
performing color space conversion on the pseudo-color infrared image to obtain its hue component and saturation component;
performing inverse color space conversion on the fused luminance component, the hue component, and the saturation component to obtain a pseudo-color fused image.
In an exemplary embodiment of the present disclosure, the method further includes: separately acquiring the visible light image and the infrared image of the same viewing angle.
In an exemplary embodiment of the present disclosure, the method further includes: performing image registration on the infrared image and the visible light image.
In an exemplary embodiment of the present disclosure, performing image registration on the infrared image and the visible light image includes:
selecting one of the infrared image and the visible light image as a reference image, the other being the image to be registered;
acquiring registration parameters of the image to be registered;
implementing image registration of the infrared image and the visible light image according to the registration parameters of the image to be registered.
In an exemplary embodiment of the present disclosure, the reference image is the infrared image and the image to be registered is the visible light image.
In an exemplary embodiment of the present disclosure, acquiring the registration parameters of the image to be registered includes:
selecting a test target and a preset number of feature points on the test target;
acquiring a preset number of test infrared images and test visible light images of the test target at different target distances, together with the coordinates of the feature points on the test infrared images and the test visible light images;
obtaining the preset number of affine transformation parameters according to the affine transformation formula and the coordinates of the feature points;
fitting the preset number of affine transformation parameters to obtain the relationship between the affine transformation parameters and the target distance;
acquiring the registration parameters of the image to be registered according to the distance between the image to be registered and the measured target together with that relationship.
In an exemplary embodiment of the present disclosure, extracting the edge information of the visible light image includes:
filtering the luminance component of the visible light image to extract the edge information of the visible light image.
In an exemplary embodiment of the present disclosure, the luminance component of the visible light image is filtered using the following filter matrix:
In an exemplary embodiment of the present disclosure, fusing the edge information with the luminance component of the infrared image includes: performing equal-weight fusion of the edge information and the luminance component of the infrared image.
According to one aspect of the present disclosure, an apparatus for fusing an infrared image and a visible light image is provided, including:
a first color space conversion module, configured to convert the visible light image from the RGB color space to the YUV color space to obtain its luminance component;
a fused luminance component acquisition module, configured to extract edge information from the luminance component of the visible light image and fuse the edge information with the luminance component of the infrared image to obtain a fused luminance component;
a pseudo-color infrared image obtaining module, configured to pseudo-color the infrared image to obtain a pseudo-color infrared image;
a second color space conversion module, configured to convert the pseudo-color infrared image from the RGB color space to the YUV color space to obtain its hue component and saturation component;
a color space inverse conversion module, configured to convert the fused luminance component, the hue component, and the saturation component from the YUV color space to the RGB color space to obtain a pseudo-color fused image.
In an exemplary embodiment of the present disclosure, the apparatus further includes: an image acquisition module, configured to separately acquire the visible light image and the infrared image of the same viewing angle.
In an exemplary embodiment of the present disclosure, the apparatus further includes: an image registration module, configured to perform image registration on the infrared image and the visible light image.
In an exemplary embodiment of the present disclosure, the apparatus further includes: a mode selection module, configured to select any one of a plurality of preset output modes.
At least one of the above technical solutions has the following beneficial effects: according to the method and apparatus for fusing an infrared image and a visible light image of the present disclosure, the fused image carries the edge information of the visible light image and the temperature information of the infrared image, enhancing the physical details of the fused image while maintaining the color perception of the original pseudo-color infrared image, consistent with the user's habits of observing thermal infrared images.
The above and other features and advantages of the present disclosure will become more apparent from the detailed description of its example embodiments with reference to the accompanying drawings.
FIG. 1 shows a flow chart of a method for fusing an infrared image and a visible light image according to an example embodiment of the present disclosure.
FIG. 2 shows a schematic diagram of a method for fusing an infrared image and a visible light image according to an example embodiment of the present disclosure.
FIG. 3 shows a flow chart of an image registration implementation method according to an example embodiment of the present disclosure.
FIG. 4 shows a structural block diagram of an apparatus for fusing an infrared image and a visible light image according to an example embodiment of the present disclosure.
FIG. 5 shows a structural block diagram of an apparatus for fusing an infrared image and a visible light image according to another example embodiment of the present disclosure.
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in many forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided so that the present disclosure will be thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art. The same reference numerals in the figures denote the same or similar parts, and repeated description of them will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of the embodiments of the present disclosure. However, those skilled in the art will recognize that the technical solutions of the present disclosure may be practiced without one or more of these specific details, or other methods, components, materials, apparatuses, steps, and the like may be employed. In other cases, well-known structures, methods, apparatuses, implementations, materials, or operations are not shown or described in detail so as not to obscure aspects of the present disclosure.
FIG. 1 shows a flow chart of a method for fusing an infrared image and a visible light image according to an example embodiment of the present disclosure.
As shown in FIG. 1, in step S110, color space conversion is performed on the visible light image to obtain its luminance component.
In an embodiment, the visible light image may be converted from the RGB color space to the YUV color space.
YUV (also known as YCrCb) is a color encoding method adopted by European television systems (as part of PAL). YUV is mainly used to optimize the transmission of color video signals, making them backward compatible with older black-and-white televisions. Compared with the transmission of RGB video signals, its greatest advantage is that it occupies very little bandwidth (RGB requires three independent video signals to be transmitted simultaneously). 'Y' represents luminance (Luminance or Luma), i.e., the grayscale value, while 'U' and 'V' represent chrominance (Chrominance or Chroma), which describes the color and saturation of the image and specifies the color of a pixel. 'Luminance' is created from the RGB input signals by superimposing specific portions of the RGB signal. 'Chrominance' defines two aspects of color, hue and saturation, represented by Cr and Cb respectively: Cr reflects the difference between the red portion of the RGB input signal and the luminance value of the RGB signal, and Cb reflects the difference between the blue portion of the RGB input signal and the luminance value of the RGB signal.
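As a concrete illustration of the conversion in step S110, a minimal sketch follows. The BT.601-style coefficients are an assumption: the text refers only to empirical conversion formulas without reproducing them, so the exact matrix in the original publication may differ.

```python
import numpy as np

# Assumed BT.601-style RGB->YUV matrix; the patent does not reproduce its
# "empirical conversion formula", so these coefficients are illustrative.
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],   # Y (luminance)
                    [-0.147, -0.289,  0.436],   # U (chrominance)
                    [ 0.615, -0.515, -0.100]])  # V (chrominance)

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 float RGB image (values in 0..1) to YUV."""
    return rgb @ RGB2YUV.T

def yuv_to_rgb(yuv):
    """Inverse conversion, used for the final fused image in step S150."""
    return yuv @ np.linalg.inv(RGB2YUV).T
```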
In an exemplary embodiment, the method may further include: separately acquiring the visible light image and the infrared image of the same viewing angle.
To fuse the infrared image and the visible light image, the infrared image and the visible light image captured from the same viewing angle may be acquired first.
In this embodiment, the infrared image and the visible light image may be acquired by a machine such as an infrared thermal imager capable of simultaneously capturing both. Because the visible light and infrared detectors and lenses in an assembled thermal imager are fixed, in the registration-parameter step described below one machine only needs to be calibrated once before leaving the factory. However, the present invention is not limited thereto; in other embodiments, the method of the present invention can be applied to the fusion of any infrared image and visible light image of the same viewing angle. In an embodiment, the infrared image and the visible light image may be acquired by an infrared camera and a visible light camera, respectively.
In this embodiment, the lenses of the infrared camera and the visible light camera must be mounted at the same position, with their optical axes parallel and pointing in the same direction, in order to acquire infrared and visible light images captured from the same viewing angle. In another embodiment, infrared and visible light images of the same scene captured from different viewing angles may also be acquired; however, compared with the embodiment that acquires both images from the same viewing angle, this approach requires viewing-angle matching before image registration, and the computation is more complicated.
In this embodiment, to give the visible light image good clarity, a high-resolution visible light sensor may be used to capture the visible light image, for example a sensor of 1.3 megapixels or higher; the infrared image may be captured with an uncooled or cooled infrared camera. The infrared image and the visible light image in this step may be obtained in the same way as in the prior art, which will not be repeated here.
In an exemplary embodiment, the method may further include: performing image registration on the infrared image and the visible light image.
Although in the above embodiment the infrared image and the visible light image of the same viewing angle can be acquired by mounting the lenses of the infrared camera and the visible light camera in parallel, the fields of view of the two images still do not match: the position of a target on the infrared image differs greatly from its position on the visible light image, so the prior art cannot effectively use infrared and visible light images to identify the same target simultaneously. Because the targets are not co-located, the target recognition and localization methods of existing infrared monitoring systems mostly rely on a single image (infrared or visible light). When the ambient color, or the color around the target, is very close to the color of the monitored target, the visible light image recognition algorithm cannot effectively identify the target; when the ambient temperature, or the temperature around the target, is very close to the temperature of the monitored target, the infrared image recognition algorithm cannot effectively identify it. Single-image recognition may therefore often fail to recognize the target,
which lowers the target recognition rate and may lead to faults or defects being missed during detection, causing major economic losses. Using infrared and visible light complementarily to identify and locate monitored targets can therefore greatly improve the accuracy of the target recognition rate.
The viewing-angle ranges may be registered according to the resolutions of the infrared image and the visible light image so that the image scene areas of the infrared image and the visible light image are the same.
In an exemplary embodiment, performing image registration on the infrared image and the visible light image includes: selecting one of the infrared image and the visible light image as a reference image, the other being the image to be registered; acquiring registration parameters of the image to be registered; and implementing image registration of the infrared image and the visible light image according to the registration parameters of the image to be registered.
In an exemplary embodiment, the reference image is the infrared image and the image to be registered is the visible light image. In another embodiment, the visible light image may instead be selected as the reference image and the infrared image as the image to be registered.
In an exemplary embodiment, acquiring the registration parameters of the image to be registered includes: selecting a test target and a preset number of feature points on the test target; acquiring a preset number of test infrared images and test visible light images of the test target at different target distances, together with the coordinates of the feature points on the test infrared images and the test visible light images; obtaining the preset number of affine transformation parameters according to the affine transformation formula and the coordinates of the feature points; fitting the preset number of affine transformation parameters to obtain the relationship between the affine transformation parameters and the target distance; and acquiring the registration parameters of the image to be registered according to the distance between the image to be registered and the measured target together with that relationship.
In an embodiment, the preset number is greater than or equal to 2. An example of image registration can be found in the embodiment shown in FIG. 3 below.
In an embodiment, the infrared image and the visible light image may first be preprocessed separately during image registration, which can further improve the accuracy of registration. Preprocessing the infrared image and the visible light image may include image denoising, image enhancement, and image transformation: in the denoising step, the infrared and visible light images are spatially filtered; in the enhancement step, the denoised infrared and visible light images are histogram-enhanced; and in the transformation step, the enhanced infrared and visible light images undergo wavelet transformation and the like. Of course, the present disclosure does not limit this; any existing image preprocessing method can be applied in the embodiments of the present invention.
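A minimal sketch of such preprocessing, assuming OpenCV; the kernel size and the use of global histogram equalization are illustrative choices, and the optional wavelet-transform stage is omitted:

```python
import cv2

def preprocess(gray):
    """Denoise then histogram-enhance a single-channel 8-bit image.

    Illustrative stand-in for the denoising and enhancement steps above;
    the 3x3 kernel and global equalization are assumptions.
    """
    denoised = cv2.GaussianBlur(gray, (3, 3), 0)  # spatial filtering
    return cv2.equalizeHist(denoised)             # histogram enhancement
```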
In step S120, edge information is extracted from the luminance component of the visible light image, and the edge information is fused with the luminance component of the infrared image to obtain a fused luminance component.
In an exemplary embodiment, extracting the edge information of the visible light image includes: filtering the luminance component of the visible light image to extract the edge information of the visible light image.
In an exemplary embodiment, the luminance component of the visible light image may be filtered using the following filter matrix:
In other embodiments, the luminance component of the visible light image may instead be filtered sequentially with a 3*3 Gaussian filter and a 3*3 Laplacian filter. In this embodiment, to reduce computation, the above 5*5 filter matrix is applied directly in a single pass.
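The 5*5 matrix itself is not reproduced in this text. As a stand-in, the sketch below uses a widely cited 5*5 Laplacian-of-Gaussian approximation; this kernel is an assumption, not the patent's actual matrix.

```python
import cv2
import numpy as np

# Common 5*5 LoG approximation; a stand-in for the patent's matrix.
LOG_5X5 = np.array([[0, 0,   1, 0, 0],
                    [0, 1,   2, 1, 0],
                    [1, 2, -16, 2, 1],
                    [0, 1,   2, 1, 0],
                    [0, 0,   1, 0, 0]], dtype=np.float32)

def extract_edges(y_vis):
    """Filter the visible-light luminance component, keeping only edges."""
    return cv2.filter2D(y_vis.astype(np.float32), -1, LOG_5X5)
```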
In an exemplary embodiment, fusing the edge information with the luminance component of the infrared image includes: performing equal-weight fusion of the edge information and the luminance component of the infrared image.
It should be noted that although the above embodiment fuses the edge information of the visible light image and the luminance component of the infrared image with equal weights, in other embodiments the following formula may be used for fusion:
F = ω1A + ω2B, ω1 + ω2 = 1
where A and B represent the edge information of the visible light image and the luminance component of the infrared image respectively, ω1 and ω2 represent the weights of A and B, their sum being 1, and F represents the fused luminance component. The equal-weight fusion above is equivalent to setting ω1 = ω2 = 0.5. The weights can be adjusted according to actual needs, as long as they always sum to 1.
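This weighted fusion translates directly into code; a minimal sketch:

```python
def fuse_luminance(edge_vis, y_ir, w1=0.5, w2=0.5):
    """F = w1*A + w2*B with w1 + w2 = 1; the defaults give the
    equal-weight fusion (w1 = w2 = 0.5) used in this embodiment."""
    assert abs(w1 + w2 - 1.0) < 1e-9, "weights must sum to 1"
    return w1 * edge_vis + w2 * y_ir
```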
In step S130, the infrared image is pseudo-colored to obtain a pseudo-color infrared image.
Infrared imaging technology is a radiation-information detection technology used to convert the temperature distribution of an object's surface into an image visible to the human eye. That image is the infrared image: it reflects the infrared radiation capability of the object's surface and visually characterizes and displays the distribution of the infrared radiation temperature field on the measured target's surface. Because the infrared image is a black-and-white grayscale image with a small dynamic range of gray values, it is difficult for the human eye to obtain detailed radiation information about the target from this grayscale information. The human eye can usually distinguish only twenty-odd gray levels, yet it can distinguish tens of millions of colors. Exploiting the high resolving power and sensitivity of human color vision to pseudo-colorize infrared images therefore enhances scene understanding, highlights targets, enables faster and more accurate detection and recognition of targets, and reduces observer fatigue. Industrial thermal imagers currently use a color table to map the grayscale image into a color image, i.e., a pseudo-color infrared image, enhancing the contrast between different gray levels of the image so that interpreters can read the image more accurately.
However, because the resolution of infrared images is low, even the pseudo-color display method only reveals the temperature information of certain local regions of the image (or of large targets); the specific detail information of the imaged target is hard to observe. A high-resolution visible light image, by contrast, has high spatial resolution and can reflect such detail information.
In step S140, color space conversion is performed on the pseudo-color infrared image to obtain its hue component and saturation component.
In this embodiment, the color space conversion refers to converting the pseudo-color infrared image from the RGB color space to the YUV color space.
In step S150, inverse color space conversion is performed on the fused luminance component, the hue component, and the saturation component to obtain a pseudo-color fused image.
In this embodiment, the inverse color space conversion refers to converting the fused luminance component, the hue component, and the saturation component from the YUV color space to the RGB color space.
In an embodiment, it may further be determined whether the temperature represented by the pseudo-color fused image exceeds a temperature threshold; when the threshold is exceeded, an alarm message may be sent to the user so that the user can take timely measures. The temperature threshold may be set according to the operating limit temperatures of the different components in the equipment.
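A sketch of this alarm check; the patent does not specify how per-pixel temperatures are recovered, so the temperature_map input is a hypothetical per-pixel reading from the radiometric data:

```python
def check_overheat(temperature_map, limit_celsius):
    """Return True (raise an alarm) if any pixel exceeds the component's
    operating-limit temperature; temperature_map is a hypothetical NumPy
    array of per-pixel temperatures in degrees Celsius."""
    return bool((temperature_map > limit_celsius).any())
```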
The infrared and visible light image fusion method provided by this embodiment fuses the infrared image with a high-resolution visible light image, so that the fused image both reflects the approximate temperature information of the target and reveals the target's detail information. Using image fusion to improve the spatial resolution of the infrared image not only helps improve the interpretation accuracy and efficiency of image interpreters but also aids their interpretation of the image, thereby overcoming the inability to accurately recognize the temperature distribution inside the equipment in an all-visible-light image and the inability to accurately distinguish the type of object in an all-infrared image.
In some embodiments, on the one hand, by matching and then fusing the infrared and visible light images captured from the same viewing angle, the user can fuse the two images so that components exceeding a specified temperature limit are displayed precisely, helping users better identify and report suspect components and enabling maintenance personnel to complete repairs at the earliest opportunity. On the other hand, the method is simple to implement and can be realized in a hardware description language; it is also fast and can run in real time.
FIG. 2 shows a schematic diagram of a method for fusing an infrared image and a visible light image according to an example embodiment of the present disclosure.
As shown in FIG. 2, the visible light image VIS and infrared image IR of the measured target are first input. The visible light image VIS is converted into the YUV color space, giving the luminance component Yvis, hue component Uvis, and saturation component Vvis of the visible light image VIS. Because the infrared image IR captured by the infrared camera is a grayscale image, the infrared image IR is pseudo-colored to obtain the pseudo-color infrared image IR, which is then converted into the YUV color space to obtain its luminance component Yir, hue component Uir, and saturation component Vir.
The infrared image gray values and a preconfigured pseudo-color lookup table may be used to pseudo-color the infrared image IR and generate the target's pseudo-color infrared image IR. Specifically, this may include: reading the gray value of each pixel of the infrared image, and then mapping pixels with the same gray value to the same color using the colors defined in the pseudo-color lookup table, thereby generating the pseudo-color infrared image. The pseudo-color infrared image IR and the visible light image VIS of the measured target are then converted from the RGB color space to the YUV color space, giving YUV color space representations of both.
In this embodiment, pseudo-color palettes are provided in the thermal imager for the user to select. The pseudo-color lookup table covers gray values in the range 0-255, each gray value corresponding to three color values R/G/B. For each pixel of the grayscale image, the three corresponding RGB color values are found in the pseudo-color lookup table according to the pixel's gray value, forming the pseudo-color image.
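Such a lookup-table mapping is a single fancy-indexing operation in NumPy; a minimal sketch, in which the 'hot'-style palette is an illustrative assumption rather than a palette from the patent:

```python
import numpy as np

def pseudo_color(ir_gray, lut):
    """Map an 8-bit grayscale IR image through a 256 x 3 R/G/B lookup
    table: lut[g] is the color for gray value g, as described above."""
    assert lut.shape == (256, 3)
    return lut[ir_gray]  # fancy indexing: (H, W) -> (H, W, 3)

# Illustrative "hot"-style palette (an assumption, not a palette from the
# patent): black -> red -> yellow -> white as the gray value rises.
g = np.arange(256)
HOT_LUT = np.stack([np.clip(3 * g, 0, 255),
                    np.clip(3 * g - 255, 0, 255),
                    np.clip(3 * g - 510, 0, 255)], axis=1).astype(np.uint8)
```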
In this embodiment, color space conversion is performed on the pseudo-color infrared image IR and the color visible light image VIS: according to the empirical YUV/RGB color space conversion formulas, the color visible light image VIS and the pseudo-color infrared image IR may be converted from the RGB color space to the YUV color space, yielding three-channel (Y/U/V) grayscale images for each.
Empirical conversion formulas may be used for this, where R, G, and B denote the red, green, and blue channels of the visible light image or the infrared image, respectively.
In an exemplary embodiment, the luminance component Yvis of the visible light image VIS is filtered to extract edge information. In this embodiment, a 5*5 LoG filter operator may be used to filter the luminance component Yvis of the visible light image VIS, yielding a grayscale image containing only edge information.
After matrix filtering, the luminance component Yvis of the visible light image VIS becomes the filtered luminance component Yvis'. This luminance component Yvis' is weight-fused with the luminance component of the infrared image IR to obtain the fused luminance component Yblend.
The fused luminance component Yblend obtained from the visible light image VIS and the luminance component of the infrared image IR, together with the hue component Uir of the pseudo-color infrared image IR and its saturation component Vir, are converted into the RGB space to obtain the pseudo-color fused image. A corresponding empirical inverse-conversion formula may be used.
The image fusion method adopted in this embodiment differs from the conventional superposition fusion of the visible light image and the infrared image, and from picture-in-picture partial-replacement fusion: this method does not retain the color information or grayscale information of the visible light image, but only its edge contour information, which is superimposed on the infrared image to form the fused image.
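Putting the pieces together, the FIG. 2 flow can be sketched end-to-end with the helper functions from the earlier sketches (rgb_to_yuv, yuv_to_rgb, extract_edges, fuse_luminance, pseudo_color; all of these are illustrative names, not identifiers from the patent):

```python
import numpy as np

def fuse(vis_rgb, ir_gray, lut):
    """End-to-end sketch of the FIG. 2 flow (steps S110-S150)."""
    y_vis = rgb_to_yuv(vis_rgb)[..., 0]              # S110: Yvis
    edges = extract_edges(y_vis)                     # LoG filter: Yvis'
    ir_rgb = pseudo_color(ir_gray, lut) / 255.0      # S130: pseudo-color IR
    ir_yuv = rgb_to_yuv(ir_rgb)                      # S140: Yir, Uir, Vir
    fused = ir_yuv.copy()
    fused[..., 0] = fuse_luminance(edges, ir_yuv[..., 0])  # S120: Yblend
    return np.clip(yuv_to_rgb(fused), 0.0, 1.0)      # S150: back to RGB
```

Note how the hue and saturation planes (Uir, Vir) pass through untouched, which is what preserves the color perception of the original pseudo-color infrared image.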
Because the infrared and visible light images captured from the same viewing angle are matched and then fused, the user can more intuitively see key information such as the model markings, attached labels, and structural features on the equipment, judge the location of the fault point, complete the repair at the earliest opportunity, or check the result after repair. Acquiring infrared and visible light images captured from the same viewing angle also requires image registration of the two images; FIG. 3 below illustrates an implementation of this image registration.
FIG. 3 shows a flow chart of an image registration implementation method according to an example embodiment of the present disclosure.
As shown in FIG. 3, the image registration implementation method includes the following two steps.
In step S210, the detectors are calibrated before image registration.
In step S220, the detectors calibrated in step S210 above are applied in the image registration implementation method of the embodiment of the present invention.
Step S210 may include the following steps.
In step S211, the detectors and lenses are mounted.
The lens of the visible light detector is adjusted so that its optical axis is parallel to that of the infrared detector, with no rotation angle between the visible light and infrared detectors, which lie in the same parallel plane.
Assume here that the distances between the test target and the detector chosen during calibration are 1-10 meters. It should be noted that the specific target distance values chosen (here 1 meter, 2 meters, up to 10 meters), the number of groups they are divided into (here 10 groups), and whether the groups are equally or unequally spaced (here equally spaced, differing by 1 meter between groups) can all be set independently as required and do not limit the invention.
In step S212, n is initially set to 1, where n is a positive integer greater than or equal to 1 and less than or equal to 10.
In step S213, the distance between the detector and the test target is set to n meters.
In step S214, feature points on the test target are selected, and their corresponding coordinates on the test visible light image and the test infrared image captured by the detectors are recorded.
During calibration, a high-temperature object with a distinctive shape may be selected as the test target. Ensure that within the selected range of target distances from the detector (for example, within 1-10 meters) the object is clearly imaged simultaneously on the test infrared image and the test visible light image, with at least two pairs of clear feature points; a point on the test infrared image and the corresponding point on the test visible light image are called a pair. Two feature points are selected on each test infrared image, corresponding to two feature points on the test visible light image. Two pairs of feature points are used here as an example, but the present disclosure is not limited thereto; in other embodiments more pairs of feature points may be selected. The two pairs of feature points are manually chosen as two pairs of registration control points, and their corresponding coordinates on the test infrared image and the test visible light image are recorded.
In step S215, according to the affine transformation formula below and the corresponding coordinates of the two pairs of registration control points on the test visible light image and the test infrared image, the affine transformation parameters k1, k2, t1, t2 corresponding to target distance L are calculated. In the following embodiment, the test infrared image serves as the reference image and the test visible light image as the image to be registered.
It should be noted that in other embodiments the test visible light image may serve as the reference image and the test infrared image as the image to be registered. Because the field of view of a general infrared image is small, the infrared image is used as the reference image and the matching area is sought within the visible light image; if a visible light image with a large field of view were used as the reference image, parts of the final fused image might have no corresponding infrared image.
The affine transformation formula is: x' = k1·x + t1, y' = k2·y + t2.
During calibration, (x', y') are the coordinates of a registration control point in the image to be registered, and (x, y) are the corresponding coordinates of that control point in the reference image. During application, (x', y') are the coordinates of a pixel in the test visible light image before the transformation, and (x, y) are the coordinates to which that pixel is mapped after the transformation; k1 and k2 are the scaling coefficients in the x and y directions, respectively, and t1 and t2 are the translation coefficients in the x and y directions, respectively.
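Under these parameter definitions the warp is a pure scale-plus-translation affine (no rotation term). A minimal sketch using OpenCV's warpAffine:

```python
import cv2
import numpy as np

def register_visible(vis, k1, k2, t1, t2):
    """Apply x' = k1*x + t1, y' = k2*y + t2 to the visible-light image.
    warpAffine takes the 2 x 3 matrix [[k1, 0, t1], [0, k2, t2]]."""
    m = np.float32([[k1, 0.0, t1],
                    [0.0, k2, t2]])
    h, w = vis.shape[:2]
    return cv2.warpAffine(vis, m, (w, h))
```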
In step S216, it is judged whether n is less than 10: when n is not less than 10 (n = 10), the process proceeds to step S218; when n is less than 10, it proceeds to step S217.
In step S217, n is incremented by 1, and the process jumps back to step S213 to cyclically compute the mapping between the next set of affine transformation parameters and the target distance.
In step S218, the mappings between the 10 sets of affine transformation parameters obtained above and the target distance are fitted, and the relationship between the target distance L and the affine transformation parameters is obtained and saved.
Through the 10 iterations above, the mappings between the 10 sets of affine transformation parameters k1, k2, t1, t2 and the target distance L (1 meter, 2 meters, ..., 10 meters, respectively) are obtained; fitting then yields the k1-L, k2-L, t1-L, t2-L curves, i.e., the relationships between the target distance L and the affine transformation parameters, which are saved.
The fitting can be performed as a quadratic polynomial fit using Excel, MATLAB, or the like; this is a mature technique and is not described further here.
In application, the target distance L between the measured target and the detector can be obtained by a ranging module (for example, a laser ranging module), and the values of the affine transformation parameters k1, k2, t1, t2 corresponding to the target distance L between the measured target and the detector are calculated from the saved k1-L, k2-L, t1-L, t2-L relationships.
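The same quadratic fit can be sketched with NumPy; the sample k1 values below are illustrative placeholders, not calibration data from the patent:

```python
import numpy as np

L = np.arange(1.0, 11.0)                     # target distances: 1 m .. 10 m
k1_samples = np.array([0.52, 0.55, 0.57, 0.58, 0.60,
                       0.61, 0.62, 0.62, 0.63, 0.63])  # illustrative only

k1_of_L = np.poly1d(np.polyfit(L, k1_samples, deg=2))  # quadratic fit

# At run time, evaluate the saved curve at the measured target distance:
print(k1_of_L(4.3))   # k1 for a target 4.3 m away
```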
Step S220 may include the following steps.
In step S221, the distance L between the measured target and the detector is measured.
In step S222, the affine transformation parameters k1, k2, t1, t2 of the image to be registered of the measured target are calculated.
In step S223, the translation amounts and scaling amounts of the image to be registered are calculated.
The affine transformation parameter k1 calculated in step S222 above may be used as the scaling amount of the image to be registered in the x direction, k2 as its scaling amount in the y direction, t1 as its translation amount in the x direction, and t2 as its translation amount in the y direction.
In step S224, the image is registered.
According to the translation and scaling amounts of the image to be registered above, the visible light image is translated vertically and horizontally and enlarged or reduced, achieving automatic registration.
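Steps S221 to S224 then reduce to evaluating the four saved curves at the measured distance and warping the visible light image; a sketch reusing register_visible and the np.poly1d-style fits from the sketches above (the dict container format is an assumption):

```python
def register_at_distance(vis, distance_m, fits):
    """Steps S221-S224: evaluate the fitted curves at the measured
    distance and warp the visible image. `fits` maps 'k1', 'k2', 't1',
    't2' to callables such as np.poly1d objects (an assumed format)."""
    k1, k2 = fits["k1"](distance_m), fits["k2"](distance_m)
    t1, t2 = fits["t1"](distance_m), fits["t2"](distance_m)
    return register_visible(vis, k1, k2, t1, t2)
```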
This embodiment has the following beneficial effects. On the one hand, in the image fusion of the infrared image and the visible light image, image registration is first achieved from the image registration parameters, greatly reducing the computation of the numerical transformation and increasing computation speed; this speeds up image fusion and meets the real-time requirements of image processing. On the other hand, in this infrared and visible light image fusion method, the infrared detector and the visible light detector are fixed side by side on the same turntable with both optical axes kept parallel to the imaging coordinate axes, so the elevation and azimuth angles of the two images are consistent; during image registration only the scaling ratio and translation amount need to be adjusted, reducing the difficulty of image registration.
To implement the above method embodiments, other embodiments of the present invention further provide an apparatus for fusing an infrared image and a visible light image. It should first be noted that, since the following embodiments implement the foregoing method embodiments, the apparatuses are all designed to carry out the steps of the foregoing method; however, the present invention is not limited to the embodiments below, and any apparatus that can implement the above method falls within the scope of protection of the present invention. In the following description, content identical to the foregoing method is omitted to save space.
FIG. 4 shows a structural block diagram of an apparatus for fusing an infrared image and a visible light image according to an example embodiment of the present disclosure.
As shown in FIG. 4, the apparatus 100 for fusing an infrared image and a visible light image includes: a first color space conversion module 110, a fused luminance component acquisition module 120, a pseudo-color infrared image obtaining module 130, a second color space conversion module 140, and a color space inverse conversion module 150.
The first color space conversion module 110 is configured to convert the visible light image from the RGB color space to the YUV color space to obtain its luminance component.
The fused luminance component acquisition module 120 is configured to extract edge information from the luminance component of the visible light image and fuse the edge information with the luminance component of the infrared image to obtain a fused luminance component.
The pseudo-color infrared image obtaining module 130 is configured to pseudo-color the infrared image to obtain a pseudo-color infrared image.
The second color space conversion module 140 is configured to convert the pseudo-color infrared image from the RGB color space to the YUV color space to obtain its hue component and saturation component.
The color space inverse conversion module 150 is configured to inversely convert the fused luminance component, the hue component, and the saturation component in color space to obtain a pseudo-color fused image.
In an exemplary embodiment, the apparatus 100 may further include: an image acquisition module, configured to separately acquire the visible light image and the infrared image of the same viewing angle.
In an exemplary embodiment, the apparatus 100 may further include: an image registration module, configured to perform image registration on the infrared image and the visible light image.
In an exemplary embodiment, the apparatus 100 may further include: a mode selection module, configured to select any one of a plurality of preset output modes.
FIG. 5 shows a structural block diagram of an apparatus for fusing an infrared image and a visible light image according to another example embodiment of the present disclosure.
As shown in FIG. 5, the apparatus 200 for fusing an infrared image and a visible light image may include: an infrared camera 201 (infrared detector) and a visible light camera 202 (visible light detector); an image acquisition module 203, which can collect the same-viewing-angle infrared image and visible light image captured by the infrared camera 201 and the visible light camera 202; an image preprocessing module 204, which performs preprocessing of the visible light image and the infrared image such as denoising, dead-pixel removal, non-uniformity correction, and pseudo-color rendering of the infrared image; a laser ranging module 205, configured to measure the distance between the target (test target and/or measured target) and the infrared detector and the visible light detector; an image registration module 206, where the machine (infrared detector and visible light detector) is calibrated before registration, the calibration process being shown in FIG. 3; for a fixed-focus lens, the registration control points can be selected manually and calibration is needed only once, yielding the translation parameters and scaling parameters after calibration (their signs indicating the translation and zoom directions), which are used, for example, to process the visible light image to obtain a pixel-matched visible light image; and a mode selection module 207, which can include a first mode, a second mode, and a third mode, where the first mode outputs only the infrared image to a display module 209 for display, the second mode outputs only the visible light image to the display module 209 for display, and the third mode outputs only the pseudo-color fused image; when the third mode is selected, the mode selection module 207 is also connected to an image fusion module 208, which generates the pseudo-color fused image and outputs it to the display module 209 for display, the fusion method here being the image fusion method of the above embodiments of the present invention.
In other embodiments, the mode selection module 207 may include more or fewer modes, for example: a fourth mode, in which the regions of the infrared image and the visible light image above a certain temperature are registered and fused and the resulting fused image is output to the display module for display; a fifth mode, in which the infrared image replaces part of the visible light region to form a picture-in-picture fused image that is output to the display module for display; or a sixth mode of full-field-of-view registration fusion, in which the fused image obtained by pixel-wise superposition with an adjustable weight ratio is output to the display module for display; and so on.
Compared with the fused images obtained in the fourth to sixth modes above, whose physical details are still not rich, so that the location of a device fault point can only be roughly judged from the unfused visible light region, and whose fused portion carries both the color information of the visible light and the color information of the infrared image, easily confusing the observation and failing to match the user's viewing habits, the final pseudo-color fused image obtained by the fusion method adopted in the embodiments of the present invention includes both the physical detail information of the visible light image and the temperature information of the infrared image.
The exemplary embodiments of the present disclosure have been specifically shown and described above. It should be understood that the present disclosure is not limited to the disclosed embodiments; on the contrary, it is intended to cover the various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Claims (13)
- A method for fusing an infrared image and a visible light image, characterized by comprising: performing color space conversion on the visible light image to obtain its luminance component; extracting edge information from the luminance component of the visible light image, and fusing the edge information with the luminance component of the infrared image to obtain a fused luminance component; pseudo-coloring the infrared image to obtain a pseudo-color infrared image; performing color space conversion on the pseudo-color infrared image to obtain its hue component and saturation component; and performing inverse color space conversion on the fused luminance component, the hue component, and the saturation component to obtain a pseudo-color fused image.
- The method according to claim 1, characterized by further comprising: separately acquiring the visible light image and the infrared image of the same viewing angle.
- The method according to claim 2, characterized by further comprising: performing image registration on the infrared image and the visible light image.
- The method according to claim 3, characterized in that performing image registration on the infrared image and the visible light image comprises: selecting one of the infrared image and the visible light image as a reference image, the other being the image to be registered; acquiring registration parameters of the image to be registered; and implementing image registration of the infrared image and the visible light image according to the registration parameters of the image to be registered.
- The method according to claim 4, characterized in that the reference image is the infrared image and the image to be registered is the visible light image.
- The method according to claim 4, characterized in that acquiring the registration parameters of the image to be registered comprises: selecting a test target and a preset number of feature points on the test target; acquiring a preset number of test infrared images and test visible light images of the test target at different target distances, together with the coordinates of the feature points on the test infrared images and the test visible light images; obtaining the preset number of affine transformation parameters according to the affine transformation formula and the coordinates of the feature points; fitting the preset number of affine transformation parameters to obtain the relationship between the affine transformation parameters and the target distance; and acquiring the registration parameters of the image to be registered according to the distance between the image to be registered and the measured target together with that relationship.
- The method according to claim 1, characterized in that extracting the edge information of the visible light image comprises: filtering the luminance component of the visible light image to extract the edge information of the visible light image.
- The method according to claim 1, characterized in that fusing the edge information with the luminance component of the infrared image comprises: performing equal-weight fusion of the edge information and the luminance component of the infrared image.
- An apparatus for fusing an infrared image and a visible light image, characterized by comprising: a first color space conversion module, configured to convert the visible light image from the RGB color space to the YUV color space to obtain its luminance component; a fused luminance component acquisition module, configured to extract edge information from the luminance component of the visible light image and fuse the edge information with the luminance component of the infrared image to obtain a fused luminance component; a pseudo-color infrared image obtaining module, configured to pseudo-color the infrared image to obtain a pseudo-color infrared image; a second color space conversion module, configured to convert the pseudo-color infrared image from the RGB color space to the YUV color space to obtain its hue component and saturation component; and a color space inverse conversion module, configured to convert the fused luminance component, the hue component, and the saturation component from the YUV color space to the RGB color space to obtain a pseudo-color fused image.
- The apparatus according to claim 10, characterized by further comprising: an image acquisition module, configured to separately acquire the visible light image and the infrared image of the same viewing angle.
- The apparatus according to claim 11, characterized by further comprising: an image registration module, configured to perform image registration on the infrared image and the visible light image.
- The apparatus according to claim 12, characterized by further comprising: a mode selection module, configured to select any one of a plurality of preset output modes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP17865257.4A EP3534326A4 (en) | 2016-10-31 | 2017-06-22 | METHOD AND DEVICE FOR COMBINING AN INFRARED IMAGE AND A VISIBLE LIGHT IMAGE |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610930903.9 | 2016-10-31 | ||
CN201610930903.9A CN106548467B (zh) | 2016-10-31 | 2016-10-31 | 红外图像和可见光图像融合的方法及装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018076732A1 true WO2018076732A1 (zh) | 2018-05-03 |
Family
ID=58393524
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/089508 WO2018076732A1 (zh) | 2016-10-31 | 2017-06-22 | 红外图像和可见光图像融合的方法及装置 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3534326A4 (zh) |
CN (1) | CN106548467B (zh) |
WO (1) | WO2018076732A1 (zh) |
Families Citing this family (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106548467B (zh) * | 2016-10-31 | 2019-05-14 | 广州飒特红外股份有限公司 | Method and device for fusing an infrared image and a visible light image
CN108694709B (zh) * | 2017-04-12 | 2021-06-29 | 深圳市朗驰欣创科技股份有限公司 | Image fusion method and device
CN109040534A (zh) * | 2017-06-12 | 2018-12-18 | 杭州海康威视数字技术股份有限公司 | Image processing method and image acquisition device
CN107491781A (zh) * | 2017-07-21 | 2017-12-19 | 国家电网公司 | Method for fusing visible light and infrared sensor data of an inspection robot
CN107478340B (zh) * | 2017-07-25 | 2019-08-06 | 许继集团有限公司 | Converter valve monitoring method and system based on image fusion
CN108154493B (zh) * | 2017-11-23 | 2021-11-30 | 南京理工大学 | FPGA-based pseudo-color fusion algorithm for dual-band infrared images
CN108198157A (zh) * | 2017-12-22 | 2018-06-22 | 湖南源信光电科技股份有限公司 | Heterogeneous image fusion method based on salient target region extraction and NSST
CN108303182A (zh) * | 2017-12-30 | 2018-07-20 | 广东金泽润技术有限公司 | Infrared imaging temperature monitoring system
CN108090477A (zh) * | 2018-01-23 | 2018-05-29 | 北京易智能科技有限公司 | Face recognition method and device based on multispectral fusion
CN110136183B (zh) | 2018-02-09 | 2021-05-18 | 华为技术有限公司 | Image processing method and apparatus, and camera device
CN109272549B (zh) * | 2018-08-31 | 2021-04-23 | 维沃移动通信有限公司 | Method for determining the position of an infrared hot spot, and terminal device
CN109360177B (zh) * | 2018-10-17 | 2021-09-28 | 成都森川科技股份有限公司 | Fast wavelet fusion method for thermal and optical images of fast-moving objects
CN110246108B (zh) * | 2018-11-21 | 2023-06-20 | 浙江大华技术股份有限公司 | Image processing method and device, and computer-readable storage medium
WO2020113408A1 (zh) * | 2018-12-04 | 2020-06-11 | 深圳市大疆创新科技有限公司 | Image processing method and device, unmanned aerial vehicle, system, and storage medium
CN111325701B (zh) * | 2018-12-14 | 2023-05-09 | 杭州海康微影传感科技有限公司 | Image processing method and device, and storage medium
EP3704668A4 (en) | 2018-12-17 | 2021-04-07 | SZ DJI Technology Co., Ltd. | Image processing method and apparatus
CN109978926B (zh) * | 2018-12-29 | 2021-05-25 | 深圳市行知达科技有限公司 | Automatic image fusion method and device, and terminal equipment
CN110211083A (zh) * | 2019-06-10 | 2019-09-06 | 北京宏大天成防务装备科技有限公司 | Image processing method and device
CN110379002A (zh) * | 2019-07-09 | 2019-10-25 | 电子科技大学 | Three-dimensional reconstruction surface temperature display method based on infrared and visible light image fusion
CN110544205B (zh) * | 2019-08-06 | 2021-05-07 | 西安电子科技大学 | Image super-resolution reconstruction method based on crossed input of visible light and infrared images
CN110458787B (zh) * | 2019-08-09 | 2022-03-08 | 武汉高德智感科技有限公司 | Image fusion method and device, and computer storage medium
CN110633682B (zh) * | 2019-09-19 | 2022-07-12 | 合肥英睿系统技术有限公司 | Anomaly monitoring method, device and equipment for infrared images based on dual-light fusion
CN110766706A (zh) * | 2019-09-26 | 2020-02-07 | 深圳市景阳信息技术有限公司 | Image fusion method and device, terminal equipment and storage medium
CN110880165A (zh) * | 2019-10-15 | 2020-03-13 | 杭州电子科技大学 | Image dehazing method based on fused encoding of contour and color features
CN112767289B (zh) * | 2019-10-21 | 2024-05-07 | 浙江宇视科技有限公司 | Image fusion method and device, medium and electronic equipment
CN110719657B (zh) * | 2019-11-05 | 2024-04-09 | 贵州师范学院 | Microwave uniform heating device and method for plastics
CN111191574A (zh) * | 2019-12-26 | 2020-05-22 | 新绎健康科技有限公司 | Method and device for obtaining organ-region temperatures in facial diagnosis
CN111861951B (zh) * | 2020-06-01 | 2024-01-23 | 浙江双视科技股份有限公司 | Dual-band monitoring method, device and system based on infrared and visible light
CN111667520B (zh) * | 2020-06-09 | 2023-05-16 | 中国人民解放军63811部队 | Registration method and device for infrared and visible light images, and readable storage medium
CN111738970B (zh) * | 2020-06-19 | 2024-10-15 | 无锡英菲感知技术有限公司 | Image fusion method and device, and computer-readable storage medium
CN111815732B (zh) * | 2020-07-24 | 2022-04-01 | 西北工业大学 | Method for colorizing mid-infrared images
CN112001910A (zh) * | 2020-08-26 | 2020-11-27 | 中国科学院遗传与发育生物学研究所 | Method, device, electronic equipment and storage medium for automatically identifying the number of ears per plant
CN112053392A (zh) * | 2020-09-17 | 2020-12-08 | 南昌航空大学 | Fast registration and fusion method for infrared and visible light images
CN112132874B (zh) * | 2020-09-23 | 2023-12-05 | 西安邮电大学 | Calibration-board-free heterogeneous image registration method and device, electronic equipment and storage medium
CN112419745A (зh) * | 2020-10-20 | 2021-02-26 | 中电鸿信信息科技有限公司 | Expressway fog-patch early-warning system based on a deep fusion network
CN112614164B (zh) * | 2020-12-30 | 2024-10-11 | 杭州海康微影传感科技有限公司 | Image fusion method and device, image processing equipment and binocular system
CN112819907A (zh) * | 2021-02-01 | 2021-05-18 | 深圳瀚维智能医疗科技有限公司 | Imaging method and device based on an infrared camera, and computer-readable storage medium
CN115115566A (zh) * | 2021-03-18 | 2022-09-27 | 杭州海康消防科技有限公司 | Thermal image processing method and device
CN112991218B (zh) * | 2021-03-23 | 2024-07-09 | 北京百度网讯科技有限公司 | Image processing method, device, equipment and storage medium
CN112945396A (zh) * | 2021-04-09 | 2021-06-11 | 西安科技大学 | Body temperature detection system and method for complex moving crowds
CN113284128B (zh) * | 2021-06-11 | 2023-05-16 | 中国南方电网有限责任公司超高压输电公司天生桥局 | Image fusion display method and device based on power equipment, and computer equipment
CN113483898A (zh) * | 2021-08-04 | 2021-10-08 | 国能大渡河瀑布沟发电有限公司 | Intelligent monitoring and early warning of the operating temperature of hydro-generator excitation systems
CN113743286A (zh) * | 2021-08-31 | 2021-12-03 | 福州大学 | Target monitoring system and method based on multi-source signal fusion
CN116437198B (zh) * | 2021-12-29 | 2024-04-16 | 荣耀终端有限公司 | Image processing method and electronic device
CN114820506A (зh) * | 2022-04-22 | 2022-07-29 | 岚图汽车科技有限公司 | Defect detection method and device for hot-stamped parts, electronic equipment and storage medium
CN115311180A (zh) * | 2022-07-04 | 2022-11-08 | 优利德科技(中国)股份有限公司 | Image fusion method and device based on edge features, user terminal and medium
CN115359022A (zh) * | 2022-08-31 | 2022-11-18 | 苏州知码芯信息科技有限公司 | Power supply chip quality inspection method and system
CN116309013B (zh) * | 2023-02-08 | 2024-09-03 | 无锡英菲感知技术有限公司 | Image mapping method, device, equipment and readable storage medium
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7924312B2 (en) * | 2008-08-22 | 2011-04-12 | Fluke Corporation | Infrared and visible-light image registration |
CN101727665B (zh) * | 2008-10-27 | 2011-09-07 | 广州飒特电力红外技术有限公司 | Method and device for fusing an infrared image and a visible light image
US9451183B2 (en) * | 2009-03-02 | 2016-09-20 | Flir Systems, Inc. | Time spaced infrared image enhancement |
EP2635022A1 (en) * | 2012-02-29 | 2013-09-04 | Flir Systems AB | A method and system for performing alignment of a projection image to detected infrared (IR) radiation information |
CN103761724A (zh) * | 2014-01-28 | 2014-04-30 | 中国石油大学(华东) | Visible light and infrared video fusion method based on a surreal luminance-contrast transfer algorithm
CN104021568B (zh) * | 2014-06-25 | 2017-02-15 | 山东大学 | Automatic registration method for visible light and infrared images based on contour polygon fitting
CN104134208B (zh) * | 2014-07-17 | 2017-04-05 | 北京航空航天大学 | Coarse-to-fine infrared and visible light image registration method using geometric structure features
CN104123734A (zh) * | 2014-07-22 | 2014-10-29 | 西北工业大学 | Moving target detection method based on fusing visible light and infrared detection results
CN105069768B (zh) * | 2015-08-05 | 2017-12-29 | 武汉高德红外股份有限公司 | Visible light and infrared image fusion processing system and fusion method
CN105719263B (zh) * | 2016-01-22 | 2018-05-25 | 昆明理工大学 | Visible light and infrared image fusion method based on low-level visual features in the NSCT domain
2016
- 2016-10-31 CN CN201610930903.9A patent/CN106548467B/zh active Active
2017
- 2017-06-22 WO PCT/CN2017/089508 patent/WO2018076732A1/zh unknown
- 2017-06-22 EP EP17865257.4A patent/EP3534326A4/en not_active Withdrawn
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101339653A (zh) * | 2008-01-30 | 2009-01-07 | 西安电子科技大学 | Infrared and color visible light image fusion method based on color transfer and entropy information
CN102789640A (zh) * | 2012-07-16 | 2012-11-21 | 中国科学院自动化研究所 | Method for fusing a visible light panchromatic image with an infrared remote sensing image
CN102982518A (zh) * | 2012-11-06 | 2013-03-20 | 扬州万方电子技术有限责任公司 | Fusion method and device for infrared and visible light dynamic images
CN104683767A (zh) * | 2015-02-10 | 2015-06-03 | 浙江宇视科技有限公司 | Fog-penetrating image generation method and device
CN105989585A (zh) * | 2015-03-05 | 2016-10-05 | 深圳市朗驰欣创科技有限公司 | Method and system for fusing an infrared image and a visible light image
CN106548467A (zh) * | 2016-10-31 | 2017-03-29 | 广州飒特红外股份有限公司 | Method and device for fusing an infrared image and a visible light image
Non-Patent Citations (4)
Title |
---|
FENG, XIAOWEI: "Image Registration Algorithm Based on the Feature Points and Application", MASTER THESIS, 15 June 2009 (2009-06-15), pages 1 - 85, XP009515624 * |
See also references of EP3534326A4 * |
WANG, JIA ET AL.: "A Disguised Target Recognition Method Based on Pseudo-colour Coding and Image Fusion", JOURNAL OF DETECTION & CONTROL, vol. 30, no. 2, 30 April 2008 (2008-04-30), CN, pages 43 - 46, XP009514518, ISSN: 1008-1194 * |
WANG, JIA ET AL.: "An Algorithm to Fuse Grey-scale Infrared and Visible Light Based on Perceptual Colour Space", JOURNAL OF OPTOELECTRONICS, vol. 19, no. 9, 30 September 2008 (2008-09-30), pages 1262 - 1264, XP009514516, ISSN: 1005-0086 * |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109785277B (zh) * | 2018-12-11 | 2022-10-04 | 南京第五十五所技术开发有限公司 | Real-time infrared and visible light image fusion method
CN109785277A (zh) * | 2018-12-11 | 2019-05-21 | 南京第五十五所技术开发有限公司 | Real-time infrared and visible light image fusion method
US11346938B2 (en) | 2019-03-15 | 2022-05-31 | Msa Technology, Llc | Safety device for providing output to an individual associated with a hazardous environment
CN110428455B (zh) * | 2019-04-19 | 2022-11-04 | 中国航空无线电电子研究所 | Target registration method for visible light and far-infrared images
CN110428455A (zh) * | 2019-04-19 | 2019-11-08 | 中国航空无线电电子研究所 | Target registration method for visible light and far-infrared images
CN110322423A (zh) * | 2019-04-29 | 2019-10-11 | 天津大学 | Multimodal image target detection method based on image fusion
CN110322423B (zh) * | 2019-04-29 | 2023-03-31 | 天津大学 | Multimodal image target detection method based on image fusion
CN110473240A (zh) * | 2019-08-13 | 2019-11-19 | 陕西高速星展科技有限公司 | Image ripple processing method
CN111192229A (zh) * | 2020-01-02 | 2020-05-22 | 中国航空工业集团公司西安航空计算技术研究所 | Airborne multimodal video picture enhancement display method and system
CN111192229B (zh) * | 2020-01-02 | 2023-10-13 | 中国航空工业集团公司西安航空计算技术研究所 | Airborne multimodal video picture enhancement display method and system
CN113362261A (zh) * | 2020-03-04 | 2021-09-07 | 杭州海康威视数字技术股份有限公司 | Image fusion method
CN113362261B (zh) * | 2020-03-04 | 2023-08-11 | 杭州海康威视数字技术股份有限公司 | Image fusion method
CN111798560B (zh) * | 2020-06-09 | 2023-09-01 | 同济大学 | Method for visualizing infrared thermographic temperature data of power equipment on a three-dimensional real-scene model
CN111798560A (зh) * | 2020-06-09 | 2020-10-20 | 同济大学 | Method for visualizing infrared thermographic temperature data of power equipment on a three-dimensional real-scene model
CN114061764A (зh) * | 2020-07-27 | 2022-02-18 | 浙江宇视科技有限公司 | Human body temperature detection method and device, medium and electronic equipment
CN112001260A (зh) * | 2020-07-28 | 2020-11-27 | 国网湖南省电力有限公司 | Cable trench fault detection method based on infrared and visible light image fusion
CN112102380A (зh) * | 2020-09-11 | 2020-12-18 | 北京华捷艾米科技有限公司 | Registration method for infrared and visible light images, and related device
CN112102217A (зh) * | 2020-09-21 | 2020-12-18 | 四川轻化工大学 | Fast fusion method and system for visible light and infrared images
CN112102217B (zh) * | 2020-09-21 | 2023-05-02 | 四川轻化工大学 | Fast fusion method and system for visible light and infrared images
CN112132753B (zh) * | 2020-11-06 | 2022-04-05 | 湖南大学 | Infrared image super-resolution method and system guided by multi-scale structure images
CN112132753A (зh) * | 2020-11-06 | 2020-12-25 | 湖南大学 | Infrared image super-resolution method and system guided by multi-scale structure images
CN114092761A (зh) * | 2021-11-10 | 2022-02-25 | 复旦大学 | Substation equipment fault detection method based on bimodal data fusion
CN114881899B (zh) * | 2022-04-12 | 2024-06-04 | 北京理工大学 | Fast color-preserving fusion method and device for visible light and infrared image pairs
CN114881899A (зh) * | 2022-04-12 | 2022-08-09 | 北京理工大学 | Fast color-preserving fusion method and device for visible light and infrared image pairs
CN116086537A (зh) * | 2023-02-08 | 2023-05-09 | 杭州安脉盛智能技术有限公司 | Equipment condition monitoring method, device, equipment and storage medium
CN116086537B (zh) * | 2023-02-08 | 2024-09-24 | 杭州安脉盛智能技术有限公司 | Equipment condition monitoring method, device, equipment and storage medium
CN116137043A (зh) * | 2023-02-21 | 2023-05-19 | 长春理工大学 | Infrared image colorization method based on convolution and Transformer
CN116934815B (zh) * | 2023-09-18 | 2024-01-19 | 国网山东省电力公司嘉祥县供电公司 | Power equipment image registration method and system
CN116934815A (зh) * | 2023-09-18 | 2023-10-24 | 国网山东省电力公司嘉祥县供电公司 | Power equipment image registration method and system
CN117911401A (зh) * | 2024-03-15 | 2024-04-19 | 国网山东省电力公司泗水县供电公司 | Power equipment fault detection method, system, storage medium and equipment
CN118397786A (зh) * | 2024-05-23 | 2024-07-26 | 南京众行能源科技有限公司 | Fire detection system and method based on deep learning of visible and thermal images
Also Published As
Publication number | Publication date |
---|---|
CN106548467A (zh) | 2017-03-29 |
CN106548467B (zh) | 2019-05-14 |
EP3534326A4 (en) | 2020-05-06 |
EP3534326A1 (en) | 2019-09-04 |
Similar Documents
Publication | Title |
---|---|
WO2018076732A1 (zh) | Method and device for fusing an infrared image and a visible light image |
CN109377469B (zh) | Processing method, system and storage medium for fusing thermal imaging with visible light images |
US9729803B2 (en) | Apparatus and method for multispectral imaging with parallax correction |
CN103067734B (zh) | Method for detecting color cast in video images in a video quality diagnosis system |
KR102048369B1 (ko) | Fusion dual IR camera using LWIR and SWIR with an image fusion algorithm |
CN111738970A (zh) | Image fusion method and device, and computer-readable storage medium |
CN104168478B (zh) | Video image color cast detection method based on Lab space and a correlation function |
CN107154014A (zh) | Real-time color and depth panoramic image stitching method |
CN113762161A (zh) | Intelligent obstacle monitoring method and system |
CN111541886A (zh) | Vision enhancement system for turbid underwater environments |
CN110120012A (zh) | Video stitching method based on synchronized key-frame extraction from binocular cameras |
CN112308776A (zh) | Method for fusing image sequences with point cloud data while resolving occlusion and mis-mapping |
CN115100556B (zh) | Augmented reality method and apparatus based on image segmentation and fusion, and electronic device |
CN111145234A (zh) | Fire smoke detection method based on binocular vision |
JP5163940B2 (ja) | Image quality inspection apparatus and image quality inspection method |
CN113936017A (зh) | Image processing method and apparatus |
CN110389390B (zh) | Wide-field-of-view infrared and low-light natural-color fusion system |
CN112858331A (зh) | VR screen detection method and detection system |
Zhong et al. | Performance analysis of joint imaging system with polarized, infrared, and visible cameras for multi-sensor imaging |
CN109028234B (zh) | Range hood capable of indicating smoke levels |
KR20220040025A (ко) | Deep-learning-based thermal image reconstruction apparatus and method |
US9041815B2 (en) | Digital camera imaging evaluation module |
CN216210273U (зh) | Imaging system based on field-of-view fusion |
CN116033278B (zh) | Low-illumination image preprocessing method for monochrome-color dual cameras |
KR20130058480A (ко) | Image fusion apparatus and method therefor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 17865257 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 2017865257 Country of ref document: EP Effective date: 20190531 |