CN116823863A - Infrared image contour extraction method and device - Google Patents


Info

Publication number
CN116823863A
CN116823863A (application CN202310869373.1A)
Authority
CN
China
Prior art keywords
image
base layer
infrared
layer
infrared image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310869373.1A
Other languages
Chinese (zh)
Inventor
马天 (Ma Tian)
龙知洲 (Long Zhizhou)
陈珺娴 (Chen Junxian)
李伟萍 (Li Weiping)
唐荣富 (Tang Rongfu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weihai Zhonghe Electromechanical Technology Co ltd
Institute of Systems Engineering of PLA Academy of Military Sciences
Original Assignee
Weihai Zhonghe Electromechanical Technology Co ltd
Institute of Systems Engineering of PLA Academy of Military Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weihai Zhonghe Electromechanical Technology Co ltd, Institute of Systems Engineering of PLA Academy of Military Sciences filed Critical Weihai Zhonghe Electromechanical Technology Co ltd
Priority to CN202310869373.1A
Publication of CN116823863A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an infrared image contour extraction method and device. The method comprises the following steps: acquiring an infrared image to be processed; processing the infrared image to be processed with an infrared image decomposition model to obtain a base layer image and a detail layer image; enhancing the base layer image to obtain a base layer enhanced image; enhancing the detail layer image to obtain a detail layer enhanced image; processing the base layer enhanced image and the detail layer enhanced image with an infrared image fusion model to obtain a fused image; and extracting the edge contour of the fused image to obtain the infrared image contour information. The method improves the recognizability of infrared images to human eyes or machines and highlights contour information.

Description

Infrared image contour extraction method and device
Technical Field
The invention relates to the technical field of infrared image processing, in particular to an infrared image contour extraction method and device.
Background
Infrared thermal imaging passively detects the thermal radiation energy received by the camera. Owing to its strong environmental adaptability, strong penetrating capability, strong concealment, and all-weather operation, it is increasingly widely applied in both military and civilian fields. However, due to the imaging principle of infrared images, the characteristics of infrared systems, and various external noises (such as thermal noise, shot noise, and photo-electron fluctuation noise), the imaging quality of infrared images is poorer than that of visible-light images: lower resolution, lower signal-to-noise ratio, poorer contrast, and blurred edges. This poses a considerable challenge for image contour extraction and for the realization of subsequent advanced functions.
Due to the above characteristics of infrared images, the contour extraction task faces the following difficulties. First, because of the complex and varied noise, the denoising requirements during detection are higher. Second, due to the low contrast and blurred edges, appropriate contrast enhancement and a degree of detail enhancement are required. Third, the extracted edges are very likely to be intermittent and uneven in brightness, so edge display enhancement is needed. Consequently, the various mature visible-light edge detection algorithms can only provide a reference for infrared image contour extraction research to a certain extent; they are not fully applicable to the edge detection task of infrared images, and specialized research in this area remains limited.
In this context, how to design a suitable infrared image quality enhancement algorithm and extract an image contour that is easy for human eyes or machines to observe is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to solve the technical problem of providing an infrared image contour extraction method and device so as to achieve the purposes of improving the recognition capability of human eyes or machines on infrared images and highlighting contour information.
In order to solve the above technical problems, a first aspect of an embodiment of the present invention discloses an infrared image contour extraction method, which includes:
s1, acquiring an infrared image to be processed;
s2, processing the infrared image to be processed by utilizing an infrared image decomposition model to obtain a base layer image and a detail layer image;
s3, carrying out enhancement processing on the base layer image to obtain a base layer enhanced image;
s4, carrying out enhancement processing on the detail layer image to obtain a detail layer enhanced image;
s5, processing the base layer enhanced image and the detail layer enhanced image by using an infrared image fusion model to obtain a fusion image;
s6, extracting edge contours of the fusion images to obtain infrared image contour information.
In a first aspect of the embodiment of the present invention, the processing the infrared image to be processed by using an infrared image decomposition model to obtain a base layer image and a detail layer image includes:
s21, processing the infrared image to be processed by using an infrared image decomposition model to obtain a base layer image;
the infrared image decomposition model is as follows:
q_n = a_n · I_n + b_n

where q_n is the base layer image, I_n is the infrared image to be processed, a_n is the gain coefficient, and b_n is the deviation coefficient;
S22, subtracting the base layer image from the infrared image to be processed pixel by pixel to obtain the detail layer image.
In a first aspect of the embodiment of the present invention, the enhancing the base layer image to obtain a base layer enhanced image includes:
s31, carrying out histogram calculation on the base layer image to obtain a base layer image histogram;
s32, carrying out n-neighborhood maximum gray value calculation on the base layer image histogram to obtain a base layer maximum gray value set;
s33, carrying out average value calculation on the maximum gray value in the maximum gray value set of the base layer to obtain the gray average value of the base layer;
s34, summing the gray values of the base layer images to obtain first base layer gray information;
s35, multiplying the gray average value of the base layer by the gray information of the first base layer to obtain gray information of the second base layer;
s36, dividing the second basic layer gray level information by the stretching gray level to obtain third basic layer gray level information;
and S37, processing the base layer image by utilizing the third base layer gray level information to obtain a base layer enhanced image.
In a first aspect of the embodiment of the present invention, the enhancing the detail layer image to obtain a detail layer enhanced image includes:
s41, weighting the detail layer image by using an edge information template to obtain a strong edge image;
s42, carrying out pixel-by-pixel subtraction processing on the detail layer image and the strong edge image to obtain a high-frequency layer image;
s43, filtering the high-frequency layer image to obtain a detail layer enhanced image.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the edge information template is:
where σ is the variance of the gray values of the detail layer image, the parameter ε is taken as 0.004, and mask denotes the edge information template.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the infrared image fusion model is:
y = k1·x1 + k2·x2 + k3·x3

where k1 is the weight of the base layer enhanced image, x1 the base layer enhanced image, k2 the weight of the strong edge image, x2 the strong edge image, k3 the weight of the detail layer enhanced image, and x3 the detail layer enhanced image.
In a first aspect of the embodiment of the present invention, the extracting the edge contour of the fused image to obtain infrared image contour information includes:
s61, dividing the fusion image to obtain N sub-image blocks, wherein N is an integer;
s62, carrying out intensity statistics on the N sub-image blocks to obtain an average intensity value of each sub-image block;
s63, processing the average intensity value of each sub-image block by using a factor calculation model to obtain a gamma value of each sub-image block;
the factor calculation model is as follows:
gamma_k = log10(m_k)

where m_k is the average intensity value of the k-th sub-image block and gamma_k is the gamma value of the k-th sub-image block;
s64, taking the gamma value of each sub-image block as the central value of the image block, and interpolating each sub-image block until the size of each sub-image block is equal to that of the fusion image, so as to obtain N interpolation image blocks;
s65, gamma correction is carried out on the fusion image by utilizing the N interpolation image blocks, so that infrared image contour information is obtained;
s66, carrying out contour extraction on the corrected infrared image by using a preset edge detection method to obtain infrared image contour information.
The second aspect of the embodiment of the invention discloses an infrared image contour extraction device, which comprises:
the image acquisition module is used for acquiring an infrared image to be processed;
the infrared image decomposition module is used for processing the infrared image to be processed by utilizing the infrared image decomposition model to obtain a base layer image and a detail layer image;
the base layer enhancement module is used for enhancing the base layer image to obtain a base layer enhanced image;
the detail layer enhancement module is used for enhancing the detail layer image to obtain a detail layer enhanced image;
the image fusion module is used for processing the base layer enhanced image and the detail layer enhanced image by utilizing an infrared image fusion model to obtain a fusion image;
and the edge contour extraction module is used for extracting the edge contour of the fusion image to obtain infrared image contour information.
In a second aspect of the embodiment of the present invention, the processing the infrared image to be processed using the infrared image decomposition model to obtain a base layer image and a detail layer image includes:
s21, processing the infrared image to be processed by using an infrared image decomposition model to obtain a base layer image;
the infrared image decomposition model is as follows:
q_n = a_n · I_n + b_n

where q_n is the base layer image, I_n is the infrared image to be processed, a_n is the gain coefficient, and b_n is the deviation coefficient;
S22, subtracting the base layer image from the infrared image to be processed pixel by pixel to obtain the detail layer image.
In a second aspect of the embodiment of the present invention, the enhancement processing is performed on the base layer image to obtain a base layer enhanced image, including:
s31, carrying out histogram calculation on the base layer image to obtain a base layer image histogram;
s32, carrying out n-neighborhood maximum gray value calculation on the base layer image histogram to obtain a base layer maximum gray value set;
s33, carrying out average value calculation on the maximum gray value in the maximum gray value set of the base layer to obtain the gray average value of the base layer;
s34, summing the gray values of the base layer images to obtain first base layer gray information;
s35, multiplying the gray average value of the base layer by the gray information of the first base layer to obtain gray information of the second base layer;
s36, dividing the second basic layer gray level information by the stretching gray level to obtain third basic layer gray level information;
and S37, processing the base layer image by utilizing the third base layer gray level information to obtain a base layer enhanced image.
In a second aspect of the embodiment of the present invention, the enhancing the detail layer image to obtain a detail layer enhanced image includes:
s41, weighting the detail layer image by using an edge information template to obtain a strong edge image;
s42, carrying out pixel-by-pixel subtraction processing on the detail layer image and the strong edge image to obtain a high-frequency layer image;
s43, filtering the high-frequency layer image to obtain a detail layer enhanced image.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the edge information template is:
where σ is the variance of the gray values of the detail layer image, the parameter ε is taken as 0.004, and mask denotes the edge information template.
In a second aspect of the embodiment of the present invention, the infrared image fusion model is:
y = k1·x1 + k2·x2 + k3·x3

where k1 is the weight of the base layer enhanced image, x1 the base layer enhanced image, k2 the weight of the strong edge image, x2 the strong edge image, k3 the weight of the detail layer enhanced image, and x3 the detail layer enhanced image.
In a second aspect of the embodiment of the present invention, the extracting the edge contour of the fused image to obtain infrared image contour information includes:
s61, dividing the fusion image to obtain N sub-image blocks, wherein N is an integer;
s62, carrying out intensity statistics on the N sub-image blocks to obtain an average intensity value of each sub-image block;
s63, processing the average intensity value of each sub-image block by using a factor calculation model to obtain a gamma value of each sub-image block;
the factor calculation model is as follows:
gamma_k = log10(m_k)

where m_k is the average intensity value of the k-th sub-image block and gamma_k is the gamma value of the k-th sub-image block;
s64, taking the gamma value of each sub-image block as the central value of the image block, and interpolating each sub-image block until the size of each sub-image block is equal to that of the fusion image, so as to obtain N interpolation image blocks;
s65, gamma correction is carried out on the fusion image by utilizing the N interpolation image blocks, so that infrared image contour information is obtained;
s66, carrying out contour extraction on the corrected infrared image by using a preset edge detection method to obtain infrared image contour information.
In a third aspect, the present invention discloses another infrared image contour extraction apparatus, which includes:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform some or all of the steps in the method for extracting an infrared image profile disclosed in the first aspect of the embodiment of the present invention.
A fourth aspect of the present invention discloses a computer storage medium storing computer instructions for executing part or all of the steps in the method for extracting an infrared image profile disclosed in the first aspect of the present invention when the computer instructions are called.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
(1) Through the layering operation, the overall contrast of the infrared image is improved without losing detail information, and image noise is filtered without blurring edge information, effectively improving infrared image quality and highlighting image contours that are easy for human eyes or machines to observe;
(2) Performing enhanced display on the extracted edges effectively improves the display of weak edges in the infrared image;
(3) The method offers strong detail expression, a marked edge display enhancement effect, low cost, and ease of implementation, and meets real-time requirements.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an infrared image contour extraction method disclosed in an embodiment of the invention;
fig. 2 is a schematic structural diagram of an infrared image contour extraction device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of another infrared image contour extraction apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the present invention better understood by those skilled in the art, the following description will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, article, or device that comprises a list of steps or elements is not limited to the list of steps or elements but may, in the alternative, include other steps or elements not expressly listed or inherent to such process, method, article, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The invention discloses an infrared image contour extraction method and device, which acquire an infrared image to be processed; process it with an infrared image decomposition model to obtain a base layer image and a detail layer image; enhance the base layer image to obtain a base layer enhanced image; enhance the detail layer image to obtain a detail layer enhanced image; process the two enhanced images with an infrared image fusion model to obtain a fused image; and extract the edge contour of the fused image to obtain the infrared image contour information. The method improves the recognizability of infrared images to human eyes or machines and highlights contour information. The details are described below.
Example 1
Referring to fig. 1, fig. 1 is a flow chart of an infrared image contour extraction method according to an embodiment of the invention. The infrared image contour extraction method described in fig. 1 is applied to the field of infrared image processing, for example image recognition; the embodiment of the invention is not limited in this respect. As shown in fig. 1, the infrared image contour extraction method may include the following operations:
s1, acquiring an infrared image to be processed;
s2, processing the infrared image to be processed by utilizing an infrared image decomposition model to obtain a base layer image and a detail layer image;
s3, carrying out enhancement processing on the base layer image to obtain a base layer enhanced image;
s4, carrying out enhancement processing on the detail layer image to obtain a detail layer enhanced image;
s5, processing the base layer enhanced image and the detail layer enhanced image by using an infrared image fusion model to obtain a fusion image;
s6, extracting edge contours of the fusion images to obtain infrared image contour information.
Optionally, the processing the infrared image to be processed by using an infrared image decomposition model to obtain a base layer image and a detail layer image includes:
s21, processing the infrared image to be processed by using an infrared image decomposition model to obtain a base layer image;
the infrared image decomposition model is as follows:
q_n = a_n · I_n + b_n

where q_n is the base layer image, I_n is the infrared image to be processed, a_n is the gain coefficient, and b_n is the deviation coefficient;
S22, subtracting the base layer image from the infrared image to be processed pixel by pixel to obtain the detail layer image.
Optionally, the enhancing the base layer image to obtain a base layer enhanced image includes:
s31, carrying out histogram calculation on the base layer image to obtain a base layer image histogram;
s32, carrying out n-neighborhood maximum gray value calculation on the base layer image histogram to obtain a base layer maximum gray value set;
s33, carrying out average value calculation on the maximum gray value in the maximum gray value set of the base layer to obtain the gray average value of the base layer;
s34, summing the gray values of the base layer images to obtain first base layer gray information;
s35, multiplying the gray average value of the base layer by the gray information of the first base layer to obtain gray information of the second base layer;
s36, dividing the second basic layer gray level information by the stretching gray level to obtain third basic layer gray level information;
and S37, processing the base layer image by utilizing the third base layer gray level information to obtain a base layer enhanced image.
In this embodiment, n has a value of 5.
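Steps S31-S37 can be sketched as follows, with n = 5 as in this embodiment. The final mapping in S37 is not specified, so this sketch assumes the third base layer gray information acts as a global gain (equivalently gray_mean / stretch_level) on the base layer; all names are illustrative.

```python
import numpy as np

def enhance_base(base, n=5, stretch_level=255.0):
    """Sketch of S31-S37; the S37 mapping is an assumption."""
    base = base.astype(np.float64)
    hist, _ = np.histogram(base, bins=256, range=(0.0, 256.0))   # S31
    half = n // 2
    maxima = np.array([hist[max(0, i - half):i + half + 1].max()
                       for i in range(hist.size)])               # S32: n-neighborhood maxima
    gray_mean = maxima.mean()                                    # S33
    info1 = base.sum()                                           # S34
    if info1 == 0:                                               # guard for an all-zero image
        return base
    info2 = gray_mean * info1                                    # S35
    info3 = info2 / stretch_level                                # S36
    gain = info3 / info1                                         # S37 (assumed global gain)
    return np.clip(base * gain, 0.0, stretch_level)
```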
Optionally, the enhancing the detail layer image to obtain a detail layer enhanced image includes:
s41, weighting the detail layer image by using an edge information template to obtain a strong edge image;
s42, carrying out pixel-by-pixel subtraction processing on the detail layer image and the strong edge image to obtain a high-frequency layer image;
s43, filtering the high-frequency layer image to obtain a detail layer enhanced image.
Optionally, the edge information template is:
where σ is the variance of the gray values of the detail layer image, the parameter ε is taken as 0.004, and mask denotes the edge information template.
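A hedged sketch of S41-S43 follows. The mask formula itself is not reproduced above, so a variance-based weight σ²/(σ² + ε) is assumed for the edge information template, and a 3×3 mean filter is assumed for the S43 filtering step.

```python
import numpy as np

def enhance_detail(detail, eps=0.004):
    """Sketch of S41-S43 with an assumed mask and an assumed S43 filter."""
    sigma2 = float(detail.var())             # variance of detail-layer gray values
    mask = sigma2 / (sigma2 + eps)           # assumed edge-information weight
    strong_edge = mask * detail              # S41: weighted detail layer
    high_freq = detail - strong_edge         # S42: pixel-wise subtraction
    pad = np.pad(high_freq, 1, mode="edge")  # S43: 3x3 mean filter (assumed)
    enhanced = sum(pad[dy:dy + detail.shape[0], dx:dx + detail.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0
    return strong_edge, enhanced
```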
Optionally, the infrared image fusion model is:
y = k1·x1 + k2·x2 + k3·x3

where k1 is the weight of the base layer enhanced image, x1 the base layer enhanced image, k2 the weight of the strong edge image, x2 the strong edge image, k3 the weight of the detail layer enhanced image, and x3 the detail layer enhanced image.
Optionally, before fusion, guided filtering may be applied to the base layer enhanced image, the strong edge image, and the detail layer enhanced image respectively, to obtain a filtered base layer enhanced image, a filtered strong edge image, and a filtered detail layer enhanced image. These are fused by the formula y1 = k11·x11 + k21·x21 + k31·x31 to obtain the final fused image, where k11 is the weight of the filtered base layer enhanced image, x11 the filtered base layer enhanced image, k21 the weight of the filtered strong edge image, x21 the filtered strong edge image, k31 the weight of the filtered detail layer enhanced image, and x31 the filtered detail layer enhanced image.
The guided filter has two inputs and one output. In guided filter theory, the filtered output image q is regarded as a linear transformation of the guide image I, expressed as

q_i = a_k · I_i + b_k, for every pixel i in ω_k

where ω_k denotes a window centered on pixel k of size (2r+1)×(2r+1). The linear coefficients a_k and b_k are constant within ω_k, and their values minimize the squared difference between the output q and the input p.
The regularization parameter δ controls the smoothness of the filter. Linear regression yields the two coefficients:

a_k = ( (1/|ω|) Σ_{i∈ω_k} I_i · p_i − μ_k · p̄_k ) / (σ_k² + δ)
b_k = p̄_k − a_k · μ_k

where σ_k² and μ_k are the variance and mean of the guide image I in ω_k, p̄_k is the mean of p in ω_k, and |ω| is the number of pixels in the window. The final filtered output is calculated as

q_i = ā_i · I_i + b̄_i

where ā_i and b̄_i are the averages of a_k and b_k over all windows containing pixel i. The parameters r and δ are the regularization parameters that determine the filter size and smoothness; the inputs p and I are the input image and the guide image, respectively.
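The guided filter described above can be implemented directly with box-filter means. This numpy sketch follows the q_i = a_k·I_i + b_k formulation, computing the regression coefficients per window and then averaging them over overlapping windows (a loop-based box filter is used for clarity, not speed):

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1)x(2r+1) window with edge padding."""
    k = 2 * r + 1
    pad = np.pad(a, r, mode="edge")
    out = np.zeros(a.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def guided_filter(I, p, r=2, delta=1e-2):
    """q_i = a_k*I_i + b_k per window, coefficients averaged over windows."""
    I = I.astype(np.float64)
    p = p.astype(np.float64)
    mu = box_mean(I, r)                    # mean of guide in each window
    p_bar = box_mean(p, r)                 # mean of input
    var = box_mean(I * I, r) - mu * mu     # variance of guide (sigma_k^2)
    cov = box_mean(I * p, r) - mu * p_bar  # covariance of guide and input
    a = cov / (var + delta)                # regularized linear coefficient
    b = p_bar - a * mu
    return box_mean(a, r) * I + box_mean(b, r)   # q = a_bar*I + b_bar
```

On a constant image the filter is exact (a_k = 0, b_k equals the constant), which is a useful sanity check of the regression step.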
In this embodiment, the base layer enhanced image weight is 1, the strong edge image weight is 10, and the detail layer enhanced image weight is 4.
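With those weights, the fusion model y = k1·x1 + k2·x2 + k3·x3 reduces to a weighted sum; a minimal sketch using the embodiment's weights (1, 10, 4):

```python
import numpy as np

def fuse(base_enh, strong_edge, detail_enh, k=(1.0, 10.0, 4.0)):
    """y = k1*x1 + k2*x2 + k3*x3 with the embodiment's weights (1, 10, 4)."""
    k1, k2, k3 = k
    return k1 * base_enh + k2 * strong_edge + k3 * detail_enh
```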
Optionally, the extracting the edge contour of the fused image to obtain infrared image contour information includes:
s61, dividing the fusion image to obtain N sub-image blocks, wherein N is an integer;
s62, carrying out intensity statistics on the N sub-image blocks to obtain an average intensity value of each sub-image block;
s63, processing the average intensity value of each sub-image block by using a factor calculation model to obtain a gamma value of each sub-image block;
the factor calculation model is as follows:
gamma_k = log10(m_k)

where m_k is the average intensity value of the k-th sub-image block and gamma_k is the gamma value of the k-th sub-image block;
s64, taking the gamma value of each sub-image block as the central value of the image block, and interpolating each sub-image block until the size of each sub-image block is equal to that of the fusion image, so as to obtain N interpolation image blocks;
s65, gamma correction is carried out on the fusion image by utilizing the N interpolation image blocks, so that infrared image contour information is obtained;
s66, carrying out contour extraction on the corrected infrared image by using a preset edge detection method to obtain infrared image contour information.
Optionally, gamma correction may be performed on the fused image to obtain infrared image profile information, where the method includes:
I_L and I_R are the imaging planes of the left and right cameras respectively, and x_1 and x_2 are the projections of a space point p onto I_L and I_R. The points x_1, x_2, p and the two camera optical centers o_1 and o_2 all lie in the same plane (the epipolar plane pi). The projections of the pi plane onto the two imaging planes are l_1 and l_2 respectively. The line o_1o_2 joining the two camera optical centers is called the baseline, and the points e_1 and e_2 where the baseline intersects the imaging planes are called the epipoles. Then l_1 is the epipolar line on I_L corresponding to x_2, and l_2 is the epipolar line on I_R corresponding to x_1; all epipolar lines of an imaging plane intersect at its epipole, forming the epipolar constraint relationship.
Assume that the rotation matrix that aligns the left epipolar line horizontally is R_rect and that the principal point (C_x, C_y) is the origin of the left view. The direction from the origin to the left epipole e_1 is then the direction of the translation vector between the projection centers of the two cameras:
The vector e_1 is thereby calculated. A vector e_2 orthogonal to the principal optical axis direction is then constructed by taking the cross product of e_1 with the principal optical axis direction vector and normalizing it, which gives:
Similarly, there exists a vector e_3 orthogonal to both e_1 and e_2, so that e_3 = e_1 × e_2. Through the above operations the left epipolar line is aligned with the horizontal direction, and the horizontal alignment matrix R_rect is:
Based on the horizontal alignment matrix R_rect, the left-eye image can be rotated about its projection center so that the left epipole moves to infinity and the epipolar lines become horizontal and parallel to the baseline. To further align the imaging planes of the two cameras row by row, this can be achieved by the following equation:
After correction, the following relation holds between the camera matrices and the projection matrices:
In the above formula, alpha_l and alpha_r are the pixel distortion (skew) parameters of the two cameras respectively, and both are set to 0 to simplify the calculation. The coordinates of a space point are converted to pixel coordinates in the imaging plane by a reprojection matrix; the reprojection is expressed as:
Mapping a two-dimensional coordinate point into the three-dimensional coordinate system by the reprojection matrix Q gives:
Here d denotes the disparity, and simplifying the above formula yields the three-dimensional coordinates (X/W, Y/W, Z/W), where c_x' denotes the principal point of the right image; when the correction is exact, c_x = c_x'. After stereo correction, expanding the obtained three-dimensional coordinates gives:
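Under the standard stereo-rectification convention (as used, for example, by OpenCV), the reprojection through Q can be sketched as below. The layout of Q and the parameter names are assumptions, since the patent's own matrices are reproduced only as figures; Tx denotes the baseline translation between the projection centers.

```python
import numpy as np

def reproject_to_3d(x, y, d, f, cx, cy, Tx, cx_r=None):
    """Map pixel (x, y) with disparity d to 3D via the reprojection matrix Q.

    Q follows the standard stereo-rectification form; cx_r is the right
    image's principal point (cx_r == cx when the correction is exact).
    """
    if cx_r is None:
        cx_r = cx
    Q = np.array([
        [1.0, 0.0, 0.0, -cx],
        [0.0, 1.0, 0.0, -cy],
        [0.0, 0.0, 0.0, f],
        [0.0, 0.0, -1.0 / Tx, (cx - cx_r) / Tx],
    ])
    # [X Y Z W]^T = Q [x y d 1]^T; the 3D point is (X/W, Y/W, Z/W)
    X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
    return X / W, Y / W, Z / W
```

At the principal point with f = 500, baseline 0.1 and disparity 50, this yields a depth of f·|Tx|/d = 1.0, matching the simplified relation described above.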
the edge detection method includes, but is not limited to, an edge detection algorithm such as LoG, sobel, roberts, doG, and the filter kernel size of Gaussian filter and Laplacian is 3*3.
Alternatively, an edge detection method based on an attention residual U-Net (Attention Residual Block-UNet, ARB-UNet) may be employed. The ARB-UNet consists of two parts: a contracting path and an expanding path. The contracting path performs down-sampling and extracts local image features, while the expanding path precisely locates image features according to context information. In this model, a Convolutional Block Attention Module (CBAM) is introduced to enhance feature expression by focusing on detailed image features. On the one hand, the CBAM module is applied to the "skip connections" of the original U-Net model, so that a weight is assigned to each feature map in the contracting path instead of copying them unchanged into the corresponding expanding path as in the original U-Net network; on the other hand, considering the relationship between feature channels, the CBAM attention module is applied to the DRB residual structure to obtain an attention residual block (ARB). The overall structure of the network is derived from U-Net, with the convolution blocks of U-Net replaced by ARBs. The contracting path contains 4 modules, each comprising an ARB structure and a 2×2 max-pooling layer; the expanding path also contains 4 modules, each comprising a 2×2 transposed convolution that is concatenated with the CBAM-weighted feature map from the contracting path and then passed through an ARB module. Focal Tversky Loss is selected as the loss function of the model, and the edge detection map is finally obtained through a convolution layer and a Sigmoid function.
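A compact NumPy sketch of a CBAM forward pass (channel attention followed by spatial attention) is given below. The weights are caller-supplied stand-ins for learned parameters, and the function is purely illustrative; it is not the patent's trained ARB-UNet.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam(feat, W1, W2, w_spatial):
    """Sketch of a CBAM forward pass on a (C, H, W) feature map.

    W1 (C/r, C) and W2 (C, C/r) form the shared MLP of the channel
    attention; w_spatial is a (k, k, 2) kernel for the spatial attention.
    In a real network these weights are learned.
    """
    C, H, W = feat.shape
    # Channel attention: shared MLP over avg- and max-pooled descriptors
    avg = feat.mean(axis=(1, 2))
    mx = feat.max(axis=(1, 2))
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0.0)   # ReLU hidden layer
    ch_att = sigmoid(mlp(avg) + mlp(mx))           # (C,)
    feat = feat * ch_att[:, None, None]
    # Spatial attention: conv over channel-wise avg and max maps
    desc = np.stack([feat.mean(axis=0), feat.max(axis=0)], axis=-1)  # (H, W, 2)
    k = w_spatial.shape[0]
    p = k // 2
    padded = np.pad(desc, ((p, p), (p, p), (0, 0)))
    sp = np.zeros((H, W))
    for i in range(k):
        for j in range(k):
            sp += (padded[i:i + H, j:j + W] * w_spatial[i, j]).sum(axis=-1)
    return feat * sigmoid(sp)[None, :, :]
```

Because both attention maps pass through a sigmoid, the module only rescales features into (0, 1)-weighted versions of themselves, which is how the "weight assigned to each feature map" behaviour described above arises.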
In this example, to verify the effectiveness of the method of the present invention, quality evaluations were performed both subjectively and objectively. The subjective evaluation is the human eye's judgment of the image detail enhancement and contour extraction results. The objective indices are the detail enhancement evaluation index (Enhancement Measure Evaluation, EME), information Entropy and Average Gradient, used to assist in judging infrared image enhancement quality. The physical meaning of the detail enhancement evaluation index is the degree of change of the local gray level of an image: the more severe the local gray-level change, the richer the detail expression and the larger the EME value. The information entropy is a measure of the information content of the image; the larger the information content, the more detail and the clearer the image. The average gradient is the mean gradient over the whole image, used to measure the detail contrast and texture variation of the image; the larger its value, the richer the detail and the clearer the texture.
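The three objective indices can be computed as sketched below. The EME block partition and logarithmic form follow one common convention and are assumed here rather than taken from the patent.

```python
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit image's gray-level histogram (bits)."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """Mean magnitude of the horizontal/vertical finite-difference gradient."""
    img = img.astype(float)
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    return float(np.mean(np.sqrt((gx[:-1, :] ** 2 + gy[:, :-1] ** 2) / 2.0)))

def eme(img, blocks=4, eps=1e-6):
    """EME: mean of 20*log10(max/min) over image blocks; larger values mean
    stronger local gray-level variation, i.e. richer detail."""
    img = img.astype(float)
    h, w = img.shape
    ys = np.linspace(0, h, blocks + 1, dtype=int)
    xs = np.linspace(0, w, blocks + 1, dtype=int)
    vals = []
    for i in range(blocks):
        for j in range(blocks):
            t = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            vals.append(20.0 * np.log10((t.max() + eps) / (t.min() + eps)))
    return float(np.mean(vals))
```

A perfectly flat image scores zero on all three indices, while a two-level image already carries one bit of entropy, matching the interpretation of the indices given above.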
Example two
Referring to fig. 2, fig. 2 is a schematic structural diagram of an infrared image contour extraction device according to an embodiment of the invention. The infrared image contour extraction device described in fig. 2 is applied to the field of infrared image processing, for example image recognition, to which the embodiment of the invention is not limited. As shown in fig. 2, the infrared image contour extraction apparatus may include:
s301, an image acquisition module is used for acquiring an infrared image to be processed;
s302, an infrared image decomposition module is used for processing the infrared image to be processed by utilizing an infrared image decomposition model to obtain a base layer image and a detail layer image;
s303, a base layer enhancement module, which is used for enhancing the base layer image to obtain a base layer enhanced image;
s304, a detail layer enhancement module is used for enhancing the detail layer image to obtain a detail layer enhanced image;
s305, an image fusion module is used for processing the basic layer enhanced image and the detail layer enhanced image by utilizing an infrared image fusion model to obtain a fusion image;
s306, an edge contour extraction module is used for extracting the edge contour of the fusion image to obtain infrared image contour information.
Optionally, the processing the infrared image to be processed by using an infrared image decomposition model to obtain a base layer image and a detail layer image includes:
s21, processing the infrared image to be processed by using an infrared image decomposition model to obtain a base layer image;
the infrared image decomposition model is as follows:
where q_n is the base layer image, I_n is the infrared image to be processed, and the remaining two parameters are the gain coefficient and the deviation coefficient respectively;
s22, subtracting the infrared image to be processed and the basic layer image pixel by pixel to obtain a detail layer image.
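Since the decomposition formula itself is reproduced only as a figure, the sketch below substitutes a self-guided (guided-filter-style) edge-preserving decomposition with per-window gain and deviation coefficients, which matches the described structure of base = gain·I + deviation; the radius and regularization values are illustrative assumptions.

```python
import numpy as np

def box_filter(img, r):
    """Mean filter over a (2r+1)^2 window via integral images (edge-padded)."""
    pad = np.pad(img, r, mode='edge')
    c = pad.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))
    n = 2 * r + 1
    h, w = img.shape
    s = (c[n:n + h, n:n + w] - c[:h, n:n + w]
         - c[n:n + h, :w] + c[:h, :w])
    return s / (n * n)

def decompose(I, r=7, eps=100.0):
    """Sketch of S21/S22: base layer q = a*I + b with per-window gain a and
    deviation b (self-guided filter), then detail = I - base."""
    I = I.astype(float)
    mean_I = box_filter(I, r)
    var_I = box_filter(I * I, r) - mean_I ** 2
    a = var_I / (var_I + eps)        # gain coefficient
    b = (1.0 - a) * mean_I           # deviation coefficient
    base = box_filter(a, r) * I + box_filter(b, r)
    detail = I - base                # S22: pixel-by-pixel subtraction
    return base, detail
```

By construction the base and detail layers sum back to the input image, and a flat input yields an all-zero detail layer, which is the behaviour the two-layer decomposition relies on.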
Optionally, the enhancing the base layer image to obtain a base layer enhanced image includes:
s31, carrying out histogram calculation on the base layer image to obtain a base layer image histogram;
s32, carrying out n-neighborhood maximum gray value calculation on the base layer image histogram to obtain a base layer maximum gray value set;
s33, carrying out average value calculation on the maximum gray value in the maximum gray value set of the base layer to obtain the gray average value of the base layer;
s34, summing the gray values of the base layer images to obtain first base layer gray information;
s35, multiplying the gray average value of the base layer by the gray information of the first base layer to obtain gray information of the second base layer;
s36, dividing the second basic layer gray level information by the stretching gray level to obtain third basic layer gray level information;
and S37, processing the base layer image by utilizing the third base layer gray level information to obtain a base layer enhanced image.
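Steps S31-S37 leave several operations underspecified. The sketch below is one literal reading, in which the average of the histogram's n-neighborhood maxima and the stretch gray level combine into a global rescaling factor; the function name, the peak definition and the final scaling rule in S37 are all assumptions.

```python
import numpy as np

def enhance_base_layer(base, n=2, stretch_gray=255.0):
    """Loose sketch of steps S31-S37 on a base layer image."""
    base = base.astype(float)
    g = np.clip(base, 0, 255).astype(np.uint8)
    hist = np.bincount(g.ravel(), minlength=256)              # S31
    # S32: gray levels that are maxima within an n-neighborhood of the histogram
    peaks = [k for k in range(256)
             if hist[k] > 0 and hist[k] == hist[max(0, k - n):k + n + 1].max()]
    gray_avg = float(np.mean(peaks))                          # S33
    info1 = base.sum()                                        # S34
    info2 = gray_avg * info1                                  # S35
    info3 = info2 / stretch_gray                              # S36
    # S37 (interpretation): rescale the base layer by the derived factor
    return base * (info3 / info1) if info1 > 0 else base
```

Under this reading the pipeline reduces to scaling the base layer by gray_avg / stretch_gray, i.e. stretching it toward the dominant histogram gray level.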
Optionally, the enhancing the detail layer image to obtain a detail layer enhanced image includes:
s41, weighting the detail layer image by using an edge information template to obtain a strong edge image;
s42, carrying out pixel-by-pixel subtraction processing on the detail layer image and the strong edge image to obtain a high-frequency layer image;
s43, filtering the high-frequency layer image to obtain a detail layer enhanced image.
Optionally, the edge information template is:
where sigma is the variance of the gray values of the detail layer image, the parameter epsilon is 0.004, and mask is the edge information template.
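Because the mask formula is reproduced only as a figure, the sketch below assumes a Wiener-style template sigma^2 / (sigma^2 + epsilon) computed from local variance of the normalized detail layer, which approaches 1 on strong edges and 0 in flat regions; the 3×3 mean filter used for S43 is likewise an illustrative choice.

```python
import numpy as np

def local_variance(img, r=1):
    """Local gray-value variance in a (2r+1)^2 sliding window."""
    pad = np.pad(img, r, mode='edge')
    win = np.stack([pad[i:i + img.shape[0], j:j + img.shape[1]]
                    for i in range(2 * r + 1) for j in range(2 * r + 1)])
    return win.var(axis=0)

def enhance_detail_layer(detail, eps=0.004):
    """Sketch of steps S41-S43: template weighting, subtraction, filtering."""
    d = detail.astype(float)
    scale = np.abs(d).max()
    norm = d / scale if scale > 0 else d
    v = local_variance(norm)
    mask = v / (v + eps)               # assumed edge information template
    strong_edge = mask * d             # S41: template-weighted detail layer
    high_freq = d - strong_edge        # S42: pixel-by-pixel subtraction
    # S43: 3x3 mean filtering of the high-frequency layer
    pad = np.pad(high_freq, 1, mode='edge')
    detail_enh = sum(pad[i:i + d.shape[0], j:j + d.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    return strong_edge, detail_enh
```

The function returns both the strong edge image and the detail layer enhanced image, since the fusion model below combines them with separate weights.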
Optionally, the infrared image fusion model is:
y = k_1·x_1 + k_2·x_2 + k_3·x_3

where k_1 is the weight of the base layer enhanced image, x_1 is the base layer enhanced image, k_2 is the weight of the strong edge image, x_2 is the strong edge image, k_3 is the weight of the detail layer enhanced image, and x_3 is the detail layer enhanced image.
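The fusion model is a plain weighted sum and can be sketched directly; the weight values below are illustrative defaults, as the patent does not fix them.

```python
import numpy as np

def fuse(base_enh, strong_edge, detail_enh, k1=0.6, k2=0.25, k3=0.15):
    """Infrared image fusion model y = k1*x1 + k2*x2 + k3*x3 (weights are
    illustrative), clipped back to the 8-bit range."""
    y = k1 * base_enh + k2 * strong_edge + k3 * detail_enh
    return np.clip(y, 0, 255)
```

When the weights sum to 1, fusing three identical layers reproduces the input intensity, so the weights act as a convex blend between the enhanced components.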
Optionally, the extracting the edge contour of the fused image to obtain infrared image contour information includes:
s61, dividing the fusion image to obtain N sub-image blocks, wherein N is an integer;
s62, carrying out intensity statistics on the N sub-image blocks to obtain an average intensity value of each sub-image block;
s63, processing the average intensity value of each sub-image block by using a factor calculation model to obtain a gamma value of each sub-image block;
the factor calculation model is as follows:
gamma_k = log10(m_k)

where m_k is the average intensity value of the k-th sub-image block and gamma_k is the gamma value of the k-th sub-image block;
s64, taking the gamma value of each sub-image block as the central value of the image block, and interpolating each sub-image block until the size of each sub-image block is equal to that of the fusion image, so as to obtain N interpolation image blocks;
s65, gamma correction is carried out on the fusion image by utilizing the N interpolation image blocks, so as to obtain a corrected infrared image;
s66, carrying out contour extraction on the corrected infrared image by using a preset edge detection method to obtain infrared image contour information.
Example III
Referring to fig. 3, fig. 3 is a schematic structural diagram of an infrared image contour extraction device according to an embodiment of the invention. The infrared image contour extraction device described in fig. 3 is applied to the field of infrared image processing, for example image recognition, to which the embodiment of the invention is not limited. As shown in fig. 3, the infrared image contour extraction apparatus may include:
a memory 401 storing executable program codes;
a processor 402 coupled with the memory 401;
the processor 402 invokes the executable program code stored in the memory 401 to perform the steps in the infrared image contour extraction method described in embodiment one.
Example IV
The embodiment of the invention discloses a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute the steps in the infrared image contour extraction method described in embodiment one.
The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
From the above detailed description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on this understanding, the foregoing technical solutions may be embodied essentially in the form of a software product, which may be stored in a computer-readable storage medium including read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc storage, magnetic disc storage, tape storage, or any other computer-readable medium that can be used to carry or store data.
Finally, it should be noted that the infrared image contour extraction method and device disclosed in the embodiments are preferred embodiments of the invention, used only to illustrate the technical solutions of the invention and not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions recorded in the various embodiments may still be modified, or some of their technical features may be replaced equivalently; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. An infrared image contour extraction method, characterized in that the method comprises:
s1, acquiring an infrared image to be processed;
s2, processing the infrared image to be processed by utilizing an infrared image decomposition model to obtain a base layer image and a detail layer image;
s3, carrying out enhancement processing on the base layer image to obtain a base layer enhanced image;
s4, carrying out enhancement processing on the detail layer image to obtain a detail layer enhanced image;
s5, processing the base layer enhanced image and the detail layer enhanced image by using an infrared image fusion model to obtain a fusion image;
s6, extracting edge contours of the fusion images to obtain infrared image contour information.
2. The method for extracting an infrared image contour according to claim 1, wherein said processing the infrared image to be processed using an infrared image decomposition model to obtain a base layer image and a detail layer image includes:
s21, processing the infrared image to be processed by using an infrared image decomposition model to obtain a base layer image;
the infrared image decomposition model is as follows:
where q_n is the base layer image, I_n is the infrared image to be processed, and the remaining two parameters are the gain coefficient and the deviation coefficient respectively;
s22, subtracting the infrared image to be processed and the basic layer image pixel by pixel to obtain a detail layer image.
3. The method of claim 1, wherein the enhancing the base layer image to obtain a base layer enhanced image comprises:
s31, carrying out histogram calculation on the base layer image to obtain a base layer image histogram;
s32, carrying out n-neighborhood maximum gray value calculation on the base layer image histogram to obtain a base layer maximum gray value set;
s33, carrying out average value calculation on the maximum gray value in the maximum gray value set of the base layer to obtain the gray average value of the base layer;
s34, summing the gray values of the base layer images to obtain first base layer gray information;
s35, multiplying the gray average value of the base layer by the gray information of the first base layer to obtain gray information of the second base layer;
s36, dividing the second basic layer gray level information by the stretching gray level to obtain third basic layer gray level information;
and S37, processing the base layer image by utilizing the third base layer gray level information to obtain a base layer enhanced image.
4. The method for extracting an infrared image contour according to claim 1, wherein said enhancing the detail layer image to obtain a detail layer enhanced image comprises:
s41, weighting the detail layer image by using an edge information template to obtain a strong edge image;
s42, carrying out pixel-by-pixel subtraction processing on the detail layer image and the strong edge image to obtain a high-frequency layer image;
s43, filtering the high-frequency layer image to obtain a detail layer enhanced image.
5. The method of claim 4, wherein the edge information template is:
where sigma is the variance of the gray values of the detail layer image, the parameter epsilon is 0.004, and mask is the edge information template.
6. The method for extracting an infrared image profile according to claim 1, wherein the infrared image fusion model is:
y = k_1·x_1 + k_2·x_2 + k_3·x_3

where k_1 is the weight of the base layer enhanced image, x_1 is the base layer enhanced image, k_2 is the weight of the strong edge image, x_2 is the strong edge image, k_3 is the weight of the detail layer enhanced image, and x_3 is the detail layer enhanced image.
7. The method for extracting an infrared image contour according to claim 1, wherein said performing edge contour extraction on said fused image to obtain infrared image contour information includes:
s61, dividing the fusion image to obtain N sub-image blocks, wherein N is an integer;
s62, carrying out intensity statistics on the N sub-image blocks to obtain an average intensity value of each sub-image block;
s63, processing the average intensity value of each sub-image block by using a factor calculation model to obtain a gamma value of each sub-image block;
the factor calculation model is as follows:
gamma_k = log10(m_k)

where m_k is the average intensity value of the k-th sub-image block and gamma_k is the gamma value of the k-th sub-image block;
s64, taking the gamma value of each sub-image block as the central value of the image block, and interpolating each sub-image block until the size of each sub-image block is equal to that of the fusion image, so as to obtain N interpolation image blocks;
s65, gamma correction is carried out on the fusion image by utilizing the N interpolation image blocks, so that a corrected infrared image is obtained;
s66, carrying out contour extraction on the corrected infrared image by using a preset edge detection method to obtain infrared image contour information.
8. An infrared image contour extraction apparatus, the apparatus comprising:
the image acquisition module is used for acquiring an infrared image to be processed;
the infrared image decomposition module is used for processing the infrared image to be processed by utilizing the infrared image decomposition model to obtain a base layer image and a detail layer image;
the base layer enhancement module is used for enhancing the base layer image to obtain a base layer enhanced image;
the detail layer enhancement module is used for enhancing the detail layer image to obtain a detail layer enhanced image;
the image fusion module is used for processing the base layer enhanced image and the detail layer enhanced image by utilizing an infrared image fusion model to obtain a fusion image;
and the edge contour extraction module is used for extracting the edge contour of the fusion image to obtain infrared image contour information.
9. An infrared image contour extraction apparatus, the apparatus comprising:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the infrared image contour extraction method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions for performing the infrared image contour extraction method according to any one of claims 1 to 7 when called.
CN202310869373.1A 2023-07-14 2023-07-14 Infrared image contour extraction method and device Pending CN116823863A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310869373.1A CN116823863A (en) 2023-07-14 2023-07-14 Infrared image contour extraction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310869373.1A CN116823863A (en) 2023-07-14 2023-07-14 Infrared image contour extraction method and device

Publications (1)

Publication Number Publication Date
CN116823863A true CN116823863A (en) 2023-09-29

Family

ID=88116669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310869373.1A Pending CN116823863A (en) 2023-07-14 2023-07-14 Infrared image contour extraction method and device

Country Status (1)

Country Link
CN (1) CN116823863A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117495867A (en) * 2024-01-03 2024-02-02 东莞市星火齿轮有限公司 Visual detection method and system for precision of small-module gear
CN117495867B (en) * 2024-01-03 2024-05-31 东莞市星火齿轮有限公司 Visual detection method and system for precision of small-module gear


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination