CN114757912A - Material damage detection method, system, terminal and medium based on image fusion - Google Patents
Material damage detection method, system, terminal and medium based on image fusion
- Publication number: CN114757912A
- Application number: CN202210395025.0A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0002 — Inspection of images, e.g. flaw detection; G06T7/0004 — Industrial image inspection
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/10048 — Infrared image
- G06T2207/20221 — Image fusion; image merging
- G06T2207/30108 — Industrial image inspection
Abstract
The invention discloses a material damage detection method, system, terminal and medium based on image fusion, relating to the field of material damage detection. The technical scheme is as follows: acquiring infrared damage images, and decomposing each infrared damage image through a multi-scale decomposition algorithm to obtain its corresponding basic features and texture details; constructing a corresponding contrast saliency map according to the contrast information in each infrared damage image, and constructing a fusion weight map according to the contrast saliency map; optimizing the fusion weight map by gradient domain guided filtering to obtain a fusion weight matrix containing basic feature weights and texture detail weights; and weighting the basic features and texture details of the different infrared damage images according to all the fusion weight matrices to obtain corresponding basic feature maps and texture detail maps, and fusing the basic feature maps and the texture detail maps to obtain a fusion image. The invention can obtain a fusion image that represents multiple kinds of defects simultaneously, and realizes complete detection of the damaged part when the surface of the material is damaged.
Description
Technical Field
The present invention relates to the field of material damage detection, and more particularly, to a method, a system, a terminal, and a medium for detecting material damage based on image fusion.
Background
In recent years, more and more rockets, satellites and probes have been launched into space. Collisions or explosions of these objects create large numbers of space debris of varying size and shape, and this debris travels through space at extremely high speed. Because the number of fragments is enormous and devices such as ground radars cannot track them all effectively, ultra-high-speed impacts by tiny space fragments cannot be avoided; such impacts are one of the main causes of damage to materials and structures during the in-orbit operation of spacecraft. Damage detection is therefore required for the various spacecraft exposed to the space debris environment. Owing to the randomness of an impact event, parameters such as the impact position and impact speed are unpredictable; the resulting damage defects often do not occur in isolation, their types, sizes, numbers and positions are unknown, and multiple defects may lie adjacent to and influence one another.
For the composite, coupled defects and large-size defects caused by space debris, it is difficult for an image defect detection algorithm to reflect a comprehensive, multi-detail, integral contour of the damage to the inspected material in a single detection image. At present, damage detection methods based on feature extraction can generally obtain defect detection images with good detection performance for a single type of defect. For a spacecraft subjected to ultra-high-speed impact, however, complex defects are common, and existing image defect detection algorithms cannot achieve comprehensive, multi-detail, integral detection of the defect damage contour of the inspected material.
Therefore, how to design a material damage detection method, system, terminal and medium based on image fusion that can overcome the above-mentioned shortcomings is a problem that urgently needs to be solved.
Disclosure of Invention
In order to overcome the deficiencies in the prior art, the invention aims to provide a material damage detection method, system, terminal and medium based on image fusion, which can accurately detect a single type of defect, integrate images representing different defects, finally obtain a fusion image representing multiple types of defects simultaneously, and realize complete detection of the damaged part when the surface of the material is damaged.
The technical purpose of the invention is realized by the following technical scheme:
in a first aspect, a material damage detection method based on image fusion is provided, which includes the following steps:
acquiring at least two infrared damage images, and decomposing each infrared damage image through a multi-scale decomposition algorithm to obtain its corresponding basic features and texture details;
constructing a corresponding contrast saliency map according to the contrast information in the infrared damage image, and constructing a fusion weight map according to the contrast saliency map;
optimizing the fusion weight map by gradient domain guided filtering to obtain a fusion weight matrix containing basic feature weights and texture detail weights;
and weighting the basic features and texture details of the different infrared damage images respectively according to all the fusion weight matrices to obtain corresponding basic feature maps and texture detail maps, and fusing the basic feature maps and the texture detail maps to obtain a fusion image.
Further, the decomposition process of the basic features and the texture details specifically includes:
carrying out mean value filtering on the multiple infrared damage images to obtain corresponding basic characteristics;
and filtering corresponding basic features from the infrared damage image to obtain corresponding texture details.
Further, the construction process of the contrast saliency map specifically comprises:
the image local features in the infrared damage image are represented by the local contrast of the infrared damage image, specifically:

LC_i(x, y) = | I_i(x, y) − μ(x, y) |

wherein LC_i represents the local contrast; o represents the window length; p represents the window width; I_i represents the infrared damage image; μ(x, y) represents the mean of the o × p window centered at spatial location (x, y);

constructing a contrast saliency map from the local average of the local contrast, specifically:

CS_i = LC_i * G_{r,σ}

wherein CS_i represents the contrast saliency map; G_{r,σ} represents a Gaussian filter; r represents the filter radius; σ represents the variance.
Further, the construction formula of the fusion weight map is specifically as follows:

R_i^k = 1 if CS_i^k = max( CS_1^k, CS_2^k, …, CS_N^k ), and R_i^k = 0 otherwise

wherein R_i represents the fusion weight map; CS_i^k represents the contrast saliency value of the kth pixel of the ith infrared damage image; N represents the number of infrared damage images.
Further, the optimization obtaining process of the fusion weight matrix specifically comprises:
determining a local linear model of an output image of the gradient guiding filter and a guiding image in a filtering window, and taking an infrared damage image as a guiding image;
obtaining a first coefficient and a second coefficient in the local linear model by minimizing an error between an input image and an output image, with the fusion weight map serving as the input image;
respectively calculating the mean values of the first coefficient and the second coefficient in all windows to obtain corresponding first mean values and second mean values;
and inputting the first mean value and the second mean value into a local linear model, and respectively calculating by combining different edge perception weights in the gradient guiding filter to obtain a basic feature weight and a texture detail weight.
Further, the expression of the local linear model is specifically as follows:

Z(j) = a_k·I(j) + b_k, for all j ∈ ω_k

wherein Z(j) represents the output image pixel value at pixel j; I(j) represents the guide image pixel value at pixel j; a_k represents the first coefficient; b_k represents the second coefficient; ω_k represents a window of size (2r_G + 1) × (2r_G + 1) centered at the kth pixel, with r_G the window radius;

the calculation formulas of the first coefficient and the second coefficient are specifically as follows:

a_k = ( mean_{ω_k}(I·R) − mean_{ω_k}(I)·mean_{ω_k}(R) + (λ/Ψ_I(k))·γ_k ) / ( σ_k² + λ/Ψ_I(k) )

b_k = mean_{ω_k}(R) − a_k·mean_{ω_k}(I)

wherein mean_{ω_k}(I·R) represents the mean over the window ω_k of the products of the pixel values of corresponding pixels in the guide image I and the input image R; mean_{ω_k}(R) represents the mean of the pixel values of the input image R over the window ω_k; mean_{ω_k}(I) represents the mean of the pixel values of the guide image I over the window ω_k; λ represents a regularization parameter; Ψ_I(k) represents the edge perception weight of the kth pixel of the guide image I; σ_k² represents the variance of the guide image I in the window ω_k; γ_k represents the setting coefficient.
Further, the calculation formula of the edge perception weight is specifically as follows:

Ψ_I(k) = (1/M) Σ_{i=1}^{M} ( Ξ(k) + ε ) / ( Ξ(i) + ε ), with Ξ(k) = σ_{I,3}(k)·σ_{I,r1}(k)

wherein M represents the total number of pixels of the guide image I; Ξ(k) and Ξ(i) respectively represent the variance products of the regions centered at pixel k and pixel i; ε represents the parameter of the edge perception weight and takes a small positive value; σ_{I,3}(k) represents the variance of the guide image I in a window of size 3 × 3; σ_{I,r1}(k) represents the variance of the guide image I in a window of size (2r_1 + 1) × (2r_1 + 1), with r_1 a constant;
the calculation formula of the setting coefficient is specifically as follows:
In a second aspect, there is provided an image fusion-based material damage detection system, comprising:
the image decomposition module is used for acquiring at least two infrared damage images and decomposing each infrared damage image through a multi-scale decomposition algorithm to obtain its corresponding basic features and texture details;
the weight construction module is used for constructing a corresponding contrast saliency map according to the contrast information in the infrared damage image and constructing a fusion weight map according to the contrast saliency map;
the weight optimization module is used for optimizing a fusion weight map according to gradient-oriented filtering to obtain a fusion weight matrix containing basic feature weights and texture detail weights;
and the image fusion module is used for weighting the basic features and the texture details in different infrared damage images according to all the fusion weight matrixes to obtain corresponding basic feature images and texture detail images, and fusing the basic feature images and the texture detail images to obtain fusion images.
In a third aspect, a computer terminal is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the program, the method for detecting material damage based on image fusion according to any one of the first aspect is implemented.
In a fourth aspect, a computer-readable medium is provided, on which a computer program is stored, the computer program being executed by a processor, and the method for detecting material damage based on image fusion according to any one of the first aspect can be implemented.
Compared with the prior art, the invention has the following beneficial effects:
1. The material damage detection method based on image fusion takes multi-scale image information into account: the decomposed multi-scale information fully retains the damaged part of the original infrared damage image while blurring basic features that need no attention. The method not only accurately detects a single type of defect, but also integrates images representing different defects, finally obtaining a fusion image that can represent multiple types of defects simultaneously and achieving complete detection of the damaged part when the material surface is damaged;
2. The invention considers that the human visual system is not sensitive to a single pixel, but to variations in the local neighborhood of the pixel; therefore, local image contrast is used to construct the initial fusion weight map so as to make full use of the local characteristics of the corresponding image. The initial fusion weight map is usually noisy and may suffer from problems such as incomplete alignment, which easily causes artifacts in the fused image; gradient domain guided filtering is used to solve this problem;
3. The invention synthesizes multiple defect feature images into a new image through an image fusion algorithm, so that the fused image describes the damage defects more comprehensively and clearly; complementary information from surface, subsurface, local and global images is combined, and the defect type, contour shape and size of the damage in the material are represented visually and located specifically.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is an overall flow diagram in an embodiment of the invention;
FIG. 2 is a flow chart of gradient domain oriented filtering fusion according to an embodiment of the present invention;
fig. 3 is a system block diagram in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
Example 1: the material damage detection method based on image fusion, as shown in fig. 1, includes the following steps:
S1: acquiring two infrared damage images, and decomposing the two infrared damage images by a multi-scale decomposition algorithm to obtain corresponding basic features and texture details from different infrared damage images; it should be noted that, the number of the infrared damage images may also be two or more, and is not limited herein;
s2: constructing a corresponding contrast saliency map according to contrast information in the infrared damage image, and constructing a fusion weight map according to the contrast saliency map;
s3: optimizing a fusion weight map according to gradient oriented filtering to obtain a fusion weight matrix containing basic feature weight and texture detail weight;
s4: and respectively weighting the basic features and the texture details in different infrared damage images according to all the fusion weight matrixes to obtain corresponding basic feature maps and texture detail maps, and fusing the basic feature maps and the texture detail maps to obtain a fusion image.
The decomposition process of the basic features and the texture details is specifically as follows: carrying out mean value filtering on the multiple infrared damage images to obtain corresponding basic characteristics; and filtering corresponding basic features from the infrared damage image to obtain corresponding texture details.
Specifically, the calculation formula of the basic features is as follows:

Ba_i = I_i * M_0

wherein M_0 represents the mean filter (the convolution of I_i with M_0 performs the mean filtering); as the filter radius increases, the information carried by the detail layer increases correspondingly; in this embodiment the size of the mean filter is 31 × 31; I_i represents the infrared damage image; Ba_i represents the basic features.

The texture detail calculation formula is specifically as follows:

De_i = I_i − Ba_i

wherein De_i represents the texture details.
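The base/detail decomposition above can be sketched in a few lines of NumPy/SciPy; the function name and the grayscale-array input are illustrative assumptions, with the 31 × 31 mean filter of this embodiment as the default:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def decompose(image, size=31):
    """Split an infrared damage image into basic features Ba_i (mean
    filtering with an M_0 of the given size) and texture details
    De_i = I_i - Ba_i."""
    image = np.asarray(image, dtype=np.float64)
    base = uniform_filter(image, size=size, mode="reflect")  # Ba_i = I_i * M_0
    detail = image - base                                    # De_i
    return base, detail
```

Enlarging `size` moves more information into the detail layer, matching the remark above that the detail-layer information grows with the filter radius.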
The construction process of the contrast saliency map comprises the following specific steps:
(1) the image local features in the infrared damage image are represented by the local contrast of the infrared damage image, specifically:

LC_i(x, y) = | I_i(x, y) − μ(x, y) |

wherein LC_i represents the local contrast; o represents the window length; p represents the window width; I_i represents the infrared damage image; μ(x, y) represents the mean of the o × p window centered at spatial location (x, y);

(2) a contrast saliency map is constructed from the local average of the local contrast, specifically:

CS_i = LC_i * G_{r,σ}

wherein CS_i represents the contrast saliency map; G_{r,σ} represents a Gaussian filter; r represents the filter radius; σ represents the variance. In the present embodiment, r = 20 and σ = 5.
The construction formula of the fusion weight map is specifically as follows:

R_i^k = 1 if CS_i^k = max( CS_1^k, CS_2^k, …, CS_N^k ), and R_i^k = 0 otherwise

wherein R_i represents the fusion weight map; CS_i^k represents the contrast saliency value of the kth pixel of the ith infrared damage image; N represents the number of infrared damage images.
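A hedged sketch of this saliency-based weight construction, assuming |I_i − μ| as the local-contrast measure and letting ties share the maximum (function and parameter names are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def fusion_weight_maps(images, o=7, p=7, sigma=5):
    """Build the initial fusion weight maps R_i: local contrast LC_i as the
    deviation from the o x p window mean, saliency CS_i = LC_i * G_{r,sigma},
    then a winner-take-all comparison of CS_i across the N source images."""
    saliency = []
    for img in images:
        img = np.asarray(img, dtype=np.float64)
        mu = uniform_filter(img, size=(o, p), mode="reflect")
        lc = np.abs(img - mu)                              # local contrast LC_i
        saliency.append(gaussian_filter(lc, sigma=sigma))  # saliency CS_i
    cs = np.stack(saliency)
    # R_i^k = 1 where image i attains the maximum saliency at pixel k
    return (cs == cs.max(axis=0, keepdims=True)).astype(np.float64)
```

The binary winner-take-all map is exactly why the next step exists: it is noisy and edge-unaware, so it is refined by guided filtering before use.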
The initial fusion weight map obtained in this way is usually noisy and may suffer from problems such as incomplete alignment, which easily causes artifacts in the fused image. The regularization parameter of a traditional image filter is fixed during processing, and no explicit constraint is applied to edges, so edge information is inevitably lost in some cases and artifacts are produced that degrade the fusion result. Considering the randomness of an ultra-high-speed impact event, parameters such as the impact position and speed are unpredictable and the damage differs from position to position; the fusion algorithm must therefore be able to fuse impact damage flexibly under different conditions and deliver good fusion performance in each of them. For this reason, gradient domain guided filtering with an explicit edge constraint is introduced to optimize the initial weight matrix, as shown in fig. 2.
A local linear model between the output image of the gradient domain guided filter and the guide image is determined in each filtering window, with the infrared damage image taken as the guide image. The expression of the local linear model is specifically as follows:

Z(j) = a_k·I(j) + b_k, for all j ∈ ω_k

wherein Z(j) represents the output image pixel value at pixel j; I(j) represents the guide image pixel value at pixel j; a_k represents the first coefficient; b_k represents the second coefficient; ω_k represents a window of size (2r_G + 1) × (2r_G + 1) centered at the kth pixel, with r_G the window radius; here r_G = 16.
The first coefficient and the second coefficient of the local linear model are obtained by minimizing the error between the input image and the output image, with the fusion weight map as the input image. The minimization calculation formula is specifically as follows:

E(a_k, b_k) = Σ_{j ∈ ω_k} [ ( a_k·I(j) + b_k − R(j) )² + ( λ / Ψ_I(k) )·( a_k − γ_k )² ]

wherein E represents the error function; λ represents a preset regularization parameter; Ψ_I(k) represents the edge perception weight of the kth pixel of the guide image I; γ_k represents the setting coefficient; R(j) represents the input image pixel value at pixel j.
Specifically, the calculation formula of the edge perception weight is as follows:

Ψ_I(k) = (1/M) Σ_{i=1}^{M} ( Ξ(k) + ε ) / ( Ξ(i) + ε ), with Ξ(k) = σ_{I,3}(k)·σ_{I,r1}(k)

wherein M represents the total number of pixels of the guide image I; Ξ(k) and Ξ(i) respectively represent the variance products of the regions centered at pixel k and pixel i; ε represents the parameter of the edge perception weight and takes a small positive value; σ_{I,3}(k) represents the variance of the guide image I in a window of size 3 × 3; σ_{I,r1}(k) represents the variance of the guide image I in a window of size (2r_1 + 1) × (2r_1 + 1), with r_1 a constant; in this embodiment, r_1 = 40.
The calculation formula for the setting coefficient is specifically as follows:

γ_k = 1 − 1 / ( 1 + e^{η(Ξ(k) − mean(Ξ))} ), with η = 4 / ( mean(Ξ) − min(Ξ) )
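The edge perception weight can be sketched as below. This is a simplified reading in which the scalar sum over all pixels is factored out of the per-pixel ratio; `eps` and the window radii are illustrative parameters, not values fixed by the text:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def edge_perception_weight(guide, r1=4, eps=1e-3):
    """Psi_I(k) = (1/M) * sum_i (Xi(k)+eps)/(Xi(i)+eps), where
    Xi(k) = sigma_{I,3}(k) * sigma_{I,r1}(k) is the product of local
    variances in a 3x3 window and a (2*r1+1)x(2*r1+1) window."""
    I = np.asarray(guide, dtype=np.float64)

    def local_var(size):
        mu = uniform_filter(I, size=size, mode="reflect")
        # clamp tiny negative values caused by floating-point cancellation
        return np.maximum(uniform_filter(I * I, size=size, mode="reflect") - mu**2, 0.0)

    xi = local_var(3) * local_var(2 * r1 + 1)
    # (1/M) * sum_i 1/(Xi(i)+eps) is a single scalar shared by every pixel k
    return (xi + eps) * np.mean(1.0 / (xi + eps))
```

Pixels on edges have large local variance, hence large Ψ, hence a small effective regularization λ/Ψ in the coefficient formulas, which is how the filter preserves edges.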
The optimization process of the fusion weight matrix is specifically as follows: determining a local linear model between the output image of the gradient domain guided filter and the guide image in each filtering window, with the infrared damage image taken as the guide image; obtaining the first coefficient and the second coefficient of the local linear model by minimizing the error between the input image and the output image, with the fusion weight map as the input image; calculating the means of the first coefficient and the second coefficient over all windows to obtain the corresponding first mean and second mean; and inputting the first mean and the second mean into the local linear model and computing, with the different edge perception weights in the gradient domain guided filter, the basic feature weight and the texture detail weight respectively.
The calculation formulas of the first coefficient and the second coefficient are specifically as follows:

a_k = ( mean_{ω_k}(I·R) − mean_{ω_k}(I)·mean_{ω_k}(R) + (λ/Ψ_I(k))·γ_k ) / ( σ_k² + λ/Ψ_I(k) )

b_k = mean_{ω_k}(R) − a_k·mean_{ω_k}(I)

wherein mean_{ω_k}(I·R) represents the mean over the window ω_k of the products of the pixel values of corresponding pixels in the guide image I and the input image R; mean_{ω_k}(R) represents the mean of the pixel values of the input image R over the window ω_k; mean_{ω_k}(I) represents the mean of the pixel values of the guide image I over the window ω_k; λ represents a regularization parameter; Ψ_I(k) represents the edge perception weight of the kth pixel of the guide image I; σ_k² represents the variance of the guide image I in the window ω_k; γ_k represents the setting coefficient.
Since one pixel can be contained in multiple windows, the relationship between the final output image Z_i and the guide image I_i is expressed as:

Z_i(j) = ā(j)·I_i(j) + b̄(j)

wherein ā(j) represents the mean of the coefficients a_k over all windows that contain pixel j, and b̄(j) represents the mean of the coefficients b_k over all those windows.
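For orientation, here is the classic guided filter of which the gradient domain variant above is a refinement: it drops the edge weight Ψ_I(k) and the shift term γ_k (i.e. Ψ ≡ 1, γ ≡ 0), giving a_k = cov(I,R)/(σ_k² + λ) and b_k = mean(R) − a_k·mean(I), followed by the window-averaged reconstruction Z = ā·I + b̄. A sketch, not the patent's exact filter:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=16, lam=1e-3):
    """Plain guided filter: per-window linear model Z(j) = a_k I(j) + b_k,
    with the coefficients averaged over all windows containing each pixel."""
    I = np.asarray(guide, dtype=np.float64)
    R = np.asarray(src, dtype=np.float64)
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size, mode="reflect")
    mu_I, mu_R = mean(I), mean(R)
    var_I = mean(I * I) - mu_I**2        # sigma_k^2
    cov_IR = mean(I * R) - mu_I * mu_R
    a = cov_IR / (var_I + lam)           # first coefficient a_k
    b = mu_R - a * mu_I                  # second coefficient b_k
    return mean(a) * I + mean(b)         # Z = a_bar * I + b_bar
```

In the gradient domain version, `lam` is replaced pixel-wise by λ/Ψ_I(k) and `a` is pulled toward γ_k, so smoothing weakens exactly where the guide image has strong edges.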
Defining the gradient domain guided filter as GDGF(R, I), where R and I are the input image and the guide image respectively, the optimized weight map outputs are represented as:

W_i^Ba = GDGF_{ε1}(R_i, I_i),  W_i^De = GDGF_{ε2}(R_i, I_i)

wherein W_i^Ba and W_i^De are respectively the basic feature weight and the texture detail weight of the infrared damage image I_i; ε is the parameter of the edge perception weight in the gradient domain guided filter; in this embodiment, ε_1 = 0.3 and ε_2 = 10⁻⁵.
The basic features Ba_i and the texture details De_i are weighted respectively to obtain the final fusion image; the specific calculation formulas are as follows:

B̄ = Σ_{i=1}^{N} W_i^Ba·Ba_i,  D̄ = Σ_{i=1}^{N} W_i^De·De_i,  F = B̄ + D̄

wherein B̄ is the fused basic feature map, D̄ is the fused texture detail map, and F is the final fused image.
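The final weighting step in code (a sketch; the per-pixel normalization of each weight stack is an added assumption so that the fused image stays in the range of the sources):

```python
import numpy as np

def fuse_layers(bases, details, w_base, w_detail):
    """Final fusion: B = sum_i W_i^Ba * Ba_i, D = sum_i W_i^De * De_i,
    F = B + D, with each weight stack normalized to sum to 1 per pixel."""
    w_base = w_base / np.clip(w_base.sum(axis=0, keepdims=True), 1e-12, None)
    w_detail = w_detail / np.clip(w_detail.sum(axis=0, keepdims=True), 1e-12, None)
    fused_base = (w_base * np.stack(bases)).sum(axis=0)        # B_bar
    fused_detail = (w_detail * np.stack(details)).sum(axis=0)  # D_bar
    return fused_base + fused_detail                           # F
```

With identical sources and equal weights this reduces to the source image itself, which is the expected degenerate behavior of a weighted-average fusion rule.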
Table 1 gives objective, quantitative performance indicators for several classical methods. For the damage images of the detected material, the material damage detection method based on image fusion gives the highest values of the quality indexes Q(A) (feature measure), Q(SF) (spatial frequency), Q(MI) (normalized mutual information) and Q(SSIM) (structural similarity). The pyramid method differs only slightly from the method of the present invention in Q(MI) and Q(SSIM), which indicates that both can retain the original information of the different source images well. However, Q(MI) alone does not establish that a fusion result is comprehensive: if the fused image resembles only one of the source images (for example, if the fused image simply is one of the source images), the obtained Q(MI) is still ideal. From the comparison of Q(SF) it can be seen that, with comparable fused content, the wavelet method and the pyramid method are clearly inferior to the method of the present invention, indicating that their detection images are unclear in some details and even contain artifacts; the method of the present invention gives ideal quality indexes in both Q(A) and Q(SF), which also shows that the sharpness of the material damage detection method based on image fusion is superior to the other two methods.
TABLE 1 quantitative evaluation of defect reconstructed images based on different image fusion methods
Example 2: the material damage detection system based on image fusion is used for the detection method described in embodiment 1, and as shown in fig. 3, the system comprises an image decomposition module, a weight construction module, a weight optimization module and an image fusion module.
The image decomposition module is used for acquiring at least two infrared damage images and decomposing the images from different infrared damage images through a multi-scale decomposition algorithm to obtain corresponding basic features and texture details; the weight construction module is used for constructing a corresponding contrast saliency map according to the contrast information in the infrared damage image and constructing a fusion weight map according to the contrast saliency map; the weight optimization module is used for optimizing a fusion weight map according to gradient-oriented filtering to obtain a fusion weight matrix containing basic feature weight and texture detail weight; and the image fusion module is used for weighting the basic features and the texture details in different infrared damage images respectively according to all the fusion weight matrixes to obtain corresponding basic feature images and texture detail images, and fusing the basic feature images and the texture detail images to obtain a fusion image.
Working principle: the method takes the multi-scale information of the image into account; the decomposed multi-scale information fully retains the damaged part of the original infrared damage image while blurring basic features that need no attention, so that a single type of defect can be detected accurately and images representing different defects are integrated, finally yielding a fusion image that represents multiple types of defects simultaneously and achieving complete detection of the damaged part when the material surface is damaged. Furthermore, the human visual system is insensitive to individual pixels but sensitive to changes in the local neighborhood of a pixel; local image contrast is therefore used to construct the initial fusion weight map so as to make full use of the local characteristics of the corresponding image. The initial fusion weight map is usually noisy and may suffer from problems such as incomplete alignment, which easily causes artifacts in the fused image; a gradient domain guided filter is used to solve this problem.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above embodiments are only examples of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A material damage detection method based on image fusion, characterized by comprising the following steps:
acquiring at least two infrared damage images, and decomposing the different infrared damage images through a multi-scale decomposition algorithm to obtain corresponding basic features and texture details;
constructing a corresponding contrast saliency map according to contrast information in the infrared damage image, and constructing a fusion weight map according to the contrast saliency map;
optimizing the fusion weight map according to gradient guided filtering to obtain a fusion weight matrix containing basic feature weights and texture detail weights;
and weighting the basic features and the texture details in the different infrared damage images respectively according to all the fusion weight matrices to obtain corresponding basic feature maps and texture detail maps, and fusing the basic feature maps and the texture detail maps to obtain a fusion image.
2. The method for detecting material damage based on image fusion as claimed in claim 1, wherein the decomposition process of the basic feature and the texture detail specifically comprises:
carrying out mean value filtering on the infrared damage images to obtain the corresponding basic features;
and subtracting the corresponding basic features from the infrared damage image to obtain the corresponding texture details.
3. The material damage detection method based on image fusion as claimed in claim 1, wherein the construction process of the contrast saliency map specifically comprises:
the local image features of the infrared damage image are represented by its local contrast, and the representation formula is specifically as follows:
wherein LC_i represents the local contrast; o represents the window length; p represents the window width; I_i represents the infrared damage image; μ(x, y) represents the window average centered at spatial location (x, y);
constructing a contrast saliency map by using a local average value of local contrast, wherein a construction formula specifically comprises the following steps:
CS_i = LC_i * G_(r,σ)
wherein CS_i represents the contrast saliency map; G_(r,σ) represents a Gaussian filter; r represents the filter radius; σ represents the variance.
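A sketch of the contrast saliency construction in claim 3. The exact local-contrast formula appears only as an image in the source, so the squared deviation from the o × p window mean used below is an assumption; the Gaussian smoothing step matches CS_i = LC_i * G_(r,σ).

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def contrast_saliency(img, o=7, p=7, sigma=2.0):
    """Local contrast (deviation from the o-by-p window mean), smoothed by a Gaussian."""
    mu = uniform_filter(img, size=(o, p))   # window average mu(x, y)
    lc = (img - mu) ** 2                    # local contrast LC_i (assumed squared form)
    return gaussian_filter(lc, sigma)       # CS_i = LC_i * G_(r, sigma)
```

The saliency map is large where the image deviates strongly from its neighborhood mean, i.e. around edges and defect boundaries, which is what the fusion weights should favor.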
4. The material damage detection method based on image fusion as claimed in claim 1, wherein the construction formula of the fusion weight map is specifically:
5. The method for detecting material damage based on image fusion as claimed in claim 1, wherein the optimization obtaining process of the fusion weight matrix is specifically as follows:
determining a local linear model between the output image of the gradient guided filter and a guide image within the filtering window, with the infrared damage image taken as the guide image;
obtaining the first coefficient and the second coefficient in the local linear model by minimizing the error between the input image and the output image, with the fusion weight map taken as the input image;
respectively calculating the mean values of the first coefficient and the second coefficient over all windows to obtain the corresponding first mean value and second mean value;
and inputting the first mean value and the second mean value into the local linear model, and calculating the basic feature weight and the texture detail weight respectively by combining different edge perception weights in the gradient guided filter.
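The optimization steps of claim 5 follow the guided-filter scheme. Below is a sketch of the plain (non-gradient) guided filter only; the patent's gradient guided filter additionally scales the regularization by an edge perception weight Ψ_I(k) and a set coefficient γ_k, which are omitted here, and the parameters r and eps are assumed values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, R, r=4, eps=1e-3):
    """Refine an input map R using guide image I via a local linear model
    z(j) = a_k * I(j) + b_k within each window omega_k."""
    size = 2 * r + 1
    mean = lambda x: uniform_filter(x, size=size)
    mu_I, mu_R = mean(I), mean(R)
    cov_IR = mean(I * R) - mu_I * mu_R      # covariance of guide and input
    var_I = mean(I * I) - mu_I * mu_I       # variance of the guide
    a = cov_IR / (var_I + eps)              # first coefficient a_k
    b = mu_R - a * mu_I                     # second coefficient b_k
    return mean(a) * I + mean(b)            # averaged coefficients -> output z
```

Using the infrared image as guide I and the fusion weight map as input R transfers the guide's edge structure onto the weights, suppressing the noise and misalignment artifacts mentioned in the description.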
6. The image fusion-based material damage detection method according to claim 5, wherein the expression of the local linear model is specifically:
z(j) = a_k · I(j) + b_k, ∀ j ∈ ω_k
wherein z(j) represents the output image pixel value corresponding to pixel j; I(j) represents the guide image pixel value corresponding to pixel j; a_k represents the first coefficient; b_k represents the second coefficient; ω_k represents a window of size (2r_G + 1) × (2r_G + 1) centered on the k-th pixel point, and r_G represents the window radius;
the calculation formulas of the first coefficient and the second coefficient are specifically as follows:
a_k = ( mean(I·R)_k − mean(I)_k · mean(R)_k + (λ/Ψ_I(k)) · γ_k ) / ( σ²_k + λ/Ψ_I(k) )
b_k = mean(R)_k − a_k · mean(I)_k
wherein mean(I·R)_k represents the mean, within the window ω_k, of the products of the pixel values of corresponding pixel points in the guide image I and the input image R; mean(R)_k represents the mean of the pixel values of the input image R within the window ω_k; mean(I)_k represents the mean of the pixel values of the guide image I within the window ω_k; λ represents a regularization parameter; Ψ_I(k) represents the edge perception weight of the k-th pixel point of the guide image I; σ²_k represents the variance of the guide image I within the window ω_k; γ_k represents a set coefficient.
7. The method for detecting material damage based on image fusion as claimed in claim 6, wherein the calculation formula of the edge perception weight is specifically as follows:
Ψ_I(k) = (1/M) · Σ_{i=1..M} (ξ(k) + ε) / (ξ(i) + ε), with ξ(k) = σ_{I,3}(k) · σ_{I,r1}(k)
wherein M represents the total number of pixels of the guide image I; ξ(k) and ξ(i) respectively represent the variances of the regions centered on pixel points k and i; ε represents a parameter of the edge perception weight, whose value is a small positive constant; σ_{I,3}(k) represents the variance of the guide image I in a window of size 3 × 3; σ_{I,r1}(k) represents the variance of the guide image I in a window of size (2r_1 + 1) × (2r_1 + 1), and r_1 represents a constant;
the calculation formula of the setting coefficient is specifically as follows:
8. A material damage detection system based on image fusion, characterized by comprising:
the image decomposition module, used for acquiring at least two infrared damage images and decomposing the different infrared damage images through a multi-scale decomposition algorithm to obtain corresponding basic features and texture details;
the weight construction module is used for constructing a corresponding contrast saliency map according to the contrast information in the infrared damage image and constructing a fusion weight map according to the contrast saliency map;
the weight optimization module is used for optimizing a fusion weight map according to gradient-oriented filtering to obtain a fusion weight matrix containing basic feature weight and texture detail weight;
and the image fusion module, used for weighting the basic features and the texture details in the different infrared damage images respectively according to all the fusion weight matrices to obtain corresponding basic feature maps and texture detail maps, and fusing the basic feature maps and the texture detail maps to obtain a fusion image.
9. A computer terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement the method for detecting material damage based on image fusion according to any one of claims 1 to 7.
10. A computer-readable medium, on which a computer program is stored, wherein the computer program is executed by a processor to implement the method for detecting material damage based on image fusion according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210395025.0A CN114757912A (en) | 2022-04-15 | 2022-04-15 | Material damage detection method, system, terminal and medium based on image fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114757912A true CN114757912A (en) | 2022-07-15 |
Family
ID=82330992
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210395025.0A Pending CN114757912A (en) | 2022-04-15 | 2022-04-15 | Material damage detection method, system, terminal and medium based on image fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114757912A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116777797A (en) * | 2023-06-28 | 2023-09-19 | 广州市明美光电技术有限公司 | Method and system for clearing bright field microscopic image through anisotropic guide filtering |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109509164A (en) * | 2018-09-28 | 2019-03-22 | 洛阳师范学院 | A kind of Multisensor Image Fusion Scheme and system based on GDGF |
CN112184646A (en) * | 2020-09-22 | 2021-01-05 | 西北工业大学 | Image fusion method based on gradient domain oriented filtering and improved PCNN |
CN112419212A (en) * | 2020-10-15 | 2021-02-26 | 卡乐微视科技(云南)有限公司 | Infrared and visible light image fusion method based on side window guide filtering |
CN113763368A (en) * | 2021-09-13 | 2021-12-07 | 中国空气动力研究与发展中心超高速空气动力研究所 | Large-size test piece multi-type damage detection characteristic analysis method |
US20220044375A1 (en) * | 2019-12-17 | 2022-02-10 | Dalian University Of Technology | Saliency Map Enhancement-Based Infrared and Visible Light Fusion Method |
Non-Patent Citations (2)
Title |
---|
JIN ZHU et al.: "Multiscale infrared and visible image fusion using gradient domain guided image filtering" * |
WANG Jian et al.: "Image fusion algorithm based on gradient domain guided filter and improved PCNN" * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110929607B (en) | Remote sensing identification method and system for urban building construction progress | |
Maretto et al. | Spatio-temporal deep learning approach to map deforestation in amazon rainforest | |
CN110288602B (en) | Landslide extraction method, landslide extraction system and terminal | |
CN109902567B (en) | Data processing method and system for rapidly evaluating vegetation health condition | |
CN112084923B (en) | Remote sensing image semantic segmentation method, storage medium and computing device | |
CN110991430B (en) | Ground feature identification and coverage rate calculation method and system based on remote sensing image | |
CN111368825A (en) | Pointer positioning method based on semantic segmentation | |
CN114757912A (en) | Material damage detection method, system, terminal and medium based on image fusion | |
Wen et al. | Hybrid BM3D and PDE filtering for non-parametric single image denoising | |
CN116309612B (en) | Semiconductor silicon wafer detection method, device and medium based on frequency decoupling supervision | |
Shit et al. | An encoder‐decoder based CNN architecture using end to end dehaze and detection network for proper image visualization and detection | |
Kang et al. | A single image dehazing model using total variation and inter-channel correlation | |
Adu-Gyamfi et al. | Functional evaluation of pavement condition using a complete vision system | |
CN116958145B (en) | Image processing method and device, visual detection system and electronic equipment | |
CN103455986B (en) | Random noise point detecting method based on fractional order differential gradient | |
CN111445446B (en) | Concrete surface crack detection method based on improved U-net | |
CN112819753A (en) | Building change detection method and device, intelligent terminal and storage medium | |
CN112862729B (en) | Remote sensing image denoising method based on characteristic curve guidance | |
CN115761606A (en) | Box electric energy meter identification method and device based on image processing | |
CN115456972A (en) | Concrete crack detection and identification method, device, equipment and storage medium | |
Chen et al. | Urban damage estimation using statistical processing of satellite images | |
CN113516059B (en) | Solid waste identification method and device, electronic device and storage medium | |
CN115393730A (en) | Accurate identification method for Mars meteorite crater, electronic equipment and storage medium | |
He et al. | Feature aggregation convolution network for haze removal | |
CN105809187A (en) | Multi-manufacturer partial discharge data result diagnosis analysis method based on image identification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20220715 |