CN112818822B - Automatic identification method for damaged area of aerospace composite material - Google Patents
- Publication number: CN112818822B (application CN202110118568.3A)
- Authority: CN (China)
- Prior art keywords: image, pixel, infrared, segmentation, function
- Legal status: Active (status assumed by Google Patents; not a legal conclusion)
Classifications
- G06V20/00 — Scenes; scene-specific elements
- G06F18/23213 — Clustering techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
- G06F18/2414 — Classification by distances to cluster centroids, smoothing the distance, e.g. radial basis function networks (RBFN)
- G06N3/126 — Evolutionary algorithms, e.g. genetic algorithms or genetic programming
- G06T5/70 — Denoising; smoothing
- G06V10/267 — Segmentation of patterns by performing operations on regions, e.g. growing, shrinking or watersheds
- G06V10/44 — Local feature extraction, e.g. edges, contours, connectivity analysis
- G06T2207/20192 — Edge enhancement; edge preservation
Abstract
The invention discloses an automatic identification method for damaged areas of aerospace composite materials, comprising the following steps: extracting the typical transient thermal response of each defect type; acquiring infrared reconstructed images; obtaining low-quality infrared reconstructed images; obtaining weight coefficients for the three segmentation properties of noise removal, detail retention and edge preservation; constructing an infrared image segmentation function; obtaining the weight coefficient of the objective function realizing each segmentation property; constructing a full-pixel infrared image segmentation objective function and segmenting the reconstructed full-pixel infrared image with the segmentation model; and performing infrared full-pixel image segmentation at the image segmentation layer to obtain the segmented image. The method applies multi-objective optimization theory to defect segmentation in the infrared reconstructed image, constructing objective functions that separately address the noise problem and the edge-blur problem to improve segmentation precision. It maintains a high defect detection rate while reducing the false detection rate, effectively extracts the damage-defect areas from the reconstructed image, and facilitates quantitative study of complex defects.
Description
Technical Field
The invention belongs to the technical field of damage detection and maintenance support for aerospace vehicles, and particularly relates to an automatic identification method for damaged areas of aerospace composite materials.
Background
Driven by the pressing weight-reduction requirements of aerospace vehicles, lightweight structural materials with excellent mechanical properties, particularly lightweight composites such as high-strength/high-modulus carbon-fiber composites and honeycomb structural materials, have increasingly become hot spots of aerospace research. Composite materials with special functions and purposes, such as stealth coating materials and carbon-based heat-shield materials, are likewise widely applied in the aerospace field. However, during manufacturing, assembly or in-service use of a composite, improper processes, repeated cyclic stresses or external impacts can cause serious quality problems such as delamination, debonding, porosity, cracks and impact defects. For example, an aircraft is easily struck by birds during take-off and landing, and a spacecraft suffers ultra-high-speed impacts from micro space debris during launch and on-orbit operation, producing damage such as perforation, impact pits, delamination and peeling on the surface of aerospace composite materials, so that the surface composite structure is damaged or its function degrades and fails. Therefore, to avoid serious accidents caused by damage defects during the service of aerospace composite components, detecting the damage defects and evaluating the quality of the composite are particularly critical.
Infrared thermographic testing is safe, intuitive, rapid, efficient, non-contact and capable of covering large detection areas, and therefore plays an important role in the damage detection of aerospace composites. Its basic principle follows Fourier heat conduction and infrared radiation: when the object under test receives external thermal excitation (irradiation by sunlight or an artificial source), material defects disturb the heat-conduction process, which appears as differences in the transient temperature response of the object's surface; a thermal infrared imager collects the surface temperature-field response, revealing the defect state of the surface and interior. The data collected by the imager form an infrared thermal image sequence of many frames, containing the temperature-change information (transient thermal response curve) of every pixel point in the inspected area; analysing and processing this sequence yields a reconstructed image of the defects, enabling visual detection of composite damage.
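As a concrete illustration of the data structure described above, the sketch below flattens a synthetic thermal image sequence into per-pixel transient thermal response curves. The array shapes and the synthetic cooling curves are illustrative, not the patent's data:

```python
import numpy as np

def transient_thermal_responses(seq):
    """Flatten a thermal image sequence S(m, n, t) into per-pixel
    transient thermal response curves (one row per pixel)."""
    M, N, T = seq.shape
    return seq.reshape(M * N, T)

# Synthetic 4x4 sequence over 10 frames: one "defect" pixel cools more slowly.
seq = np.tile(np.exp(-0.5 * np.arange(10)), (4, 4, 1))
seq[1, 2] = np.exp(-0.1 * np.arange(10))   # anomalous (defect-like) response
curves = transient_thermal_responses(seq)
print(curves.shape)                        # (16, 10): 16 pixels, 10 frames
```

Each row of `curves` is one pixel's temperature-over-time curve; the anomalous pixel lands at row 1·4+2 = 6.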
To evaluate damage defects accurately, the target defect region and the background region in the defect's infrared reconstructed image must be separated effectively. Compared with ordinary visible-light images, infrared images have lower resolution and blurrier edges; in a complex detection background especially, other heat sources or the strong heat reflectivity of the material make the background region cluttered and overlapping, reduce the contrast between target and background, seriously interfere with defect identification in the reconstructed image, and make accurate extraction and type identification of defect regions even harder. To solve these problems, the original image must be processed with an image segmentation algorithm that effectively separates the target and background regions, so correct defect segmentation is a key step in the target identification process. Existing research segments images with the FCM algorithm and its improved variants, but the segmentation problem is usually driven by a single loss function, i.e. one objective function. On the one hand, if the detail-retention requirement is fully satisfied, the defect detection rate improves to some extent, but noise is also retained, which easily misleads defect identification and raises the false detection rate. On the other hand, if only whole-image denoising is satisfied, then, because damage defects caused by tiny debris impacts are small in size and large in number, noise-like tiny defects are removed along with the noise, lowering the defect detection rate and precision.
Therefore, when conventional segmentation methods are applied to the defect infrared reconstructed images targeted by the present invention, the false detection rate and the detection rate cannot be balanced, and the segmentation effect is unsatisfactory. In particular, the infrared thermal image reflects the thermal radiation of the test piece and is easily affected by the environment and the imaging chain, so the background noise of the resulting defect infrared reconstructed image is large. Meanwhile, because of the difference in surface heat-radiation capability between defect and background regions, the edges in the defect infrared reconstructed image are not smooth and the edge regions are not clearly delimited, which hinders image segmentation.
To reduce the false detection rate of defects, improve the detection rate, remove noise and fully preserve details, a noise-elimination function and a detail-retention function are set up; and considering that infrared images reflect the temperature differences of regions after thermal excitation, and that temperature varies continuously so that no sharp contour divides the regions, an edge-preservation function is introduced to achieve accurate defect segmentation. When setting the noise-elimination function, a fuzzy factor is introduced and the neighbourhood information of the infrared image is fully exploited to suppress the influence of noisy pixel points on infrared image segmentation. However, infrared images are strongly affected by noise; when noise elimination works poorly, two similar defects may be merged into one class, or noise may be classed together with a boundary. A function measuring the inter-class dispersion is therefore introduced, which flexibly adjusts the distances between the cluster centres of the classes and resolves the difficulty of distinguishing pixel points belonging to different defect classes with small similarity.
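The inter-class dispersion idea, measuring how far the cluster centres sit from the global mean, can be sketched in an illustrative scalar form (the patent additionally weights this term with the parameter η_t; the function below is a simplified stand-in):

```python
import numpy as np

def between_class_dispersion(centres, pixels):
    """Squared distance of each cluster centre from the global pixel mean.
    Larger values indicate better-separated classes (illustrative form;
    the patent weights this term per class with eta_t)."""
    x_bar = pixels.mean()
    return np.array([(v - x_bar) ** 2 for v in centres])

pixels = np.array([0.1, 0.1, 0.9, 0.9])
well_separated = between_class_dispersion([0.10, 0.90], pixels)
nearly_merged  = between_class_dispersion([0.45, 0.55], pixels)
```

With the global mean at 0.5, the well-separated centres score far higher than the nearly merged pair, which is exactly the situation (two similar classes collapsing into one) the dispersion term is meant to penalise.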
When setting the detail-retention function, to preserve more defect-detail information the segmented classes should be internally compact and mutually well separated. To enhance tiny-defect information, the correlation between the position and gray value of a neighbourhood pixel point and those of the centre pixel point is considered, and a correlation coefficient is introduced: if the correlation between a neighbourhood pixel and the centre pixel is large, that pixel's information is included in the objective function; if it is small, it is not. When setting the edge-retention function, edge pixels are computed from local gradient information to revise the edges of the infrared image. The key to accurate segmentation is how strongly the neighbourhood pixels influence the centre pixel, so this influence is computed from the correlation of pixel gray-level differences: when the correlation is large, the neighbourhood pixel and the centre pixel belong to the same class, and the defect edge information is enhanced by amplifying the influence of neighbourhood pixels on the membership degree of the centre pixel, thereby improving the image segmentation effect.
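A joint position/gray-value correlation of the kind described can be sketched as follows. This follows the common FGFCM-style similarity form; the constants, parameter names and the exact expression in the patent's equations are assumptions for illustration:

```python
import math

def pixel_similarity(pos_i, pos_j, g_i, g_j, lambda_s=3.0, lambda_g=6.0, sigma_i=1.0):
    """Joint spatial/gray similarity between a centre pixel i and a
    neighbourhood pixel j. Decays with Chebyshev spatial distance
    (scaled by lambda_s) and squared gray difference (scaled by
    lambda_g * sigma_i^2). Constants are illustrative."""
    spatial = math.exp(-max(abs(pos_i[0] - pos_j[0]),
                            abs(pos_i[1] - pos_j[1])) / lambda_s)
    gray = math.exp(-((g_i - g_j) ** 2) / (lambda_g * sigma_i ** 2))
    return spatial * gray

# Normalised gray values in [0, 1]: a close, similar pixel vs a far, different one.
w_near = pixel_similarity((5, 5), (5, 6), 0.50, 0.52)
w_far  = pixel_similarity((5, 5), (7, 7), 0.50, 0.90)
```

Pixels that are both spatially close and similar in gray value get weights near 1 and contribute to the centre pixel's membership; distant or dissimilar pixels are discounted.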
Once the objective functions realizing the three segmentation properties are established, the new problem is how to adjust the weight coefficients of the three objective functions so that the combined segmentation objective achieves the best segmentation performance. The method adopts a double-layer segmentation model: the first layer obtains the weight coefficient of each objective function through a multi-objective optimization algorithm, and the second layer constructs the segmentation objective function from the obtained weight coefficients to perform the infrared image segmentation.
When solving for the weight coefficient of each objective function, a multi-objective algorithm is applied to the processed low-quality infrared image containing complete defect information: the multi-objective optimization problem is decomposed through weight vectors into several scalar subproblems, and the weight-vector components of each subproblem reflect the importance of each objective function to the segmentation objective.
The invention performs defect detection based on multi-objective optimized segmentation: a thermal infrared imager records the surface temperature-field changes of the object under test, meeting the requirements of in-situ, non-contact non-destructive testing, and the infrared thermal image sequence is analysed and processed to achieve high-precision detection and identification of complex defects. The algorithm samples the infrared thermal image sequence with varying row-column step lengths to obtain a data set of transient thermal response curves with typical temperature-change characteristics, accelerating subsequent data classification. The FCM (fuzzy C-means) algorithm yields the membership degree of each point with respect to the cluster centres; each transient thermal response curve in the data set is classified by comparing memberships, and the typical thermal response curve of each class is selected to reconstruct the infrared thermal image and obtain a defect reconstruction image. On this basis, the method further applies multi-objective optimization theory to defect segmentation in the infrared reconstructed image, constructing suitable objective functions for the noise problem and the edge-blur problem respectively to improve segmentation precision, ensuring a high defect detection rate while reducing the false detection rate, thereby effectively extracting the damage-defect areas in the reconstructed image and facilitating quantitative study of complex defects.
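The FCM classification of transient thermal response curves mentioned above can be sketched with the textbook FCM iteration. This is plain fuzzy C-means on synthetic curves; the patent's segmentation stage later replaces this plain objective with its three modified objectives:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Textbook fuzzy C-means: alternate the centre update
    v_t = sum(u^m x) / sum(u^m) and the membership update
    u_it = 1 / sum_s (d_it / d_is)^(2/(m-1))."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]               # cluster centres
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
    return U, V

# Two families of synthetic transient thermal response curves.
t = np.arange(8)
slow = np.stack([np.exp(-0.1 * t) + 0.01 * k for k in range(5)])   # defect-like
fast = np.stack([np.exp(-0.6 * t) + 0.01 * k for k in range(5)])   # background-like
U, V = fcm(np.vstack([slow, fast]), c=2)
labels = U.argmax(axis=1)
```

Comparing membership degrees (`argmax` over `U`) assigns each curve to a class, after which one typical curve per class can be chosen for reconstruction.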
Disclosure of Invention
An object of the present invention is to solve at least the above problems and/or disadvantages and to provide at least the advantages described hereinafter.
To achieve these objects and other advantages, and in accordance with the purpose of the invention, there is provided an automatic identification method for damaged areas of aerospace composite material, comprising the following steps:
step one, after extracting the effective information from the collected infrared data of the test piece, classify the data according to defect type and extract the typical transient thermal response of each defect class;
step two, forming a transformation matrix from the selected typical transient thermal responses to obtain infrared reconstructed images;
step three, calculating the coefficient of variation of the pixels of the K reconstructed infrared images of size M × N, and sampling the most prominent pixels by measuring the homogeneity between neighbourhood pixels and the central pixel, to obtain K low-quality infrared reconstructed images that contain complete defect information and Kn pixel points each;
step four, in the low-quality infrared reconstructed image corresponding to each infrared reconstructed image, containing Kn pixel points and complete defect information, measure the segmentation performance in the three respects of noise removal, detail retention and edge maintenance with multiple objectives, and obtain the weight coefficient of each segmentation property to construct the segmentation objective function; construct the infrared image segmentation function under the guidance of the three aims of noise removal, detail preservation and edge preservation;
step five, constructing the first layer of the double-layer segmentation model, which is the weight-coefficient determination layer: in the extracted low-quality infrared reconstructed image containing complete defect information, set up a multi-objective optimization problem with a multi-objective optimization algorithm to balance the three objective functions, and obtain the weight coefficients of the objective functions realizing each segmentation property by combining the multi-objective optimization algorithm with weight vectors; the specific steps comprise:
step S51, initializing the parameters of the multi-objective optimization algorithm: acquire Kn uniformly distributed weight vectors and compute the T weight vectors nearest to each weight vector; sample uniformly in the feasible space of the multi-objective optimization problem to generate the initial population; initialize the multi-objective optimization function; decompose the problem into subproblems using a Tchebycheff-based decomposition model; set the external population EP to the empty set;
s52, updating individuals in the population by an evolutionary multi-objective optimization algorithm; after updating the individuals each time, taking noise elimination as preference, and adjusting the weight vector according to the preference;
step S53, selecting a trade-off solution to obtain a weight coefficient for removing noise, retaining details and keeping the edge function segmentation performance;
step six, constructing the full-pixel infrared image segmentation objective function: input the weight coefficients obtained in step five into the second layer of the double-layer segmentation model, which is the image segmentation layer, and use the segmentation model to segment the reconstructed full-pixel infrared image of M × N pixel points;
and step seven, according to the membership-degree and cluster-centre update formulas derived from the full-pixel infrared image segmentation objective function constructed in step six, input the stopping threshold and the maximum number of iterations of the algorithm, and perform infrared full-pixel image segmentation at the image segmentation layer to obtain the segmented image of the test piece's defect infrared image.
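Steps S51–S53 follow the MOEA/D pattern: decompose the problem via uniformly spread weight vectors, keep the T nearest neighbours per subproblem, and scalarise with the Tchebycheff function. Below is a minimal sketch on a toy bi-objective problem; the variation operators and toy objectives are illustrative stand-ins for the patent's three segmentation objectives:

```python
import numpy as np

def tchebycheff(F, lam, z):
    """Tchebycheff scalarisation g(x | lambda, z*) = max_i lam_i * |f_i - z_i|."""
    return np.max(lam * np.abs(F - z))

def moead(objs, n_sub=11, T=3, gens=60, dim=1, seed=0):
    """Minimal MOEA/D loop (decomposition into scalar subproblems)."""
    rng = np.random.default_rng(seed)
    w = np.linspace(0, 1, n_sub)
    lams = np.stack([w, 1 - w], axis=1)                     # uniform weight vectors
    dist = np.linalg.norm(lams[:, None] - lams[None], axis=2)
    B = np.argsort(dist, axis=1)[:, :T]                     # T nearest neighbours
    pop = rng.random((n_sub, dim))                          # initial population
    F = np.array([objs(x) for x in pop])
    z = F.min(axis=0)                                       # ideal point z*
    for _ in range(gens):
        for i in range(n_sub):
            j, k = rng.choice(B[i], size=2, replace=False)  # mate within neighbourhood
            child = np.clip((pop[j] + pop[k]) / 2 + rng.normal(0, 0.1, dim), 0, 1)
            fc = objs(child)
            z = np.minimum(z, fc)                           # update ideal point
            for b in B[i]:                                  # update neighbour subproblems
                if tchebycheff(fc, lams[b], z) < tchebycheff(F[b], lams[b], z):
                    pop[b], F[b] = child, fc
    return F, lams

# Toy trade-off: f1 = x, f2 = 1 - x (the Pareto front is the whole segment).
F, lams = moead(lambda x: np.array([x[0], 1 - x[0]]))
```

Each subproblem's weight vector reflects the relative importance of the objectives, and selecting one trade-off solution (step S53) yields the weight coefficients used by the second layer.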
Preferably, the specific method of step one comprises: extracting the effective transient thermal responses from the acquired d-dimensional infrared thermal image sequence S(m, n, y), where m and n denote the m-th row and n-th column of the three-dimensional matrix and the third dimension y denotes the frame number of the infrared thermal image; dividing the extracted effective transient thermal responses into K regions according to the K defect types, and extracting from each divided defect region the typical transient thermal response that best represents the defect characteristics of that class.
Preferably, the method for obtaining the infrared reconstructed image in step two includes: from the K d-dimensional typical transient thermal responses extracted in step one, form a linear transformation matrix H1 of dimension d × K. Convert S(m, n, y) from a three-dimensional matrix into a two-dimensional matrix, i.e. vectorize each frame of the infrared thermal video: take the values of each frame's image matrix row by row to obtain a vector containing the pixel-point temperature information of one frame, use it as a row vector of the new matrix, and thereby construct a two-dimensional matrix P(x, y) of size a × b with a = d and b = M × N. The reconstruction is the linear transformation of P by H1, i.e. O = H1⁺ · P, where H1⁺ is the K × d dimensional pseudo-inverse matrix of H1. Taking the rows of the two-dimensional image matrix O and reshaping each to the size of the original image yields K infrared reconstructed images of size M × N.
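The pseudo-inverse reconstruction described above can be sketched directly; the data here is random and the shapes follow the patent's d × K matrix H1 and d × (M·N) matrix P:

```python
import numpy as np

# Shapes follow the patent: d frames, M x N pixels, K typical responses.
d, M, N, K = 20, 8, 8, 3
rng = np.random.default_rng(1)

S = rng.random((M, N, d))          # thermal image sequence S(m, n, y)
P = S.reshape(M * N, d).T          # d x (M*N): each frame vectorised row-wise
H1 = rng.random((d, K))            # columns = K typical transient thermal responses
O = np.linalg.pinv(H1) @ P         # K x (M*N): project frames onto typical responses
recon = O.reshape(K, M, N)         # K reconstructed M x N images
print(recon.shape)                 # (3, 8, 8)
```

`np.linalg.pinv` gives the Moore–Penrose pseudo-inverse, so for a full-column-rank H1 the product H1⁺·H1 is the K × K identity and each row of O is one reconstructed image to be reshaped to M × N.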
Preferably, the infrared image segmentation function constructed in the fourth step under the guidance of three purposes of noise removal, detail preservation and edge preservation is as follows:
f4(v)=ω1·f1(v)+ω2·f2(v)+ω3·f3(v)
where ω1, ω2, ω3 are the weight coefficients of the three objective functions;
step S41, f1(v) is a single-target noise-removal function (SGNS) for solving the noise problem. A fuzzy factor is introduced into the FCM algorithm, and the Euclidean distance d_ij between pixel points within a neighbourhood window of the reconstructed image is used to establish the spatial constraint relationship among the pixel points; for the problem that similar classes with small differences are hard to distinguish, an inter-class dispersion measurement function is introduced. The designed f1(v) combines a kernel-induced distance term, a weighted fuzzy-factor term and an inter-class dispersion term, in which:
- Kn is the number of pixel points in the low-quality infrared image and c is the number of clusters;
- u_ti is the membership degree of pixel point x_i with respect to the cluster centre v_t, and m ∈ [1, ∞) is a smoothing parameter;
- W_i^r is the neighbourhood window of size r × r centred on pixel x_i of the infrared reconstructed image, and x_j denotes a pixel in that window;
- K(x_i, v_t) is the Gaussian radial-basis similarity measure between a pixel x_i of the infrared reconstructed image and the cluster centre v_t;
- G′_tj is the new weighted fuzzy factor relating the j-th pixel in the neighbourhood of x_i to the cluster centre v_t; it is built from a spatial distance constraint ζ_dc and a spatial gray-level constraint ζ_gc, with x_j ranging over all pixel points in the r × r neighbourhood;
- M_i is the ratio of the variance to the square of the mean, and ε_ij is the projection of the mean-square error between neighbourhood pixel point x_j and central pixel point x_i in the kernel space; the constant 2 strengthens the suppression effect of the neighbourhood pixel points on the centre pixel point;
- η_t is the inter-class dispersion parameter; the cluster centre v_t represents the temperature mean of the pixel points of the current class, and the temperature mean of all pixel points in the infrared image is also used.
The function f1(v) is minimised subject to the membership constraint that the memberships u_ti of each pixel over the c clusters sum to 1. The membership degree u_ti of pixel x_i with respect to cluster centre v_t is obtained by the Lagrange multiplier method, and the cluster centre v_t is updated with the corresponding closed-form formula.
step S42, f2(v) is a single-target detail-retention function (SGDR) for solving the detail-retention problem. Considering the local spatial information of the image further guides the segmentation of image pixels and alleviates the edge-blur problem; a correlation coefficient m_ij measuring pixel position and gray value is introduced. The detail-retention function f2(v) is constructed from the following quantities:
- Kn is the number of pixel points in the low-quality infrared image, c is the number of clusters, v_t is the cluster centre, u_ti is the membership degree of pixel point x_i with respect to v_t, and m ∈ [1, ∞) is a smoothing parameter;
- δ_i represents the local spatial information; N_i is the set of pixels in a neighbourhood window centred on the i-th pixel, and x_a is the a-th pixel in N_i;
- the correlation between a neighbourhood pixel x_i and the centre v_t combines spatial and gray-level closeness: writing the spatial coordinates of x_i and v_t as (x_im, y_in) and (v_tm, v_tn) and their gray values as g(x_i) and g(v_t), the correlation decays with the spatial distance, scaled by the spatial-scale influence factor λ_s, and with the gray-level difference, scaled by the gray influence factor λ_g and the mean gray variance σ_i of the neighbourhood pixels centred on x_i;
- |N_i| is the number of pixels in the neighbourhood set N_i.
The function f2(v) is minimised subject to the constraint that the memberships of each pixel sum to 1; the membership degree u_ti of pixel x_i with respect to cluster centre v_t and the update formula of the cluster centre v_t are obtained by the Lagrange multiplier method.
Step S43: f_3(v) is a single-objective edge-preservation function (SOEM) for solving the edge-preservation problem. To obtain an accurate segmentation result, an edge-preservation function that segments by gray level is introduced into the objective function, and to strengthen edge information an amplification function A_ti is introduced to amplify the influence of neighborhood pixel x_i on the membership of central pixel v_t. The edge-preservation function f_3(v) is constructed as shown in the following formula:
where Kn is the number of pixels in the low-quality infrared image, c is the number of clusters, n denotes the gray value of a pixel, u_ti denotes the membership of the pixel x_i with gray value n with respect to the current cluster center v_t, m ∈ [1, ∞) is a smoothing parameter, U_n is the number of gray levels of the infrared image, and N_n is the number of pixels with gray value n. For the pixels in the low-quality image containing Kn pixels, there is:
N_i is the set of neighborhood pixels centered on pixel x_i, |N_i| is the number of pixels in the set, g(x_i) and g(x_j) denote the gray values of pixel x_i and its neighborhood pixel x_j respectively, and the average gray difference between pixel x_j in the neighborhood set N_i and the central pixel x_i is used. The function f_3(v) satisfies the constraint that the memberships of each pixel sum to one, and the membership u_ti of pixel x_i with respect to cluster center v_t is obtained by the Lagrange multiplier method.
The update formula for the cluster center v_t is:
thereby completing the construction of the infrared image segmentation function.
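A minimal sketch of how the three single-objective terms combine into the weighted segmentation function f_4 of step four; the toy stand-ins below only illustrate the weighted combination, and are not the patent's actual f_1 (noise suppression), f_2 (detail retention), or f_3 (edge preservation):

```python
import numpy as np

# Illustrative surrogates only -- the patent's real objectives are far richer.
def f1(v): return np.sum((v - 0.5) ** 2)      # noise-suppression surrogate
def f2(v): return np.sum(np.abs(np.diff(v)))  # detail-retention surrogate
def f3(v): return np.max(np.abs(v))           # edge-preservation surrogate

def f4(v, w):
    """Scalarized segmentation objective f4 = w1*f1 + w2*f2 + w3*f3."""
    return w[0] * f1(v) + w[1] * f2(v) + w[2] * f3(v)

v = np.array([0.2, 0.8])   # a toy pair of cluster centers
```

Setting one weight to 1 and the others to 0 recovers the corresponding single objective, which is how the weight coefficients trade the three properties off.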
Preferably, in step five, using a multi-objective optimization algorithm to obtain the weight coefficient of each objective function in the low-quality infrared reconstructed image with Kn pixels specifically comprises: in the extracted low-quality infrared reconstructed image containing complete defect information, the three objective functions are balanced by a multi-objective optimization algorithm, and the multi-objective optimization problem is set as:
min F(v) = [f_1(v), f_2(v), f_3(v)]^T
s.t. v = (v_1, …, v_c)^T
where c is the number of classes and v = (v_1, …, v_c)^T denotes a set of candidate cluster centers. The multi-objective optimization problem is decomposed into several scalar subproblems using weight vectors, and the components of each subproblem's weight vector reflect the importance of each objective function to the segmentation objective function.
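The decomposition into scalar subproblems can be sketched with the standard Tchebycheff scalarization named in step S515 (this is the textbook form, not copied from the patent's lost formula image):

```python
import numpy as np

def g_te(F, lam, f_star):
    """Tchebycheff scalarization of one subproblem:
    g^te(s | lam, f*) = max_t lam_t * |f_t(s) - f_t*|."""
    return np.max(lam * np.abs(F - f_star))

F = np.array([0.4, 0.2, 0.6])       # objective values of one candidate solution
f_star = np.array([0.1, 0.1, 0.1])  # current best value of each objective
lam = np.array([0.5, 0.25, 0.25])   # subproblem weight vector
```

Minimizing g_te for many different weight vectors traces out solutions with different trade-offs among the three objectives.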
The specific steps of the multi-target algorithm for solving the weight coefficient of each target function in the low-quality infrared reconstructed image with the number of pixels Kn are as follows:
step S51, initializing parameters of the multi-objective optimization algorithm, and specifically comprising the following steps:
Step S511: set the objective function F(v) of the multi-objective optimization algorithm, the maximum iteration number g_max, the thresholds ζ and ε, the population size Kn, and the number T of weight vectors in each neighborhood;
Step S512: obtain Kn uniformly distributed weight vectors λ_1, …, λ_Kn, and for each weight vector compute the T nearest weight vectors B(i) = {i_1, …, i_T}, i = 1, …, Kn, where λ_i1, …, λ_iT are the T weight vectors nearest to λ_i;
Step S513: sample uniformly in the feasible space of the multi-objective optimization problem to generate an initial population s_1, …, s_Kn, and let FV_i = F(s_i), i = 1, …, Kn;
Step S514: initialize the reference point f* = (f_1*, f_2*, f_3*), the optimal value of each objective function in the image-segmentation multi-objective problem;
Step S515: decompose the subproblems with a Tchebycheff-based decomposition model; the jth subproblem is:
g^te(s | λ^j, f*) = max over t = 1, 2, 3 of { λ_t^j · |f_t(s) − f_t*| }
where λ^j = (λ_1^j, λ_2^j, λ_3^j) is the weight vector of the jth subproblem, λ_1^j controls the weight of the noise-suppression function, λ_2^j the weight of the detail-retention function, and λ_3^j the weight of the edge-preservation function, and f_1*, f_2*, f_3* are the current optimal function values of the three functions;
step S516, setting an external population EP as an empty set;
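The initialization of steps S511–S512 can be sketched as follows, assuming a standard simplex-lattice design for the uniform weight vectors (the patent does not specify how uniformity is achieved):

```python
import numpy as np

def uniform_weights(h):
    """Simplex-lattice weight vectors for 3 objectives:
    all (i/h, j/h, k/h) with i + j + k = h."""
    return np.array([(i / h, j / h, (h - i - j) / h)
                     for i in range(h + 1) for j in range(h + 1 - i)])

def neighborhoods(lams, T):
    """B(i): indices of the T weight vectors closest (Euclidean) to lam_i."""
    d = np.linalg.norm(lams[:, None, :] - lams[None, :, :], axis=2)
    return np.argsort(d, axis=1)[:, :T]

lams = uniform_weights(4)        # 15 weight vectors on the 3-objective simplex
B = neighborhoods(lams, 3)       # each row lists a vector's 3 nearest neighbors
```

Each weight vector is its own nearest neighbor (distance zero), so B(i) always contains i, matching the MOEA/D convention.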
Step S52: update the multi-objective optimization algorithm; while the iteration count is less than the maximum iteration number g_max, each iteration first performs step S521 to update the individuals and then performs step S522 to adjust the weight vectors;
step S521, updating the individuals in the population, specifically including:
Step S5211, reproduction: randomly select two indices k and l from B(i), and use a differential evolution operator on s_k and s_l to generate a new solution e for the image-segmentation multi-objective problem;
step S5212, improvement: carrying out constraint condition processing proposed in the image segmentation multi-objective optimization problem on the e to generate e';
Step S5213, update the reference point f*: for each objective t, if f_t(e') is better than the stored value f_t*, set f_t* = f_t(e');
Step S5214, update the neighborhood solutions: if, by the Tchebycheff expression, g^te(e' | λ^j, f*) ≤ g^te(s_j | λ^j, f*) for j ∈ B(i), then set s_j = e' and FV_j = F(e');
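A hedged sketch of the reproduction step S5211, assuming the common MOEA/D-DE operator e = s_i + F·(s_k − s_l) with simple box clipping standing in for the constraint handling of step S5212 (the patent does not spell out its DE variant):

```python
import numpy as np

rng = np.random.default_rng(0)

def de_offspring(pop, i, B, F=0.5, lo=0.0, hi=1.0):
    """Pick two neighbor indices k, l from B[i] and build
    e = s_i + F*(s_k - s_l), clipped to the feasible box."""
    k, l = rng.choice(B[i], size=2, replace=False)
    e = pop[i] + F * (pop[k] - pop[l])
    return np.clip(e, lo, hi)   # crude stand-in for the "improvement" step

pop = rng.random((6, 3))              # 6 individuals, 3 cluster centers each
B = np.array([[0, 1, 2]] * 6)         # toy neighborhood table
e = de_offspring(pop, 0, B)
```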
Step S522, adjusting the weight vector, specifically includes:
Step S5221: compute the distance from each individual in the population to the current cluster center;
Step S5222: find all individuals inside the high-dimensional sphere of radius r centered on the current cluster center, and compute the standard deviation of all individuals in that region, where the mean is taken over all individuals distributed in the region, R is the number of individuals in the region, and s_r, r = 1, …, R, are the individuals distributed in the region;
Step S5223: from the U individuals, select the Up individuals with the smallest standard deviation as preference-region reference points, and for each of the Up individuals and its corresponding weight vector λ_Up perform the following update operations:
Step S52231: compute the base weight vector, where f* is the optimal value of each objective function in the image-segmentation multi-objective problem;
Step S52232: find the individual in the population at the greatest Euclidean distance from the preference-region reference point, and find its corresponding weight vector λ_m;
Step S52233: compute the generated weight vector:
λ_Upnew = λ_Up + step · λ_m
where λ_Upnew = (λ_Upnew1, λ_Upnew2, λ_Upnew3) and step is a preset step-length parameter;
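A small numeric sketch of the weight-vector adjustment λ_Upnew = λ_Up + step·λ_m; the renormalization so the components again sum to one is an assumption here, not stated in the patent:

```python
import numpy as np

def adjust_weight(lam_up, lam_m, step=0.2):
    """Preference-driven adjustment lam_new = lam_up + step*lam_m,
    followed by renormalization (assumed, not specified in the source)."""
    lam_new = lam_up + step * lam_m
    return lam_new / lam_new.sum()

lam_up = np.array([0.5, 0.3, 0.2])   # weight vector of a preference individual
lam_m  = np.array([0.8, 0.1, 0.1])   # weight vector of the most distant individual
lam_new = adjust_weight(lam_up, lam_m)
```

With λ_m weighted toward the first component (noise elimination), the adjusted vector shifts the same way, matching the stated preference.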
Step S52234: using the preference-region reference point and the most distant individual, randomly generate a new solution as follows:
Step S52235: take the newly generated individual as a new cluster center; with it as the center, compute the current membership according to the membership and cluster-center formulas corresponding to the three objective functions:
calculating new clustering center according to current membership
Step S523, update EP: removing all vectors dominated by F (e '), and adding e ' to the EP if F (e ') is not dominated by vectors within the EP;
Step S53, terminate the iteration: if the termination condition g = g_max is satisfied, output EP, which gives the optimal cluster-center set of the image-segmentation multi-objective problem; otherwise, increase the iteration number g to g + 1 and return to step S52.
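The external-population update of step S523 is a standard Pareto-dominance filter, which can be sketched as:

```python
import numpy as np

def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return bool(np.all(a <= b) and np.any(a < b))

def update_ep(ep, F_new):
    """Drop EP members dominated by F(e'); add F(e') if nothing in EP
    dominates it."""
    ep = [F for F in ep if not dominates(F_new, F)]
    if not any(dominates(F, F_new) for F in ep):
        ep.append(F_new)
    return ep

ep = [np.array([0.3, 0.4, 0.5]), np.array([0.6, 0.1, 0.2])]
ep = update_ep(ep, np.array([0.2, 0.3, 0.4]))   # dominates the first member
```

After the update, the dominated vector is gone and the new non-dominated vector is in EP.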
Preferably, the sixth step of constructing the full-pixel infrared image segmentation objective function includes the specific steps of: inputting the weight coefficient obtained in the fifth step into a second layer of the double-layer segmentation model, wherein the second layer of the double-layer segmentation model is an image segmentation layer, and performing image segmentation on the full-pixel infrared image with the number of pixel points being M multiplied by N obtained through reconstruction by using the double-layer segmentation model;
In the second layer of the double-layer segmentation model, the following optimization function is constructed for the full M × N pixels of the infrared thermal image:
When solving this objective function, because the separability measure of the detail-retention function f_2(v) does not contain the membership u_ti of pixel x_i with respect to cluster center v_t, the membership function and the cluster center are solved by the Lagrange multiplier method for the following functions:
For simplicity, let M(x_i, v_t) = ||x_i − v_t||² + ||δ_i − v_t||²; the membership update formula is then:
meanwhile, the updating formula of the clustering center is as follows:
and completing the construction of the full-pixel infrared image segmentation target function.
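Using the combined distance M(x_i, v_t) = ||x_i − v_t||² + ||δ_i − v_t||² defined above, the membership update takes the same Lagrange-multiplier closed form as FCM with M replacing the squared distance; a minimal sketch for scalar pixels:

```python
import numpy as np

def membership_spatial(x, delta, v, m=2.0):
    """Membership update with M(x_i, v_t) = ||x_i - v_t||^2 + ||delta_i - v_t||^2,
    where delta_i carries the local spatial (neighborhood) information."""
    M = (x[:, None] - v[None, :]) ** 2 + (delta[:, None] - v[None, :]) ** 2 + 1e-12
    # Same closed form as FCM, with M replacing the squared distance.
    return 1.0 / np.sum((M[:, :, None] / M[:, None, :]) ** (1.0 / (m - 1)), axis=2)

x = np.array([0.0, 1.0, 0.9])
delta = np.array([0.1, 0.9, 0.8])   # e.g. neighborhood means of each pixel
u = membership_spatial(x, delta, np.array([0.1, 0.9]))
```

The spatial term pulls a noisy pixel toward the cluster of its neighborhood, which is the point of including δ_i.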
Preferably, the step seven of implementing infrared full-pixel image segmentation on the second layer of the double-layer segmentation model specifically comprises the following steps:
step S71, initializing iteration times t, generating an initial clustering center and calculating initial membership;
Step S73: update the membership degree according to the membership update formula;
Step S74: update the cluster center according to the cluster-center update formula;
Step S76: if the change in the cluster centers is smaller than the threshold, or t = t_max, end the segmentation algorithm; assign each pixel to the defect region for which its membership value is largest to obtain the segmented image, i.e., the final segmentation result of the whole observed image for the full-pixel infrared reconstructed image.
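The iteration of steps S71–S76 can be sketched as the loop below, assuming scalar pixels and a plain membership-weighted center update (a simplification of the patent's full update formulas); the stopping test and max-membership labeling follow step S76:

```python
import numpy as np

def segment(x, delta, v0, m=2.0, eps=1e-4, t_max=100):
    """Iterate membership/center updates until the center shift drops
    below eps or t_max is reached, then label pixels by max membership."""
    v = v0.copy()
    for _ in range(t_max):
        M = (x[:, None] - v) ** 2 + (delta[:, None] - v) ** 2 + 1e-12
        u = 1.0 / np.sum((M[:, :, None] / M[:, None, :]) ** (1 / (m - 1)), axis=2)
        v_new = (u ** m).T @ x / np.sum(u ** m, axis=0)   # simplified update
        shift = np.linalg.norm(v_new - v)
        v = v_new
        if shift < eps:                                    # step S76 stopping test
            break
    return np.argmax(u, axis=1), v   # each pixel -> region of max membership

x = np.array([0.0, 0.05, 0.95, 1.0])
labels, v = segment(x, x, np.array([0.2, 0.8]))
```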
The invention at least comprises the following beneficial effects: the method for automatically identifying the damaged area of the aerospace composite material searches and compares the maxima of the temperature points in the infrared thermal image sequence data along the row direction to obtain the column transformation step length, blocks the data using the temperature maxima of the transient thermal response curves to obtain the row transformation step length of each data block, samples with the column and row transformation step lengths to obtain a sampled data set of transient thermal response curves containing typical temperature changes, and obtains the classification membership of the sampled data set with the FCM algorithm. Each transient thermal response curve in the data set is classified by membership, and the defect image is reconstructed from the classified typical thermal response curves. A double-layer, multi-objective-optimized thermal image segmentation framework is then constructed to realize accurate segmentation of the defects.
Meanwhile, the method for automatically identifying the damaged area of the aerospace composite material has the following beneficial effects:
(1) The double-layer multi-objective-optimized thermal image segmentation framework of the invention introduces multi-objective theory and establishes a separate objective function for each of the three target problems to be solved, addressing the segmentation problem in a targeted way, so that the segmented image is balanced among the three and the resulting image exhibits noise elimination, detail retention, and edge preservation simultaneously. To select the most appropriate weight coefficients in the space, the invention combines weight-vector adjustment with the iterative solution of the multi-objective algorithm. When adjusting the weight vector, because the infrared image is strongly affected by noise while the detail-retention and edge-preservation functions relate to the detail information of defects, the influence of the noise-elimination function in the segmentation objective function is deliberately made somewhat larger: noise elimination is taken as the preference, the weight vector is adjusted based on this preference, and the low-quality infrared image is searched to obtain the weight coefficients that best reflect the importance of each objective function. The weight coefficients obtained by searching the low-quality infrared image are then returned to the full-pixel infrared image, which is segmented according to the segmentation objective function solved with those weight coefficients.
(2) Under the premise of guaranteed segmentation quality, the double-layer segmentation model of the invention overcomes the low computational efficiency caused by the multi-objective algorithm and the huge volume of experimentally acquired infrared data.
(3) The double-layer multi-target optimized thermal image segmentation framework provided by the invention does not need to repeatedly calculate the weight coefficients of the target functions corresponding to the noise elimination, the detail retention and the edge maintenance, and has stronger applicability.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Description of the drawings:
FIG. 1 is a flow chart of an automatic identification method for damaged areas of aerospace composites according to the invention;
FIG. 2 is a PF surface map obtained after solving a multi-objective optimization problem in the embodiment of the present invention;
FIG. 3 is a TTR curve of a background region of an impact pit in an embodiment of the present invention;
FIG. 4 is an infrared reconstructed image corresponding to a TTR curve of a background region of an impact pit according to an embodiment of the present invention;
FIG. 5 is a TTR curve of a composite material impacting the interior of a pit in an embodiment of the present invention;
FIG. 6 is an infrared reconstructed image corresponding to a TTR curve inside a composite material impact pit according to an embodiment of the present invention;
FIG. 7 is a TTR curve of a composite material impacting the edge of a pit in an embodiment of the present invention;
FIG. 8 is an infrared reconstructed image corresponding to a TTR curve of a composite impact pit edge in an embodiment of the invention;
FIG. 9 is a graph of the composite impact pit edge reconstructed image defect segmentation result in accordance with an embodiment of the present invention;
FIG. 10 is a graph of the defect segmentation result of the reconstructed image inside the composite material impact pit according to the embodiment of the invention.
The specific implementation mode is as follows:
the present invention is further described in detail below with reference to the attached drawings so that those skilled in the art can implement the invention by referring to the description text.
It will be understood that terms such as "having," "including," and "comprising," as used herein, do not preclude the presence or addition of one or more other elements or groups thereof.
As shown in fig. 1: the invention discloses an automatic identification method of an aerospace composite material damage area, which comprises the following steps:
step one, after effective information is extracted from collected test piece infrared data, classifying the collected test piece infrared data according to defect types and extracting typical transient thermal response of each type of defects;
secondly, forming a transformation matrix by the selected typical transient thermal response to obtain an infrared reconstruction image;
step three, calculating the coefficient of variation of the pixels of the K reconstructed infrared images of dimension M × N, and sampling the most prominent pixels by measuring the homogeneity between neighborhood pixels and central pixels, to obtain K low-quality infrared reconstructed images containing Kn pixels with complete defect information;
step four, in the low-quality infrared reconstructed image with Kn pixels and complete defect information corresponding to each reconstructed infrared image obtained by the processing, measuring the segmentation performance in the three aspects of noise removal, detail retention, and edge preservation with multiple objectives, and obtaining the weight coefficient of each aspect to construct the segmentation objective function; constructing the infrared image segmentation function under the guidance of the three aims of noise removal, detail preservation, and edge preservation;
step five, constructing a first layer of a double-layer segmentation model, wherein the first layer of the double-layer segmentation model is a weight coefficient determining layer, and setting a multi-objective optimization problem by adopting a multi-objective optimization algorithm to balance three set objective functions in the extracted low-quality infrared reconstruction image containing complete defect information; the method uses a multi-objective optimization algorithm to combine with a weight vector to obtain a weight coefficient of an objective function for realizing each segmentation performance, and comprises the following specific steps:
s51, initializing parameters of the multi-objective optimization algorithm; acquiring Kn weight vectors which are uniformly distributed, and calculating T weight vectors which are nearest to each weight vector; uniformly sampling in a feasible space which meets a multi-objective optimization problem to generate an initial population; initializing a multi-objective optimization function; decomposing the subproblems by adopting a decomposition model based on Chebyshev; setting an external population EP as an empty set;
s52, updating individuals in the population by an evolutionary multi-objective optimization algorithm; after updating the individuals each time, taking noise elimination as preference, and adjusting the weight vector according to the preference;
step S53, selecting a trade-off solution to obtain a weight coefficient for removing noise, retaining details and keeping the edge function segmentation performance;
step six, constructing a full-pixel infrared image segmentation target function, inputting the weight coefficient obtained in the step five into a second layer of the double-layer segmentation model, wherein the second layer of the double-layer segmentation model is an image segmentation layer, and performing image segmentation on the full-pixel infrared image with the number of pixel points of M multiplied by N obtained by reconstruction by using the segmentation model;
and step seven, according to the membership degree and the clustering center updating formula which are obtained by the full-pixel infrared image segmentation target function constructed in the step six, inputting a threshold value and the maximum iteration times for stopping judgment of the algorithm, and realizing infrared full-pixel image segmentation on an image segmentation layer to obtain a segmented image of the test piece defect infrared image.
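Step three above can be sketched as follows; the 3×3 window, the use of the coefficient of variation (std/mean) as the homogeneity score, and the kept fraction are all illustrative assumptions standing in for the patent's exact homogeneity measure:

```python
import numpy as np

def sample_prominent(img, q=0.5):
    """Score each pixel by the coefficient of variation of its 3x3
    neighborhood and keep the fraction q with the highest scores."""
    M, N = img.shape
    pad = np.pad(img, 1, mode='edge')
    # Gather the 3x3 neighborhood of every pixel into the last axis.
    win = np.stack([pad[i:i + M, j:j + N] for i in range(3) for j in range(3)],
                   axis=-1)
    cv = win.std(axis=-1) / (win.mean(axis=-1) + 1e-12)
    thresh = np.quantile(cv, 1 - q)
    return np.argwhere(cv >= thresh)    # coordinates of the Kn sampled pixels

img = np.ones((8, 8)); img[3:5, 3:5] = 5.0   # a small synthetic "defect" patch
idx = sample_prominent(img, q=0.25)
```

Pixels near the defect boundary have inhomogeneous neighborhoods, so they score highest and survive the sampling.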
In the above technical solution, the specific method of step one comprises: extracting the effective transient thermal responses from the acquired d-dimensional infrared thermal image sequence S(m, n, y), where m and n denote the mth row and nth column of the three-dimensional matrix and the third dimension y denotes the frame number of the infrared thermal image; dividing the extracted effective transient thermal responses into K regions according to the K defect types, and extracting from each divided defect region the typical transient thermal response that best represents the characteristics of that defect class.
In the above technical solution, the method for obtaining the infrared reconstructed image in step two comprises: obtaining a linear transformation matrix H_1 of dimension d × K from the K d-dimensional typical transient thermal responses extracted in step one; converting S(m, n, y) from a three-dimensional matrix into a two-dimensional matrix, i.e., vectorizing each frame of the infrared thermal video by taking the values of each frame's image matrix row by row to obtain a vector containing the pixel temperature information of one frame, used as a row vector of a new matrix, thereby constructing a new two-dimensional matrix P(x, y) of size a × b, a = d, b = M × N; linearly transforming P by means of the matrix H_1, i.e., O = H_1^+ · P, where H_1^+ is the K × d-dimensional pseudo-inverse matrix of H_1; and taking the rows of the two-dimensional image matrix O to form two-dimensional images of the original image size, obtaining K infrared reconstructed images of size M × N.
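The reconstruction of step two can be sketched with toy dimensions (the reshaping convention below — row-wise flattening of each frame — follows the description; the random data merely stands in for measured thermal responses):

```python
import numpy as np

# d frames of an M x N thermal video; K typical transient thermal responses.
d, M, N, K = 6, 4, 5, 2
rng = np.random.default_rng(1)
H1 = rng.random((d, K))            # transformation matrix from the K typical TTRs
S = rng.random((M, N, d))          # infrared sequence S(m, n, y)
P = S.reshape(M * N, d).T          # each frame flattened row-wise -> one row of P
O = np.linalg.pinv(H1) @ P         # (K x d) pseudo-inverse of H1 applied to P
recon = [O[k].reshape(M, N) for k in range(K)]   # K reconstructed M x N images
```

Each row of O, reshaped back to M × N, is one reconstructed defect image.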
In the above technical solution, in step four, because of the complex energy source, the imaging chain, and impurities on the test-piece surface, the defect-bearing infrared reconstructed image suffers from strong background noise, weak color information, and poor contrast, so that a general segmentation method cannot obtain a good segmentation result. To accurately separate the background region and the defect region of an infrared reconstructed image containing M × N pixels, x = (x_1, …, x_MN), the application measures, in the low-quality infrared reconstructed image with Kn pixels and complete defect information corresponding to each reconstructed infrared image, the segmentation performance in the three aspects of noise removal, detail retention, and edge preservation with multiple objectives, and obtains the weight coefficient of each aspect to construct the segmentation objective function. The infrared image segmentation function constructed under the guidance of the three aims of noise removal, detail preservation, and edge preservation is:
f_4(v) = ω_1 · f_1(v) + ω_2 · f_2(v) + ω_3 · f_3(v)
where ω_1, ω_2, ω_3 are the weight coefficients of the three objective functions respectively;
Step S41: f_1(v) is a single-objective noise-suppression function (SGNS) for solving the noise problem. A fuzzy factor is introduced into the FCM algorithm; on the basis of determining the spatial constraint relationship between pixels using the Euclidean distance d_ij between pixels in a neighborhood window of the reconstructed image, an inter-class dispersion measure function is introduced for the problem that similar classes with small differences are hard to distinguish. The designed f_1(v) expression is shown in the following formula:
where Kn is the number of pixels in the low-quality infrared image, c is the number of clusters, u_ti is the membership of pixel x_i to cluster center v_t, N_i is a neighborhood window of size r × r centered on pixel x_i, x_j is a pixel in that window and x_i the central pixel of the infrared reconstructed image, m ∈ [1, ∞) is a smoothing parameter, a Gaussian radial-basis similarity measure function is taken between pixel x_i and cluster center v_t in the infrared reconstructed image, and a new weighted fuzzy factor represents the weighted fuzziness of the jth pixel in the neighborhood with respect to cluster center v_t; this weighted fuzzy factor is composed of a spatial distance constraint ζ_dc and a spatial gray constraint ζ_gc, where the constraints are taken over all pixel points in the r × r neighborhood of pixel x_j;
where M_i is the ratio of the variance to the mean square, and ε_ij is the projection of the mean-square error between neighborhood pixel x_j and central pixel x_i in the kernel space; the constant 2 strengthens the suppression effect of the neighborhood pixels on the central pixel; η_t is an inter-class dispersion parameter; the cluster center v_t represents the temperature mean of the pixels in the current class, and x̄ is the temperature mean of all pixels in the infrared image. The function f_1(v) satisfies the constraint that the memberships of each pixel sum to one, and the membership u_ti of pixel x_i with respect to cluster center v_t is obtained by the Lagrange multiplier method.
The update formula for the cluster center v_t is:
Step S42: f_2(v) is a single-objective detail-retention function (SGDR) for solving the detail-retention problem. Considering the local spatial information of the image further guides the segmentation of image pixels and alleviates edge blurring, so a correlation coefficient m_ij measuring pixel position and color is introduced. The detail-retention function f_2(v) is constructed as shown in the following formula:
where Kn is the number of pixels in the low-quality infrared image, c is the number of clusters, v_t is the cluster center, u_ti is the membership of pixel x_i to cluster center v_t, m ∈ [1, ∞) is a smoothing parameter, δ_i represents the local spatial information, N_i is the set of pixels in a neighborhood window centered on the ith pixel, x_a is the ath pixel in N_i, and m_ij represents the correlation between neighborhood pixel x_i and central pixel v_t. Writing the spatial coordinates of pixels x_i and v_t as (x_im, y_in) and (v_tm, v_tn) and their gray values as g(x_i) and g(v_t), λ_s is the spatial-scale influence factor of the image, λ_g is the gray-scale influence factor, σ_i is the mean gray variance of the neighborhood pixels centered on pixel x_i, and |N_i| is the number of pixels in the neighborhood set N_i. The function f_2(v) satisfies the constraint that the memberships of each pixel sum to one, and the membership u_ti of pixel x_i with respect to cluster center v_t is obtained by the Lagrange multiplier method.
The update formula for the cluster center v_t is:
Step S43: f_3(v) is a single-objective edge-preservation function (SOEM) for solving the edge-preservation problem. To obtain an accurate segmentation result, an edge-preservation function that segments by gray level is introduced into the objective function, and to strengthen edge information an amplification function A_ti is introduced to amplify the influence of neighborhood pixel x_i on the membership of central pixel v_t. The edge-preservation function f_3(v) is constructed as shown in the following formula:
where Kn is the number of pixels in the low-quality infrared image, c is the number of clusters, n denotes the gray value of a pixel, u_ti denotes the membership of the pixel x_i with gray value n with respect to the current cluster center v_t, m ∈ [1, ∞) is a smoothing parameter, U_n is the number of gray levels of the infrared image, and N_n is the number of pixels with gray value n. For the pixels in the low-quality image containing Kn pixels, there is:
N_i is the set of neighborhood pixels centered on pixel x_i, |N_i| is the number of pixels in the set, g(x_i) and g(x_j) denote the gray values of pixel x_i and its neighborhood pixel x_j respectively, and the average gray difference between pixel x_j in the neighborhood set N_i and the central pixel x_i is used. The function f_3(v) satisfies the constraint that the memberships of each pixel sum to one, and the membership u_ti of pixel x_i with respect to cluster center v_t is obtained by the Lagrange multiplier method.
The update formula for the cluster center v_t is:
thereby completing the construction of the infrared image segmentation function.
In the above technical solution, the specific steps of step five, using a multi-objective optimization algorithm to obtain the weight coefficient of each objective function in the low-quality infrared reconstructed image with Kn pixels, are: in the extracted low-quality infrared reconstructed image containing complete defect information, the three objective functions are balanced by a multi-objective optimization algorithm, and the multi-objective optimization problem is set as:
min F(v) = [f_1(v), f_2(v), f_3(v)]^T
s.t. v = (v_1, …, v_c)^T
where c is the number of classes and v = (v_1, …, v_c)^T denotes a set of candidate cluster centers. The multi-objective optimization problem is decomposed into several scalar subproblems using weight vectors, and the components of each subproblem's weight vector reflect the importance of each objective function to the segmentation objective function.
The specific steps of the multi-target algorithm for solving the weight coefficient of each target function in the low-quality infrared reconstructed image with the number of pixels Kn are as follows:
step S51, initializing parameters of the multi-objective optimization algorithm, and specifically comprising the following steps:
Step S511: set the objective function F(v) of the multi-objective optimization algorithm, the maximum iteration number g_max, the thresholds ζ and ε, the population size Kn, and the number T of weight vectors in each neighborhood;
Step S512: obtain Kn uniformly distributed weight vectors λ_1, …, λ_Kn, and for each weight vector compute the T nearest weight vectors B(i) = {i_1, …, i_T}, i = 1, …, Kn, where λ_i1, …, λ_iT are the T weight vectors nearest to λ_i;
Step S513: sample uniformly in the feasible space of the multi-objective optimization problem to generate an initial population s_1, …, s_Kn, and let FV_i = F(s_i), i = 1, …, Kn;
Step S514: initialize the reference point f* = (f_1*, f_2*, f_3*), the optimal value of each objective function in the image-segmentation multi-objective problem;
Step S515: decompose the subproblems with a Tchebycheff-based decomposition model; the jth subproblem is:
g^te(s | λ^j, f*) = max over t = 1, 2, 3 of { λ_t^j · |f_t(s) − f_t*| }
where λ^j = (λ_1^j, λ_2^j, λ_3^j) is the weight vector of the jth subproblem, λ_1^j controls the weight of the noise-suppression function, λ_2^j the weight of the detail-retention function, and λ_3^j the weight of the edge-preservation function, and f_1*, f_2*, f_3* are the current optimal function values of the three functions;
step S516, setting an external population EP as an empty set;
Step S52: update the multi-objective optimization algorithm; while the iteration count is less than the maximum iteration number g_max, each iteration first performs step S521 to update the individuals and then performs step S522 to adjust the weight vectors;
step S521, updating the individuals in the population, specifically including:
Step S5211, reproduction: randomly select two indices k and l from B(i), and use a differential evolution operator on s_k and s_l to generate a new solution e for the image-segmentation multi-objective problem;
step S5212, improvement: carrying out constraint condition processing proposed in the image segmentation multi-objective optimization problem on the e to generate e';
Step S5213, update the reference point f*: for each objective t, if f_t(e') is better than the stored value f_t*, set f_t* = f_t(e');
Step S5214, update the neighborhood solutions: if, by the Tchebycheff expression, g^te(e' | λ^j, f*) ≤ g^te(s_j | λ^j, f*) for j ∈ B(i), then set s_j = e' and FV_j = F(e');
Step S522, adjusting the weight vector, specifically includes:
Step S5221: compute the distance from each individual in the population to the current cluster center;
Step S5222: find all individuals inside the high-dimensional sphere of radius r centered on the current cluster center, and compute the standard deviation of all individuals in that region, where the mean is taken over all individuals distributed in the region, R is the number of individuals in the region, and s_r, r = 1, …, R, are the individuals distributed in the region;
Step S5223: from the U individuals, select the Up individuals with the smallest standard deviation as preference-region reference points, and for each of the Up individuals and its corresponding weight vector λ_Up perform the following update operations:
step S52231, calculating a base weight vector, where f* is the optimal value of each objective function in the image-segmentation multi-objective problem;
step S52232, finding the individual sm in the population at the largest Euclidean distance from sUp, and finding its corresponding weight vector λm;
Step S52233, calculating the generated weight vector:
λUpnew = λUp + Step · λm
where λUpnew = (λUpnew1, λUpnew2, λUpnew3) and Step is a set step-length parameter;
step S52234, using the preference-area reference point and the most distant individual sm, randomly generating a new solution as follows:
step S52235, taking the generated new individual as a new cluster center: with it as the center, calculating the current memberships according to the membership and cluster-center calculation formulas corresponding to the three set objective functions:
calculating new clustering center according to current membership
Step S523, updating EP: removing from the EP all vectors dominated by F(e'), and adding e' to the EP if F(e') is not dominated by any vector within the EP;
step S53, terminating the iteration: if the termination condition g = gmax is satisfied, output the EP, which gives the optimal cluster-center set for the image-segmentation multi-objective problem; otherwise increase the iteration count g to g + 1 and go to step S52.
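The external-population maintenance of step S523 is a standard non-dominated archive filter. A minimal sketch, assuming minimized objective vectors stored as NumPy arrays (the function names are illustrative):

```python
import numpy as np

def dominates(a, b):
    # a dominates b if a is no worse in every objective and strictly better in at least one
    return np.all(a <= b) and np.any(a < b)

def update_ep(EP, F_new, e_new):
    """Step S523: drop archive members dominated by F(e'); add e'
    only if no remaining archive member dominates it."""
    EP = [(s, F) for (s, F) in EP if not dominates(F_new, F)]
    if not any(dominates(F, F_new) for (_, F) in EP):
        EP.append((e_new, F_new))
    return EP
```

Applied once per candidate e', this keeps the EP equal to the set of mutually non-dominated solutions seen so far, whose objective vectors trace out the PF front referred to in the embodiment.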
In the above technical solution, the specific step of constructing the full-pixel infrared image segmentation objective function in the sixth step includes: inputting the weight coefficient obtained in the fifth step into a second layer of the double-layer segmentation model, wherein the second layer of the double-layer segmentation model is an image segmentation layer, and performing image segmentation on the full-pixel infrared image with the number of pixel points being M multiplied by N obtained through reconstruction by using the double-layer segmentation model;
in the second layer of the double-layer segmentation model, the following optimization functions are provided for the full pixels with the number of the infrared thermal image pixel points being MxN:
When solving this objective function, since the separability measure in the detail-retention function f2(v) does not contain the membership uti of pixel xi with respect to the cluster center vt, the membership function and cluster center under the Lagrange multiplier method are solved for the following function:
For simplicity, let M(xi, vt) = ||xi − vt||² + ||δi − vt||²; the membership update formula is then:
meanwhile, the updating formula of the clustering center is as follows:
and completing the construction of the full-pixel infrared image segmentation target function.
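With M(xi, vt) = ||xi − vt||² + ||δi − vt||², the membership and cluster-center updates take the familiar FCM-with-spatial-term form. A sketch for scalar (gray-value) pixels, where D plays the role of the local spatial information δ per pixel; the array layout and function names are assumptions for illustration:

```python
import numpy as np

def update_memberships(X, D, V, m=2.0):
    """u_ti proportional to M(x_i, v_t)^(-1/(m-1)),
    with M = ||x - v||^2 + ||delta - v||^2."""
    M = (X[None, :] - V[:, None])**2 + (D[None, :] - V[:, None])**2  # shape c x n
    M = np.maximum(M, 1e-12)                  # guard the zero-distance case
    W = M ** (-1.0 / (m - 1.0))
    return W / W.sum(axis=0, keepdims=True)   # normalize over clusters

def update_centers(X, D, U, m=2.0):
    """v_t = sum_i u_ti^m (x_i + delta_i) / (2 sum_i u_ti^m),
    the stationary point of the Lagrangian with the M above."""
    Um = U ** m
    return (Um * (X + D)[None, :]).sum(axis=1) / (2.0 * Um.sum(axis=1))
```

The factor 2 in the center update comes directly from differentiating the two squared-distance terms in M; dropping the δ term recovers the plain FCM update.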
In the above technical solution, the step seven of implementing infrared full-pixel image segmentation on the second layer of the double-layer segmentation model specifically includes the steps of:
step S71, initializing iteration times t, generating an initial clustering center and calculating initial membership;
Step S73, according to the formula
Updating the membership degree;
step S74, according to the formula
Updating the clustering center;
Step S76, if the change of the cluster centers between successive iterations falls below the input threshold ε or t = Tmax, end the segmentation algorithm and assign each pixel point to the defect region for which its membership value is maximum, obtaining the segmented image; that is, the segmentation result of the entire observed full-pixel infrared reconstructed image is finally obtained.
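Steps S71 through S76 above can be sketched as one convergence loop. The quantile-based initialization and the toy scalar pixels are assumptions made for the sketch (the patent does not fix the initialization); the update formulas are the M(xi, vt)-based ones from step six:

```python
import numpy as np

def segment(X, D, c=3, m=2.0, eps=1e-5, t_max=100):
    """Iterate membership / cluster-center updates until the center shift
    drops below eps or t = T_max, then label each pixel by its maximum
    membership (step S76)."""
    V = np.quantile(X, np.linspace(0.0, 1.0, c))   # simple deterministic init
    U = None
    for _ in range(t_max):
        M = np.maximum((X[None] - V[:, None])**2 + (D[None] - V[:, None])**2, 1e-12)
        U = M ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=0, keepdims=True)          # membership update
        V_new = ((U**m) * (X + D)[None]).sum(axis=1) / (2.0 * (U**m).sum(axis=1))
        if np.linalg.norm(V_new - V) < eps:        # stopping criterion
            V = V_new
            break
        V = V_new
    return U.argmax(axis=0), V                     # hard labels by max membership
```

For a real M × N reconstructed image the gray values would be flattened into X and δ computed from each pixel's neighborhood; the loop itself is unchanged.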
In conclusion, the method provides a defect-detection algorithm based on multi-objective optimized segmentation. An automatic variable-interval-search segmentation method divides the infrared video to obtain the data set to be classified, which contains temperature curves with typical change characteristics. The FCM algorithm clusters this data set, and soft partitioning based on the membership between pixel points and cluster centers improves the reliability of the classification results. Each classified data subset contains the corresponding temperature-change characteristics. The infrared thermal image sequence is reconstructed from the principal features to obtain infrared reconstructed images of the defects, reflecting the defect characteristics of the test piece. The result image obtained by target segmentation of the infrared reconstructed image containing prominent defects not only achieves noise elimination but also ensures detail retention, while edge preservation further improves the precision of the image segmentation.
Example:
In the present embodiment, the thermal infrared imager acquires 502 frames of images with a pixel size of 512 × 640, i.e. each image contains 327680 temperature points, and the temperature value of each point is recorded 502 times; this time-varying temperature history constitutes the transient thermal response (TTR) of the temperature point. After the effective transient thermal responses are extracted from the infrared thermal sequence, region division is performed according to defect type, and a typical transient thermal response is extracted from each divided region. In extracting the effective transient thermal responses the parameter ReCL = 0.92 is set, and 375 effective transient thermal responses containing complete defect information are extracted from the 327680 temperature points. According to the softened membership of each pixel point to each class center, 185, 43 and 147 thermal response curves are divided into the corresponding classes. A typical transient thermal response representing the defect information is extracted from each type of defect area, and the typical transient thermal responses of the three defect areas form a matrix X1. A linear transformation of the original two-dimensional matrix P(x, y)502×327680 is performed as O = X1†P, where X1† is the pseudo-inverse of X1, giving a two-dimensional image matrix O; its rows are reshaped to the original 512 × 640 image size, yielding 3 infrared reconstructed images of size 512 × 640. The infrared defect reconstructed images and the corresponding TTR curves are shown in figures 3 to 8: figures 3 and 4 are the TTR curves and corresponding infrared reconstructed image of the composite impact-pit background area, figures 5 and 6 those of the composite impact-pit interior, and figures 7 and 8 those of the composite impact-pit edge.
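The reconstruction step O = X1†·P can be sketched with NumPy; toy shapes stand in for the 502 × (512·640) matrices of the embodiment, and the orthonormal toy X1 is an assumption made so the result is easy to check:

```python
import numpy as np

# Toy stand-ins for: 502 frames, 512 x 640 pixels, 3 typical TTRs.
frames, h, w = 4, 2, 3
P = np.arange(frames * h * w, dtype=float).reshape(frames, h * w)  # P(x, y)_{frames x pixels}
X1 = np.eye(frames)[:, :3]          # columns play the role of typical transient thermal responses
O = np.linalg.pinv(X1) @ P          # O = X1^+ P, one row per typical TTR
imgs = O.reshape(3, h, w)           # each row reshaped to the original image size
```

In the embodiment this produces the 3 infrared reconstructed images of size 512 × 640; `np.linalg.pinv` computes the Moore–Penrose pseudo-inverse used for the linear transformation.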
As the classified TTR curves in figs. 3, 5 and 7 show, the TTRs of the different classes differ in temperature-rise and temperature-fall rates; from these differences and the highlighted color regions of the infrared reconstructed images, the type of region represented in each reconstructed image can be determined. The region types of the test piece are the background area, the interior of the composite impact pit, and the edge of the composite impact pit.
The maximum generation number of the multi-objective optimized segmentation algorithm is set to 200, and the weight vector is adjusted once, based on preference, each time an individual is updated. In the objective functions set according to segmentation performance, the smoothing parameter m is set to 2 and the number of clusters c to 3. The curved PF front formed in space by the Pareto-optimal set is obtained as shown in fig. 2. A trade-off solution is selected from the PF front; its weight-vector components reflect the weight coefficient of each objective function. A full-pixel infrared image segmentation objective-function model is constructed from these weight coefficients and image segmentation is performed; the segmented images are shown in figs. 9 and 10, where fig. 9 is the segmentation result of the infrared reconstructed image at the edge of the composite-material impact pit, and fig. 10 that of the infrared reconstructed image inside the composite-material impact pit. The experimental results confirm that the constructed noise-suppression function SGNS f1(v), detail-retention function SGDR f2(v) and edge-preservation function SOEM f3(v) respectively suppress noise, retain details and preserve edges, accurately separating the defect area from the background area and achieving accurate segmentation of the infrared image.
The number of apparatuses and the scale of the process described herein are intended to simplify the description of the present invention. Applications, modifications and variations of the present invention will be apparent to those skilled in the art.
While embodiments of the invention have been described above, it is not limited to the applications set forth in the description and the embodiments, which are fully applicable in various fields of endeavor to which the invention pertains, and further modifications may readily be made by those skilled in the art, it being understood that the invention is not limited to the details shown and described herein without departing from the general concept defined by the appended claims and their equivalents.
Claims (7)
1. An automatic identification method for a damaged area of an aerospace composite material is characterized by comprising the following steps:
step one, after effective information is extracted from collected test piece infrared data, classifying the test piece infrared data according to defect types and extracting typical transient thermal response of each type of defects;
secondly, forming a transformation matrix by the selected typical transient thermal response to obtain an infrared reconstruction image;
step three, calculating the variation coefficient of pixels of the reconstructed infrared image with K dimensions of M multiplied by N, and sampling out the most prominent pixels by measuring the homogeneity of the neighborhood pixels and the central pixels to obtain K inferior infrared reconstructed images containing complete defect information and containing Kn pixel points;
fourthly, in the low-quality infrared reconstruction image which contains Kn pixel points and contains complete defect information and corresponds to each infrared reconstruction image obtained through processing, the segmentation performance in three aspects of noise removal, detail retention and edge maintenance is measured by utilizing multiple targets, and the weight coefficient of each segmentation performance is obtained to construct a segmentation objective function; constructing an infrared image segmentation function under the guidance of three purposes of noise removal, detail retention and edge maintenance;
step five, constructing a first layer of a double-layer segmentation model, wherein the first layer of the double-layer segmentation model is a weight coefficient determining layer, and setting a multi-objective optimization problem by adopting a multi-objective optimization algorithm to balance three set objective functions in the extracted low-quality infrared reconstruction image containing complete defect information; the method comprises the following steps of obtaining weight coefficients of objective functions for realizing each segmentation performance by using a multi-objective optimization algorithm and combining weight vectors, wherein the specific steps comprise:
s51, initializing parameters of the multi-objective optimization algorithm; acquiring Kn weight vectors which are uniformly distributed, and calculating T weight vectors which are nearest to each weight vector; uniformly sampling in a feasible space which meets a multi-objective optimization problem to generate an initial population; initializing a multi-objective optimization function; decomposing the subproblems by adopting a decomposition model based on Chebyshev; setting an external population EP as an empty set;
s52, updating individuals in the population by an evolutionary multi-objective optimization algorithm; after updating the individuals each time, taking noise elimination as preference, and adjusting the weight vector according to the preference;
step S53, selecting a trade-off solution to obtain a weight coefficient for removing noise, retaining details and keeping the edge function segmentation performance;
step six, constructing a full-pixel infrared image segmentation target function, inputting the weight coefficient obtained in the step five into a second layer of the double-layer segmentation model, wherein the second layer of the double-layer segmentation model is an image segmentation layer, and performing image segmentation on the full-pixel infrared image with the number of pixel points of M multiplied by N obtained by reconstruction by using the segmentation model;
and step seven, according to the membership degree and the clustering center updating formula which are obtained by the full-pixel infrared image segmentation target function constructed in the step six, inputting a threshold value and the maximum iteration times for stopping judgment of the algorithm, and realizing infrared full-pixel image segmentation on an image segmentation layer to obtain a segmented image of the test piece defect infrared image.
2. The method for automatically identifying the damaged area of an aerospace composite material as claimed in claim 1, wherein the specific method of step one comprises: extracting effective transient thermal responses from the acquired d-dimensional infrared thermal image sequence S(m, n, y), where m and n respectively denote the m-th row and n-th column of the three-dimensional matrix, and the third dimension y denotes the frame number of the infrared thermal image; dividing the extracted effective transient thermal responses into K regions according to the defect type K, and extracting from each divided defect region the typical transient thermal response that best represents the defect characteristics of the current class.
3. The method for automatically identifying the damaged area of an aerospace composite material as claimed in claim 2, wherein the method for obtaining the infrared reconstructed images in step two comprises: forming a linear transformation matrix H1 of dimension d × K from the K d-dimensional typical transient thermal responses extracted in step one; converting S(m, n, y) from a three-dimensional matrix into a two-dimensional matrix, i.e. vectorizing each frame of the infrared thermal video by taking the values of each frame image matrix row by row to obtain a vector containing the pixel-point temperature information of one frame as a row vector of a new matrix, thereby constructing a new two-dimensional matrix P(x, y)a×b, a = d, b = M × N; performing the linear transformation of P by means of the matrix H1, i.e. O = H1†P, where H1† is the K × d-dimensional pseudo-inverse matrix of H1; and taking the values of the two-dimensional image matrix O row by row to form two-dimensional images of the original image size, obtaining K infrared reconstructed images of size M × N.
4. The automatic identification method for the damaged area of the aerospace composite material as claimed in claim 1, wherein the infrared image segmentation function constructed in the fourth step under the guidance of three purposes of noise removal, detail preservation and edge preservation is as follows:
f4(v)=ω1·f1(v)+ω2·f2(v)+ω3·f3(v)
where ω1, ω2 and ω3 are respectively the weight coefficients of the three objective functions;
step S41, f1(v) is a single-target noise-removal function SGNS for solving the noise problem; a fuzzy factor is introduced into the FCM algorithm, and on the basis of determining the spatial constraint relationship among pixel points by the Euclidean distance dij between pixel points in a neighborhood window of the reconstructed image, an inter-class dispersion measurement function is introduced to address the problem that similar classes with small differences are difficult to distinguish; the designed f1(v) expression is shown in the following formula:
where Kn is the number of pixel points in the low-quality infrared image, c is the number of clusters, uti is the membership of pixel point xi to the cluster center vt, Wi^r is a neighborhood window of size r × r centered on pixel xi, xj is a pixel in that window with xi as the central pixel of the infrared reconstructed image, and m ∈ [1, ∞) is a smoothing parameter; f1(v) contains the Gaussian radial-basis similarity measure function between pixel xi in the infrared reconstructed image and the cluster center vt, together with a new weighted fuzzy factor representing the relation of the jth pixel in the neighborhood to the cluster center vt; this factor is the product of a spatial-distance constraint ζdc and a spatial gray-scale constraint ζgc, both defined over all pixel points in the r × r neighborhood of pixel xj;
Mi is the ratio of the variance to the mean square, and εij is the projection of the mean-square error between the neighborhood pixel point xj and the central pixel point xi in the kernel space; the constant 2 is used to enhance the suppression effect of the neighborhood pixel points on the central pixel point; ηt is an inter-class dispersion parameter, and the cluster center vt represents the temperature mean of the pixel points of the current class, the inter-class dispersion term also involving the temperature mean of all pixel points in the infrared image; the function f1(v) satisfies the normalization constraint on the memberships, and the membership of pixel xi with respect to the cluster center vt is obtained by the Lagrange multiplier method:
Cluster center vtThe update formula is:
step S42, f2(v) is a single-target detail-retention function SGDR for solving the detail-retention problem; considering the local spatial information of the image can further guide the segmentation of image pixels and alleviate the edge-blurring problem, so a correlation coefficient mij measuring the pixel position and the pixel color is introduced; the constructed detail-retention function f2(v) is shown in the following formula:
where Kn is the number of pixel points in the low-quality infrared image, c is the number of clusters, vt is the cluster center, uti is the membership of pixel point xi to the cluster center vt, m ∈ [1, ∞) is a smoothing parameter, δi represents the local spatial information, Ni is the set of pixels in a neighborhood window centered on the ith pixel, and xa is the a-th pixel in Ni; the correlation between a neighborhood pixel xi and a central pixel vt is computed from their spatial coordinates (xim, yin) and (vtm, vtn) and their gray values g(xi) and g(vt) respectively, where λs is the spatial-scale influence factor, λg is the gray-scale influence factor, the mean gray variance of the neighborhood pixels centered on pixel xi also enters the correlation, and the number of pixels in the neighborhood set Ni normalizes it; the function f2(v) satisfies the normalization constraint on the memberships, and the membership of pixel xi with respect to the cluster center vt is obtained by the Lagrange multiplier method:
Cluster center vtThe update formula is:
step S43, f3(v) is a single-target edge-preservation function SOEM for solving the edge-preservation problem; in order to obtain an accurate segmentation result, an edge-preservation function that segments according to gray level is introduced into the objective function, and an amplification function Ati is introduced to enhance edge information by amplifying the influence of the neighborhood of pixel xi on its membership to the central cluster vt; the constructed edge-preservation function f3(v) is shown in the following formula:
where Kn is the number of pixel points in the low-quality infrared image, c is the number of clusters, n denotes the gray value of a pixel point, uti denotes the membership of a pixel point xi with gray value n to the current cluster center vt, m ∈ [1, ∞) is a smoothing parameter, Un is the number of gray levels of the infrared image, and Nn is the number of pixel points with gray value n; for the pixel points in the low-quality image containing Kn pixel points:
Ni is the set of neighborhood pixels centered on pixel xi, with its number of pixel points used for normalization; g(xi) and g(xj) respectively denote the gray values of pixel point xi and its neighborhood pixel xj, and the average gray difference between the pixels xj in the neighborhood set Ni and the central pixel xi is used; the function f3(v) satisfies the normalization constraint on the memberships, and the membership of pixel xi with respect to the cluster center vt is obtained by the Lagrange multiplier method:
Cluster center vtThe update formula of (2) is:
thereby completing the construction of the infrared image segmentation function.
5. The method for automatically identifying the damaged area of an aerospace composite material as claimed in claim 1, wherein the specific steps of using a multi-objective optimization algorithm in step five to obtain the weight coefficients of each objective function in the low-quality infrared reconstructed image with Kn pixel points are: in the extracted low-quality infrared reconstructed image containing complete defect information, the three objective functions are balanced by a multi-objective optimization algorithm, and the multi-objective optimization problem is set as follows:
min F(v) = [f1(v), f2(v), f3(v)]T
s.t. v = (v1, …, vc)T
where c is the number of classes and v = (v1, …, vc)T represents a group of candidate cluster centers; the multi-objective optimization problem is decomposed into a number of scalar sub-problems using the weight vectors, and the components of each sub-problem's weight vector reflect the importance of each objective function to the segmentation objective function;
the specific steps of the multi-target algorithm for solving the weight coefficient of each target function in the low-quality infrared reconstructed image with the number of pixels Kn are as follows:
step S51, initializing parameters of the multi-objective optimization algorithm, and specifically comprising the following steps:
step S511, setting the objective function F(v) of the multi-objective optimization algorithm, the maximum number of iterations gmax, the threshold values ζ and ε, the population size Kn, and the number T of weight vectors in each neighborhood;
step S512, acquiring Kn uniformly distributed weight vectors λ1, …, λKn, and for each weight vector λi calculating the nearest T weight vectors B(i) = {i1, …, iT}, i = 1, …, Kn, where λi1, …, λiT are the T weight vectors nearest to λi;
step S513, uniformly sampling the feasible space of the multi-objective optimization problem to generate an initial population s1, …, sKn, letting FVi = F(si), i = 1, …, Kn;
Step S514, initializing the reference point f* with the optimal value of each objective function in the image-segmentation multi-objective problem;
step S515, decomposing the sub-problems using the Tchebycheff-based decomposition model, the jth sub-problem being:
gte(s | λj, f*) = max1≤t≤3 { λjt · |ft(s) − ft*| }
where λj = (λj1, λj2, λj3) is the weight vector of the jth sub-problem: λj1 controls the weight of the noise-suppression function, λj2 controls the weight of the detail-retention function, λj3 controls the weight of the edge-preservation function, and f1*, f2* and f3* are the current optimal function values of the three functions;
step S516, setting an external population EP as an empty set;
step S52, updating the multi-objective optimization algorithm; while the iteration count is less than the maximum number of iterations gmax, the weight vector is updated once per iteration: step S521 is performed first to update the individuals, and step S522 is then performed to adjust the weight vector;
step S521, updating the individuals in the population, specifically including:
step S5211, reproduction: randomly selecting two serial numbers k, l from B(i), and generating from sk and sl, using the differential evolution algorithm, a new solution e for the image-segmentation multi-objective problem;
step S5212, improvement: applying the constraint handling defined in the image-segmentation multi-objective optimization problem to e to generate e';
step S5213, updating the reference point f*: for each objective t, if ft* > ft(e'), then ft* = ft(e');
Step S5214, updating the neighborhood solutions: if, by the Tchebycheff expression, gte(e' | λj, f*) ≤ gte(sj | λj, f*) for j ∈ B(i), then sj = e' and FVj = F(e');
Step S522, adjusting the weight vector, specifically includes:
step S5221, calculating the distance from each individual si in the population to the current cluster center:
step S5222, finding all U individuals within the high-dimensional sphere of radius r centered on the current cluster center, and computing the standard deviation of all individuals inside the sphere:
where s̄ is the average of all individuals distributed within the sphere, R is the number of such individuals, and sr, r = 1, …, R, are the individuals distributed within the sphere;
step S5223, selecting from the U individuals the Up individuals with the smallest standard deviation as preference-area reference points, and for each of the Up individuals sUp and its corresponding weight vector λUp performing the following update operations:
step S52231, calculating a base weight vector, where f* is the optimal value of each objective function in the image-segmentation multi-objective problem;
step S52232, finding the individual sm in the population at the largest Euclidean distance from sUp, and finding its corresponding weight vector λm;
Step S52233, calculating the generated weight vector:
λUpnew = λUp + Step · λm
where λUpnew = (λUpnew1, λUpnew2, λUpnew3) and Step is a set step-length parameter;
step S52234, using the preference-area reference point and the most distant individual sm, randomly generating a new solution as follows:
step S52235, taking the generated new individual as a new cluster center: with it as the center, calculating the current memberships according to the membership and cluster-center calculation formulas corresponding to the three set objective functions:
calculating new clustering center according to current membership
Step S523, updating EP: removing from the EP all vectors dominated by F(e'), and adding e' to the EP if F(e') is not dominated by any vector within the EP;
step S53, terminating the iteration: if the termination condition g = gmax is satisfied, output the EP, which gives the optimal cluster-center set for the image-segmentation multi-objective problem; otherwise increase the iteration count g to g + 1 and go to step S52.
6. The aerospace composite material damage region automatic identification method according to claim 1, wherein the sixth step of constructing a full-pixel infrared image segmentation objective function includes the specific steps of: inputting the weight coefficient obtained in the fifth step into a second layer of the double-layer segmentation model, wherein the second layer of the double-layer segmentation model is an image segmentation layer, and performing image segmentation on the full-pixel infrared image with the number of pixel points of M multiplied by N obtained by reconstruction by using the double-layer segmentation model;
in the second layer of the double-layer segmentation model, the following optimization functions are provided for the full pixels with the number of the infrared thermal image pixel points being MxN:
When solving this objective function, since the separability measure in the detail-retention function f2(v) does not contain the membership uti of pixel xi with respect to the cluster center vt, the membership function and cluster center under the Lagrange multiplier method are solved for the following function:
For simplicity, let M(xi, vt) = ||xi − vt||² + ||δi − vt||²; the membership update formula is then:
meanwhile, the updating formula of the clustering center is as follows:
and completing the construction of the full-pixel infrared image segmentation target function.
7. The aerospace composite material damage region automatic identification method according to claim 1, wherein the seventh step of achieving infrared full-pixel image segmentation on the second layer of the double-layer segmentation model specifically comprises the following steps:
step S71, initializing iteration times t, generating an initial clustering center and calculating initial membership;
Step S73, according to the formula
Updating the membership degree;
step S74, according to the formula
Updating the clustering center;
Step S76, if the change of the cluster centers between successive iterations falls below the input threshold ε or t = Tmax, end the segmentation algorithm and assign each pixel point to the defect region for which its membership value is maximum, obtaining the segmented image; that is, the segmentation result of the entire observed full-pixel infrared reconstructed image is finally obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110118568.3A CN112818822B (en) | 2021-01-28 | 2021-01-28 | Automatic identification method for damaged area of aerospace composite material |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112818822A CN112818822A (en) | 2021-05-18 |
CN112818822B true CN112818822B (en) | 2022-05-06 |
Family
ID=75859916
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113537236B (en) * | 2021-06-21 | 2023-04-21 | 电子科技大学 | Quantitative identification method for defect of thermal diffusion effect for infrared detection of spacecraft damage |
CN113506292B (en) * | 2021-07-30 | 2022-09-20 | 同济大学 | Structure surface crack detection and extraction method based on displacement field |
CN113763368B (en) * | 2021-09-13 | 2023-06-23 | 中国空气动力研究与发展中心超高速空气动力研究所 | Multi-type damage detection characteristic analysis method for large-size test piece |
CN114862879B (en) * | 2022-07-05 | 2022-09-27 | 深圳科亚医疗科技有限公司 | Method, system and medium for processing images containing physiological tubular structures |
CN116907677B (en) * | 2023-09-15 | 2023-11-21 | 山东省科学院激光研究所 | Distributed optical fiber temperature sensing system for concrete structure and measuring method thereof |
CN117576488B (en) * | 2024-01-17 | 2024-04-05 | 海豚乐智科技(成都)有限责任公司 | Infrared dim target detection method based on target image reconstruction |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102692429A (en) * | 2011-03-24 | 2012-09-26 | 中国科学院沈阳自动化研究所 | Method for automatic identification and detection of defect in composite material |
CN110895814A (en) * | 2019-11-30 | 2020-03-20 | 南京工业大学 | Intelligent segmentation method for aero-engine hole detection image damage based on context coding network |
CN111652252A (en) * | 2020-06-11 | 2020-09-11 | 中国空气动力研究与发展中心超高速空气动力研究所 | Ultrahigh-speed impact damage quantitative identification method based on ensemble learning |
CN112016627A (en) * | 2020-09-04 | 2020-12-01 | 中国空气动力研究与发展中心超高速空气动力研究所 | Visual detection and evaluation method for micro-impact damage of on-orbit spacecraft |
CN112037211A (en) * | 2020-09-04 | 2020-12-04 | 中国空气动力研究与发展中心超高速空气动力研究所 | Damage characteristic identification method for dynamically monitoring small space debris impact event |
CN112233099A (en) * | 2020-10-21 | 2021-01-15 | 中国空气动力研究与发展中心超高速空气动力研究所 | Reusable spacecraft surface impact damage characteristic identification method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE10254907B4 (en) * | 2002-11-25 | 2008-01-03 | Siemens Ag | Process for surface contouring of a three-dimensional image |
- 2021-01-28: Application CN202110118568.3A filed in China; granted as patent CN112818822B (status: Active)
Non-Patent Citations (2)
Title |
---|
Sparse superpixel unmixing for exploratory analysis of CRISM hyperspectral images; David R. Thompson et al.; 2009 First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing; 2009-10-16; pp. 1-4 * |
Multi-classifier fusion based target recognition algorithm for optical remote sensing images; Ji Xiaofei et al.; Computer Technology and Development (《计算机技术与发展》); November 2019; Vol. 29, No. 11; pp. 52-56 * |
Also Published As
Publication number | Publication date |
---|---|
CN112818822A (en) | 2021-05-18 |
Similar Documents
Publication | Title |
---|---|
CN112818822B (en) | Automatic identification method for damaged area of aerospace composite material |
CN112819775B (en) | Segmentation and reinforcement method for damage detection image of aerospace composite material |
CN111598887B (en) | Spacecraft defect detection method based on LVQ-GMM algorithm and multi-objective optimization segmentation algorithm |
CN112884716B (en) | Method for strengthening characteristics of ultra-high-speed impact damage area |
CN110210463B (en) | Precise ROI-fast R-CNN-based radar target image detection method |
CN108537102B (en) | High-resolution SAR image classification method based on sparse features and conditional random field |
CN112784847B (en) | Segmentation and identification method for ultra-high-speed impact damage detection image |
Xie et al. | SRUN: Spectral regularized unsupervised networks for hyperspectral target detection |
CN113392931B (en) | Hyperspectral open set classification method based on self-supervision learning and multitask learning |
Liu et al. | BraggNN: fast X-ray Bragg peak analysis using deep learning |
Panati et al. | Feature relevance evaluation using Grad-CAM, LIME and SHAP for deep learning SAR data classification |
CN113538331A (en) | Metal surface damage target detection and identification method, device, equipment and storage medium |
CN111652252A (en) | Ultra-high-speed impact damage quantitative identification method based on ensemble learning |
Devi et al. | Change detection techniques – A survey |
CN114332444B (en) | Complex star sky background target identification method based on incremental drift clustering |
CN112183237A (en) | Automatic white blood cell classification method based on color space adaptive threshold segmentation |
CN109558803B (en) | SAR target identification method based on convolutional neural network and NP criterion |
Choi et al. | Rain-type classification from microwave satellite observations using deep neural network segmentation |
Reghukumar et al. | Vision-based segmentation and classification of cracks using deep neural networks |
CN113781445A (en) | Multi-region complex damage defect feature extraction fusion method |
Guofeng et al. | Image segmentation of thermal waving inspection based on particle swarm optimization fuzzy clustering algorithm |
Chen et al. | Remote aircraft target recognition method based on superpixel segmentation and image reconstruction |
CN112906713B (en) | Aerospace composite material damage visualization feature extraction method |
CN113763368B (en) | Multi-type damage detection characteristic analysis method for large-size test piece |
CN112819778B (en) | Multi-target full-pixel segmentation method for aerospace material damage detection image |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |